Commit: reknit examples
t-kalinowski committed Jul 16, 2024
1 parent 56ff4fe commit 62098ff
Showing 24 changed files with 664 additions and 748 deletions.
2 changes: 1 addition & 1 deletion vignettes/examples/index.Rmd
@@ -1,7 +1,7 @@
---
title: Keras examples
output: rmarkdown::html_vignette
date: 'Last Modified: 2023-11-30; Last Rendered: 2024-05-21'
date: 'Last Modified: 2023-11-30; Last Rendered: 2024-07-16'
vignette: >
%\VignetteIndexEntry{Keras examples}
%\VignetteEngine{knitr::rmarkdown}
@@ -564,7 +564,7 @@ transformer
## │ transformer_encoder │ (None, None, 256) │ 3,155,456 │ positional_embed… │
## │ (TransformerEncode… │ │ │ │
## ├─────────────────────┼───────────────────┼────────────┼───────────────────┤
## │ functional_5        │ (None, None,      │ 12,959,640 │ decoder_inputs[0… │
## │ functional_3        │ (None, None,      │ 12,959,640 │ decoder_inputs[0… │
## │ (Functional) │ 15000) │ │ transformer_enco… │
## └─────────────────────┴───────────────────┴────────────┴───────────────────┘
##  Total params: 19,960,216 (76.14 MB)
@@ -583,7 +583,7 @@ transformer |> fit(train_ds, epochs = epochs,
```

```
## 1297/1297 - 58s - 44ms/step - accuracy: 0.7709 - loss: 1.5752 - val_accuracy: 0.7731 - val_loss: 1.4209
## 1297/1297 - 43s - 33ms/step - accuracy: 0.7229 - loss: 1.9745 - val_accuracy: 0.7338 - val_loss: 1.7418
```
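
For context on the hunk above: the chunk that compiles and fits the transformer is largely collapsed here. A minimal sketch of what that step typically looks like in keras3; the optimizer, loss, and `validation_data` argument are assumptions, not values taken from this diff:

``` r
# Hedged sketch: compile settings below are assumptions, not the vignette's
# exact configuration.
library(keras3)

transformer |> compile(
  optimizer = "rmsprop",
  loss = "sparse_categorical_crossentropy",  # integer token targets
  metrics = "accuracy"
)
transformer |> fit(train_ds, epochs = epochs, validation_data = val_ds)
```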


18 changes: 9 additions & 9 deletions vignettes/examples/nlp/text_classification_from_scratch.Rmd
@@ -355,7 +355,7 @@ summary(model)
```

```
## Model: "functional_1"
## Model: "functional"
## ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
## ┃ Layer (type)  ┃ Output Shape  ┃  Param # ┃
## ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
@@ -402,11 +402,11 @@ model |> fit(train_ds, validation_data = val_ds, epochs = epochs)

```
## Epoch 1/3
## 625/625 - 6s - 10ms/step - accuracy: 0.6909 - loss: 0.5300 - val_accuracy: 0.8658 - val_loss: 0.3229
## 625/625 - 5s - 8ms/step - accuracy: 0.6903 - loss: 0.5292 - val_accuracy: 0.8612 - val_loss: 0.3235
## Epoch 2/3
## 625/625 - 2s - 3ms/step - accuracy: 0.9047 - loss: 0.2412 - val_accuracy: 0.8742 - val_loss: 0.3202
## 625/625 - 1s - 2ms/step - accuracy: 0.9054 - loss: 0.2398 - val_accuracy: 0.8698 - val_loss: 0.3342
## Epoch 3/3
## 625/625 - 2s - 3ms/step - accuracy: 0.9573 - loss: 0.1237 - val_accuracy: 0.8704 - val_loss: 0.3551
## 625/625 - 1s - 2ms/step - accuracy: 0.9553 - loss: 0.1253 - val_accuracy: 0.8744 - val_loss: 0.3402
```
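
The compile step that precedes the fit() call above is collapsed in this diff. A hedged sketch of a typical configuration for this binary text classifier; the optimizer and loss are assumptions, not copied from the file:

``` r
# Sketch only: assumed compile settings for the binary classifier.
model |> compile(
  loss = "binary_crossentropy",
  optimizer = "adam",
  metrics = "accuracy"
)
```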

## Evaluate the model on the test set
@@ -417,15 +417,15 @@ model |> evaluate(test_ds)
```

```
## 782/782 - 1s - 2ms/step - accuracy: 0.8594 - loss: 0.3818
## 782/782 - 1s - 2ms/step - accuracy: 0.8630 - loss: 0.3716
```

```
## $accuracy
## [1] 0.85936
## [1] 0.86296
##
## $loss
## [1] 0.381799
## [1] 0.3716201
```

## Make an end-to-end model
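
The construction of the end-to-end model is collapsed in this diff; only its evaluation on raw_test_ds appears below. A minimal sketch of how such a model is typically assembled, assuming the `vectorize_layer` and `model` objects defined in earlier (collapsed) chunks:

``` r
# Sketch only: `vectorize_layer` and `model` are assumed from earlier chunks.
inputs <- keras_input(shape = c(1), dtype = "string")  # raw text strings
outputs <- inputs |>
  vectorize_layer() |>   # strings -> integer token ids
  model()                # the trained classifier
end_to_end_model <- keras_model(inputs, outputs)
end_to_end_model |> compile(
  loss = "binary_crossentropy", optimizer = "adam", metrics = "accuracy"
)
```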
@@ -455,12 +455,12 @@ end_to_end_model |> evaluate(raw_test_ds)
```

```
## 782/782 - 3s - 4ms/step - accuracy: 0.8594 - loss: 0.0000e+00
## 782/782 - 3s - 4ms/step - accuracy: 0.8630 - loss: 0.0000e+00
```

```
## $accuracy
## [1] 0.85936
## [1] 0.86296
##
## $loss
## [1] 0
74 changes: 37 additions & 37 deletions vignettes/examples/structured_data/imbalanced_classification.Rmd

Large diffs are not rendered by default.

@@ -201,20 +201,20 @@ cat("Target: "); str(y)
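
The chunk producing the printout below is collapsed; it pulls a single (input, target) pair out of the still-unbatched dataset for inspection. A hedged sketch, where the dataset name `train_ds` is an assumption:

``` r
# Sketch only: fetch one element of the tf_dataset to inspect its structure.
batch <- reticulate::iter_next(reticulate::as_iterator(train_ds))
x <- batch[[1]]; y <- batch[[2]]
cat("Input: "); str(x)
cat("Target: "); str(y)
```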

```
## Input: List of 13
## $ age :<tf.Tensor: shape=(), dtype=int32, numpy=59>
## $ sex :<tf.Tensor: shape=(), dtype=int32, numpy=1>
## $ age :<tf.Tensor: shape=(), dtype=int32, numpy=63>
## $ sex :<tf.Tensor: shape=(), dtype=int32, numpy=0>
## $ cp :<tf.Tensor: shape=(), dtype=int32, numpy=4>
## $ trestbps:<tf.Tensor: shape=(), dtype=int32, numpy=164>
## $ chol :<tf.Tensor: shape=(), dtype=int32, numpy=176>
## $ fbs :<tf.Tensor: shape=(), dtype=int32, numpy=1>
## $ restecg :<tf.Tensor: shape=(), dtype=int32, numpy=2>
## $ thalach :<tf.Tensor: shape=(), dtype=int32, numpy=90>
## $ exang :<tf.Tensor: shape=(), dtype=int32, numpy=0>
## $ oldpeak :<tf.Tensor: shape=(), dtype=float32, numpy=1.0>
## $ trestbps:<tf.Tensor: shape=(), dtype=int32, numpy=124>
## $ chol :<tf.Tensor: shape=(), dtype=int32, numpy=197>
## $ fbs :<tf.Tensor: shape=(), dtype=int32, numpy=0>
## $ restecg :<tf.Tensor: shape=(), dtype=int32, numpy=0>
## $ thalach :<tf.Tensor: shape=(), dtype=int32, numpy=136>
## $ exang :<tf.Tensor: shape=(), dtype=int32, numpy=1>
## $ oldpeak :<tf.Tensor: shape=(), dtype=float32, numpy=0.0>
## $ slope :<tf.Tensor: shape=(), dtype=int32, numpy=2>
## $ ca :<tf.Tensor: shape=(), dtype=int32, numpy=2>
## $ thal :<tf.Tensor: shape=(), dtype=string, numpy=b'fixed'>
## Target: <tf.Tensor: shape=(), dtype=int32, numpy=1>
## $ ca :<tf.Tensor: shape=(), dtype=int32, numpy=0>
## $ thal :<tf.Tensor: shape=(), dtype=string, numpy=b'normal'>
## Target: <tf.Tensor: shape=(), dtype=int32, numpy=0>
```

Let's batch the datasets:
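
The batching chunk itself is collapsed in the diff. A minimal sketch, assuming the datasets are named `train_ds` and `val_ds` and a batch size of 32 (both assumptions):

``` r
# Sketch only: dataset names and batch size are assumptions.
library(tfdatasets)
train_ds <- train_ds |> dataset_batch(32)
val_ds <- val_ds |> dataset_batch(32)
```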
@@ -371,7 +371,7 @@ preprocessed_x

```
## tf.Tensor(
## [[0. 0. 1. ... 0. 0. 0.]
## [[0. 0. 0. ... 0. 1. 0.]
## [0. 0. 0. ... 0. 0. 0.]
## [0. 0. 0. ... 0. 0. 0.]
## ...
@@ -463,45 +463,45 @@ training_model |> fit(

```
## Epoch 1/20
## 8/8 - 3s - 326ms/step - accuracy: 0.4315 - loss: 0.7427 - val_accuracy: 0.5333 - val_loss: 0.7048
## 8/8 - 2s - 270ms/step - accuracy: 0.4357 - loss: 0.7353 - val_accuracy: 0.5000 - val_loss: 0.7068
## Epoch 2/20
## 8/8 - 0s - 13ms/step - accuracy: 0.5311 - loss: 0.6984 - val_accuracy: 0.6500 - val_loss: 0.6438
## 8/8 - 0s - 13ms/step - accuracy: 0.5851 - loss: 0.6719 - val_accuracy: 0.6000 - val_loss: 0.6625
## Epoch 3/20
## 8/8 - 0s - 13ms/step - accuracy: 0.6473 - loss: 0.6316 - val_accuracy: 0.6833 - val_loss: 0.5937
## 8/8 - 0s - 13ms/step - accuracy: 0.6100 - loss: 0.6419 - val_accuracy: 0.7000 - val_loss: 0.6241
## Epoch 4/20
## 8/8 - 0s - 13ms/step - accuracy: 0.6639 - loss: 0.6134 - val_accuracy: 0.7500 - val_loss: 0.5524
## 8/8 - 0s - 13ms/step - accuracy: 0.6639 - loss: 0.5998 - val_accuracy: 0.7000 - val_loss: 0.5919
## Epoch 5/20
## 8/8 - 0s - 12ms/step - accuracy: 0.7178 - loss: 0.5820 - val_accuracy: 0.7667 - val_loss: 0.5176
## 8/8 - 0s - 13ms/step - accuracy: 0.7593 - loss: 0.5628 - val_accuracy: 0.7000 - val_loss: 0.5648
## Epoch 6/20
## 8/8 - 0s - 13ms/step - accuracy: 0.7718 - loss: 0.5573 - val_accuracy: 0.7500 - val_loss: 0.4897
## 8/8 - 0s - 13ms/step - accuracy: 0.7635 - loss: 0.5405 - val_accuracy: 0.7000 - val_loss: 0.5414
## Epoch 7/20
## 8/8 - 0s - 12ms/step - accuracy: 0.7718 - loss: 0.5200 - val_accuracy: 0.8167 - val_loss: 0.4640
## 8/8 - 0s - 13ms/step - accuracy: 0.7759 - loss: 0.4975 - val_accuracy: 0.7167 - val_loss: 0.5204
## Epoch 8/20
## 8/8 - 0s - 14ms/step - accuracy: 0.7759 - loss: 0.5068 - val_accuracy: 0.8167 - val_loss: 0.4388
## 8/8 - 0s - 13ms/step - accuracy: 0.7842 - loss: 0.4926 - val_accuracy: 0.7167 - val_loss: 0.5008
## Epoch 9/20
## 8/8 - 0s - 12ms/step - accuracy: 0.8174 - loss: 0.4724 - val_accuracy: 0.8333 - val_loss: 0.4162
## 8/8 - 0s - 13ms/step - accuracy: 0.8133 - loss: 0.4600 - val_accuracy: 0.7167 - val_loss: 0.4849
## Epoch 10/20
## 8/8 - 0s - 12ms/step - accuracy: 0.8050 - loss: 0.4545 - val_accuracy: 0.8167 - val_loss: 0.3960
## 8/8 - 0s - 14ms/step - accuracy: 0.8008 - loss: 0.4498 - val_accuracy: 0.7167 - val_loss: 0.4730
## Epoch 11/20
## 8/8 - 0s - 13ms/step - accuracy: 0.8091 - loss: 0.4514 - val_accuracy: 0.8500 - val_loss: 0.3786
## 8/8 - 0s - 13ms/step - accuracy: 0.8299 - loss: 0.4408 - val_accuracy: 0.7333 - val_loss: 0.4608
## Epoch 12/20
## 8/8 - 0s - 12ms/step - accuracy: 0.8423 - loss: 0.4291 - val_accuracy: 0.8500 - val_loss: 0.3647
## 8/8 - 0s - 13ms/step - accuracy: 0.8008 - loss: 0.4297 - val_accuracy: 0.7667 - val_loss: 0.4508
## Epoch 13/20
## 8/8 - 0s - 12ms/step - accuracy: 0.8465 - loss: 0.4028 - val_accuracy: 0.8667 - val_loss: 0.3499
## 8/8 - 0s - 13ms/step - accuracy: 0.8299 - loss: 0.3921 - val_accuracy: 0.7833 - val_loss: 0.4404
## Epoch 14/20
## 8/8 - 0s - 13ms/step - accuracy: 0.8340 - loss: 0.4037 - val_accuracy: 0.8667 - val_loss: 0.3369
## 8/8 - 0s - 13ms/step - accuracy: 0.8506 - loss: 0.3890 - val_accuracy: 0.8000 - val_loss: 0.4324
## Epoch 15/20
## 8/8 - 0s - 12ms/step - accuracy: 0.8548 - loss: 0.3928 - val_accuracy: 0.8667 - val_loss: 0.3289
## 8/8 - 0s - 13ms/step - accuracy: 0.8382 - loss: 0.3783 - val_accuracy: 0.8333 - val_loss: 0.4243
## Epoch 16/20
## 8/8 - 0s - 12ms/step - accuracy: 0.8589 - loss: 0.3745 - val_accuracy: 0.8667 - val_loss: 0.3206
## 8/8 - 0s - 14ms/step - accuracy: 0.8465 - loss: 0.3651 - val_accuracy: 0.8333 - val_loss: 0.4160
## Epoch 17/20
## 8/8 - 0s - 12ms/step - accuracy: 0.8257 - loss: 0.3820 - val_accuracy: 0.8667 - val_loss: 0.3129
## 8/8 - 0s - 13ms/step - accuracy: 0.8340 - loss: 0.3545 - val_accuracy: 0.8333 - val_loss: 0.4076
## Epoch 18/20
## 8/8 - 0s - 14ms/step - accuracy: 0.8631 - loss: 0.3650 - val_accuracy: 0.8667 - val_loss: 0.3079
## 8/8 - 0s - 13ms/step - accuracy: 0.8589 - loss: 0.3493 - val_accuracy: 0.8500 - val_loss: 0.4017
## Epoch 19/20
## 8/8 - 0s - 12ms/step - accuracy: 0.8382 - loss: 0.3635 - val_accuracy: 0.8667 - val_loss: 0.3024
## 8/8 - 0s - 14ms/step - accuracy: 0.8506 - loss: 0.3227 - val_accuracy: 0.8500 - val_loss: 0.3967
## Epoch 20/20
## 8/8 - 0s - 13ms/step - accuracy: 0.8631 - loss: 0.3524 - val_accuracy: 0.8833 - val_loss: 0.2970
## 8/8 - 0s - 13ms/step - accuracy: 0.8299 - loss: 0.3377 - val_accuracy: 0.8500 - val_loss: 0.3936
```

We quickly get to 80% validation accuracy.
@@ -534,7 +534,7 @@ predictions <- inference_model |> predict(input_dict)
```

```
## 1/1 - 0s - 257ms/step
## 1/1 - 0s - 273ms/step
```
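
`input_dict`, the argument to predict() in the hunk above, is built in a collapsed chunk. A hedged sketch of one way to assemble it, assuming each model input was declared with shape 1 and that the TensorFlow backend is in use; the sample values are hypothetical:

``` r
# Sketch only: hypothetical patient values; shape (1, 1) = one sample, one feature.
sample <- list(
  age = 60L, sex = 1L, cp = 1L, trestbps = 145L, chol = 233L, fbs = 1L,
  restecg = 2L, thalach = 150L, exang = 0L, oldpeak = 2.3, slope = 3L,
  ca = 0L, thal = "fixed"
)
input_dict <- lapply(sample, function(x) {
  keras3::op_convert_to_tensor(array(x, dim = c(1, 1)))
})
```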

``` r
@@ -545,6 +545,6 @@ glue::glue(r"---(
```

```
## This particular patient had a 49.7% probability
## This particular patient had a 42% probability
## of having a heart disease, as evaluated by our model.
```