diff --git a/dev/articles/callbacks.html b/dev/articles/callbacks.html index 7e365d55..2e516983 100644 --- a/dev/articles/callbacks.html +++ b/dev/articles/callbacks.html @@ -225,7 +225,7 @@

Writing a Custom Logger## load_state_dict: function (state_dict) ## on_before_valid: function () ## on_batch_end: function () -## Parent env: <environment: 0x562cc20f6b70> +## Parent env: <environment: 0x55ad1a336908> ## Locked objects: FALSE ## Locked class: FALSE ## Portable: TRUE diff --git a/dev/articles/get_started.html b/dev/articles/get_started.html index dc76eb24..9e296b86 100644 --- a/dev/articles/get_started.html +++ b/dev/articles/get_started.html @@ -235,7 +235,7 @@

Loss #> clone: function (deep = FALSE, ..., replace_values = TRUE) #> Private: #> .__clone_r6__: function (deep = FALSE) -#> Parent env: <environment: 0x55614f56e4e0> +#> Parent env: <environment: 0x55efdf4e0040> #> Locked objects: FALSE #> Locked class: FALSE #> Portable: TRUE diff --git a/dev/articles/internals_pipeop_torch.html b/dev/articles/internals_pipeop_torch.html index bfcd9a7c..a4a378a9 100644 --- a/dev/articles/internals_pipeop_torch.html +++ b/dev/articles/internals_pipeop_torch.html @@ -104,8 +104,8 @@

A torch Primerinput = torch_randn(2, 3) input #> torch_tensor -#> -0.1123 -1.0291 -1.2675 -#> -0.0152 -0.4650 0.9909 +#> -0.4067 -0.0502 -1.6532 +#> -0.1811 -1.5030 -1.8336 #> [ CPUFloatType{2,3} ]

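The `torch_randn()` call in the primer above can be reproduced with the following minimal sketch (assuming the R `torch` package is installed). The entries are drawn randomly, which is exactly why the two sides of the diff show different values.

```r
library(torch)

input <- torch_randn(2, 3)   # 2x3 tensor with entries drawn from N(0, 1)
dim(input)                   # 2 3
```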
An nn_module is constructed from an nn_module_generator. nn_linear is one of the @@ -117,8 +117,8 @@

A torch Primeroutput = module_1(input) output #> torch_tensor -#> -0.3891 0.4440 -0.1373 0.4749 -#> -0.3452 -0.4354 0.3701 -0.2034 +#> 0.1462 1.0915 -0.2286 0.4098 +#> 0.0748 0.7198 -0.8370 1.0574 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

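The construction referenced above can be sketched as follows (a hedged sketch assuming the R `torch` package; the in/out sizes follow the 2x3 input and 2x4 output shown in the hunks):

```r
library(torch)

# nn_linear is an nn_module_generator; calling it constructs an nn_module.
module_1 <- nn_linear(in_features = 3, out_features = 4)

input  <- torch_randn(2, 3)
output <- module_1(input)    # affine map, output shape {2, 4}
```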
A neural network with one (4-unit) hidden layer and two outputs needs the following ingredients

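The ingredient list itself is elided in this diff; a minimal sketch of such a network could look like the following (assuming the R `torch` package; the input size and the ReLU activation are illustrative assumptions, only the 4-unit hidden layer and the two outputs come from the text above):

```r
library(torch)

net <- nn_sequential(
  nn_linear(3, 4),   # input features -> 4-unit hidden layer
  nn_relu(),         # nonlinearity between the layers (assumed here)
  nn_linear(4, 2)    # hidden layer -> two outputs
)

out <- net(torch_randn(2, 3))   # shape {2, 2}
```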
@@ -134,8 +134,8 @@

A torch Primeroutput = softmax(output) output #> torch_tensor -#> 0.3527 0.3659 0.2814 -#> 0.4060 0.3514 0.2427 +#> 0.4469 0.3663 0.1868 +#> 0.4343 0.3757 0.1900 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

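The softmax step above normalizes each row into a probability distribution, which is why every row of the printed tensor sums to 1. A minimal sketch (assuming the R `torch` package):

```r
library(torch)

logits <- torch_randn(2, 3)
probs  <- nnf_softmax(logits, dim = 2)  # normalize across the 3 columns

# Each row is now non-negative and sums to 1.
torch_sum(probs, dim = 2)
```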
We will now show how such a neural network can be represented in mlr3torch.

@@ -170,8 +170,8 @@

Neural Networks as Graphsoutput = po_module_1$train(list(input))[[1]] output #> torch_tensor -#> -0.3891 0.4440 -0.1373 0.4749 -#> -0.3452 -0.4354 0.3701 -0.2034 +#> 0.1462 1.0915 -0.2286 0.4098 +#> 0.0748 0.7198 -0.8370 1.0574 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

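The wrapping step behind `po_module_1` in the hunk above can be sketched like this (assuming the `torch`, `mlr3pipelines`, and `mlr3torch` packages; the `id` is a hypothetical choice):

```r
library(torch)
library(mlr3pipelines)
library(mlr3torch)

module_1 <- nn_linear(3, 4)

# Wrap the nn_module in a PipeOpModule so it can live inside a Graph;
# inputs and outputs are passed around as lists.
po_module_1 <- po("module", id = "linear", module = module_1)

output <- po_module_1$train(list(torch_randn(2, 3)))[[1]]   # shape {2, 4}
```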
Note that we only use $train(), since torch modules do not have anything that maps to the state (it is filled by @@ -196,8 +196,8 @@

Neural Networks as Graphsoutput = module_graph$train(input)[[1]] output #> torch_tensor -#> 0.3527 0.3659 0.2814 -#> 0.4060 0.3514 0.2427 +#> 0.4469 0.3663 0.1868 +#> 0.4343 0.3757 0.1900 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]

While this object makes it easy to perform a forward pass, it does not inherit from nn_module, which is useful for various @@ -245,8 +245,8 @@

Neural Networks as Graphs
 graph_module(input)
 #> torch_tensor
-#>  0.3527  0.3659  0.2814
-#>  0.4060  0.3514  0.2427
+#>  0.4469  0.3663  0.1868
+#>  0.4343  0.3757  0.1900
 #> [ CPUFloatType{2,3} ][ grad_fn = <SoftmaxBackward0> ]
@@ -363,8 +363,8 @@

small_module(input) #> torch_tensor -#> -0.2777 -0.3662 0.8637 0.7430 -#> 0.6441 0.1711 0.0952 -0.3052 +#> -0.6074 1.0815 0.8258 -0.6674 +#> -0.6846 0.9005 0.2698 -0.3013 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ]

@@ -429,9 +429,9 @@

Using ModelDescriptor to small_module(batch$x[[1]]) #> torch_tensor -#> 0.9669 -1.7198 -3.3115 -0.4646 -#> 0.7914 -1.8055 -3.1089 -0.2551 -#> 0.8902 -1.6273 -3.0412 -0.3913 +#> 1.4533 -0.8890 1.1345 0.5390 +#> 1.3696 -0.7698 1.2584 0.3222 +#> 1.3247 -0.7846 1.0718 0.5011 #> [ CPUFloatType{3,4} ][ grad_fn = <AddmmBackward0> ]

The first linear layer that takes “Sepal” input ("linear1") creates a 2x4 tensor (batch size 2, 4 units), @@ -690,14 +689,14 @@

Building more interesting NNsiris_module$graph$pipeops$linear1$.result #> $output #> torch_tensor -#> -3.4269 -0.4929 -2.5176 -3.5474 -#> -3.0501 -0.5905 -2.6176 -3.4752 +#> 2.0518 -1.9597 -0.8946 -4.4291 +#> 1.8982 -1.9851 -0.9103 -4.0154 #> [ CPUFloatType{2,4} ][ grad_fn = <AddmmBackward0> ] iris_module$graph$pipeops$linear3$.result #> $output #> torch_tensor -#> 0.0318 -0.3180 0.0101 0.3317 0.4602 -#> 0.0318 -0.3180 0.0101 0.3317 0.4602 +#> 0.7504 -0.1794 -0.3353 0.4175 -0.0614 +#> 0.7504 -0.1794 -0.3353 0.4175 -0.0614 #> [ CPUFloatType{2,5} ][ grad_fn = <AddmmBackward0> ]

We observe that the po("nn_merge_cat") concatenates these, as expected:

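The concatenation performed by `po("nn_merge_cat")` corresponds to `torch_cat()` along the feature dimension; a minimal `torch` sketch with shapes matching the 2x4 and 2x5 results above:

```r
library(torch)

a <- torch_randn(2, 4)   # stand-in for the linear1 result above
b <- torch_randn(2, 5)   # stand-in for the linear3 result above

merged <- torch_cat(list(a, b), dim = 2)   # shape {2, 9}
```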
@@ -705,8 +704,8 @@

Building more interesting NNsiris_module$graph$pipeops$nn_merge_cat$.result #> $output #> torch_tensor -#> -3.4269 -0.4929 -2.5176 -3.5474 0.0318 -0.3180 0.0101 0.3317 0.4602 -#> -3.0501 -0.5905 -2.6176 -3.4752 0.0318 -0.3180 0.0101 0.3317 0.4602 +#> 2.0518 -1.9597 -0.8946 -4.4291 0.7504 -0.1794 -0.3353 0.4175 -0.0614 +#> 1.8982 -1.9851 -0.9103 -4.0154 0.7504 -0.1794 -0.3353 0.4175 -0.0614 #> [ CPUFloatType{2,9} ][ grad_fn = <CatBackward0> ] diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png index fb17d057..26097ed8 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-37-1.png differ diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png index 1994fea0..fcbb1383 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-42-1.png differ diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png index ed7d24dc..5007a0a5 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-46-1.png differ diff --git a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png index ff46e5df..ffc988ab 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-48-1.png differ diff --git 
a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png index 1a22254a..0de93953 100644 Binary files a/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png and b/dev/articles/internals_pipeop_torch_files/figure-html/unnamed-chunk-50-1.png differ diff --git a/dev/articles/lazy_tensor.html b/dev/articles/lazy_tensor.html index b821908c..c3bdc94a 100644 --- a/dev/articles/lazy_tensor.html +++ b/dev/articles/lazy_tensor.html @@ -386,7 +386,7 @@

Digging Into Internals#> <DataDescriptor: 1 ops> #> * dataset_shapes: [x: (NA,1)] #> * input_map: (x) -> Graph -#> * pointer: nop.761ca5.x.output +#> * pointer: nop.9b46df.x.output #> * shape: [(NA,1)]

The printed output of the data descriptor informs us about: