Commit 16d36d0: auto-generating sphinx docs

pytorchbot committed Jan 14, 2025
1 parent dad4feb commit 16d36d0
Showing 34 changed files with 581 additions and 581 deletions.
Binary file modified main/_downloads/150528e38f6816824f1e81ed67476a9f/export.zip
16 changes: 8 additions & 8 deletions main/_sources/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@

Computation times
=================
- **02:25.782** total execution time for 11 files **from all galleries**:
+ **02:30.472** total execution time for 11 files **from all galleries**:

.. container::

@@ -33,22 +33,22 @@ Computation times
- Time
- Mem (MB)
* - :ref:`sphx_glr_tutorials_tensorclass_fashion.py` (``reference/generated/tutorials/tensorclass_fashion.py``)
-   - 01:00.236
+   - 01:02.557
- 0.0
* - :ref:`sphx_glr_tutorials_data_fashion.py` (``reference/generated/tutorials/data_fashion.py``)
-   - 00:54.475
+   - 00:56.848
- 0.0
* - :ref:`sphx_glr_tutorials_tensordict_module.py` (``reference/generated/tutorials/tensordict_module.py``)
-   - 00:16.965
+   - 00:16.921
- 0.0
* - :ref:`sphx_glr_tutorials_streamed_tensordict.py` (``reference/generated/tutorials/streamed_tensordict.py``)
-   - 00:11.020
+   - 00:11.021
- 0.0
* - :ref:`sphx_glr_tutorials_tensorclass_imagenet.py` (``reference/generated/tutorials/tensorclass_imagenet.py``)
-   - 00:01.632
+   - 00:01.659
- 0.0
* - :ref:`sphx_glr_tutorials_export.py` (``reference/generated/tutorials/export.py``)
-   - 00:01.425
+   - 00:01.437
- 0.0
* - :ref:`sphx_glr_tutorials_tensordict_keys.py` (``reference/generated/tutorials/tensordict_keys.py``)
- 00:00.009
@@ -57,7 +57,7 @@
- 00:00.008
- 0.0
* - :ref:`sphx_glr_tutorials_tensordict_slicing.py` (``reference/generated/tutorials/tensordict_slicing.py``)
-   - 00:00.004
+   - 00:00.005
- 0.0
* - :ref:`sphx_glr_tutorials_tensordict_memory.py` (``reference/generated/tutorials/tensordict_memory.py``)
- 00:00.004
226 changes: 113 additions & 113 deletions main/_sources/tutorials/data_fashion.rst.txt
@@ -423,164 +423,164 @@ adjust how we unpack the data to the more explicit key-based retrieval offered b
is_shared=False)
Epoch 1
-------------------------
- loss: 2.294307 [ 0/60000]
- loss: 2.281618 [ 6400/60000]
- loss: 2.252174 [12800/60000]
- loss: 2.252936 [19200/60000]
- loss: 2.225810 [25600/60000]
- loss: 2.196889 [32000/60000]
- loss: 2.205427 [38400/60000]
- loss: 2.159450 [44800/60000]
- loss: 2.156655 [51200/60000]
- loss: 2.119258 [57600/60000]
+ loss: 2.300518 [ 0/60000]
+ loss: 2.285205 [ 6400/60000]
+ loss: 2.263829 [12800/60000]
+ loss: 2.261878 [19200/60000]
+ loss: 2.245409 [25600/60000]
+ loss: 2.211373 [32000/60000]
+ loss: 2.224526 [38400/60000]
+ loss: 2.188293 [44800/60000]
+ loss: 2.190315 [51200/60000]
+ loss: 2.155622 [57600/60000]
Test Error:
- Accuracy: 46.3%, Avg loss: 2.110689
+ Accuracy: 43.8%, Avg loss: 2.148978
Epoch 2
-------------------------
- loss: 2.125491 [ 0/60000]
- loss: 2.111995 [ 6400/60000]
- loss: 2.033796 [12800/60000]
- loss: 2.059668 [19200/60000]
- loss: 1.989268 [25600/60000]
- loss: 1.936978 [32000/60000]
- loss: 1.959576 [38400/60000]
- loss: 1.862608 [44800/60000]
- loss: 1.872021 [51200/60000]
- loss: 1.797924 [57600/60000]
+ loss: 2.163145 [ 0/60000]
+ loss: 2.145517 [ 6400/60000]
+ loss: 2.091258 [12800/60000]
+ loss: 2.107945 [19200/60000]
+ loss: 2.054260 [25600/60000]
+ loss: 1.992637 [32000/60000]
+ loss: 2.025965 [38400/60000]
+ loss: 1.946126 [44800/60000]
+ loss: 1.960029 [51200/60000]
+ loss: 1.879287 [57600/60000]
Test Error:
- Accuracy: 53.7%, Avg loss: 1.789212
+ Accuracy: 58.1%, Avg loss: 1.879164
Epoch 3
-------------------------
- loss: 1.834876 [ 0/60000]
- loss: 1.799562 [ 6400/60000]
- loss: 1.658334 [12800/60000]
- loss: 1.716297 [19200/60000]
- loss: 1.600255 [25600/60000]
- loss: 1.571958 [32000/60000]
- loss: 1.586597 [38400/60000]
- loss: 1.484927 [44800/60000]
- loss: 1.517752 [51200/60000]
- loss: 1.421714 [57600/60000]
+ loss: 1.918228 [ 0/60000]
+ loss: 1.876745 [ 6400/60000]
+ loss: 1.766939 [12800/60000]
+ loss: 1.805345 [19200/60000]
+ loss: 1.696165 [25600/60000]
+ loss: 1.646921 [32000/60000]
+ loss: 1.673857 [38400/60000]
+ loss: 1.578202 [44800/60000]
+ loss: 1.611578 [51200/60000]
+ loss: 1.495029 [57600/60000]
Test Error:
- Accuracy: 59.6%, Avg loss: 1.435836
+ Accuracy: 62.9%, Avg loss: 1.516030
Epoch 4
-------------------------
- loss: 1.509605 [ 0/60000]
- loss: 1.479262 [ 6400/60000]
- loss: 1.314702 [12800/60000]
- loss: 1.404015 [19200/60000]
- loss: 1.285816 [25600/60000]
- loss: 1.299244 [32000/60000]
- loss: 1.307629 [38400/60000]
- loss: 1.230591 [44800/60000]
- loss: 1.270229 [51200/60000]
- loss: 1.182587 [57600/60000]
+ loss: 1.586920 [ 0/60000]
+ loss: 1.543589 [ 6400/60000]
+ loss: 1.404667 [12800/60000]
+ loss: 1.463542 [19200/60000]
+ loss: 1.351404 [25600/60000]
+ loss: 1.343882 [32000/60000]
+ loss: 1.359334 [38400/60000]
+ loss: 1.291013 [44800/60000]
+ loss: 1.330518 [51200/60000]
+ loss: 1.217513 [57600/60000]
Test Error:
- Accuracy: 62.5%, Avg loss: 1.203891
+ Accuracy: 64.4%, Avg loss: 1.247850
Epoch 5
-------------------------
- loss: 1.280522 [ 0/60000]
- loss: 1.268702 [ 6400/60000]
- loss: 1.090299 [12800/60000]
- loss: 1.209925 [19200/60000]
- loss: 1.084458 [25600/60000]
- loss: 1.124142 [32000/60000]
- loss: 1.139163 [38400/60000]
- loss: 1.074788 [44800/60000]
- loss: 1.117219 [51200/60000]
- loss: 1.040546 [57600/60000]
+ loss: 1.324667 [ 0/60000]
+ loss: 1.303330 [ 6400/60000]
+ loss: 1.149066 [12800/60000]
+ loss: 1.236917 [19200/60000]
+ loss: 1.118096 [25600/60000]
+ loss: 1.140212 [32000/60000]
+ loss: 1.161438 [38400/60000]
+ loss: 1.108778 [44800/60000]
+ loss: 1.151741 [51200/60000]
+ loss: 1.051913 [57600/60000]
Test Error:
- Accuracy: 64.2%, Avg loss: 1.058886
+ Accuracy: 65.3%, Avg loss: 1.078721
- TensorDict training done! time: 8.6153 s
+ TensorDict training done! time: 8.6764 s
Epoch 1
-------------------------
- loss: 2.308265 [ 0/60000]
- loss: 2.293171 [ 6400/60000]
- loss: 2.271529 [12800/60000]
- loss: 2.267321 [19200/60000]
- loss: 2.235325 [25600/60000]
- loss: 2.219271 [32000/60000]
- loss: 2.215106 [38400/60000]
- loss: 2.186461 [44800/60000]
- loss: 2.184157 [51200/60000]
- loss: 2.144828 [57600/60000]
+ loss: 2.304549 [ 0/60000]
+ loss: 2.291780 [ 6400/60000]
+ loss: 2.272309 [12800/60000]
+ loss: 2.269588 [19200/60000]
+ loss: 2.252003 [25600/60000]
+ loss: 2.226665 [32000/60000]
+ loss: 2.237633 [38400/60000]
+ loss: 2.203157 [44800/60000]
+ loss: 2.197140 [51200/60000]
+ loss: 2.167986 [57600/60000]
Test Error:
- Accuracy: 47.1%, Avg loss: 2.140672
+ Accuracy: 43.5%, Avg loss: 2.161423
Epoch 2
-------------------------
- loss: 2.149040 [ 0/60000]
- loss: 2.142530 [ 6400/60000]
- loss: 2.075416 [12800/60000]
- loss: 2.099092 [19200/60000]
- loss: 2.033061 [25600/60000]
- loss: 1.982826 [32000/60000]
- loss: 1.996649 [38400/60000]
- loss: 1.917044 [44800/60000]
- loss: 1.925382 [51200/60000]
- loss: 1.846381 [57600/60000]
+ loss: 2.167836 [ 0/60000]
+ loss: 2.157421 [ 6400/60000]
+ loss: 2.102585 [12800/60000]
+ loss: 2.120195 [19200/60000]
+ loss: 2.073900 [25600/60000]
+ loss: 2.012530 [32000/60000]
+ loss: 2.048544 [38400/60000]
+ loss: 1.967728 [44800/60000]
+ loss: 1.972507 [51200/60000]
+ loss: 1.902655 [57600/60000]
Test Error:
- Accuracy: 51.5%, Avg loss: 1.846959
+ Accuracy: 54.1%, Avg loss: 1.901883
Epoch 3
-------------------------
- loss: 1.878904 [ 0/60000]
- loss: 1.854604 [ 6400/60000]
- loss: 1.725572 [12800/60000]
- loss: 1.773118 [19200/60000]
- loss: 1.660257 [25600/60000]
- loss: 1.622359 [32000/60000]
- loss: 1.631321 [38400/60000]
- loss: 1.536023 [44800/60000]
- loss: 1.566329 [51200/60000]
- loss: 1.465487 [57600/60000]
+ loss: 1.931819 [ 0/60000]
+ loss: 1.901266 [ 6400/60000]
+ loss: 1.789387 [12800/60000]
+ loss: 1.828578 [19200/60000]
+ loss: 1.726820 [25600/60000]
+ loss: 1.672925 [32000/60000]
+ loss: 1.702863 [38400/60000]
+ loss: 1.598882 [44800/60000]
+ loss: 1.624720 [51200/60000]
+ loss: 1.515657 [57600/60000]
Test Error:
- Accuracy: 59.7%, Avg loss: 1.482586
+ Accuracy: 60.5%, Avg loss: 1.535713
Epoch 4
-------------------------
- loss: 1.546394 [ 0/60000]
- loss: 1.523421 [ 6400/60000]
- loss: 1.364943 [12800/60000]
- loss: 1.442173 [19200/60000]
- loss: 1.334812 [25600/60000]
- loss: 1.330345 [32000/60000]
- loss: 1.337560 [38400/60000]
- loss: 1.261183 [44800/60000]
- loss: 1.298624 [51200/60000]
- loss: 1.215231 [57600/60000]
+ loss: 1.601846 [ 0/60000]
+ loss: 1.563907 [ 6400/60000]
+ loss: 1.416328 [12800/60000]
+ loss: 1.481683 [19200/60000]
+ loss: 1.372279 [25600/60000]
+ loss: 1.364084 [32000/60000]
+ loss: 1.382103 [38400/60000]
+ loss: 1.300883 [44800/60000]
+ loss: 1.334529 [51200/60000]
+ loss: 1.234432 [57600/60000]
Test Error:
- Accuracy: 63.4%, Avg loss: 1.232315
+ Accuracy: 63.0%, Avg loss: 1.263073
Epoch 5
-------------------------
- loss: 1.300956 [ 0/60000]
- loss: 1.294875 [ 6400/60000]
- loss: 1.123130 [12800/60000]
- loss: 1.234035 [19200/60000]
- loss: 1.123085 [25600/60000]
- loss: 1.139949 [32000/60000]
- loss: 1.157534 [38400/60000]
- loss: 1.090237 [44800/60000]
- loss: 1.131326 [51200/60000]
- loss: 1.065410 [57600/60000]
+ loss: 1.340522 [ 0/60000]
+ loss: 1.319725 [ 6400/60000]
+ loss: 1.155922 [12800/60000]
+ loss: 1.256405 [19200/60000]
+ loss: 1.137213 [25600/60000]
+ loss: 1.164255 [32000/60000]
+ loss: 1.188119 [38400/60000]
+ loss: 1.118206 [44800/60000]
+ loss: 1.153232 [51200/60000]
+ loss: 1.074800 [57600/60000]
Test Error:
- Accuracy: 65.1%, Avg loss: 1.075023
+ Accuracy: 64.8%, Avg loss: 1.095957
- Training done! time: 33.5233 s
+ Training done! time: 35.6213 s
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** (0 minutes 54.475 seconds)
+ **Total running time of the script:** (0 minutes 56.848 seconds)


.. _sphx_glr_download_tutorials_data_fashion.py:
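The two timings above ("TensorDict training done" vs. "Training done") come from the same tutorial run twice: once feeding batches through TensorDict key lookups, once through a classic DataLoader with tuple unpacking. A minimal sketch of the key-based pattern, assuming FashionMNIST-shaped tensors and the key names ``images``/``targets`` (illustrative, not necessarily the tutorial's exact code):

.. code-block:: python

    import torch
    import torch.nn as nn
    from tensordict import TensorDict

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optim = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Hypothetical stand-in for the FashionMNIST training set, stacked
    # into a single TensorDict with one leading batch dimension.
    data = TensorDict(
        {
            "images": torch.randn(60000, 1, 28, 28),
            "targets": torch.randint(10, (60000,)),
        },
        batch_size=[60000],
    )

    for batch in data.split(64):
        # Key-based retrieval instead of `images, targets = batch`.
        pred = model(batch["images"])
        loss = loss_fn(pred, batch["targets"])
        optim.zero_grad()
        loss.backward()
        optim.step()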
18 changes: 9 additions & 9 deletions main/_sources/tutorials/export.rst.txt
@@ -141,7 +141,7 @@ Let us run this model and see what the output looks like:

.. code-block:: none
- (tensor([[0.0000, 0.0961, 0.0000, 0.0000]], grad_fn=<ReluBackward0>), tensor([[ 0.4136, -0.0047, 0.4026, 0.1457]], grad_fn=<AddmmBackward0>), tensor([[ 0.4136, -0.0047]], grad_fn=<SplitBackward0>), tensor([[1.2712, 1.0940]], grad_fn=<ClampMinBackward0>), tensor([[ 0.4136, -0.0047]], grad_fn=<SplitBackward0>))
+ (tensor([[0.0000, 1.0554, 0.3683, 0.3595]], grad_fn=<ReluBackward0>), tensor([[0.2891, 0.4447, 0.4128, 0.3370]], grad_fn=<AddmmBackward0>), tensor([[0.2891, 0.4447]], grad_fn=<SplitBackward0>), tensor([[1.2785, 1.2246]], grad_fn=<ClampMinBackward0>), tensor([[0.2891, 0.4447]], grad_fn=<SplitBackward0>))
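The tuple above is what a tensordict module returns when called with keyword inputs: one tensor per output key. Its shapes are consistent with two linear layers plus a ReLU feeding a ``NormalParamExtractor`` that splits the last four features into a ``(loc, scale)`` pair; the fifth tensor, identical to ``loc``, suggests a final deterministic-sample step omitted here. A hedged reconstruction (input width and key names are guesses from the printed shapes, not the tutorial's exact code):

.. code-block:: python

    import torch
    from torch import nn
    from tensordict.nn import (
        NormalParamExtractor,
        TensorDictModule as Mod,
        TensorDictSequential as Seq,
    )

    model = Seq(
        # Linear + ReLU producing the first printed tensor ([1, 4]).
        Mod(nn.Sequential(nn.Linear(3, 4), nn.ReLU()),
            in_keys=["x"], out_keys=["hidden"]),
        # Second linear producing the [1, 4] parameter tensor.
        Mod(nn.Linear(4, 4), in_keys=["hidden"], out_keys=["params"]),
        # Split into loc/scale, each [1, 2].
        Mod(NormalParamExtractor(),
            in_keys=["params"], out_keys=["loc", "scale"]),
    )

    # Keyword-call dispatch returns (hidden, params, loc, scale) as a tuple.
    print(model(x=torch.randn(1, 3)))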
@@ -266,8 +266,8 @@ This module can be run exactly like our original module (with a lower overhead):

.. code-block:: none
- Time for TDModule: 700.47 micro-seconds
- Time for exported module: 357.15 micro-seconds
+ Time for TDModule: 676.39 micro-seconds
+ Time for exported module: 361.92 micro-seconds
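The gap is call overhead: ``torch.export`` traces the TensorDict bookkeeping away, leaving a flat FX graph module that accepts the same keyword input. A sketch, reusing the hypothetical ``model`` from the previous snippet:

.. code-block:: python

    import torch
    from torch.export import export

    x = torch.randn(1, 3)
    exported = export(model, args=(), kwargs={"x": x})

    # exported.module() is a plain torch.fx.GraphModule: no TensorDict
    # dispatch at call time, hence the lower latency measured above.
    fx_model = exported.module()
    print(fx_model(x=x))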
@@ -297,7 +297,7 @@ and the FX graph:
relu: "f32[1, 4]" = torch.ops.aten.relu.default(linear); linear = None
linear_1: "f32[1, 4]" = torch.ops.aten.linear.default(relu, p_l__args___0_module_2_module_weight, p_l__args___0_module_2_module_bias); p_l__args___0_module_2_module_weight = p_l__args___0_module_2_module_bias = None
- # File: /pytorch/tensordict/tensordict/nn/distributions/continuous.py:130 in forward, code: loc, scale = tensor.chunk(2, -1)
+ # File: /pytorch/tensordict/tensordict/nn/distributions/continuous.py:131 in forward, code: loc, scale = tensor.chunk(2, -1)
chunk = torch.ops.aten.chunk.default(linear_1, 2, -1)
getitem: "f32[1, 2]" = chunk[0]
getitem_1: "f32[1, 2]" = chunk[1]; chunk = None
@@ -307,7 +307,7 @@
softplus: "f32[1, 2]" = torch.ops.aten.softplus.default(add); add = None
add_1: "f32[1, 2]" = torch.ops.aten.add.Tensor(softplus, 0.01); softplus = None
- # File: /pytorch/tensordict/tensordict/nn/distributions/continuous.py:131 in forward, code: scale = self.scale_mapping(scale).clamp_min(self.scale_lb)
+ # File: /pytorch/tensordict/tensordict/nn/distributions/continuous.py:132 in forward, code: scale = self.scale_mapping(scale).clamp_min(self.scale_lb)
clamp_min: "f32[1, 2]" = torch.ops.aten.clamp_min.default(add_1, 0.0001); add_1 = None
# File: /pytorch/tensordict/env/lib/python3.10/site-packages/torch/distributions/utils.py:57 in broadcast_all, code: return torch.broadcast_tensors(*values)
@@ -323,7 +323,7 @@
relu: "f32[1, 4]" = torch.ops.aten.relu.default(linear); linear = None
linear_1: "f32[1, 4]" = torch.ops.aten.linear.default(relu, p_l__args___0_module_2_module_weight, p_l__args___0_module_2_module_bias); p_l__args___0_module_2_module_weight = p_l__args___0_module_2_module_bias = None
- # File: /pytorch/tensordict/tensordict/nn/distributions/continuous.py:130 in forward, code: loc, scale = tensor.chunk(2, -1)
+ # File: /pytorch/tensordict/tensordict/nn/distributions/continuous.py:131 in forward, code: loc, scale = tensor.chunk(2, -1)
chunk = torch.ops.aten.chunk.default(linear_1, 2, -1)
getitem: "f32[1, 2]" = chunk[0]
getitem_1: "f32[1, 2]" = chunk[1]; chunk = None
@@ -333,7 +333,7 @@
softplus: "f32[1, 2]" = torch.ops.aten.softplus.default(add); add = None
add_1: "f32[1, 2]" = torch.ops.aten.add.Tensor(softplus, 0.01); softplus = None
- # File: /pytorch/tensordict/tensordict/nn/distributions/continuous.py:131 in forward, code: scale = self.scale_mapping(scale).clamp_min(self.scale_lb)
+ # File: /pytorch/tensordict/tensordict/nn/distributions/continuous.py:132 in forward, code: scale = self.scale_mapping(scale).clamp_min(self.scale_lb)
clamp_min: "f32[1, 2]" = torch.ops.aten.clamp_min.default(add_1, 0.0001); add_1 = None
# File: /pytorch/tensordict/env/lib/python3.10/site-packages/torch/distributions/utils.py:57 in broadcast_all, code: return torch.broadcast_tensors(*values)
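The corrected line numbers both point into ``NormalParamExtractor``'s forward. Read off the traced ops, the computation is: chunk the features into ``loc`` and a raw scale, then pass the scale through a shifted softplus with a small offset before flooring it at ``scale_lb``. A functional transcription (the shift constant is left as a parameter, since only its traced result appears in the graph):

.. code-block:: python

    import torch
    import torch.nn.functional as F

    def extract_normal_params(tensor, shift, eps=0.01, scale_lb=1e-4):
        # continuous.py:131 -- loc, scale = tensor.chunk(2, -1)
        loc, scale = tensor.chunk(2, -1)
        # continuous.py:132 -- scale = self.scale_mapping(scale).clamp_min(self.scale_lb)
        # scale_mapping as traced above: softplus(x + shift) + eps
        scale = (F.softplus(scale + shift) + eps).clamp_min(scale_lb)
        return loc, scale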
@@ -450,7 +450,7 @@ distribution:

.. code-block:: none
- tensor([[ 0.4136, -0.0047]], grad_fn=<SplitBackward0>)
+ tensor([[0.2891, 0.4447]], grad_fn=<SplitBackward0>)
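Note that the printed tensor equals the ``loc`` head from the model output above: with deterministic sampling, drawing from the resulting Normal returns its mean. In plain ``torch.distributions`` terms (the tutorial wraps this in tensordict's probabilistic modules):

.. code-block:: python

    import torch

    loc = torch.tensor([[0.2891, 0.4447]])
    scale = torch.tensor([[1.2785, 1.2246]])  # values printed earlier
    dist = torch.distributions.Normal(loc, scale)
    print(dist.mean)  # deterministic "sample": exactly `loc`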
@@ -657,7 +657,7 @@ Next steps and further reading

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** (0 minutes 1.425 seconds)
+ **Total running time of the script:** (0 minutes 1.437 seconds)


.. _sphx_glr_download_tutorials_export.py:
