Automated tutorials push
pytorchbot committed Jul 15, 2024
1 parent 7ba3c4e commit 7d3021a
Showing 182 changed files with 12,258 additions and 13,322 deletions.
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "c338aa9a",
+"id": "c9887eb2",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
-"id": "1425faf3",
+"id": "97e989fc",
"metadata": {},
"source": [
"\n",
@@ -31,7 +31,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "bf671db4",
+"id": "693f57f4",
"metadata": {},
"outputs": [],
"source": [
@@ -47,7 +47,7 @@
},
{
"cell_type": "markdown",
-"id": "0bef4b2d",
+"id": "f05af391",
"metadata": {},
"source": [
"\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "511ca8b5",
+"id": "7bf7fbaa",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
-"id": "c43f4f6d",
+"id": "7369a4bc",
"metadata": {},
"source": [
"\n",
4 changes: 2 additions & 2 deletions _downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb
@@ -35,7 +35,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "b4d1f47d",
+"id": "752a2985",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
-"id": "cb97e0b6",
+"id": "34368a58",
"metadata": {},
"source": [
"\n",
4 changes: 2 additions & 2 deletions _downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb
@@ -37,7 +37,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "dccff559",
+"id": "453759d9",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
-"id": "c9b546a5",
+"id": "c7acdec3",
"metadata": {},
"source": [
"\n",
@@ -35,7 +35,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "895519f4",
+"id": "d42cde5b",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
-"id": "32e7ef55",
+"id": "69464519",
"metadata": {},
"source": [
"\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "0e2fd20a",
+"id": "05d049be",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
-"id": "496e5d03",
+"id": "2a29a41c",
"metadata": {},
"source": [
"\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "55e9f783",
+"id": "ffaf9618",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
-"id": "ba1af94e",
+"id": "cb906f49",
"metadata": {},
"source": [
"\n",
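Every notebook hunk in this push follows the same two-line pattern: an auto-generated 8-character hexadecimal cell ``id`` is swapped for a fresh one (for example ``c338aa9a`` becomes ``c9887eb2``), which is why an automated rebuild touches so many ``.ipynb`` files without changing any real content. As a sketch of how ids of that shape can be produced (an illustration, not the pipeline's actual generator):

```python
import re
import uuid

def new_cell_id() -> str:
    # Cell ids in the hunks above are 8 lowercase hex characters;
    # truncating a fresh UUID's hex digest reproduces that shape.
    return uuid.uuid4().hex[:8]

cell_id = new_cell_id()
assert re.fullmatch(r"[0-9a-f]{8}", cell_id)
print(cell_id)  # different on every run, e.g. "c9887eb2"
```

Because the generator is random, rebuilding the notebooks from source regenerates every id, producing exactly the kind of content-free churn shown above.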
Binary file modified _images/sphx_glr_coding_ddpg_001.png
Binary file modified _images/sphx_glr_dqn_with_rnn_tutorial_001.png
Binary file modified _images/sphx_glr_neural_style_tutorial_004.png
Binary file modified _images/sphx_glr_reinforcement_ppo_001.png
Binary file modified _images/sphx_glr_reinforcement_q_learning_001.png
Binary file modified _images/sphx_glr_spatial_transformer_tutorial_001.png
Binary file modified _images/sphx_glr_torchvision_tutorial_002.png
42 changes: 21 additions & 21 deletions _sources/advanced/coding_ddpg.rst.txt
@@ -1632,26 +1632,26 @@ modules we need.
0%| | 0/10000 [00:00<?, ?it/s]
-8%|8 | 800/10000 [00:00<00:08, 1071.38it/s]
-16%|#6 | 1600/10000 [00:03<00:20, 407.89it/s]
-24%|##4 | 2400/10000 [00:04<00:13, 555.14it/s]
-32%|###2 | 3200/10000 [00:05<00:10, 672.45it/s]
-40%|#### | 4000/10000 [00:05<00:07, 758.59it/s]
-48%|####8 | 4800/10000 [00:06<00:06, 823.20it/s]
-56%|#####6 | 5600/10000 [00:07<00:05, 863.77it/s]
-reward: -2.09 (r0 = -3.53), reward eval: reward: 0.00, reward normalized=-2.69/6.20, grad norm= 37.11, loss_value= 323.10, loss_actor= 14.83, target value: -16.38: 56%|#####6 | 5600/10000 [00:08<00:05, 863.77it/s]
-reward: -2.09 (r0 = -3.53), reward eval: reward: 0.00, reward normalized=-2.69/6.20, grad norm= 37.11, loss_value= 323.10, loss_actor= 14.83, target value: -16.38: 64%|######4 | 6400/10000 [00:09<00:05, 650.90it/s]
-reward: -0.14 (r0 = -3.53), reward eval: reward: 0.00, reward normalized=-2.66/5.98, grad norm= 175.37, loss_value= 309.10, loss_actor= 13.68, target value: -16.84: 64%|######4 | 6400/10000 [00:10<00:05, 650.90it/s]
-reward: -0.14 (r0 = -3.53), reward eval: reward: 0.00, reward normalized=-2.66/5.98, grad norm= 175.37, loss_value= 309.10, loss_actor= 13.68, target value: -16.84: 72%|#######2 | 7200/10000 [00:12<00:05, 492.74it/s]
-reward: -3.13 (r0 = -3.53), reward eval: reward: 0.00, reward normalized=-2.46/6.23, grad norm= 93.87, loss_value= 320.85, loss_actor= 15.84, target value: -15.84: 72%|#######2 | 7200/10000 [00:12<00:05, 492.74it/s]
-reward: -3.13 (r0 = -3.53), reward eval: reward: 0.00, reward normalized=-2.46/6.23, grad norm= 93.87, loss_value= 320.85, loss_actor= 15.84, target value: -15.84: 80%|######## | 8000/10000 [00:14<00:04, 423.87it/s]
-reward: -4.76 (r0 = -3.53), reward eval: reward: 0.00, reward normalized=-2.58/5.59, grad norm= 122.85, loss_value= 264.06, loss_actor= 18.60, target value: -16.97: 80%|######## | 8000/10000 [00:15<00:04, 423.87it/s]
-reward: -4.76 (r0 = -3.53), reward eval: reward: 0.00, reward normalized=-2.58/5.59, grad norm= 122.85, loss_value= 264.06, loss_actor= 18.60, target value: -16.97: 88%|########8 | 8800/10000 [00:16<00:03, 388.78it/s]
-reward: -5.06 (r0 = -3.53), reward eval: reward: -3.39, reward normalized=-3.02/5.46, grad norm= 64.96, loss_value= 254.81, loss_actor= 16.05, target value: -19.83: 88%|########8 | 8800/10000 [00:19<00:03, 388.78it/s]
-reward: -5.06 (r0 = -3.53), reward eval: reward: -3.39, reward normalized=-3.02/5.46, grad norm= 64.96, loss_value= 254.81, loss_actor= 16.05, target value: -19.83: 96%|#########6| 9600/10000 [00:21<00:01, 295.31it/s]
-reward: -2.03 (r0 = -3.53), reward eval: reward: -3.39, reward normalized=-2.73/5.39, grad norm= 61.65, loss_value= 302.91, loss_actor= 15.41, target value: -19.70: 96%|#########6| 9600/10000 [00:21<00:01, 295.31it/s]
-reward: -2.03 (r0 = -3.53), reward eval: reward: -3.39, reward normalized=-2.73/5.39, grad norm= 61.65, loss_value= 302.91, loss_actor= 15.41, target value: -19.70: : 10400it [00:24, 270.03it/s]
-reward: -3.56 (r0 = -3.53), reward eval: reward: -3.39, reward normalized=-3.00/4.17, grad norm= 83.71, loss_value= 141.01, loss_actor= 19.28, target value: -21.77: : 10400it [00:25, 270.03it/s]
+8%|8 | 800/10000 [00:00<00:08, 1081.90it/s]
+16%|#6 | 1600/10000 [00:03<00:20, 409.91it/s]
+24%|##4 | 2400/10000 [00:04<00:13, 556.99it/s]
+32%|###2 | 3200/10000 [00:05<00:10, 672.82it/s]
+40%|#### | 4000/10000 [00:05<00:07, 758.91it/s]
+48%|####8 | 4800/10000 [00:06<00:06, 823.27it/s]
+56%|#####6 | 5600/10000 [00:07<00:05, 861.64it/s]
+reward: -2.39 (r0 = -1.80), reward eval: reward: -0.00, reward normalized=-2.64/6.17, grad norm= 60.66, loss_value= 362.98, loss_actor= 15.39, target value: -16.75: 56%|#####6 | 5600/10000 [00:08<00:05, 861.64it/s]
+reward: -2.39 (r0 = -1.80), reward eval: reward: -0.00, reward normalized=-2.64/6.17, grad norm= 60.66, loss_value= 362.98, loss_actor= 15.39, target value: -16.75: 64%|######4 | 6400/10000 [00:09<00:05, 640.70it/s]
+reward: -0.18 (r0 = -1.80), reward eval: reward: -0.00, reward normalized=-1.78/5.68, grad norm= 140.91, loss_value= 263.30, loss_actor= 12.55, target value: -10.25: 64%|######4 | 6400/10000 [00:10<00:05, 640.70it/s]
+reward: -0.18 (r0 = -1.80), reward eval: reward: -0.00, reward normalized=-1.78/5.68, grad norm= 140.91, loss_value= 263.30, loss_actor= 12.55, target value: -10.25: 72%|#######2 | 7200/10000 [00:12<00:05, 493.25it/s]
+reward: -1.33 (r0 = -1.80), reward eval: reward: -0.00, reward normalized=-2.30/5.80, grad norm= 88.53, loss_value= 234.86, loss_actor= 10.93, target value: -14.15: 72%|#######2 | 7200/10000 [00:12<00:05, 493.25it/s]
+reward: -1.33 (r0 = -1.80), reward eval: reward: -0.00, reward normalized=-2.30/5.80, grad norm= 88.53, loss_value= 234.86, loss_actor= 10.93, target value: -14.15: 80%|######## | 8000/10000 [00:14<00:04, 425.82it/s]
+reward: -4.81 (r0 = -1.80), reward eval: reward: -0.00, reward normalized=-2.33/4.87, grad norm= 66.83, loss_value= 191.09, loss_actor= 17.33, target value: -15.26: 80%|######## | 8000/10000 [00:15<00:04, 425.82it/s]
+reward: -4.81 (r0 = -1.80), reward eval: reward: -0.00, reward normalized=-2.33/4.87, grad norm= 66.83, loss_value= 191.09, loss_actor= 17.33, target value: -15.26: 88%|########8 | 8800/10000 [00:16<00:03, 389.80it/s]
+reward: -5.27 (r0 = -1.80), reward eval: reward: -5.60, reward normalized=-2.75/5.32, grad norm= 92.09, loss_value= 224.51, loss_actor= 14.97, target value: -18.42: 88%|########8 | 8800/10000 [00:19<00:03, 389.80it/s]
+reward: -5.27 (r0 = -1.80), reward eval: reward: -5.60, reward normalized=-2.75/5.32, grad norm= 92.09, loss_value= 224.51, loss_actor= 14.97, target value: -18.42: 96%|#########6| 9600/10000 [00:21<00:01, 288.58it/s]
+reward: -4.15 (r0 = -1.80), reward eval: reward: -5.60, reward normalized=-2.69/4.98, grad norm= 116.94, loss_value= 181.23, loss_actor= 15.28, target value: -19.73: 96%|#########6| 9600/10000 [00:22<00:01, 288.58it/s]
+reward: -4.15 (r0 = -1.80), reward eval: reward: -5.60, reward normalized=-2.69/4.98, grad norm= 116.94, loss_value= 181.23, loss_actor= 15.28, target value: -19.73: : 10400it [00:24, 267.97it/s]
+reward: -4.63 (r0 = -1.80), reward eval: reward: -5.60, reward normalized=-3.42/4.15, grad norm= 91.74, loss_value= 184.88, loss_actor= 23.50, target value: -23.91: : 10400it [00:25, 267.97it/s]
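In the training logs above, fields such as ``reward normalized=-2.69/6.20`` report a mean/std pair over recently collected rewards. A plain-Python sketch of that logging convention (the batch below is hypothetical, not taken from the run):

```python
def mean_std(xs):
    # Population mean and standard deviation, as a logger might report them.
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

rewards = [-2.0, -9.0, 3.0, -2.76]  # hypothetical reward batch
m, s = mean_std(rewards)
print(f"reward normalized={m:.2f}/{s:.2f}")
```

Reading the logs this way, the two runs in the diff differ only in sampling noise: the normalized means stay in the same band while the per-line rewards wander, which is expected for a short DDPG run.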
@@ -1721,7 +1721,7 @@ To iterate further on this loss module we might consider:

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 40.968 seconds)
+**Total running time of the script:** ( 0 minutes 40.982 seconds)


.. _sphx_glr_download_advanced_coding_ddpg.py:
2 changes: 1 addition & 1 deletion _sources/advanced/cpp_export.rst.txt
@@ -203,7 +203,7 @@ minimal ``CMakeLists.txt`` to build it could look as simple as:
add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
-set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
+set_property(TARGET example-app PROPERTY CXX_STANDARD 17)
The last thing we need to build the example application is the LibTorch
distribution. You can always grab the latest stable release from the `download
6 changes: 3 additions & 3 deletions _sources/advanced/dynamic_quantization_tutorial.rst.txt
@@ -516,9 +516,9 @@ models run single threaded.
.. code-block:: none
loss: 5.167
-elapsed time (seconds): 203.1
+elapsed time (seconds): 203.8
loss: 5.168
-elapsed time (seconds): 112.0
+elapsed time (seconds): 113.0
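The roughly 1.8x gap above (203.8 s for the float model vs. 113.0 s quantized) is the point of the tutorial: ``torch.quantization.quantize_dynamic`` stores weights as int8 and dequantizes them on the fly. The per-tensor symmetric int8 scheme behind it can be sketched in plain Python (a simplification for illustration; the real kernels use fused integer matrix multiplies):

```python
def quantize_int8(weights):
    # Symmetric per-tensor quantization: map the largest |w| onto 127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]   # stored int8 values
    dq = [qi * scale for qi in q]             # values the kernel effectively computes with
    return q, dq, scale

q, dq, scale = quantize_int8([0.5, -1.27, 0.03, 1.0])
print(q)  # [50, -127, 3, 100]
```

Each weight now occupies 1 byte instead of 4, and the matrix multiplies run on integers, which is where the elapsed-time reduction in the log comes from.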
@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 5 minutes 23.825 seconds)
+**Total running time of the script:** ( 5 minutes 25.353 seconds)


.. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
78 changes: 39 additions & 39 deletions _sources/advanced/neural_style_tutorial.rst.txt
@@ -410,38 +410,38 @@ network to evaluation mode using ``.eval()``.
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
0%| | 0.00/548M [00:00<?, ?B/s]
-3%|3 | 17.0M/548M [00:00<00:03, 177MB/s]
-6%|6 | 34.2M/548M [00:00<00:03, 179MB/s]
-9%|9 | 51.4M/548M [00:00<00:02, 179MB/s]
-12%|#2 | 68.5M/548M [00:00<00:02, 179MB/s]
-16%|#5 | 85.6M/548M [00:00<00:02, 165MB/s]
-19%|#8 | 103M/548M [00:00<00:02, 170MB/s]
-22%|##1 | 120M/548M [00:00<00:02, 174MB/s]
-25%|##5 | 138M/548M [00:00<00:02, 176MB/s]
-28%|##8 | 155M/548M [00:00<00:02, 178MB/s]
-31%|###1 | 173M/548M [00:01<00:02, 179MB/s]
-35%|###4 | 190M/548M [00:01<00:02, 180MB/s]
-38%|###7 | 207M/548M [00:01<00:02, 141MB/s]
-41%|#### | 222M/548M [00:01<00:03, 103MB/s]
-44%|####3 | 239M/548M [00:01<00:02, 119MB/s]
-47%|####6 | 257M/548M [00:01<00:02, 133MB/s]
-50%|####9 | 274M/548M [00:01<00:01, 145MB/s]
-53%|#####3 | 291M/548M [00:01<00:01, 154MB/s]
-56%|#####6 | 309M/548M [00:02<00:01, 162MB/s]
-59%|#####9 | 326M/548M [00:02<00:01, 167MB/s]
-63%|######2 | 344M/548M [00:02<00:01, 171MB/s]
-66%|######5 | 361M/548M [00:02<00:01, 174MB/s]
-69%|######8 | 378M/548M [00:02<00:01, 136MB/s]
-72%|#######2 | 395M/548M [00:02<00:01, 146MB/s]
-75%|#######5 | 412M/548M [00:02<00:00, 155MB/s]
-78%|#######8 | 428M/548M [00:02<00:00, 156MB/s]
-81%|########1 | 444M/548M [00:02<00:00, 160MB/s]
-84%|########4 | 461M/548M [00:03<00:00, 166MB/s]
-87%|########7 | 479M/548M [00:03<00:00, 171MB/s]
-91%|######### | 496M/548M [00:03<00:00, 174MB/s]
-94%|#########3| 514M/548M [00:03<00:00, 176MB/s]
-97%|#########6| 531M/548M [00:03<00:00, 178MB/s]
-100%|##########| 548M/548M [00:03<00:00, 160MB/s]
+3%|2 | 14.9M/548M [00:00<00:03, 156MB/s]
+6%|5 | 32.0M/548M [00:00<00:03, 169MB/s]
+9%|9 | 49.5M/548M [00:00<00:02, 176MB/s]
+12%|#2 | 67.0M/548M [00:00<00:02, 178MB/s]
+15%|#5 | 84.5M/548M [00:00<00:02, 180MB/s]
+19%|#8 | 102M/548M [00:00<00:02, 181MB/s]
+22%|##1 | 119M/548M [00:00<00:02, 176MB/s]
+25%|##4 | 136M/548M [00:00<00:03, 137MB/s]
+28%|##7 | 153M/548M [00:01<00:02, 148MB/s]
+31%|###1 | 171M/548M [00:01<00:02, 158MB/s]
+34%|###4 | 189M/548M [00:01<00:02, 165MB/s]
+38%|###7 | 206M/548M [00:01<00:02, 170MB/s]
+41%|#### | 224M/548M [00:01<00:01, 174MB/s]
+44%|####3 | 241M/548M [00:01<00:01, 177MB/s]
+47%|####7 | 259M/548M [00:01<00:01, 179MB/s]
+50%|##### | 276M/548M [00:01<00:01, 181MB/s]
+54%|#####3 | 294M/548M [00:01<00:01, 154MB/s]
+57%|#####6 | 311M/548M [00:01<00:01, 162MB/s]
+60%|#####9 | 329M/548M [00:02<00:01, 168MB/s]
+63%|######3 | 346M/548M [00:02<00:01, 172MB/s]
+66%|######6 | 364M/548M [00:02<00:01, 175MB/s]
+70%|######9 | 381M/548M [00:02<00:00, 178MB/s]
+73%|#######2 | 399M/548M [00:02<00:00, 176MB/s]
+76%|#######5 | 416M/548M [00:02<00:00, 174MB/s]
+79%|#######8 | 432M/548M [00:02<00:00, 174MB/s]
+82%|########1 | 449M/548M [00:02<00:00, 174MB/s]
+85%|########5 | 466M/548M [00:02<00:00, 173MB/s]
+88%|########8 | 482M/548M [00:03<00:00, 163MB/s]
+91%|######### | 498M/548M [00:03<00:00, 161MB/s]
+94%|#########4| 516M/548M [00:03<00:00, 168MB/s]
+97%|#########7| 534M/548M [00:03<00:00, 173MB/s]
+100%|##########| 548M/548M [00:03<00:00, 169MB/s]
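The download log above is tqdm output: a percentage, a bar, downloaded/total sizes, and ``[elapsed<eta, rate]``. For reference, one of those lines can be picked apart with a regex (a sketch matched to this exact format, not a general tqdm parser):

```python
import re

# Matches lines like " 50%|#####     | 276M/548M [00:01<00:01, 181MB/s]".
PROGRESS = re.compile(
    r"(\d+)%\|[^|]*\|\s*([\d.]+)M/([\d.]+)M "
    r"\[([\d:]+)<([\d:?]+), ([\d.]+)MB/s\]"
)

def parse_progress(line):
    m = PROGRESS.search(line)
    if m is None:
        return None
    pct, done, total, elapsed, eta, rate = m.groups()
    return int(pct), float(done), float(total), float(rate)

print(parse_progress("50%|#####     | 276M/548M [00:01<00:01, 181MB/s]"))
# (50, 276.0, 548.0, 181.0)
```

Diffing these lines between CI runs therefore only reflects download-rate jitter; the checkpoint itself (``vgg19-dcbb9e9d.pth``, 548M) is unchanged.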
@@ -762,22 +762,22 @@ Finally, we can run the algorithm.
Optimizing..
run [50]:
-Style Loss : 4.040166 Content Loss: 4.149802
+Style Loss : 4.013772 Content Loss: 4.119182
run [100]:
-Style Loss : 1.123524 Content Loss: 3.017745
+Style Loss : 1.116340 Content Loss: 3.006091
run [150]:
-Style Loss : 0.708131 Content Loss: 2.645621
+Style Loss : 0.704913 Content Loss: 2.643375
run [200]:
-Style Loss : 0.476169 Content Loss: 2.487843
+Style Loss : 0.473614 Content Loss: 2.488759
run [250]:
-Style Loss : 0.344700 Content Loss: 2.400110
+Style Loss : 0.339164 Content Loss: 2.398316
run [300]:
-Style Loss : 0.262838 Content Loss: 2.348528
+Style Loss : 0.263078 Content Loss: 2.346137
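The ``Style Loss`` column above falls steadily as optimization proceeds; in the tutorial it is the mean-squared difference between Gram matrices of VGG feature maps (computed there with ``torch.mm``). A dependency-free sketch of the same computation:

```python
def gram(features):
    # features: each row is a flattened feature map; G[i][j] = <f_i, f_j>.
    n = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(n)] for i in range(n)]

def style_loss(g_input, g_target):
    # Mean squared difference between two Gram matrices.
    n = len(g_input)
    return sum((g_input[i][j] - g_target[i][j]) ** 2
               for i in range(n) for j in range(n)) / (n * n)

G = gram([[1.0, 2.0], [3.0, 4.0]])
print(G)  # [[5.0, 11.0], [11.0, 25.0]]
```

Because the input image is randomly initialized, successive CI runs converge along slightly different paths, which is all the per-run drift in the diffed loss values amounts to.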
@@ -786,7 +786,7 @@
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 37.573 seconds)
+**Total running time of the script:** ( 0 minutes 37.240 seconds)


.. _sphx_glr_download_advanced_neural_style_tutorial.py:
2 changes: 1 addition & 1 deletion _sources/advanced/numpy_extensions_tutorial.rst.txt
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.602 seconds)
+**Total running time of the script:** ( 0 minutes 0.641 seconds)


.. _sphx_glr_download_advanced_numpy_extensions_tutorial.py: