Automated tutorials push
pytorchbot committed Jun 24, 2024
1 parent 79e3a01 commit c86323e
Showing 191 changed files with 15,116 additions and 12,747 deletions.
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "1bb88423",
"id": "9ba8ee83",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
"id": "30e88c00",
"id": "05cc4f1b",
"metadata": {},
"source": [
"\n",
@@ -31,7 +31,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "928c31a0",
"id": "4ffc4dc0",
"metadata": {},
"outputs": [],
"source": [
@@ -47,7 +47,7 @@
},
{
"cell_type": "markdown",
"id": "886238f6",
"id": "146701e4",
"metadata": {},
"source": [
"\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "80a8bc41",
"id": "7ddb23c2",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
"id": "85d284b6",
"id": "35c87e1d",
"metadata": {},
"source": [
"\n",
4 changes: 2 additions & 2 deletions _downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb
@@ -35,7 +35,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "c960d609",
"id": "f165baaf",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
"id": "93c7877b",
"id": "db68e714",
"metadata": {},
"source": [
"\n",
4 changes: 2 additions & 2 deletions _downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb
@@ -37,7 +37,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "47b6d22d",
"id": "0c9495e6",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
"id": "72922c26",
"id": "765def5b",
"metadata": {},
"source": [
"\n",
@@ -35,7 +35,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "1be05384",
"id": "657d4594",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
"id": "5e469891",
"id": "ba7c212e",
"metadata": {},
"source": [
"\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "00314307",
"id": "2424e425",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
"id": "4dd7fc5b",
"id": "ec6ff4d4",
"metadata": {},
"source": [
"\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "50169e7e",
"id": "babb6898",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
"id": "9efd889d",
"id": "54aab864",
"metadata": {},
"source": [
"\n",
Binary file modified _images/sphx_glr_coding_ddpg_001.png
Binary file modified _images/sphx_glr_dqn_with_rnn_tutorial_001.png
Binary file modified _images/sphx_glr_neural_style_tutorial_004.png
Binary file modified _images/sphx_glr_reinforcement_ppo_001.png
Binary file modified _images/sphx_glr_reinforcement_q_learning_001.png
Binary file modified _images/sphx_glr_spatial_transformer_tutorial_001.png
Binary file modified _images/sphx_glr_torchvision_tutorial_002.png
42 changes: 21 additions & 21 deletions _sources/advanced/coding_ddpg.rst.txt
@@ -1632,26 +1632,26 @@ modules we need.
0%| | 0/10000 [00:00<?, ?it/s]
- 8%|8 | 800/10000 [00:00<00:08, 1078.92it/s]
- 16%|#6 | 1600/10000 [00:03<00:20, 406.94it/s]
- 24%|##4 | 2400/10000 [00:04<00:13, 552.94it/s]
- 32%|###2 | 3200/10000 [00:05<00:10, 669.75it/s]
- 40%|#### | 4000/10000 [00:06<00:07, 755.87it/s]
- 48%|####8 | 4800/10000 [00:06<00:06, 820.65it/s]
- 56%|#####6 | 5600/10000 [00:07<00:05, 869.39it/s]
- reward: -2.15 (r0 = -2.88), reward eval: reward: 0.01, reward normalized=-3.05/6.49, grad norm= 208.83, loss_value= 391.54, loss_actor= 15.33, target value: -18.46: 56%|#####6 | 5600/10000 [00:08<00:05, 869.39it/s]
- reward: -2.15 (r0 = -2.88), reward eval: reward: 0.01, reward normalized=-3.05/6.49, grad norm= 208.83, loss_value= 391.54, loss_actor= 15.33, target value: -18.46: 64%|######4 | 6400/10000 [00:09<00:05, 651.86it/s]
- reward: -0.14 (r0 = -2.88), reward eval: reward: 0.01, reward normalized=-2.83/5.54, grad norm= 210.87, loss_value= 225.26, loss_actor= 13.55, target value: -18.38: 64%|######4 | 6400/10000 [00:10<00:05, 651.86it/s]
- reward: -0.14 (r0 = -2.88), reward eval: reward: 0.01, reward normalized=-2.83/5.54, grad norm= 210.87, loss_value= 225.26, loss_actor= 13.55, target value: -18.38: 72%|#######2 | 7200/10000 [00:11<00:05, 495.86it/s]
- reward: -2.98 (r0 = -2.88), reward eval: reward: 0.01, reward normalized=-2.54/5.84, grad norm= 74.33, loss_value= 282.34, loss_actor= 15.96, target value: -16.14: 72%|#######2 | 7200/10000 [00:12<00:05, 495.86it/s]
- reward: -2.98 (r0 = -2.88), reward eval: reward: 0.01, reward normalized=-2.54/5.84, grad norm= 74.33, loss_value= 282.34, loss_actor= 15.96, target value: -16.14: 80%|######## | 8000/10000 [00:14<00:04, 428.41it/s]
- reward: -4.91 (r0 = -2.88), reward eval: reward: 0.01, reward normalized=-2.79/5.04, grad norm= 158.19, loss_value= 220.15, loss_actor= 16.40, target value: -18.20: 80%|######## | 8000/10000 [00:15<00:04, 428.41it/s]
- reward: -4.91 (r0 = -2.88), reward eval: reward: 0.01, reward normalized=-2.79/5.04, grad norm= 158.19, loss_value= 220.15, loss_actor= 16.40, target value: -18.20: 88%|########8 | 8800/10000 [00:16<00:03, 390.65it/s]
- reward: -5.15 (r0 = -2.88), reward eval: reward: -3.49, reward normalized=-2.95/5.03, grad norm= 48.04, loss_value= 214.52, loss_actor= 16.88, target value: -20.10: 88%|########8 | 8800/10000 [00:19<00:03, 390.65it/s]
- reward: -5.15 (r0 = -2.88), reward eval: reward: -3.49, reward normalized=-2.95/5.03, grad norm= 48.04, loss_value= 214.52, loss_actor= 16.88, target value: -20.10: 96%|#########6| 9600/10000 [00:21<00:01, 288.85it/s]
- reward: -8.57 (r0 = -2.88), reward eval: reward: -3.49, reward normalized=-4.12/6.24, grad norm= 441.42, loss_value= 461.86, loss_actor= 19.48, target value: -29.16: 96%|#########6| 9600/10000 [00:22<00:01, 288.85it/s]
- reward: -8.57 (r0 = -2.88), reward eval: reward: -3.49, reward normalized=-4.12/6.24, grad norm= 441.42, loss_value= 461.86, loss_actor= 19.48, target value: -29.16: : 10400it [00:24, 269.91it/s]
- reward: -3.80 (r0 = -2.88), reward eval: reward: -3.49, reward normalized=-3.28/4.80, grad norm= 341.35, loss_value= 251.25, loss_actor= 20.02, target value: -23.06: : 10400it [00:25, 269.91it/s]
+ 8%|8 | 800/10000 [00:00<00:08, 1075.72it/s]
+ 16%|#6 | 1600/10000 [00:03<00:20, 405.15it/s]
+ 24%|##4 | 2400/10000 [00:04<00:13, 550.81it/s]
+ 32%|###2 | 3200/10000 [00:05<00:10, 666.48it/s]
+ 40%|#### | 4000/10000 [00:06<00:07, 750.44it/s]
+ 48%|####8 | 4800/10000 [00:06<00:06, 811.02it/s]
+ 56%|#####6 | 5600/10000 [00:07<00:05, 859.06it/s]
+ reward: -2.16 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.09/6.32, grad norm= 218.52, loss_value= 380.54, loss_actor= 14.07, target value: -11.24: 56%|#####6 | 5600/10000 [00:08<00:05, 859.06it/s]
+ reward: -2.16 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.09/6.32, grad norm= 218.52, loss_value= 380.54, loss_actor= 14.07, target value: -11.24: 64%|######4 | 6400/10000 [00:09<00:05, 638.38it/s]
+ reward: -0.11 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.50/6.05, grad norm= 33.89, loss_value= 358.66, loss_actor= 14.75, target value: -15.00: 64%|######4 | 6400/10000 [00:10<00:05, 638.38it/s]
+ reward: -0.11 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.50/6.05, grad norm= 33.89, loss_value= 358.66, loss_actor= 14.75, target value: -15.00: 72%|#######2 | 7200/10000 [00:12<00:05, 488.30it/s]
+ reward: -2.20 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-1.80/6.42, grad norm= 179.01, loss_value= 451.17, loss_actor= 11.22, target value: -11.76: 72%|#######2 | 7200/10000 [00:12<00:05, 488.30it/s]
+ reward: -2.20 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-1.80/6.42, grad norm= 179.01, loss_value= 451.17, loss_actor= 11.22, target value: -11.76: 80%|######## | 8000/10000 [00:14<00:04, 419.11it/s]
+ reward: -4.57 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.74/5.57, grad norm= 190.83, loss_value= 283.69, loss_actor= 16.99, target value: -17.47: 80%|######## | 8000/10000 [00:15<00:04, 419.11it/s]
+ reward: -4.57 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.74/5.57, grad norm= 190.83, loss_value= 283.69, loss_actor= 16.99, target value: -17.47: 88%|########8 | 8800/10000 [00:17<00:03, 382.68it/s]
+ reward: -5.03 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-3.02/5.53, grad norm= 218.52, loss_value= 326.20, loss_actor= 14.51, target value: -20.00: 88%|########8 | 8800/10000 [00:19<00:03, 382.68it/s]
+ reward: -5.03 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-3.02/5.53, grad norm= 218.52, loss_value= 326.20, loss_actor= 14.51, target value: -20.00: 96%|#########6| 9600/10000 [00:21<00:01, 285.28it/s]
+ reward: -4.58 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-2.86/5.12, grad norm= 213.90, loss_value= 296.15, loss_actor= 13.14, target value: -20.54: 96%|#########6| 9600/10000 [00:22<00:01, 285.28it/s]
+ reward: -4.58 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-2.86/5.12, grad norm= 213.90, loss_value= 296.15, loss_actor= 13.14, target value: -20.54: : 10400it [00:25, 264.79it/s]
+ reward: -3.55 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-3.59/4.43, grad norm= 57.50, loss_value= 177.22, loss_actor= 20.78, target value: -23.86: : 10400it [00:25, 264.79it/s]
@@ -1721,7 +1721,7 @@ To iterate further on this loss module we might consider:

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 40.986 seconds)
+ **Total running time of the script:** ( 0 minutes 41.406 seconds)


.. _sphx_glr_download_advanced_coding_ddpg.py:
6 changes: 3 additions & 3 deletions _sources/advanced/dynamic_quantization_tutorial.rst.txt
@@ -516,9 +516,9 @@ models run single threaded.
.. code-block:: none
loss: 5.167
- elapsed time (seconds): 204.6
+ elapsed time (seconds): 205.4
loss: 5.168
- elapsed time (seconds): 116.3
+ elapsed time (seconds): 113.9
@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 5 minutes 29.155 seconds)
+ **Total running time of the script:** ( 5 minutes 27.914 seconds)


.. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
64 changes: 32 additions & 32 deletions _sources/advanced/neural_style_tutorial.rst.txt
@@ -410,37 +410,37 @@ network to evaluation mode using ``.eval()``.
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
0%| | 0.00/548M [00:00<?, ?B/s]
- 3%|3 | 16.9M/548M [00:00<00:03, 176MB/s]
- 6%|6 | 34.0M/548M [00:00<00:03, 178MB/s]
- 9%|9 | 51.0M/548M [00:00<00:02, 178MB/s]
- 12%|#2 | 68.1M/548M [00:00<00:02, 178MB/s]
- 16%|#5 | 85.4M/548M [00:00<00:02, 179MB/s]
- 19%|#8 | 103M/548M [00:00<00:02, 179MB/s]
- 22%|##1 | 120M/548M [00:00<00:02, 179MB/s]
- 25%|##4 | 137M/548M [00:00<00:02, 179MB/s]
- 28%|##8 | 154M/548M [00:00<00:02, 179MB/s]
- 31%|###1 | 171M/548M [00:01<00:02, 179MB/s]
- 34%|###4 | 188M/548M [00:01<00:02, 179MB/s]
- 38%|###7 | 206M/548M [00:01<00:01, 180MB/s]
- 41%|#### | 223M/548M [00:01<00:01, 177MB/s]
- 44%|####3 | 240M/548M [00:01<00:01, 178MB/s]
- 47%|####7 | 258M/548M [00:01<00:01, 179MB/s]
+ 3%|3 | 16.9M/548M [00:00<00:03, 177MB/s]
+ 6%|6 | 34.2M/548M [00:00<00:03, 180MB/s]
+ 9%|9 | 51.5M/548M [00:00<00:02, 180MB/s]
+ 13%|#2 | 68.9M/548M [00:00<00:02, 180MB/s]
+ 16%|#5 | 86.1M/548M [00:00<00:02, 180MB/s]
+ 19%|#8 | 103M/548M [00:00<00:02, 177MB/s]
+ 22%|##1 | 120M/548M [00:00<00:02, 177MB/s]
+ 25%|##5 | 138M/548M [00:00<00:02, 178MB/s]
+ 28%|##8 | 155M/548M [00:00<00:02, 178MB/s]
+ 31%|###1 | 172M/548M [00:01<00:02, 179MB/s]
+ 35%|###4 | 189M/548M [00:01<00:02, 179MB/s]
+ 38%|###7 | 206M/548M [00:01<00:01, 179MB/s]
+ 41%|#### | 224M/548M [00:01<00:01, 179MB/s]
+ 44%|####3 | 241M/548M [00:01<00:01, 180MB/s]
+ 47%|####7 | 258M/548M [00:01<00:01, 180MB/s]
50%|##### | 275M/548M [00:01<00:01, 179MB/s]
53%|#####3 | 292M/548M [00:01<00:01, 180MB/s]
- 56%|#####6 | 309M/548M [00:01<00:01, 180MB/s]
+ 57%|#####6 | 310M/548M [00:01<00:01, 180MB/s]
60%|#####9 | 327M/548M [00:01<00:01, 180MB/s]
63%|######2 | 344M/548M [00:02<00:01, 180MB/s]
- 66%|######5 | 361M/548M [00:02<00:01, 180MB/s]
- 69%|######9 | 378M/548M [00:02<00:00, 180MB/s]
+ 66%|######6 | 362M/548M [00:02<00:01, 181MB/s]
+ 69%|######9 | 379M/548M [00:02<00:00, 181MB/s]
72%|#######2 | 396M/548M [00:02<00:00, 180MB/s]
- 75%|#######5 | 413M/548M [00:02<00:00, 180MB/s]
- 79%|#######8 | 430M/548M [00:02<00:00, 181MB/s]
- 82%|########1 | 448M/548M [00:02<00:00, 181MB/s]
- 85%|########4 | 465M/548M [00:02<00:00, 181MB/s]
- 88%|########7 | 482M/548M [00:02<00:00, 181MB/s]
+ 75%|#######5 | 414M/548M [00:02<00:00, 181MB/s]
+ 79%|#######8 | 431M/548M [00:02<00:00, 180MB/s]
+ 82%|########1 | 448M/548M [00:02<00:00, 180MB/s]
+ 85%|########4 | 466M/548M [00:02<00:00, 181MB/s]
+ 88%|########8 | 483M/548M [00:02<00:00, 181MB/s]
91%|#########1| 500M/548M [00:02<00:00, 181MB/s]
- 94%|#########4| 517M/548M [00:03<00:00, 180MB/s]
- 97%|#########7| 534M/548M [00:03<00:00, 180MB/s]
+ 94%|#########4| 517M/548M [00:03<00:00, 181MB/s]
+ 98%|#########7| 535M/548M [00:03<00:00, 180MB/s]
100%|##########| 548M/548M [00:03<00:00, 180MB/s]
@@ -762,22 +762,22 @@ Finally, we can run the algorithm.
Optimizing..
run [50]:
- Style Loss : 4.106364 Content Loss: 4.173377
+ Style Loss : 4.076185 Content Loss: 4.141464
run [100]:
- Style Loss : 1.120980 Content Loss: 3.016389
+ Style Loss : 1.117196 Content Loss: 3.007679
run [150]:
- Style Loss : 0.707181 Content Loss: 2.649149
+ Style Loss : 0.697977 Content Loss: 2.643763
run [200]:
- Style Loss : 0.471671 Content Loss: 2.487833
+ Style Loss : 0.472661 Content Loss: 2.485559
run [250]:
- Style Loss : 0.341961 Content Loss: 2.401103
+ Style Loss : 0.342390 Content Loss: 2.399799
run [300]:
- Style Loss : 0.261578 Content Loss: 2.348049
+ Style Loss : 0.261053 Content Loss: 2.347213
@@ -786,7 +786,7 @@
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 37.243 seconds)
+ **Total running time of the script:** ( 0 minutes 37.280 seconds)


.. _sphx_glr_download_advanced_neural_style_tutorial.py:
2 changes: 1 addition & 1 deletion _sources/advanced/numpy_extensions_tutorial.rst.txt
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt
.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 0.597 seconds)
+ **Total running time of the script:** ( 0 minutes 0.583 seconds)


.. _sphx_glr_download_advanced_numpy_extensions_tutorial.py: