Commit 46674a2: Automated tutorials push

pytorchbot committed Jun 25, 2024
1 parent c86323e commit 46674a2

Showing 181 changed files with 12,155 additions and 12,189 deletions.
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "9ba8ee83",
+"id": "7990a69c",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "05cc4f1b",
+"id": "cd424fde",
 "metadata": {},
 "source": [
 "\n",

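Every notebook hunk in this push follows the same pattern as the one above: the only change is the cell "id" field, an 8-character hex string that the automated build regenerates on each run. As a minimal sketch of how such ids could be produced with nbformat (the regenerate_cell_ids helper is hypothetical, for illustration only; the actual pipeline behind this commit is not shown here):

```python
import uuid

import nbformat


def regenerate_cell_ids(path: str) -> None:
    """Assign a fresh 8-char hex id to every cell, mimicking the churn in these diffs."""
    nb = nbformat.read(path, as_version=4)  # cell ids are part of nbformat >= 4.5
    for cell in nb.cells:
        cell["id"] = uuid.uuid4().hex[:8]  # e.g. "7990a69c"
    nbformat.write(nb, path)
```

Since the ids carry no semantic content, hunks like these are pure churn; a pipeline that kept cell ids stable between runs would produce much smaller automated diffs.
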
@@ -31,7 +31,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "4ffc4dc0",
+"id": "41f06fa1",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -47,7 +47,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "146701e4",
+"id": "a8f13a31",
 "metadata": {},
 "source": [
 "\n",

@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "7ddb23c2",
+"id": "e9a0317a",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "35c87e1d",
+"id": "cb107f47",
 "metadata": {},
 "source": [
 "\n",

4 changes: 2 additions & 2 deletions _downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "f165baaf",
+"id": "9e2a7722",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "db68e714",
+"id": "a175d8e7",
 "metadata": {},
 "source": [
 "\n",

4 changes: 2 additions & 2 deletions _downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb
@@ -37,7 +37,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "0c9495e6",
+"id": "cea1749f",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -53,7 +53,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "765def5b",
+"id": "9b221904",
 "metadata": {},
 "source": [
 "\n",

@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "657d4594",
+"id": "34ea9b0f",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "ba7c212e",
+"id": "a5bc0df0",
 "metadata": {},
 "source": [
 "\n",

@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "2424e425",
+"id": "99f4bac3",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "ec6ff4d4",
+"id": "17dc5c5a",
 "metadata": {},
 "source": [
 "\n",

@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "babb6898",
+"id": "bb07cc8e",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "54aab864",
+"id": "d4ef5611",
 "metadata": {},
 "source": [
 "\n",

Binary file modified _images/sphx_glr_coding_ddpg_001.png
Binary file modified _images/sphx_glr_dqn_with_rnn_tutorial_001.png
Binary file modified _images/sphx_glr_neural_style_tutorial_004.png
Binary file modified _images/sphx_glr_reinforcement_ppo_001.png
Binary file modified _images/sphx_glr_reinforcement_q_learning_001.png
Binary file modified _images/sphx_glr_spatial_transformer_tutorial_001.png
Binary file modified _images/sphx_glr_torchvision_tutorial_002.png

42 changes: 21 additions & 21 deletions _sources/advanced/coding_ddpg.rst.txt
@@ -1632,26 +1632,26 @@ modules we need.
  0%| | 0/10000 [00:00<?, ?it/s]
- 8%|8 | 800/10000 [00:00<00:08, 1075.72it/s]
- 16%|#6 | 1600/10000 [00:03<00:20, 405.15it/s]
- 24%|##4 | 2400/10000 [00:04<00:13, 550.81it/s]
- 32%|###2 | 3200/10000 [00:05<00:10, 666.48it/s]
- 40%|#### | 4000/10000 [00:06<00:07, 750.44it/s]
- 48%|####8 | 4800/10000 [00:06<00:06, 811.02it/s]
- 56%|#####6 | 5600/10000 [00:07<00:05, 859.06it/s]
-reward: -2.16 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.09/6.32, grad norm= 218.52, loss_value= 380.54, loss_actor= 14.07, target value: -11.24: 56%|#####6 | 5600/10000 [00:08<00:05, 859.06it/s]
-reward: -2.16 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.09/6.32, grad norm= 218.52, loss_value= 380.54, loss_actor= 14.07, target value: -11.24: 64%|######4 | 6400/10000 [00:09<00:05, 638.38it/s]
-reward: -0.11 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.50/6.05, grad norm= 33.89, loss_value= 358.66, loss_actor= 14.75, target value: -15.00: 64%|######4 | 6400/10000 [00:10<00:05, 638.38it/s]
-reward: -0.11 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.50/6.05, grad norm= 33.89, loss_value= 358.66, loss_actor= 14.75, target value: -15.00: 72%|#######2 | 7200/10000 [00:12<00:05, 488.30it/s]
-reward: -2.20 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-1.80/6.42, grad norm= 179.01, loss_value= 451.17, loss_actor= 11.22, target value: -11.76: 72%|#######2 | 7200/10000 [00:12<00:05, 488.30it/s]
-reward: -2.20 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-1.80/6.42, grad norm= 179.01, loss_value= 451.17, loss_actor= 11.22, target value: -11.76: 80%|######## | 8000/10000 [00:14<00:04, 419.11it/s]
-reward: -4.57 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.74/5.57, grad norm= 190.83, loss_value= 283.69, loss_actor= 16.99, target value: -17.47: 80%|######## | 8000/10000 [00:15<00:04, 419.11it/s]
-reward: -4.57 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.74/5.57, grad norm= 190.83, loss_value= 283.69, loss_actor= 16.99, target value: -17.47: 88%|########8 | 8800/10000 [00:17<00:03, 382.68it/s]
-reward: -5.03 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-3.02/5.53, grad norm= 218.52, loss_value= 326.20, loss_actor= 14.51, target value: -20.00: 88%|########8 | 8800/10000 [00:19<00:03, 382.68it/s]
-reward: -5.03 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-3.02/5.53, grad norm= 218.52, loss_value= 326.20, loss_actor= 14.51, target value: -20.00: 96%|#########6| 9600/10000 [00:21<00:01, 285.28it/s]
-reward: -4.58 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-2.86/5.12, grad norm= 213.90, loss_value= 296.15, loss_actor= 13.14, target value: -20.54: 96%|#########6| 9600/10000 [00:22<00:01, 285.28it/s]
-reward: -4.58 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-2.86/5.12, grad norm= 213.90, loss_value= 296.15, loss_actor= 13.14, target value: -20.54: : 10400it [00:25, 264.79it/s]
-reward: -3.55 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-3.59/4.43, grad norm= 57.50, loss_value= 177.22, loss_actor= 20.78, target value: -23.86: : 10400it [00:25, 264.79it/s]
+ 8%|8 | 800/10000 [00:00<00:08, 1065.00it/s]
+ 16%|#6 | 1600/10000 [00:03<00:20, 401.41it/s]
+ 24%|##4 | 2400/10000 [00:04<00:13, 543.48it/s]
+ 32%|###2 | 3200/10000 [00:05<00:10, 655.82it/s]
+ 40%|#### | 4000/10000 [00:06<00:08, 737.72it/s]
+ 48%|####8 | 4800/10000 [00:06<00:06, 799.24it/s]
+ 56%|#####6 | 5600/10000 [00:07<00:05, 846.72it/s]
+reward: -2.56 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.21/6.37, grad norm= 44.83, loss_value= 395.90, loss_actor= 18.20, target value: -18.76: 56%|#####6 | 5600/10000 [00:08<00:05, 846.72it/s]
+reward: -2.56 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.21/6.37, grad norm= 44.83, loss_value= 395.90, loss_actor= 18.20, target value: -18.76: 64%|######4 | 6400/10000 [00:09<00:05, 635.05it/s]
+reward: -0.10 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.70/6.05, grad norm= 81.33, loss_value= 343.52, loss_actor= 14.75, target value: -16.15: 64%|######4 | 6400/10000 [00:10<00:05, 635.05it/s]
+reward: -0.10 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.70/6.05, grad norm= 81.33, loss_value= 343.52, loss_actor= 14.75, target value: -16.15: 72%|#######2 | 7200/10000 [00:12<00:05, 486.42it/s]
+reward: -1.82 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.04/5.69, grad norm= 203.44, loss_value= 302.93, loss_actor= 15.25, target value: -20.36: 72%|#######2 | 7200/10000 [00:13<00:05, 486.42it/s]
+reward: -1.82 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.04/5.69, grad norm= 203.44, loss_value= 302.93, loss_actor= 15.25, target value: -20.36: 80%|######## | 8000/10000 [00:14<00:04, 418.73it/s]
+reward: -4.83 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.87/5.11, grad norm= 241.11, loss_value= 259.33, loss_actor= 16.64, target value: -19.13: 80%|######## | 8000/10000 [00:15<00:04, 418.73it/s]
+reward: -4.83 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.87/5.11, grad norm= 241.11, loss_value= 259.33, loss_actor= 16.64, target value: -19.13: 88%|########8 | 8800/10000 [00:17<00:03, 382.18it/s]
+reward: -5.14 (r0 = -2.20), reward eval: reward: 0.53, reward normalized=-2.44/4.86, grad norm= 150.89, loss_value= 188.20, loss_actor= 18.66, target value: -16.22: 88%|########8 | 8800/10000 [00:20<00:03, 382.18it/s]
+reward: -5.14 (r0 = -2.20), reward eval: reward: 0.53, reward normalized=-2.44/4.86, grad norm= 150.89, loss_value= 188.20, loss_actor= 18.66, target value: -16.22: 96%|#########6| 9600/10000 [00:21<00:01, 284.56it/s]
+reward: -5.13 (r0 = -2.20), reward eval: reward: 0.53, reward normalized=-2.81/5.41, grad norm= 125.30, loss_value= 268.30, loss_actor= 16.89, target value: -19.89: 96%|#########6| 9600/10000 [00:22<00:01, 284.56it/s]
+reward: -5.13 (r0 = -2.20), reward eval: reward: 0.53, reward normalized=-2.81/5.41, grad norm= 125.30, loss_value= 268.30, loss_actor= 16.89, target value: -19.89: : 10400it [00:25, 264.83it/s]
+reward: -3.58 (r0 = -2.20), reward eval: reward: 0.53, reward normalized=-3.82/5.60, grad norm= 87.65, loss_value= 267.84, loss_actor= 23.16, target value: -27.36: : 10400it [00:26, 264.83it/s]
@@ -1721,7 +1721,7 @@ To iterate further on this loss module we might consider:

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 41.406 seconds)
+**Total running time of the script:** ( 0 minutes 41.549 seconds)


 .. _sphx_glr_download_advanced_coding_ddpg.py:
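The regenerated training log above interleaves tqdm progress with the tutorial's metrics: loss_value (the critic objective), loss_actor, the gradient norm, and a bootstrapped target value. As a rough sketch of how DDPG typically derives those two losses (the ddpg_losses function, its callables, and the batch layout are illustrative assumptions, not TorchRL's actual loss module from this tutorial):

```python
import torch
import torch.nn.functional as F


def ddpg_losses(actor, critic, target_actor, target_critic, batch, gamma=0.99):
    """Illustrative DDPG objectives: a TD critic loss and a policy actor loss."""
    obs, action, reward, next_obs, done = batch  # assumed batch layout

    # Critic ("value") loss: regress Q(s, a) onto a bootstrapped TD target.
    with torch.no_grad():
        next_action = target_actor(next_obs)
        target_q = reward + gamma * (1.0 - done) * target_critic(next_obs, next_action)
    loss_value = F.mse_loss(critic(obs, action), target_q)

    # Actor loss: ascend the critic, i.e. minimize -Q(s, pi(s)).
    loss_actor = -critic(obs, actor(obs)).mean()
    return loss_value, loss_actor
```

The "target value" column in the log roughly corresponds to target_q above; the target networks are slowly updated copies that keep that target stable.
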
6 changes: 3 additions & 3 deletions _sources/advanced/dynamic_quantization_tutorial.rst.txt
@@ -516,9 +516,9 @@ models run single threaded.
 .. code-block:: none

 loss: 5.167
-elapsed time (seconds): 205.4
+elapsed time (seconds): 206.3

 loss: 5.168
-elapsed time (seconds): 113.9
+elapsed time (seconds): 116.9
@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 5 minutes 27.914 seconds)
+**Total running time of the script:** ( 5 minutes 32.168 seconds)


 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
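The paired timings above are the point of the tutorial: the float model takes about 206 seconds per evaluation while the dynamically quantized one takes about 117, at nearly identical loss (5.167 vs. 5.168). A minimal sketch of the API involved (the model below is a stand-in, not the tutorial's LSTM language model; older PyTorch releases expose the same function as torch.quantization.quantize_dynamic):

```python
import torch
import torch.nn as nn

# Stand-in model; the tutorial quantizes a word-language LSTM.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Convert weights of the listed module types to int8; activations are
# quantized dynamically at runtime, so no calibration pass is needed.
quantized_model = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear, nn.LSTM}, dtype=torch.qint8
)
```

Because only the weights are stored in int8 and activations are handled on the fly, this is the lowest-effort quantization mode, which is why the diff shows a large CPU speedup with essentially no accuracy change.
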
81 changes: 42 additions & 39 deletions _sources/advanced/neural_style_tutorial.rst.txt
@@ -410,38 +410,41 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth

  0%| | 0.00/548M [00:00<?, ?B/s]
- 3%|3 | 16.9M/548M [00:00<00:03, 177MB/s]
- 6%|6 | 34.2M/548M [00:00<00:03, 180MB/s]
- 9%|9 | 51.5M/548M [00:00<00:02, 180MB/s]
- 13%|#2 | 68.9M/548M [00:00<00:02, 180MB/s]
- 16%|#5 | 86.1M/548M [00:00<00:02, 180MB/s]
- 19%|#8 | 103M/548M [00:00<00:02, 177MB/s]
- 22%|##1 | 120M/548M [00:00<00:02, 177MB/s]
- 25%|##5 | 138M/548M [00:00<00:02, 178MB/s]
- 28%|##8 | 155M/548M [00:00<00:02, 178MB/s]
- 31%|###1 | 172M/548M [00:01<00:02, 179MB/s]
- 35%|###4 | 189M/548M [00:01<00:02, 179MB/s]
- 38%|###7 | 206M/548M [00:01<00:01, 179MB/s]
- 41%|#### | 224M/548M [00:01<00:01, 179MB/s]
- 44%|####3 | 241M/548M [00:01<00:01, 180MB/s]
- 47%|####7 | 258M/548M [00:01<00:01, 180MB/s]
- 50%|##### | 275M/548M [00:01<00:01, 179MB/s]
- 53%|#####3 | 292M/548M [00:01<00:01, 180MB/s]
- 57%|#####6 | 310M/548M [00:01<00:01, 180MB/s]
- 60%|#####9 | 327M/548M [00:01<00:01, 180MB/s]
- 63%|######2 | 344M/548M [00:02<00:01, 180MB/s]
- 66%|######6 | 362M/548M [00:02<00:01, 181MB/s]
- 69%|######9 | 379M/548M [00:02<00:00, 181MB/s]
- 72%|#######2 | 396M/548M [00:02<00:00, 180MB/s]
- 75%|#######5 | 414M/548M [00:02<00:00, 181MB/s]
- 79%|#######8 | 431M/548M [00:02<00:00, 180MB/s]
- 82%|########1 | 448M/548M [00:02<00:00, 180MB/s]
- 85%|########4 | 466M/548M [00:02<00:00, 181MB/s]
- 88%|########8 | 483M/548M [00:02<00:00, 181MB/s]
- 91%|#########1| 500M/548M [00:02<00:00, 181MB/s]
- 94%|#########4| 517M/548M [00:03<00:00, 181MB/s]
- 98%|#########7| 535M/548M [00:03<00:00, 180MB/s]
- 100%|##########| 548M/548M [00:03<00:00, 180MB/s]
+ 3%|2 | 15.9M/548M [00:00<00:03, 165MB/s]
+ 6%|5 | 31.9M/548M [00:00<00:03, 166MB/s]
+ 9%|8 | 47.9M/548M [00:00<00:03, 167MB/s]
+ 12%|#1 | 63.9M/548M [00:00<00:03, 167MB/s]
+ 15%|#4 | 80.0M/548M [00:00<00:02, 167MB/s]
+ 18%|#7 | 96.1M/548M [00:00<00:02, 168MB/s]
+ 20%|## | 112M/548M [00:00<00:02, 168MB/s]
+ 23%|##3 | 128M/548M [00:00<00:02, 167MB/s]
+ 26%|##6 | 144M/548M [00:00<00:02, 167MB/s]
+ 29%|##9 | 160M/548M [00:01<00:02, 167MB/s]
+ 32%|###2 | 176M/548M [00:01<00:02, 166MB/s]
+ 35%|###5 | 192M/548M [00:01<00:02, 166MB/s]
+ 38%|###7 | 208M/548M [00:01<00:02, 165MB/s]
+ 41%|#### | 224M/548M [00:01<00:02, 165MB/s]
+ 44%|####3 | 240M/548M [00:01<00:01, 165MB/s]
+ 47%|####6 | 256M/548M [00:01<00:01, 165MB/s]
+ 50%|####9 | 271M/548M [00:01<00:01, 165MB/s]
+ 52%|#####2 | 287M/548M [00:01<00:01, 165MB/s]
+ 55%|#####5 | 303M/548M [00:01<00:01, 166MB/s]
+ 58%|#####8 | 320M/548M [00:02<00:01, 167MB/s]
+ 61%|######1 | 336M/548M [00:02<00:01, 168MB/s]
+ 64%|######4 | 352M/548M [00:02<00:01, 169MB/s]
+ 67%|######7 | 369M/548M [00:02<00:01, 169MB/s]
+ 70%|####### | 385M/548M [00:02<00:01, 169MB/s]
+ 73%|#######3 | 401M/548M [00:02<00:00, 170MB/s]
+ 76%|#######6 | 418M/548M [00:02<00:00, 170MB/s]
+ 79%|#######9 | 434M/548M [00:02<00:00, 168MB/s]
+ 82%|########2 | 450M/548M [00:02<00:00, 141MB/s]
+ 85%|########5 | 466M/548M [00:02<00:00, 148MB/s]
+ 88%|########8 | 482M/548M [00:03<00:00, 155MB/s]
+ 91%|######### | 498M/548M [00:03<00:00, 159MB/s]
+ 94%|#########3| 515M/548M [00:03<00:00, 162MB/s]
+ 97%|#########6| 531M/548M [00:03<00:00, 165MB/s]
+ 100%|#########9| 548M/548M [00:03<00:00, 167MB/s]
+ 100%|##########| 548M/548M [00:03<00:00, 165MB/s]
@@ -762,22 +765,22 @@ Finally, we can run the algorithm.
 Optimizing..
 run [50]:
-Style Loss : 4.076185 Content Loss: 4.141464
+Style Loss : 4.037464 Content Loss: 4.144108

 run [100]:
-Style Loss : 1.117196 Content Loss: 3.007679
+Style Loss : 1.150719 Content Loss: 3.040204

 run [150]:
-Style Loss : 0.697977 Content Loss: 2.643763
+Style Loss : 0.710417 Content Loss: 2.655678

 run [200]:
-Style Loss : 0.472661 Content Loss: 2.485559
+Style Loss : 0.479242 Content Loss: 2.487361

 run [250]:
-Style Loss : 0.342390 Content Loss: 2.399799
+Style Loss : 0.344841 Content Loss: 2.403615

 run [300]:
-Style Loss : 0.261053 Content Loss: 2.347213
+Style Loss : 0.262127 Content Loss: 2.350151
@@ -786,7 +789,7 @@

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 37.280 seconds)
+**Total running time of the script:** ( 0 minutes 37.689 seconds)


 .. _sphx_glr_download_advanced_neural_style_tutorial.py:
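The first hunk records a fresh download of torchvision's pretrained VGG19 checkpoint (vgg19-dcbb9e9d.pth, 548 MB), which belongs to the step the hunk context mentions: importing the network and switching it to evaluation mode with .eval(). A hedged sketch of that step (the weights enum requires torchvision >= 0.13; older releases used pretrained=True for the same effect):

```python
import torch
import torchvision.models as models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Fetch vgg19-dcbb9e9d.pth (the download in the progress bars above),
# keep only the convolutional feature extractor, and freeze it in eval
# mode, as the tutorial does, since some pretrained layers behave
# differently in training mode.
cnn = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
```

The second hunk's run log shifts only in the third decimal place or so: both the old and the regenerated outputs show the style loss falling from roughly 4.0 to 0.26 over 300 optimizer steps, so the change is rebuild noise rather than a behavioral difference.
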
2 changes: 1 addition & 1 deletion _sources/advanced/numpy_extensions_tutorial.rst.txt
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.583 seconds)
+**Total running time of the script:** ( 0 minutes 0.560 seconds)


 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
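The hunk context above ("The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt`` ...") refers to the tutorial's custom autograd Function, whose backward must return one gradient per forward input. A minimal sketch of that contract (a toy scaling op for illustration, not the tutorial's actual NumPy/SciPy example):

```python
import torch
from torch.autograd import Function


class ScaleBy(Function):
    """Toy op y = x * weight with hand-written gradients for both inputs."""

    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(x, weight)
        return x * weight

    @staticmethod
    def backward(ctx, grad_output):
        x, weight = ctx.saved_tensors
        grad_x = grad_output * weight          # gradient wrt the input
        grad_weight = (grad_output * x).sum()  # gradient wrt the scalar parameter
        return grad_x, grad_weight


# gradcheck verifies both hand-written gradients against finite differences
# (double precision is required for the numerical comparison to be reliable).
x = torch.randn(4, dtype=torch.double, requires_grad=True)
w = torch.randn((), dtype=torch.double, requires_grad=True)
torch.autograd.gradcheck(ScaleBy.apply, (x, w))
```
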