Automated tutorials push
pytorchbot committed Jun 25, 2024
1 parent 46674a2 commit 010419e
Showing 183 changed files with 11,625 additions and 14,011 deletions.
Original file line number Diff line number Diff line change
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "7990a69c",
+"id": "de9d7e17",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
-"id": "cd424fde",
+"id": "38547d24",
"metadata": {},
"source": [
"\n",
@@ -31,7 +31,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "41f06fa1",
+"id": "6eaa09f5",
"metadata": {},
"outputs": [],
"source": [
@@ -47,7 +47,7 @@
},
{
"cell_type": "markdown",
-"id": "a8f13a31",
+"id": "70db1f88",
"metadata": {},
"source": [
"\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "e9a0317a",
+"id": "8cf9d4a9",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
-"id": "cb107f47",
+"id": "32d07b59",
"metadata": {},
"source": [
"\n",
4 changes: 2 additions & 2 deletions _downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb
@@ -35,7 +35,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "9e2a7722",
+"id": "b0907d17",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
-"id": "a175d8e7",
+"id": "e734a4d7",
"metadata": {},
"source": [
"\n",
4 changes: 2 additions & 2 deletions _downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb
@@ -37,7 +37,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "cea1749f",
+"id": "594c72cf",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
-"id": "9b221904",
+"id": "475c2a63",
"metadata": {},
"source": [
"\n",
@@ -35,7 +35,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "34ea9b0f",
+"id": "8c63d153",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
-"id": "a5bc0df0",
+"id": "6dd50681",
"metadata": {},
"source": [
"\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "99f4bac3",
+"id": "cd6a315d",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
-"id": "17dc5c5a",
+"id": "9a958047",
"metadata": {},
"source": [
"\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
-"id": "bb07cc8e",
+"id": "26b54310",
"metadata": {},
"outputs": [],
"source": [
@@ -50,7 +50,7 @@
},
{
"cell_type": "markdown",
-"id": "d4ef5611",
+"id": "34de35c5",
"metadata": {},
"source": [
"\n",
Binary file modified _images/sphx_glr_coding_ddpg_001.png
Binary file modified _images/sphx_glr_dqn_with_rnn_tutorial_001.png
Binary file modified _images/sphx_glr_neural_style_tutorial_004.png
Binary file modified _images/sphx_glr_reinforcement_ppo_001.png
Binary file modified _images/sphx_glr_reinforcement_q_learning_001.png
Binary file modified _images/sphx_glr_spatial_transformer_tutorial_001.png
Binary file modified _images/sphx_glr_torchvision_tutorial_002.png
42 changes: 21 additions & 21 deletions _sources/advanced/coding_ddpg.rst.txt
@@ -1632,26 +1632,26 @@ modules we need.
0%| | 0/10000 [00:00<?, ?it/s]
-8%|8 | 800/10000 [00:00<00:08, 1065.00it/s]
-16%|#6 | 1600/10000 [00:03<00:20, 401.41it/s]
-24%|##4 | 2400/10000 [00:04<00:13, 543.48it/s]
-32%|###2 | 3200/10000 [00:05<00:10, 655.82it/s]
-40%|#### | 4000/10000 [00:06<00:08, 737.72it/s]
-48%|####8 | 4800/10000 [00:06<00:06, 799.24it/s]
-56%|#####6 | 5600/10000 [00:07<00:05, 846.72it/s]
-reward: -2.56 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.21/6.37, grad norm= 44.83, loss_value= 395.90, loss_actor= 18.20, target value: -18.76: 56%|#####6 | 5600/10000 [00:08<00:05, 846.72it/s]
-reward: -2.56 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.21/6.37, grad norm= 44.83, loss_value= 395.90, loss_actor= 18.20, target value: -18.76: 64%|######4 | 6400/10000 [00:09<00:05, 635.05it/s]
-reward: -0.10 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.70/6.05, grad norm= 81.33, loss_value= 343.52, loss_actor= 14.75, target value: -16.15: 64%|######4 | 6400/10000 [00:10<00:05, 635.05it/s]
-reward: -0.10 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.70/6.05, grad norm= 81.33, loss_value= 343.52, loss_actor= 14.75, target value: -16.15: 72%|#######2 | 7200/10000 [00:12<00:05, 486.42it/s]
-reward: -1.82 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.04/5.69, grad norm= 203.44, loss_value= 302.93, loss_actor= 15.25, target value: -20.36: 72%|#######2 | 7200/10000 [00:13<00:05, 486.42it/s]
-reward: -1.82 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.04/5.69, grad norm= 203.44, loss_value= 302.93, loss_actor= 15.25, target value: -20.36: 80%|######## | 8000/10000 [00:14<00:04, 418.73it/s]
-reward: -4.83 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.87/5.11, grad norm= 241.11, loss_value= 259.33, loss_actor= 16.64, target value: -19.13: 80%|######## | 8000/10000 [00:15<00:04, 418.73it/s]
-reward: -4.83 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.87/5.11, grad norm= 241.11, loss_value= 259.33, loss_actor= 16.64, target value: -19.13: 88%|########8 | 8800/10000 [00:17<00:03, 382.18it/s]
-reward: -5.14 (r0 = -2.20), reward eval: reward: 0.53, reward normalized=-2.44/4.86, grad norm= 150.89, loss_value= 188.20, loss_actor= 18.66, target value: -16.22: 88%|########8 | 8800/10000 [00:20<00:03, 382.18it/s]
-reward: -5.14 (r0 = -2.20), reward eval: reward: 0.53, reward normalized=-2.44/4.86, grad norm= 150.89, loss_value= 188.20, loss_actor= 18.66, target value: -16.22: 96%|#########6| 9600/10000 [00:21<00:01, 284.56it/s]
-reward: -5.13 (r0 = -2.20), reward eval: reward: 0.53, reward normalized=-2.81/5.41, grad norm= 125.30, loss_value= 268.30, loss_actor= 16.89, target value: -19.89: 96%|#########6| 9600/10000 [00:22<00:01, 284.56it/s]
-reward: -5.13 (r0 = -2.20), reward eval: reward: 0.53, reward normalized=-2.81/5.41, grad norm= 125.30, loss_value= 268.30, loss_actor= 16.89, target value: -19.89: : 10400it [00:25, 264.83it/s]
-reward: -3.58 (r0 = -2.20), reward eval: reward: 0.53, reward normalized=-3.82/5.60, grad norm= 87.65, loss_value= 267.84, loss_actor= 23.16, target value: -27.36: : 10400it [00:26, 264.83it/s]
+8%|8 | 800/10000 [00:00<00:08, 1070.76it/s]
+16%|#6 | 1600/10000 [00:03<00:20, 411.40it/s]
+24%|##4 | 2400/10000 [00:04<00:13, 557.10it/s]
+32%|###2 | 3200/10000 [00:05<00:10, 672.12it/s]
+40%|#### | 4000/10000 [00:05<00:07, 756.22it/s]
+48%|####8 | 4800/10000 [00:06<00:06, 819.44it/s]
+56%|#####6 | 5600/10000 [00:07<00:05, 863.13it/s]
+reward: -2.33 (r0 = -2.76), reward eval: reward: 0.00, reward normalized=-2.94/6.04, grad norm= 39.94, loss_value= 309.89, loss_actor= 16.81, target value: -18.49: 56%|#####6 | 5600/10000 [00:08<00:05, 863.13it/s]
+reward: -2.33 (r0 = -2.76), reward eval: reward: 0.00, reward normalized=-2.94/6.04, grad norm= 39.94, loss_value= 309.89, loss_actor= 16.81, target value: -18.49: 64%|######4 | 6400/10000 [00:09<00:05, 638.43it/s]
+reward: -0.12 (r0 = -2.76), reward eval: reward: 0.00, reward normalized=-2.67/5.59, grad norm= 90.14, loss_value= 235.72, loss_actor= 14.76, target value: -16.42: 64%|######4 | 6400/10000 [00:10<00:05, 638.43it/s]
+reward: -0.12 (r0 = -2.76), reward eval: reward: 0.00, reward normalized=-2.67/5.59, grad norm= 90.14, loss_value= 235.72, loss_actor= 14.76, target value: -16.42: 72%|#######2 | 7200/10000 [00:12<00:05, 491.19it/s]
+reward: -2.76 (r0 = -2.76), reward eval: reward: 0.00, reward normalized=-2.44/5.94, grad norm= 201.55, loss_value= 271.25, loss_actor= 13.79, target value: -15.96: 72%|#######2 | 7200/10000 [00:12<00:05, 491.19it/s]
+reward: -2.76 (r0 = -2.76), reward eval: reward: 0.00, reward normalized=-2.44/5.94, grad norm= 201.55, loss_value= 271.25, loss_actor= 13.79, target value: -15.96: 80%|######## | 8000/10000 [00:14<00:04, 423.48it/s]
+reward: -4.90 (r0 = -2.76), reward eval: reward: 0.00, reward normalized=-2.51/4.70, grad norm= 34.93, loss_value= 161.79, loss_actor= 15.98, target value: -16.57: 80%|######## | 8000/10000 [00:15<00:04, 423.48it/s]
+reward: -4.90 (r0 = -2.76), reward eval: reward: 0.00, reward normalized=-2.51/4.70, grad norm= 34.93, loss_value= 161.79, loss_actor= 15.98, target value: -16.57: 88%|########8 | 8800/10000 [00:16<00:03, 388.29it/s]
+reward: -4.41 (r0 = -2.76), reward eval: reward: -5.39, reward normalized=-2.43/5.13, grad norm= 72.99, loss_value= 241.28, loss_actor= 13.02, target value: -16.27: 88%|########8 | 8800/10000 [00:19<00:03, 388.29it/s]
+reward: -4.41 (r0 = -2.76), reward eval: reward: -5.39, reward normalized=-2.43/5.13, grad norm= 72.99, loss_value= 241.28, loss_actor= 13.02, target value: -16.27: 96%|#########6| 9600/10000 [00:21<00:01, 288.47it/s]
+reward: -4.65 (r0 = -2.76), reward eval: reward: -5.39, reward normalized=-2.92/5.14, grad norm= 147.22, loss_value= 251.85, loss_actor= 11.03, target value: -20.73: 96%|#########6| 9600/10000 [00:22<00:01, 288.47it/s]
+reward: -4.65 (r0 = -2.76), reward eval: reward: -5.39, reward normalized=-2.92/5.14, grad norm= 147.22, loss_value= 251.85, loss_actor= 11.03, target value: -20.73: : 10400it [00:24, 268.73it/s]
+reward: -4.72 (r0 = -2.76), reward eval: reward: -5.39, reward normalized=-3.49/4.19, grad norm= 133.24, loss_value= 169.25, loss_actor= 14.39, target value: -24.45: : 10400it [00:25, 268.73it/s]
@@ -1721,7 +1721,7 @@ To iterate further on this loss module we might consider:

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 41.549 seconds)
+**Total running time of the script:** ( 0 minutes 40.906 seconds)


.. _sphx_glr_download_advanced_coding_ddpg.py:
6 changes: 3 additions & 3 deletions _sources/advanced/dynamic_quantization_tutorial.rst.txt
@@ -516,9 +516,9 @@ models run single threaded.
.. code-block:: none
loss: 5.167
-elapsed time (seconds): 206.3
+elapsed time (seconds): 205.3
loss: 5.168
-elapsed time (seconds): 116.9
+elapsed time (seconds): 117.6
@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

.. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 5 minutes 32.168 seconds)
+**Total running time of the script:** ( 5 minutes 31.684 seconds)


.. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
82 changes: 40 additions & 42 deletions _sources/advanced/neural_style_tutorial.rst.txt
@@ -410,41 +410,39 @@ network to evaluation mode using ``.eval()``.
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
0%| | 0.00/548M [00:00<?, ?B/s]
-3%|2 | 15.9M/548M [00:00<00:03, 165MB/s]
-6%|5 | 31.9M/548M [00:00<00:03, 166MB/s]
-9%|8 | 47.9M/548M [00:00<00:03, 167MB/s]
-12%|#1 | 63.9M/548M [00:00<00:03, 167MB/s]
-15%|#4 | 80.0M/548M [00:00<00:02, 167MB/s]
-18%|#7 | 96.1M/548M [00:00<00:02, 168MB/s]
-20%|## | 112M/548M [00:00<00:02, 168MB/s]
-23%|##3 | 128M/548M [00:00<00:02, 167MB/s]
-26%|##6 | 144M/548M [00:00<00:02, 167MB/s]
-29%|##9 | 160M/548M [00:01<00:02, 167MB/s]
-32%|###2 | 176M/548M [00:01<00:02, 166MB/s]
-35%|###5 | 192M/548M [00:01<00:02, 166MB/s]
-38%|###7 | 208M/548M [00:01<00:02, 165MB/s]
-41%|#### | 224M/548M [00:01<00:02, 165MB/s]
-44%|####3 | 240M/548M [00:01<00:01, 165MB/s]
-47%|####6 | 256M/548M [00:01<00:01, 165MB/s]
-50%|####9 | 271M/548M [00:01<00:01, 165MB/s]
-52%|#####2 | 287M/548M [00:01<00:01, 165MB/s]
-55%|#####5 | 303M/548M [00:01<00:01, 166MB/s]
-58%|#####8 | 320M/548M [00:02<00:01, 167MB/s]
-61%|######1 | 336M/548M [00:02<00:01, 168MB/s]
-64%|######4 | 352M/548M [00:02<00:01, 169MB/s]
-67%|######7 | 369M/548M [00:02<00:01, 169MB/s]
-70%|####### | 385M/548M [00:02<00:01, 169MB/s]
-73%|#######3 | 401M/548M [00:02<00:00, 170MB/s]
-76%|#######6 | 418M/548M [00:02<00:00, 170MB/s]
-79%|#######9 | 434M/548M [00:02<00:00, 168MB/s]
-82%|########2 | 450M/548M [00:02<00:00, 141MB/s]
-85%|########5 | 466M/548M [00:02<00:00, 148MB/s]
-88%|########8 | 482M/548M [00:03<00:00, 155MB/s]
-91%|######### | 498M/548M [00:03<00:00, 159MB/s]
-94%|#########3| 515M/548M [00:03<00:00, 162MB/s]
-97%|#########6| 531M/548M [00:03<00:00, 165MB/s]
-100%|#########9| 548M/548M [00:03<00:00, 167MB/s]
-100%|##########| 548M/548M [00:03<00:00, 165MB/s]
+3%|3 | 16.8M/548M [00:00<00:03, 175MB/s]
+6%|6 | 33.6M/548M [00:00<00:03, 176MB/s]
+9%|9 | 50.6M/548M [00:00<00:02, 176MB/s]
+12%|#2 | 67.5M/548M [00:00<00:02, 176MB/s]
+15%|#5 | 84.4M/548M [00:00<00:02, 176MB/s]
+18%|#8 | 101M/548M [00:00<00:02, 175MB/s]
+22%|##1 | 118M/548M [00:00<00:02, 176MB/s]
+25%|##4 | 135M/548M [00:00<00:02, 176MB/s]
+28%|##7 | 152M/548M [00:00<00:02, 176MB/s]
+31%|### | 169M/548M [00:01<00:02, 177MB/s]
+34%|###3 | 186M/548M [00:01<00:02, 177MB/s]
+37%|###7 | 203M/548M [00:01<00:02, 176MB/s]
+40%|#### | 220M/548M [00:01<00:01, 175MB/s]
+43%|####3 | 236M/548M [00:01<00:01, 175MB/s]
+46%|####6 | 253M/548M [00:01<00:01, 174MB/s]
+49%|####9 | 270M/548M [00:01<00:01, 172MB/s]
+52%|#####2 | 287M/548M [00:01<00:01, 174MB/s]
+55%|#####5 | 304M/548M [00:01<00:01, 175MB/s]
+58%|#####8 | 320M/548M [00:01<00:01, 175MB/s]
+62%|######1 | 337M/548M [00:02<00:01, 175MB/s]
+65%|######4 | 354M/548M [00:02<00:01, 176MB/s]
+68%|######7 | 371M/548M [00:02<00:01, 175MB/s]
+71%|####### | 388M/548M [00:02<00:00, 175MB/s]
+74%|#######3 | 405M/548M [00:02<00:00, 176MB/s]
+77%|#######7 | 422M/548M [00:02<00:00, 176MB/s]
+80%|######## | 439M/548M [00:02<00:00, 176MB/s]
+83%|########3 | 456M/548M [00:02<00:00, 176MB/s]
+86%|########6 | 473M/548M [00:02<00:00, 176MB/s]
+89%|########9 | 490M/548M [00:02<00:00, 175MB/s]
+92%|#########2| 506M/548M [00:03<00:00, 175MB/s]
+95%|#########5| 523M/548M [00:03<00:00, 175MB/s]
+99%|#########8| 540M/548M [00:03<00:00, 176MB/s]
+100%|##########| 548M/548M [00:03<00:00, 175MB/s]
@@ -765,22 +763,22 @@ Finally, we can run the algorithm.
Optimizing..
run [50]:
-Style Loss : 4.037464 Content Loss: 4.144108
+Style Loss : 4.108783 Content Loss: 4.076407
run [100]:
-Style Loss : 1.150719 Content Loss: 3.040204
+Style Loss : 1.131557 Content Loss: 3.022170
run [150]:
-Style Loss : 0.710417 Content Loss: 2.655678
+Style Loss : 0.701424 Content Loss: 2.643672
run [200]:
-Style Loss : 0.479242 Content Loss: 2.487361
+Style Loss : 0.470905 Content Loss: 2.488043
run [250]:
-Style Loss : 0.344841 Content Loss: 2.403615
+Style Loss : 0.342489 Content Loss: 2.402129
run [300]:
-Style Loss : 0.262127 Content Loss: 2.350151
+Style Loss : 0.262571 Content Loss: 2.348458
@@ -789,7 +787,7 @@ Finally, we can run the algorithm.
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 37.689 seconds)
+**Total running time of the script:** ( 0 minutes 37.426 seconds)


.. _sphx_glr_download_advanced_neural_style_tutorial.py:
2 changes: 1 addition & 1 deletion _sources/advanced/numpy_extensions_tutorial.rst.txt
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt
.. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.560 seconds)
+**Total running time of the script:** ( 0 minutes 0.619 seconds)


.. _sphx_glr_download_advanced_numpy_extensions_tutorial.py: