diff --git a/_downloads/3195443a0ced3cabc0ad643537bdb5cd/introyt1_tutorial.ipynb b/_downloads/3195443a0ced3cabc0ad643537bdb5cd/introyt1_tutorial.ipynb index 5cf0e502b1..7c7c307175 100644 --- a/_downloads/3195443a0ced3cabc0ad643537bdb5cd/introyt1_tutorial.ipynb +++ b/_downloads/3195443a0ced3cabc0ad643537bdb5cd/introyt1_tutorial.ipynb @@ -34,7 +34,7 @@ { "cell_type": "code", "execution_count": null, - "id": "9ba8ee83", + "id": "7990a69c", "metadata": {}, "outputs": [], "source": [ @@ -50,7 +50,7 @@ }, { "cell_type": "markdown", - "id": "05cc4f1b", + "id": "cd424fde", "metadata": {}, "source": [ "\n", diff --git a/_downloads/4355e2cef7d17548f1e25f97a62828c4/template_tutorial.ipynb b/_downloads/4355e2cef7d17548f1e25f97a62828c4/template_tutorial.ipynb index ad085456ff..d5dd2551ee 100644 --- a/_downloads/4355e2cef7d17548f1e25f97a62828c4/template_tutorial.ipynb +++ b/_downloads/4355e2cef7d17548f1e25f97a62828c4/template_tutorial.ipynb @@ -31,7 +31,7 @@ { "cell_type": "code", "execution_count": null, - "id": "4ffc4dc0", + "id": "41f06fa1", "metadata": {}, "outputs": [], "source": [ @@ -47,7 +47,7 @@ }, { "cell_type": "markdown", - "id": "146701e4", + "id": "a8f13a31", "metadata": {}, "source": [ "\n", diff --git a/_downloads/63a0f0fc7b3ffb15d3a5ac8db3d521ee/tensors_deeper_tutorial.ipynb b/_downloads/63a0f0fc7b3ffb15d3a5ac8db3d521ee/tensors_deeper_tutorial.ipynb index c911e7fa66..821328bcb4 100644 --- a/_downloads/63a0f0fc7b3ffb15d3a5ac8db3d521ee/tensors_deeper_tutorial.ipynb +++ b/_downloads/63a0f0fc7b3ffb15d3a5ac8db3d521ee/tensors_deeper_tutorial.ipynb @@ -34,7 +34,7 @@ { "cell_type": "code", "execution_count": null, - "id": "7ddb23c2", + "id": "e9a0317a", "metadata": {}, "outputs": [], "source": [ @@ -50,7 +50,7 @@ }, { "cell_type": "markdown", - "id": "35c87e1d", + "id": "cb107f47", "metadata": {}, "source": [ "\n", diff --git a/_downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb b/_downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb index 8d1821a063..58fe4ee2ce 100644 --- a/_downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb +++ b/_downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb @@ -35,7 +35,7 @@ { "cell_type": "code", "execution_count": null, - "id": "f165baaf", + "id": "9e2a7722", "metadata": {}, "outputs": [], "source": [ @@ -51,7 +51,7 @@ }, { "cell_type": "markdown", - "id": "db68e714", + "id": "a175d8e7", "metadata": {}, "source": [ "\n", diff --git a/_downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb b/_downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb index 99d9f26dda..16d9254cf1 100644 --- a/_downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb +++ b/_downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb @@ -37,7 +37,7 @@ { "cell_type": "code", "execution_count": null, - "id": "0c9495e6", + "id": "cea1749f", "metadata": {}, "outputs": [], "source": [ @@ -53,7 +53,7 @@ }, { "cell_type": "markdown", - "id": "765def5b", + "id": "9b221904", "metadata": {}, "source": [ "\n", diff --git a/_downloads/e2e556f6b4693c2cef716dd7f40caaf6/tensorboardyt_tutorial.ipynb b/_downloads/e2e556f6b4693c2cef716dd7f40caaf6/tensorboardyt_tutorial.ipynb index b8cda3a84d..98c5fa212b 100644 --- a/_downloads/e2e556f6b4693c2cef716dd7f40caaf6/tensorboardyt_tutorial.ipynb +++ b/_downloads/e2e556f6b4693c2cef716dd7f40caaf6/tensorboardyt_tutorial.ipynb @@ -35,7 +35,7 @@ { "cell_type": "code", "execution_count": null, - "id": "657d4594", + "id": "34ea9b0f", "metadata": {}, "outputs": [], "source": [ @@ -51,7 +51,7 @@ }, { "cell_type": "markdown", - "id": 
"ba7c212e", + "id": "a5bc0df0", "metadata": {}, "source": [ "\n", diff --git a/_downloads/ed9d4f94afb79f7dada6742a06c486a5/autogradyt_tutorial.ipynb b/_downloads/ed9d4f94afb79f7dada6742a06c486a5/autogradyt_tutorial.ipynb index 1404b20bb7..9df7e529f6 100644 --- a/_downloads/ed9d4f94afb79f7dada6742a06c486a5/autogradyt_tutorial.ipynb +++ b/_downloads/ed9d4f94afb79f7dada6742a06c486a5/autogradyt_tutorial.ipynb @@ -34,7 +34,7 @@ { "cell_type": "code", "execution_count": null, - "id": "2424e425", + "id": "99f4bac3", "metadata": {}, "outputs": [], "source": [ @@ -50,7 +50,7 @@ }, { "cell_type": "markdown", - "id": "ec6ff4d4", + "id": "17dc5c5a", "metadata": {}, "source": [ "\n", diff --git a/_downloads/fe726e041160526cf828806536922cf6/modelsyt_tutorial.ipynb b/_downloads/fe726e041160526cf828806536922cf6/modelsyt_tutorial.ipynb index eff388e25d..b188778e77 100644 --- a/_downloads/fe726e041160526cf828806536922cf6/modelsyt_tutorial.ipynb +++ b/_downloads/fe726e041160526cf828806536922cf6/modelsyt_tutorial.ipynb @@ -34,7 +34,7 @@ { "cell_type": "code", "execution_count": null, - "id": "babb6898", + "id": "bb07cc8e", "metadata": {}, "outputs": [], "source": [ @@ -50,7 +50,7 @@ }, { "cell_type": "markdown", - "id": "54aab864", + "id": "d4ef5611", "metadata": {}, "source": [ "\n", diff --git a/_images/sphx_glr_coding_ddpg_001.png b/_images/sphx_glr_coding_ddpg_001.png index d231dffd0b..cf9dff5ac7 100644 Binary files a/_images/sphx_glr_coding_ddpg_001.png and b/_images/sphx_glr_coding_ddpg_001.png differ diff --git a/_images/sphx_glr_dqn_with_rnn_tutorial_001.png b/_images/sphx_glr_dqn_with_rnn_tutorial_001.png index 6124260438..5292f46462 100644 Binary files a/_images/sphx_glr_dqn_with_rnn_tutorial_001.png and b/_images/sphx_glr_dqn_with_rnn_tutorial_001.png differ diff --git a/_images/sphx_glr_neural_style_tutorial_004.png b/_images/sphx_glr_neural_style_tutorial_004.png index 7d86d83fdd..3ca654ec86 100644 Binary files a/_images/sphx_glr_neural_style_tutorial_004.png and b/_images/sphx_glr_neural_style_tutorial_004.png differ diff --git a/_images/sphx_glr_reinforcement_ppo_001.png b/_images/sphx_glr_reinforcement_ppo_001.png index 1be3fe0b25..ea93ac4941 100644 Binary files a/_images/sphx_glr_reinforcement_ppo_001.png and b/_images/sphx_glr_reinforcement_ppo_001.png differ diff --git a/_images/sphx_glr_reinforcement_q_learning_001.png b/_images/sphx_glr_reinforcement_q_learning_001.png index 559586c5ea..8a2ef37b62 100644 Binary files a/_images/sphx_glr_reinforcement_q_learning_001.png and b/_images/sphx_glr_reinforcement_q_learning_001.png differ diff --git a/_images/sphx_glr_spatial_transformer_tutorial_001.png b/_images/sphx_glr_spatial_transformer_tutorial_001.png index 5d4fddf7d8..dbc64859b7 100644 Binary files a/_images/sphx_glr_spatial_transformer_tutorial_001.png and b/_images/sphx_glr_spatial_transformer_tutorial_001.png differ diff --git a/_images/sphx_glr_torchvision_tutorial_002.png b/_images/sphx_glr_torchvision_tutorial_002.png index 38e720e327..e531567d7f 100644 Binary files a/_images/sphx_glr_torchvision_tutorial_002.png and b/_images/sphx_glr_torchvision_tutorial_002.png differ diff --git a/_sources/advanced/coding_ddpg.rst.txt b/_sources/advanced/coding_ddpg.rst.txt index e68e567645..d7db549d18 100644 --- a/_sources/advanced/coding_ddpg.rst.txt +++ b/_sources/advanced/coding_ddpg.rst.txt @@ -1632,26 +1632,26 @@ modules we need. 0%| | 0/10000 [00:00 + @@ -513,162 +513,199 @@ up our dataset. 
Downloading tokenizer_config.json: 0%| | 0.00/49.0 [00:00 @@ -1493,16 +1530,16 @@ zero-shot pruning, or pruning without fine-tuning / retraining. 0%| | 0/43 [00:00 + @@ -806,7 +806,7 @@ https://colab.research.google.com/drive/1HiICg6jRkBnr5hvK2-VnMi88Vi9pUzEJ .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.220 seconds) + **Total running time of the script:** ( 0 minutes 0.219 seconds) .. _sphx_glr_download_beginner_Intro_to_TorchScript_tutorial.py: diff --git a/_sources/beginner/basics/autogradqs_tutorial.rst.txt b/_sources/beginner/basics/autogradqs_tutorial.rst.txt index 9328d21933..b7d4dab5a0 100644 --- a/_sources/beginner/basics/autogradqs_tutorial.rst.txt +++ b/_sources/beginner/basics/autogradqs_tutorial.rst.txt @@ -113,8 +113,8 @@ documentation `__. .. code-block:: none - Gradient function for z = - Gradient function for loss = + Gradient function for z = + Gradient function for loss = diff --git a/_sources/beginner/basics/buildmodel_tutorial.rst.txt b/_sources/beginner/basics/buildmodel_tutorial.rst.txt index 05559b84e3..2f0fef9a00 100644 --- a/_sources/beginner/basics/buildmodel_tutorial.rst.txt +++ b/_sources/beginner/basics/buildmodel_tutorial.rst.txt @@ -482,7 +482,7 @@ Further Reading .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.303 seconds) + **Total running time of the script:** ( 0 minutes 0.322 seconds) .. _sphx_glr_download_beginner_basics_buildmodel_tutorial.py: diff --git a/_sources/beginner/basics/data_tutorial.rst.txt b/_sources/beginner/basics/data_tutorial.rst.txt index 3825ba91bd..ec9fc72954 100644 --- a/_sources/beginner/basics/data_tutorial.rst.txt +++ b/_sources/beginner/basics/data_tutorial.rst.txt @@ -103,77 +103,43 @@ We load the `FashionMNIST Dataset `_. Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz 0%| | 0/26421880 [00:00`_. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 1 minutes 4.912 seconds) + **Total running time of the script:** ( 1 minutes 3.247 seconds) .. _sphx_glr_download_beginner_basics_quickstart_tutorial.py: diff --git a/_sources/beginner/basics/saveloadrun_tutorial.rst.txt b/_sources/beginner/basics/saveloadrun_tutorial.rst.txt index 101bfcdbbc..b17bb4646b 100644 --- a/_sources/beginner/basics/saveloadrun_tutorial.rst.txt +++ b/_sources/beginner/basics/saveloadrun_tutorial.rst.txt @@ -76,39 +76,38 @@ method: Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg16-397923af.pth 0%| | 0.00/528M [00:00 - - + + + @@ -436,7 +436,7 @@ implements all these methods. Using it is very simple: .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.067 seconds) + **Total running time of the script:** ( 0 minutes 0.062 seconds) .. _sphx_glr_download_beginner_blitz_neural_networks_tutorial.py: diff --git a/_sources/beginner/chatbot_tutorial.rst.txt b/_sources/beginner/chatbot_tutorial.rst.txt index 3343b9741d..b928539f34 100644 --- a/_sources/beginner/chatbot_tutorial.rst.txt +++ b/_sources/beginner/chatbot_tutorial.rst.txt @@ -5767,7 +5767,7 @@ in PyTorch! .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 5 minutes 23.844 seconds) + **Total running time of the script:** ( 5 minutes 51.748 seconds) .. 
_sphx_glr_download_beginner_chatbot_tutorial.py: diff --git a/_sources/beginner/data_loading_tutorial.rst.txt b/_sources/beginner/data_loading_tutorial.rst.txt index f9396d9fe9..7968d2cf65 100644 --- a/_sources/beginner/data_loading_tutorial.rst.txt +++ b/_sources/beginner/data_loading_tutorial.rst.txt @@ -63,7 +63,7 @@ installed: .. code-block:: none - + @@ -661,7 +661,7 @@ For an example with training code, please see .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 2.801 seconds) + **Total running time of the script:** ( 0 minutes 2.734 seconds) .. _sphx_glr_download_beginner_data_loading_tutorial.py: diff --git a/_sources/beginner/dcgan_faces_tutorial.rst.txt b/_sources/beginner/dcgan_faces_tutorial.rst.txt index a413470eb1..8e760aa809 100644 --- a/_sources/beginner/dcgan_faces_tutorial.rst.txt +++ b/_sources/beginner/dcgan_faces_tutorial.rst.txt @@ -1284,42 +1284,42 @@ animation.
- +
- + oninput="anim5c4a5c884f6a4b178d66187683458b26.set_frame(parseInt(this.value));">
@@ -1329,9 +1329,9 @@ animation. /* Instantiate the Animation class. */ /* The IDs given should match those used in the template above. */ (function() { - var img_id = "_anim_img36808508b2b04dc9aa733cbf377e7454"; - var slider_id = "_anim_slider36808508b2b04dc9aa733cbf377e7454"; - var loop_select_id = "_anim_loop_select36808508b2b04dc9aa733cbf377e7454"; + var img_id = "_anim_img5c4a5c884f6a4b178d66187683458b26"; + var slider_id = "_anim_slider5c4a5c884f6a4b178d66187683458b26"; + var loop_select_id = "_anim_loop_select5c4a5c884f6a4b178d66187683458b26"; var frames = new Array(17); frames[0] = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAyAAAAMgCAYAAADbcAZoAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90\ @@ -233662,7 +233662,7 @@ animation. /* set a timeout to make sure all the above elements are created before the object is initialized. */ setTimeout(function() { - anim36808508b2b04dc9aa733cbf377e7454 = new Animation(frames, img_id, slider_id, 1000.0, + anim5c4a5c884f6a4b178d66187683458b26 = new Animation(frames, img_id, slider_id, 1000.0, loop_select_id); }, 0); })() @@ -233735,7 +233735,7 @@ could go from here. You could: .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 6 minutes 59.057 seconds) + **Total running time of the script:** ( 6 minutes 57.402 seconds) .. _sphx_glr_download_beginner_dcgan_faces_tutorial.py: diff --git a/_sources/beginner/deploy_seq2seq_hybrid_frontend_tutorial.rst.txt b/_sources/beginner/deploy_seq2seq_hybrid_frontend_tutorial.rst.txt index 6b3d7f37af..9d1500e70d 100644 --- a/_sources/beginner/deploy_seq2seq_hybrid_frontend_tutorial.rst.txt +++ b/_sources/beginner/deploy_seq2seq_hybrid_frontend_tutorial.rst.txt @@ -1135,7 +1135,7 @@ of torch.save(model, PATH). .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.736 seconds) + **Total running time of the script:** ( 0 minutes 0.733 seconds) .. _sphx_glr_download_beginner_deploy_seq2seq_hybrid_frontend_tutorial.py: diff --git a/_sources/beginner/examples_nn/polynomial_nn.rst.txt b/_sources/beginner/examples_nn/polynomial_nn.rst.txt index 76c921d76a..001b137c4b 100644 --- a/_sources/beginner/examples_nn/polynomial_nn.rst.txt +++ b/_sources/beginner/examples_nn/polynomial_nn.rst.txt @@ -144,7 +144,7 @@ input and may have some trainable weights. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.891 seconds) + **Total running time of the script:** ( 0 minutes 0.849 seconds) .. _sphx_glr_download_beginner_examples_nn_polynomial_nn.py: diff --git a/_sources/beginner/examples_tensor/polynomial_numpy.rst.txt b/_sources/beginner/examples_tensor/polynomial_numpy.rst.txt index 619a3e88b8..1e19f8f9b8 100644 --- a/_sources/beginner/examples_tensor/polynomial_numpy.rst.txt +++ b/_sources/beginner/examples_tensor/polynomial_numpy.rst.txt @@ -113,7 +113,7 @@ generic numeric computations. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.342 seconds) + **Total running time of the script:** ( 0 minutes 0.353 seconds) .. _sphx_glr_download_beginner_examples_tensor_polynomial_numpy.py: diff --git a/_sources/beginner/examples_tensor/polynomial_tensor.rst.txt b/_sources/beginner/examples_tensor/polynomial_tensor.rst.txt index 3f389d448b..9fcdb56776 100644 --- a/_sources/beginner/examples_tensor/polynomial_tensor.rst.txt +++ b/_sources/beginner/examples_tensor/polynomial_tensor.rst.txt @@ -123,7 +123,7 @@ just cast the Tensor to a cuda datatype. .. 
rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.328 seconds) + **Total running time of the script:** ( 0 minutes 0.343 seconds) .. _sphx_glr_download_beginner_examples_tensor_polynomial_tensor.py: diff --git a/_sources/beginner/fgsm_tutorial.rst.txt b/_sources/beginner/fgsm_tutorial.rst.txt index de000a2279..c3eee78706 100644 --- a/_sources/beginner/fgsm_tutorial.rst.txt +++ b/_sources/beginner/fgsm_tutorial.rst.txt @@ -180,7 +180,7 @@ follows: .. code-block:: none - + @@ -267,7 +267,7 @@ pretrained weights. Downloading https://ossci-datasets.s3.amazonaws.com/mnist/train-images-idx3-ubyte.gz to ../data/MNIST/raw/train-images-idx3-ubyte.gz 0%| | 0/9912422 [00:00\n', 'nokia\n', 'ec\n', 'virgin\n', '2011\n'] Downloading builder script: 0%| | 0.00/5.02k [00:00] + [] @@ -363,17 +363,17 @@ to the function with no history of its own. .. code-block:: none d: - - ((, 0), (None, 0)) - ((, 0), (None, 0)) - ((, 0),) + + ((, 0), (None, 0)) + ((, 0), (None, 0)) + ((, 0),) () c: - + b: - + a: None @@ -417,7 +417,7 @@ call the ``backward()`` method on the output, and check the input’s -1.4142e+00, -1.0000e+00, -5.1764e-01, 2.3850e-08, 5.1764e-01, 1.0000e+00, 1.4142e+00, 1.7321e+00, 1.9319e+00, 2.0000e+00]) - [] + [] @@ -928,17 +928,17 @@ example usage: ------------------------------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls ------------------------------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ - cudaEventRecord 46.00% 9.178ms 46.00% 9.178ms 2.295us 0.000us 0.00% 0.000us 0.000us 4000 - aten::div 26.59% 5.306ms 26.59% 5.306ms 5.306us 15.947ms 50.13% 15.947ms 15.947us 1000 - aten::mul 26.37% 5.261ms 26.37% 5.261ms 5.261us 15.866ms 49.87% 15.866ms 15.866us 1000 - cudaGetDeviceProperties_v2 0.90% 179.000us 0.90% 179.000us 179.000us 0.000us 0.00% 0.000us 0.000us 1 - cudaDeviceSynchronize 0.08% 15.000us 0.08% 15.000us 15.000us 0.000us 0.00% 0.000us 0.000us 1 + cudaEventRecord 45.07% 9.215ms 45.07% 9.215ms 2.304us 0.000us 0.00% 0.000us 0.000us 4000 + aten::div 27.27% 5.575ms 27.27% 5.575ms 5.575us 16.286ms 50.14% 16.286ms 16.286us 1000 + aten::mul 26.58% 5.433ms 26.58% 5.433ms 5.433us 16.193ms 49.86% 16.193ms 16.193us 1000 + cudaGetDeviceProperties_v2 0.96% 196.000us 0.96% 196.000us 196.000us 0.000us 0.00% 0.000us 0.000us 1 + cudaDeviceSynchronize 0.06% 12.000us 0.06% 12.000us 12.000us 0.000us 0.00% 0.000us 0.000us 1 cudaStreamIsCapturing 0.04% 8.000us 0.04% 8.000us 2.000us 0.000us 0.00% 0.000us 0.000us 4 - cudaDeviceGetStreamPriorityRange 0.02% 4.000us 0.02% 4.000us 4.000us 0.000us 0.00% 0.000us 0.000us 1 + cudaDeviceGetStreamPriorityRange 0.01% 3.000us 0.01% 3.000us 3.000us 0.000us 0.00% 0.000us 0.000us 1 cudaGetDeviceCount 0.01% 2.000us 0.01% 2.000us 1.000us 0.000us 0.00% 0.000us 0.000us 2 ------------------------------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ - Self CPU time total: 19.953ms - Self CUDA time total: 31.813ms + Self CPU time total: 20.444ms + Self CUDA time total: 32.479ms @@ -1214,7 +1214,7 @@ API ` .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.785 seconds) + **Total running time of the script:** ( 0 minutes 0.803 seconds) .. 
_sphx_glr_download_beginner_introyt_autogradyt_tutorial.py: diff --git a/_sources/beginner/introyt/introyt1_tutorial.rst.txt b/_sources/beginner/introyt/introyt1_tutorial.rst.txt index fc695fd1c3..5760ef0c92 100644 --- a/_sources/beginner/introyt/introyt1_tutorial.rst.txt +++ b/_sources/beginner/introyt/introyt1_tutorial.rst.txt @@ -598,23 +598,22 @@ automobile, ship, truck): Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz 0%| | 0/170498071 [00:00 - tensor([[ 1.5243e-12, 0.0000e+00, 0.0000e+00, 1.4013e-45], - [ 1.7314e-09, 0.0000e+00, -1.0777e-21, 4.5769e-41], - [ 1.7314e-09, 0.0000e+00, 4.6195e-27, 0.0000e+00]]) + tensor([[-9.6404e-31, 4.5629e-41, 2.4354e-05, 0.0000e+00], + [ 2.0735e-37, 0.0000e+00, 0.0000e+00, 0.0000e+00], + [ 5.5904e+07, 0.0000e+00, -6.3661e-37, 4.5629e-41]]) @@ -271,17 +271,17 @@ have the ``torch.*_like()`` methods: .. code-block:: none torch.Size([2, 2, 3]) - tensor([[[-4.7069e+26, 4.5769e-41, 3.9978e-08], - [ 0.0000e+00, 7.8104e-38, 0.0000e+00]], + tensor([[[ 9.5868e-10, 0.0000e+00, 0.0000e+00], + [ 0.0000e+00, 8.9683e-44, 0.0000e+00]], - [[ 0.0000e+00, 0.0000e+00, 3.1337e-09], - [ 0.0000e+00, -2.9385e+20, 4.5769e-41]]]) + [[ 1.5695e-43, 0.0000e+00, 2.0005e+03], + [ 0.0000e+00, -6.3661e-37, 4.5629e-41]]]) torch.Size([2, 2, 3]) - tensor([[[ 0.0000e+00, 0.0000e+00, 1.4013e-45], - [ 1.4013e-45, 7.8104e-38, 0.0000e+00]], + tensor([[[ 1.5769e-31, 0.0000e+00, 7.4182e-13], + [ 0.0000e+00, 4.4842e-44, 0.0000e+00]], - [[ 0.0000e+00, 0.0000e+00, 3.5708e-11], - [ 0.0000e+00, -2.9385e+20, 4.5769e-41]]]) + [[ 1.5695e-43, 0.0000e+00, 1.3849e-31], + [ 0.0000e+00, -6.3661e-37, 4.5629e-41]]]) torch.Size([2, 2, 3]) tensor([[[0., 0., 0.], [0., 0., 0.]], @@ -1777,7 +1777,7 @@ are reflected in the other: .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.278 seconds) + **Total running time of the script:** ( 0 minutes 0.275 seconds) .. _sphx_glr_download_beginner_introyt_tensors_deeper_tutorial.py: diff --git a/_sources/beginner/introyt/trainingyt.rst.txt b/_sources/beginner/introyt/trainingyt.rst.txt index 5d147a8c1a..a87e8ea1c0 100644 --- a/_sources/beginner/introyt/trainingyt.rst.txt +++ b/_sources/beginner/introyt/trainingyt.rst.txt @@ -127,44 +127,88 @@ and download both training and validation data splits. Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to ./data/FashionMNIST/raw/train-images-idx3-ubyte.gz 0%| | 0/26421880 [00:00 + @@ -458,7 +458,7 @@ Pick up some real data and do a comparison! .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 11.459 seconds) + **Total running time of the script:** ( 0 minutes 10.515 seconds) .. _sphx_glr_download_beginner_nlp_advanced_tutorial.py: diff --git a/_sources/beginner/nlp/deep_learning_tutorial.rst.txt b/_sources/beginner/nlp/deep_learning_tutorial.rst.txt index 3c8316992e..ddc798e54d 100644 --- a/_sources/beginner/nlp/deep_learning_tutorial.rst.txt +++ b/_sources/beginner/nlp/deep_learning_tutorial.rst.txt @@ -73,7 +73,7 @@ output below is the mapping of the :math:`i`'th row of the input under .. code-block:: none - + @@ -561,7 +561,7 @@ has to offer. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.236 seconds) + **Total running time of the script:** ( 0 minutes 0.229 seconds) .. 
_sphx_glr_download_beginner_nlp_deep_learning_tutorial.py: diff --git a/_sources/beginner/nlp/pytorch_tutorial.rst.txt b/_sources/beginner/nlp/pytorch_tutorial.rst.txt index f9f993eedd..be55f56c53 100644 --- a/_sources/beginner/nlp/pytorch_tutorial.rst.txt +++ b/_sources/beginner/nlp/pytorch_tutorial.rst.txt @@ -49,7 +49,7 @@ let's look what we can do with tensors. .. code-block:: none - + @@ -386,7 +386,7 @@ created. Let's see it in action. .. code-block:: none tensor([5., 7., 9.], grad_fn=) - + @@ -421,7 +421,7 @@ But how does that help us compute a gradient? .. code-block:: none tensor(21., grad_fn=) - + @@ -536,7 +536,7 @@ successful programmer in deep learning. False False None - + True None @@ -578,7 +578,7 @@ with ``.requires_grad=True`` by wrapping the code block in .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.026 seconds) + **Total running time of the script:** ( 0 minutes 0.023 seconds) .. _sphx_glr_download_beginner_nlp_pytorch_tutorial.py: diff --git a/_sources/beginner/nlp/sequence_models_tutorial.rst.txt b/_sources/beginner/nlp/sequence_models_tutorial.rst.txt index 497d579282..3e78e31c70 100644 --- a/_sources/beginner/nlp/sequence_models_tutorial.rst.txt +++ b/_sources/beginner/nlp/sequence_models_tutorial.rst.txt @@ -90,7 +90,7 @@ Let's see a quick example. .. code-block:: none - + @@ -376,7 +376,7 @@ this LSTM. Hints: .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.786 seconds) + **Total running time of the script:** ( 0 minutes 0.845 seconds) .. _sphx_glr_download_beginner_nlp_sequence_models_tutorial.py: diff --git a/_sources/beginner/nlp/word_embeddings_tutorial.rst.txt b/_sources/beginner/nlp/word_embeddings_tutorial.rst.txt index efdeb6cb1b..2ccb80ff4e 100644 --- a/_sources/beginner/nlp/word_embeddings_tutorial.rst.txt +++ b/_sources/beginner/nlp/word_embeddings_tutorial.rst.txt @@ -195,7 +195,7 @@ indices are integers, not floats). .. code-block:: none - + @@ -343,9 +343,9 @@ examples and update the parameters with backpropagation. .. code-block:: none [(['forty', 'When'], 'winters'), (['winters', 'forty'], 'shall'), (['shall', 'winters'], 'besiege')] - [517.9455602169037, 515.3540275096893, 512.7791795730591, 510.2199420928955, 507.6762545108795, 505.1464672088623, 502.62786650657654, 500.1200931072235, 497.62413024902344, 495.1392092704773] - tensor([-0.8254, -0.4098, -2.3720, -0.0209, -0.3093, -1.0471, -0.9330, 0.2480, - 0.1792, 0.7622], grad_fn=) + [521.698427438736, 519.1963937282562, 516.7113490104675, 514.2416417598724, 511.7873330116272, 509.3473746776581, 506.92100834846497, 504.5057735443115, 502.10225200653076, 499.70783376693726] + tensor([-1.1033, -0.6981, 0.2359, 1.9145, 1.8357, 1.3259, -0.0693, 0.3463, + -0.6511, 1.5578], grad_fn=) @@ -439,14 +439,14 @@ tips: [(['are', 'We', 'to', 'study'], 'about'), (['about', 'are', 'study', 'the'], 'to'), (['to', 'about', 'the', 'idea'], 'study'), (['study', 'to', 'idea', 'of'], 'the'), (['the', 'study', 'of', 'a'], 'idea')] - tensor([15, 36, 5, 34]) + tensor([48, 14, 34, 7]) .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.759 seconds) + **Total running time of the script:** ( 0 minutes 0.738 seconds) .. 
_sphx_glr_download_beginner_nlp_word_embeddings_tutorial.py: diff --git a/_sources/beginner/nn_tutorial.rst.txt b/_sources/beginner/nn_tutorial.rst.txt index af0e5496eb..1cc0a7bdc3 100644 --- a/_sources/beginner/nn_tutorial.rst.txt +++ b/_sources/beginner/nn_tutorial.rst.txt @@ -1648,8 +1648,8 @@ You should find it runs faster now: .. code-block:: none - 0 0.18235382598638533 - 1 0.1703476063489914 + 0 0.18058827521800994 + 1 0.1704494299888611 @@ -1692,7 +1692,7 @@ what we've seen: .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 24.875 seconds) + **Total running time of the script:** ( 0 minutes 25.080 seconds) .. _sphx_glr_download_beginner_nn_tutorial.py: diff --git a/_sources/beginner/onnx/export_simple_model_to_onnx_tutorial.rst.txt b/_sources/beginner/onnx/export_simple_model_to_onnx_tutorial.rst.txt index f172f5091f..489a9aba6c 100644 --- a/_sources/beginner/onnx/export_simple_model_to_onnx_tutorial.rst.txt +++ b/_sources/beginner/onnx/export_simple_model_to_onnx_tutorial.rst.txt @@ -344,7 +344,7 @@ sit tight and have fun going through all of them to learn all there is about the .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 1.083 seconds) + **Total running time of the script:** ( 0 minutes 1.014 seconds) .. _sphx_glr_download_beginner_onnx_export_simple_model_to_onnx_tutorial.py: diff --git a/_sources/beginner/onnx/onnx_registry_tutorial.rst.txt b/_sources/beginner/onnx/onnx_registry_tutorial.rst.txt index 004b1196cc..24bab75dae 100644 --- a/_sources/beginner/onnx/onnx_registry_tutorial.rst.txt +++ b/_sources/beginner/onnx/onnx_registry_tutorial.rst.txt @@ -660,7 +660,7 @@ sit tight and have fun going through all of them to learn all there is about the .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.944 seconds) + **Total running time of the script:** ( 0 minutes 0.920 seconds) .. _sphx_glr_download_beginner_onnx_onnx_registry_tutorial.py: diff --git a/_sources/beginner/template_tutorial.rst.txt b/_sources/beginner/template_tutorial.rst.txt index 1962901a9f..c4d16117d8 100644 --- a/_sources/beginner/template_tutorial.rst.txt +++ b/_sources/beginner/template_tutorial.rst.txt @@ -134,7 +134,7 @@ Further Reading .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.023 seconds) + **Total running time of the script:** ( 0 minutes 0.021 seconds) .. _sphx_glr_download_beginner_template_tutorial.py: diff --git a/_sources/beginner/transfer_learning_tutorial.rst.txt b/_sources/beginner/transfer_learning_tutorial.rst.txt index 03d2b37e6f..a894982002 100644 --- a/_sources/beginner/transfer_learning_tutorial.rst.txt +++ b/_sources/beginner/transfer_learning_tutorial.rst.txt @@ -80,7 +80,7 @@ These two major transfer learning scenarios look as follows: .. code-block:: none - + @@ -377,8 +377,8 @@ Load a pretrained model and reset final fully connected layer. Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth 0%| | 0.00/44.7M [00:00Sobol False 16810.0 - 0.921492 + 0.897621 19 66 0.003182 @@ -1717,7 +1644,7 @@ an easy way to sanity check the optimization. Sobol False 21926.0 - 0.883322 + 0.879206 23 118 0.000145 @@ -1733,7 +1660,7 @@ an easy way to sanity check the optimization. Sobol True 37560.0 - 0.955581 + 0.954121 40 124 0.002745 @@ -1749,7 +1676,7 @@ an easy way to sanity check the optimization. 
Sobol False 14756.0 - 0.887623 + 0.892503 18 23 0.000166 @@ -1765,7 +1692,7 @@ an easy way to sanity check the optimization. Sobol True 71630.0 - 0.949668 + 0.952770 80 99 0.000642 @@ -1781,7 +1708,7 @@ an easy way to sanity check the optimization. Sobol False 13948.0 - 0.926390 + 0.925682 16 54 0.000444 @@ -1797,7 +1724,7 @@ an easy way to sanity check the optimization. Sobol False 24686.0 - 0.846794 + 0.855924 29 50 0.000177 @@ -1813,7 +1740,7 @@ an easy way to sanity check the optimization. Sobol False 18290.0 - 0.876969 + 0.877489 20 87 0.000119 @@ -1829,7 +1756,7 @@ an easy way to sanity check the optimization. Sobol False 20996.0 - 0.862716 + 0.723215 26 17 0.005245 @@ -1844,13 +1771,13 @@ an easy way to sanity check the optimization. COMPLETED BoTorch True - 33376.0 - 0.947089 - 36 - 112 - 0.001938 + 41030.0 + 0.960493 + 44 + 121 + 0.001528 3 - 0.125679 + 0.089977 64 @@ -1902,14 +1829,14 @@ validation accuracy. The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation. - [WARNING 06-24 22:10:01] ax.service.utils.report_utils: Column reason missing for all trials. Not appending column. + [WARNING 06-25 21:49:23] ax.service.utils.report_utils: Column reason missing for all trials. Not appending column. .. raw:: html
-
+


@@ -1954,7 +1881,7 @@ much easier to model than the validation accuracy (``val_acc``) metric.
-
+


@@ -1987,7 +1914,7 @@ as the hidden sizes increase.
-
+


@@ -2016,7 +1943,7 @@ is much larger).
-
+


@@ -2033,7 +1960,7 @@ for their help with integrating TorchX with Ax. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 17 minutes 13.247 seconds) + **Total running time of the script:** ( 16 minutes 27.058 seconds) .. _sphx_glr_download_intermediate_ax_multiobjective_nas_tutorial.py: diff --git a/_sources/intermediate/char_rnn_classification_tutorial.rst.txt b/_sources/intermediate/char_rnn_classification_tutorial.rst.txt index 61c8ea8c24..002210eefd 100644 --- a/_sources/intermediate/char_rnn_classification_tutorial.rst.txt +++ b/_sources/intermediate/char_rnn_classification_tutorial.rst.txt @@ -594,26 +594,26 @@ average of the loss. .. code-block:: none - 5000 5% (0m 33s) 2.2208 Horigome / Japanese ✓ - 10000 10% (1m 7s) 1.6752 Miazga / Japanese ✗ (Polish) - 15000 15% (1m 42s) 0.1778 Yukhvidov / Russian ✓ - 20000 20% (2m 16s) 1.5856 Mclaughlin / Irish ✗ (Scottish) - 25000 25% (2m 51s) 0.6552 Banh / Vietnamese ✓ - 30000 30% (3m 25s) 1.5547 Machado / Japanese ✗ (Portuguese) - 35000 35% (3m 59s) 0.0168 Fotopoulos / Greek ✓ - 40000 40% (4m 33s) 1.1464 Quirke / Irish ✓ - 45000 45% (5m 7s) 1.7532 Reier / French ✗ (German) - 50000 50% (5m 41s) 0.8413 Hou / Chinese ✓ - 55000 55% (6m 15s) 0.8587 Duan / Vietnamese ✗ (Chinese) - 60000 60% (6m 50s) 0.2047 Giang / Vietnamese ✓ - 65000 65% (7m 24s) 2.5534 Cober / French ✗ (Czech) - 70000 70% (7m 59s) 1.5163 Mateus / Arabic ✗ (Portuguese) - 75000 75% (8m 33s) 0.2217 Hamilton / Scottish ✓ - 80000 80% (9m 7s) 0.4456 Maessen / Dutch ✓ - 85000 85% (9m 41s) 0.0239 Gan / Chinese ✓ - 90000 90% (10m 15s) 0.0521 Bellomi / Italian ✓ - 95000 95% (10m 49s) 0.0867 Vozgov / Russian ✓ - 100000 100% (11m 23s) 0.2730 Tong / Vietnamese ✓ + 5000 5% (0m 34s) 2.2208 Horigome / Japanese ✓ + 10000 10% (1m 10s) 1.6752 Miazga / Japanese ✗ (Polish) + 15000 15% (1m 46s) 0.1778 Yukhvidov / Russian ✓ + 20000 20% (2m 21s) 1.5856 Mclaughlin / Irish ✗ (Scottish) + 25000 25% (2m 57s) 0.6552 Banh / Vietnamese ✓ + 30000 30% (3m 32s) 1.5547 Machado / Japanese ✗ (Portuguese) + 35000 35% (4m 8s) 0.0168 Fotopoulos / Greek ✓ + 40000 40% (4m 43s) 1.1464 Quirke / Irish ✓ + 45000 45% (5m 18s) 1.7532 Reier / French ✗ (German) + 50000 50% (5m 54s) 0.8413 Hou / Chinese ✓ + 55000 55% (6m 29s) 0.8587 Duan / Vietnamese ✗ (Chinese) + 60000 60% (7m 5s) 0.2047 Giang / Vietnamese ✓ + 65000 65% (7m 40s) 2.5534 Cober / French ✗ (Czech) + 70000 70% (8m 16s) 1.5163 Mateus / Arabic ✗ (Portuguese) + 75000 75% (8m 51s) 0.2217 Hamilton / Scottish ✓ + 80000 80% (9m 30s) 0.4456 Maessen / Dutch ✓ + 85000 85% (10m 6s) 0.0239 Gan / Chinese ✓ + 90000 90% (10m 41s) 0.0521 Bellomi / Italian ✓ + 95000 95% (11m 16s) 0.0867 Vozgov / Russian ✓ + 100000 100% (11m 52s) 0.2730 Tong / Vietnamese ✓ @@ -653,7 +653,7 @@ learning: .. code-block:: none - [] + [] @@ -844,7 +844,7 @@ Exercises .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 11 minutes 35.666 seconds) + **Total running time of the script:** ( 12 minutes 4.757 seconds) .. _sphx_glr_download_intermediate_char_rnn_classification_tutorial.py: diff --git a/_sources/intermediate/char_rnn_generation_tutorial.rst.txt b/_sources/intermediate/char_rnn_generation_tutorial.rst.txt index 415e1d18b0..f0cbae6867 100644 --- a/_sources/intermediate/char_rnn_generation_tutorial.rst.txt +++ b/_sources/intermediate/char_rnn_generation_tutorial.rst.txt @@ -464,26 +464,26 @@ in ``all_losses`` for plotting later. .. 
code-block:: none - 0m 38s (5000 5%) 3.1506 - 1m 18s (10000 10%) 2.5070 - 1m 58s (15000 15%) 3.3047 - 2m 37s (20000 20%) 2.4247 - 3m 18s (25000 25%) 2.6406 - 3m 57s (30000 30%) 2.0266 - 4m 37s (35000 35%) 2.6520 - 5m 16s (40000 40%) 2.4261 - 5m 56s (45000 45%) 2.2302 - 6m 36s (50000 50%) 1.6496 - 7m 15s (55000 55%) 2.7101 - 7m 54s (60000 60%) 2.5396 - 8m 34s (65000 65%) 2.5978 - 9m 13s (70000 70%) 1.6029 - 9m 53s (75000 75%) 0.9634 - 10m 33s (80000 80%) 3.0950 - 11m 12s (85000 85%) 2.0512 - 11m 52s (90000 90%) 2.5302 - 12m 32s (95000 95%) 3.2365 - 13m 12s (100000 100%) 1.7113 + 0m 39s (5000 5%) 3.1506 + 1m 19s (10000 10%) 2.5070 + 1m 59s (15000 15%) 3.3047 + 2m 39s (20000 20%) 2.4247 + 3m 20s (25000 25%) 2.6406 + 4m 0s (30000 30%) 2.0266 + 4m 40s (35000 35%) 2.6520 + 5m 20s (40000 40%) 2.4261 + 6m 0s (45000 45%) 2.2302 + 6m 41s (50000 50%) 1.6496 + 7m 22s (55000 55%) 2.7101 + 8m 3s (60000 60%) 2.5396 + 8m 43s (65000 65%) 2.5978 + 9m 23s (70000 70%) 1.6029 + 10m 4s (75000 75%) 0.9634 + 10m 44s (80000 80%) 3.0950 + 11m 24s (85000 85%) 2.0512 + 12m 4s (90000 90%) 2.5302 + 12m 45s (95000 95%) 3.2365 + 13m 25s (100000 100%) 1.7113 @@ -522,7 +522,7 @@ learning: .. code-block:: none - [] + [] @@ -641,7 +641,7 @@ Exercises .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 13 minutes 12.307 seconds) + **Total running time of the script:** ( 13 minutes 25.694 seconds) .. _sphx_glr_download_intermediate_char_rnn_generation_tutorial.py: diff --git a/_sources/intermediate/custom_function_conv_bn_tutorial.rst.txt b/_sources/intermediate/custom_function_conv_bn_tutorial.rst.txt index 07b0446484..fdf8b2c28e 100644 --- a/_sources/intermediate/custom_function_conv_bn_tutorial.rst.txt +++ b/_sources/intermediate/custom_function_conv_bn_tutorial.rst.txt @@ -568,22 +568,22 @@ allocate one fewer buffer per fused ``conv-bn`` pair. 
Test set: Average loss: 0.4197, Accuracy: 8681/10000 (87%) Train Epoch: 0 [0/60000 (0%)] Loss: 2.349030 - Train Epoch: 0 [4096/60000 (7%)] Loss: 7.435156 - Train Epoch: 0 [8192/60000 (13%)] Loss: 5.443537 - Train Epoch: 0 [12288/60000 (20%)] Loss: 2.457752 - Train Epoch: 0 [16384/60000 (27%)] Loss: 1.739522 - Train Epoch: 0 [20480/60000 (33%)] Loss: 1.448538 - Train Epoch: 0 [24576/60000 (40%)] Loss: 1.311784 - Train Epoch: 0 [28672/60000 (47%)] Loss: 1.149340 - Train Epoch: 0 [32768/60000 (53%)] Loss: 1.513815 - Train Epoch: 0 [36864/60000 (60%)] Loss: 1.240803 - Train Epoch: 0 [40960/60000 (67%)] Loss: 1.076550 - Train Epoch: 0 [45056/60000 (73%)] Loss: 0.894954 - Train Epoch: 0 [49152/60000 (80%)] Loss: 0.833993 - Train Epoch: 0 [53248/60000 (87%)] Loss: 0.730459 - Train Epoch: 0 [57344/60000 (93%)] Loss: 0.808739 - - Test set: Average loss: 0.4737, Accuracy: 8613/10000 (86%) + Train Epoch: 0 [4096/60000 (7%)] Loss: 7.435157 + Train Epoch: 0 [8192/60000 (13%)] Loss: 5.443542 + Train Epoch: 0 [12288/60000 (20%)] Loss: 2.457858 + Train Epoch: 0 [16384/60000 (27%)] Loss: 1.739216 + Train Epoch: 0 [20480/60000 (33%)] Loss: 1.448291 + Train Epoch: 0 [24576/60000 (40%)] Loss: 1.312150 + Train Epoch: 0 [28672/60000 (47%)] Loss: 1.145356 + Train Epoch: 0 [32768/60000 (53%)] Loss: 1.496005 + Train Epoch: 0 [36864/60000 (60%)] Loss: 1.251128 + Train Epoch: 0 [40960/60000 (67%)] Loss: 1.076815 + Train Epoch: 0 [45056/60000 (73%)] Loss: 0.892333 + Train Epoch: 0 [49152/60000 (80%)] Loss: 0.829717 + Train Epoch: 0 [53248/60000 (87%)] Loss: 0.740473 + Train Epoch: 0 [57344/60000 (93%)] Loss: 0.789344 + + Test set: Average loss: 0.4183, Accuracy: 8869/10000 (89%) cuDNN version: 8902 @@ -598,7 +598,7 @@ allocate one fewer buffer per fused ``conv-bn`` pair. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 37.611 seconds) + **Total running time of the script:** ( 0 minutes 36.520 seconds) .. _sphx_glr_download_intermediate_custom_function_conv_bn_tutorial.py: diff --git a/_sources/intermediate/dqn_with_rnn_tutorial.rst.txt b/_sources/intermediate/dqn_with_rnn_tutorial.rst.txt index f61e6c9f64..526a996746 100644 --- a/_sources/intermediate/dqn_with_rnn_tutorial.rst.txt +++ b/_sources/intermediate/dqn_with_rnn_tutorial.rst.txt @@ -803,15 +803,15 @@ every 50 data collection, and plot the results after training. 
device=cpu, is_shared=False) - 0%| | 50/1000000 [00:00<2:19:47, 119.22it/s] - 0%| | 50/1000000 [00:10<2:19:47, 119.22it/s] - steps: 10, loss_val: 0.0003, action_spread: tensor([ 8, 42]): 0%| | 50/1000000 [00:24<2:19:47, 119.22it/s] - steps: 10, loss_val: 0.0003, action_spread: tensor([ 8, 42]): 0%| | 100/1000000 [00:25<82:51:55, 3.35it/s] - steps: 18, loss_val: 0.0002, action_spread: tensor([39, 11]): 0%| | 100/1000000 [00:49<82:51:55, 3.35it/s] - steps: 18, loss_val: 0.0002, action_spread: tensor([39, 11]): 0%| | 150/1000000 [00:50<108:04:09, 2.57it/s] - steps: 18, loss_val: 0.0002, action_spread: tensor([10, 40]): 0%| | 150/1000000 [01:14<108:04:09, 2.57it/s] - steps: 18, loss_val: 0.0002, action_spread: tensor([10, 40]): 0%| | 200/1000000 [01:15<119:46:02, 2.32it/s] - steps: 23, loss_val: 0.0003, action_spread: tensor([25, 25]): 0%| | 200/1000000 [01:39<119:46:02, 2.32it/s] + 0%| | 50/1000000 [00:00<2:26:56, 113.41it/s] + 0%| | 50/1000000 [00:15<2:26:56, 113.41it/s] + steps: 16, loss_val: 0.0002, action_spread: tensor([ 8, 42]): 0%| | 50/1000000 [00:25<2:26:56, 113.41it/s] + steps: 16, loss_val: 0.0002, action_spread: tensor([ 8, 42]): 0%| | 100/1000000 [00:26<85:13:26, 3.26it/s] + steps: 16, loss_val: 0.0003, action_spread: tensor([40, 10]): 0%| | 100/1000000 [00:50<85:13:26, 3.26it/s] + steps: 16, loss_val: 0.0003, action_spread: tensor([40, 10]): 0%| | 150/1000000 [00:51<110:30:44, 2.51it/s] + steps: 16, loss_val: 0.0003, action_spread: tensor([ 8, 42]): 0%| | 150/1000000 [01:16<110:30:44, 2.51it/s] + steps: 16, loss_val: 0.0003, action_spread: tensor([ 8, 42]): 0%| | 200/1000000 [01:16<122:26:43, 2.27it/s] + steps: 16, loss_val: 0.0003, action_spread: tensor([43, 7]): 0%| | 200/1000000 [01:41<122:26:43, 2.27it/s] @@ -867,7 +867,7 @@ Further Reading .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 1 minutes 44.549 seconds) + **Total running time of the script:** ( 1 minutes 46.960 seconds) .. _sphx_glr_download_intermediate_dqn_with_rnn_tutorial.py: diff --git a/_sources/intermediate/ensembling.rst.txt b/_sources/intermediate/ensembling.rst.txt index 29e7fe5c4d..bfef212f9c 100644 --- a/_sources/intermediate/ensembling.rst.txt +++ b/_sources/intermediate/ensembling.rst.txt @@ -306,13 +306,13 @@ Curious about performance numbers? Here's how the numbers look. .. code-block:: none - Predictions without vmap + Predictions without vmap [model(minibatch) for model, minibatch in zip(models, minibatches)] - 2.23 ms + 2.08 ms 1 measurement, 100 runs , 1 thread - Predictions with vmap + Predictions with vmap vmap(fmodel)(params, buffers, minibatches) - 844.87 us + 795.37 us 1 measurement, 100 runs , 1 thread @@ -332,7 +332,7 @@ on GitHub. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.857 seconds) + **Total running time of the script:** ( 0 minutes 0.835 seconds) .. _sphx_glr_download_intermediate_ensembling.py: diff --git a/_sources/intermediate/forward_ad_usage.rst.txt b/_sources/intermediate/forward_ad_usage.rst.txt index b1af7e77bd..d534b7cfcc 100644 --- a/_sources/intermediate/forward_ad_usage.rst.txt +++ b/_sources/intermediate/forward_ad_usage.rst.txt @@ -370,7 +370,7 @@ to the module. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.158 seconds) + **Total running time of the script:** ( 0 minutes 0.152 seconds) .. 
_sphx_glr_download_intermediate_forward_ad_usage.py: diff --git a/_sources/intermediate/fx_profiling_tutorial.rst.txt b/_sources/intermediate/fx_profiling_tutorial.rst.txt index 055929f454..4d48d3daae 100644 --- a/_sources/intermediate/fx_profiling_tutorial.rst.txt +++ b/_sources/intermediate/fx_profiling_tutorial.rst.txt @@ -477,77 +477,77 @@ characteristics of our ResNet18 model; Op type Op Average runtime (s) Pct total runtime ------------- --------------------- --------------------- ------------------- - call_module maxpool 0.00593972 8.94847 - call_module conv1 0.00487566 7.34541 - call_module layer4_0_conv2 0.00375938 5.66369 - call_module layer1_0_conv1 0.00349069 5.25889 - call_module layer1_1_conv2 0.00332069 5.00278 - call_module layer4_1_conv2 0.00318336 4.79589 - call_module layer4_1_conv1 0.00316954 4.77506 - call_module layer1_1_conv1 0.00298667 4.49956 - call_module layer2_1_conv1 0.00292873 4.41228 - call_module layer3_1_conv2 0.00290322 4.37384 - call_module layer3_1_conv1 0.00282097 4.24992 - call_module layer3_0_conv2 0.00270724 4.07859 - call_module layer1_0_conv2 0.00268865 4.05057 - call_module layer2_1_conv2 0.00258994 3.90187 - call_module layer2_0_conv2 0.00253654 3.82141 - call_module layer4_0_conv1 0.00232077 3.49635 - call_module layer2_0_conv1 0.00205183 3.09118 - call_module layer3_0_conv1 0.0017519 2.63932 - call_module bn1 0.00159454 2.40226 - call_module layer2_0_downsample_0 0.00100255 1.51039 - call_module layer4_0_downsample_0 0.000481844 0.725921 - call_module layer3_0_downsample_0 0.000479698 0.722688 - call_function add 0.000430346 0.648336 - call_function add_3 0.000252485 0.380381 - call_module relu 0.000247717 0.373197 - call_module layer1_0_bn1 0.000201225 0.303155 - call_module fc 0.000184298 0.277653 - call_module layer1_1_bn2 0.000181913 0.274061 - call_module avgpool 0.000181913 0.274061 - call_module layer1_1_bn1 0.00016737 0.252151 - call_module layer1_0_bn2 0.000166655 0.251073 - call_function add_1 0.000150442 0.226648 - call_module layer2_1_bn1 0.000145674 0.219464 - call_module layer2_0_bn1 0.000144482 0.217669 - call_module layer2_0_downsample_1 0.000144005 0.21695 - call_module layer4_1_bn2 0.000136614 0.205815 - call_module layer2_0_bn2 0.000136375 0.205456 - call_module layer2_1_bn2 0.000135422 0.204019 - call_module layer3_1_bn1 0.000135422 0.204019 - call_module layer1_1_relu_1 0.000135183 0.20366 - call_module layer3_1_bn2 0.000133991 0.201864 - call_module layer4_0_bn1 0.000129223 0.19468 - call_module layer3_0_bn2 0.000127316 0.191807 - call_module layer4_0_bn2 0.000126839 0.191089 - call_module layer4_1_bn1 0.000125647 0.189293 - call_module layer3_0_bn1 0.000124931 0.188215 - call_module layer3_0_downsample_1 0.000124216 0.187137 - call_module layer4_0_downsample_1 0.000122309 0.184264 - call_module layer1_0_relu 0.000119448 0.179954 - call_module layer1_0_relu_1 0.000103951 0.156606 - call_module layer1_1_relu 0.000102758 0.15481 - call_module layer2_0_relu 8.96454e-05 0.135055 - call_module layer4_0_relu 8.89301e-05 0.133977 - call_module layer4_1_relu 8.86917e-05 0.133618 - call_function add_2 8.72612e-05 0.131463 - call_module layer2_1_relu 8.65459e-05 0.130386 - call_module layer2_1_relu_1 8.51154e-05 0.12823 - call_module layer2_0_relu_1 8.34465e-05 0.125716 - call_module layer3_0_relu 7.98702e-05 0.120328 - call_module layer3_1_relu 7.77245e-05 0.117096 - call_module layer4_0_relu_1 7.65324e-05 0.1153 - call_function add_7 7.60555e-05 0.114581 - call_function add_5 7.55787e-05 0.113863 - call_module layer3_0_relu_1 
7.51019e-05 0.113145 - call_module layer4_1_relu_1 7.51019e-05 0.113145 - call_module layer3_1_relu_1 7.41482e-05 0.111708 - call_function add_4 7.31945e-05 0.110271 - call_function add_6 7.29561e-05 0.109912 - call_function flatten 5.24521e-05 0.0790216 - placeholder x 2.76566e-05 0.0416659 - output output 1.97887e-05 0.0298127 + call_module maxpool 0.00575376 8.70941 + call_module layer4_1_conv2 0.00491071 7.4333 + call_module conv1 0.00479746 7.26187 + call_module layer4_0_conv2 0.00369048 5.58625 + call_module layer1_0_conv1 0.00338578 5.12503 + call_module layer1_1_conv2 0.00320458 4.85075 + call_module layer4_1_conv1 0.00319433 4.83523 + call_module layer1_0_conv2 0.00295234 4.46893 + call_module layer2_1_conv1 0.0028708 4.3455 + call_module layer3_1_conv1 0.00283694 4.29426 + call_module layer3_1_conv2 0.00275111 4.16434 + call_module layer3_0_conv2 0.00265932 4.02539 + call_module layer1_1_conv1 0.00259066 3.92146 + call_module layer2_1_conv2 0.0025208 3.81571 + call_module layer2_0_conv2 0.00239658 3.62769 + call_module layer4_0_conv1 0.00222993 3.37543 + call_module layer3_0_conv1 0.00175953 2.66338 + call_module bn1 0.00150132 2.27254 + call_module layer2_0_conv1 0.00146842 2.22274 + call_module layer2_0_downsample_0 0.000971079 1.46991 + call_module layer3_0_downsample_0 0.000450134 0.681365 + call_module layer4_0_downsample_0 0.000447512 0.677395 + call_function add 0.000422955 0.640223 + call_function add_1 0.00041914 0.634449 + call_module relu 0.000286579 0.433793 + call_function add_3 0.000248194 0.375689 + call_module layer1_0_bn1 0.000195265 0.295571 + call_module fc 0.000192881 0.291962 + call_module layer1_1_bn2 0.000168562 0.255151 + call_module layer1_0_bn2 0.000168085 0.254429 + call_module avgpool 0.000153065 0.231693 + call_module layer1_1_bn1 0.000148773 0.225197 + call_module layer4_1_bn2 0.000141859 0.214731 + call_module layer2_0_downsample_1 0.000134468 0.203543 + call_module layer2_1_bn1 0.000132799 0.201017 + call_module layer2_0_bn2 0.000128746 0.194882 + call_module layer1_1_relu_1 0.000128508 0.194521 + call_module layer3_1_bn1 0.000123501 0.186942 + call_module layer2_0_bn1 0.000122309 0.185138 + call_module layer3_1_bn2 0.000121117 0.183333 + call_module layer2_1_bn2 0.000119686 0.181168 + call_module layer4_1_bn1 0.000119448 0.180807 + call_module layer3_0_bn1 0.000116348 0.176115 + call_module layer4_0_bn2 0.000114679 0.173589 + call_module layer1_0_relu 0.000113726 0.172146 + call_module layer3_0_bn2 0.00011301 0.171063 + call_module layer4_0_bn1 0.000112057 0.169619 + call_module layer3_0_downsample_1 0.00011158 0.168898 + call_module layer4_0_downsample_1 0.000108957 0.164928 + call_module layer1_1_relu 9.84669e-05 0.149049 + call_module layer1_0_relu_1 9.75132e-05 0.147605 + call_module layer4_1_relu 8.63075e-05 0.130643 + call_function add_2 8.05855e-05 0.121982 + call_module layer2_1_relu 8.03471e-05 0.121621 + call_module layer4_0_relu 8.01086e-05 0.12126 + call_function add_7 8.01086e-05 0.12126 + call_module layer2_0_relu 7.96318e-05 0.120538 + call_module layer2_1_relu_1 7.82013e-05 0.118373 + call_module layer2_0_relu_1 7.67708e-05 0.116207 + call_module layer3_1_relu 7.39098e-05 0.111877 + call_module layer3_0_relu 7.31945e-05 0.110794 + call_function add_4 7.29561e-05 0.110433 + call_module layer4_1_relu_1 7.27177e-05 0.110072 + call_function add_6 7.15256e-05 0.108268 + call_module layer4_0_relu_1 7.10487e-05 0.107546 + call_module layer3_1_relu_1 6.98566e-05 0.105741 + call_function add_5 6.96182e-05 0.105381 + call_module 
layer3_0_relu_1 6.93798e-05 0.10502 + call_function flatten 4.29153e-05 0.0649606 + placeholder x 2.5034e-05 0.0378937 + output output 1.85966e-05 0.0281496 @@ -580,7 +580,7 @@ you might have. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.413 seconds) + **Total running time of the script:** ( 0 minutes 0.418 seconds) .. _sphx_glr_download_intermediate_fx_profiling_tutorial.py: diff --git a/_sources/intermediate/inductor_debug_cpu.rst.txt b/_sources/intermediate/inductor_debug_cpu.rst.txt index cf34a042b7..cd880855c4 100644 --- a/_sources/intermediate/inductor_debug_cpu.rst.txt +++ b/_sources/intermediate/inductor_debug_cpu.rst.txt @@ -546,12 +546,12 @@ We set following environment variable as a best practice to benchmark on Intel(R Downloading config.json: 0%| | 0.00/765 [00:00 + compute_jac(xp) - 2.49 ms + 2.41 ms 1 measurement, 500 runs , 1 thread - + jacrev(predict, argnums=2)(weight, bias, x) - 683.11 us + 678.92 us 1 measurement, 500 runs , 1 thread @@ -292,7 +292,7 @@ Let's do a relative performance comparison of the above with our ``get_perf`` fu .. code-block:: none - Performance delta: 72.5988 percent improvement with vmap + Performance delta: 71.8454 percent improvement with vmap @@ -394,13 +394,13 @@ First, let's benchmark with more inputs than outputs: .. code-block:: none torch.Size([2048, 32]) - jacfwd time: + jacfwd time: jacfwd(predict, argnums=2)(weight, bias, x) - 1.24 ms + 1.23 ms 1 measurement, 500 runs , 1 thread - jacrev time: + jacrev time: jacrev(predict, argnums=2)(weight, bias, x) - 11.04 ms + 13.41 ms 1 measurement, 500 runs , 1 thread @@ -425,7 +425,7 @@ and then do a relative benchmark: .. code-block:: none - Performance delta: 789.9596 percent improvement with jacrev + Performance delta: 985.8145 percent improvement with jacrev @@ -462,13 +462,13 @@ and now the reverse - more outputs (M) than inputs (N): .. code-block:: none - jacfwd time: + jacfwd time: jacfwd(predict, argnums=2)(weight, bias, x) - 6.43 ms + 6.73 ms 1 measurement, 500 runs , 1 thread - jacrev time: + jacrev time: jacrev(predict, argnums=2)(weight, bias, x) - 807.59 us + 783.92 us 1 measurement, 500 runs , 1 thread @@ -493,7 +493,7 @@ and a relative performance comparison: .. code-block:: none - Performance delta: 696.1942 percent improvement with jacfwd + Performance delta: 758.4283 percent improvement with jacfwd @@ -749,7 +749,7 @@ instead compose reverse-mode AD with reverse-mode AD: .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 12.589 seconds) + **Total running time of the script:** ( 0 minutes 13.847 seconds) .. _sphx_glr_download_intermediate_jacobians_hessians.py: diff --git a/_sources/intermediate/mario_rl_tutorial.rst.txt b/_sources/intermediate/mario_rl_tutorial.rst.txt index 0c4f159ee7..017ca9286f 100644 --- a/_sources/intermediate/mario_rl_tutorial.rst.txt +++ b/_sources/intermediate/mario_rl_tutorial.rst.txt @@ -993,9 +993,9 @@ his world, we suggest running the loop for at least 40,000 episodes! 
Using CUDA: True - Episode 0 - Step 163 - Epsilon 0.9999592508251706 - Mean Reward 635.0 - Mean Length 163.0 - Mean Loss 0.0 - Mean Q Value 0.0 - Time Delta 1.986 - Time 2024-06-24T22:00:08 - Episode 20 - Step 5007 - Epsilon 0.9987490329557962 - Mean Reward 667.429 - Mean Length 238.429 - Mean Loss 0.0 - Mean Q Value 0.0 - Time Delta 59.3 - Time 2024-06-24T22:01:07 - Episode 39 - Step 8854 - Epsilon 0.9977889477081997 - Mean Reward 656.6 - Mean Length 221.35 - Mean Loss 0.0 - Mean Q Value 0.0 - Time Delta 47.594 - Time 2024-06-24T22:01:55 + Episode 0 - Step 163 - Epsilon 0.9999592508251706 - Mean Reward 635.0 - Mean Length 163.0 - Mean Loss 0.0 - Mean Q Value 0.0 - Time Delta 1.995 - Time 2024-06-25T20:49:45 + Episode 20 - Step 5007 - Epsilon 0.9987490329557962 - Mean Reward 667.429 - Mean Length 238.429 - Mean Loss 0.0 - Mean Q Value 0.0 - Time Delta 59.381 - Time 2024-06-25T20:50:45 + Episode 39 - Step 8854 - Epsilon 0.9977889477081997 - Mean Reward 656.6 - Mean Length 221.35 - Mean Loss 0.0 - Mean Q Value 0.0 - Time Delta 47.646 - Time 2024-06-25T20:51:32 @@ -1012,7 +1012,7 @@ to train an AI to play any of the games at the `OpenAI gym `_. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.068 seconds) + **Total running time of the script:** ( 0 minutes 0.064 seconds) .. _sphx_glr_download_intermediate_memory_format_tutorial.py: diff --git a/_sources/intermediate/model_parallel_tutorial.rst.txt b/_sources/intermediate/model_parallel_tutorial.rst.txt index fb5b59350a..21996ae957 100644 --- a/_sources/intermediate/model_parallel_tutorial.rst.txt +++ b/_sources/intermediate/model_parallel_tutorial.rst.txt @@ -485,7 +485,7 @@ inputs. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 2 minutes 40.704 seconds) + **Total running time of the script:** ( 2 minutes 42.800 seconds) .. _sphx_glr_download_intermediate_model_parallel_tutorial.py: diff --git a/_sources/intermediate/neural_tangent_kernels.rst.txt b/_sources/intermediate/neural_tangent_kernels.rst.txt index 78199ed24f..5601466d7f 100644 --- a/_sources/intermediate/neural_tangent_kernels.rst.txt +++ b/_sources/intermediate/neural_tangent_kernels.rst.txt @@ -360,7 +360,7 @@ for details. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.523 seconds) + **Total running time of the script:** ( 0 minutes 0.538 seconds) .. _sphx_glr_download_intermediate_neural_tangent_kernels.py: diff --git a/_sources/intermediate/optimizer_step_in_backward_tutorial.rst.txt b/_sources/intermediate/optimizer_step_in_backward_tutorial.rst.txt index 3bb1dc67a8..9aba214f91 100644 --- a/_sources/intermediate/optimizer_step_in_backward_tutorial.rst.txt +++ b/_sources/intermediate/optimizer_step_in_backward_tutorial.rst.txt @@ -68,34 +68,127 @@ but, again, feel free to substitute with your own optimizer. Downloading: "https://download.pytorch.org/models/vit_l_16-852ce7e3.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vit_l_16-852ce7e3.pth 0%| | 0.00/1.13G [00:00 + Per-sample-grads without vmap compute_sample_grads(data, targets) - 59.84 ms + 57.70 ms 1 measurement, 100 runs , 1 thread - Per-sample-grads with vmap + Per-sample-grads with vmap ft_compute_sample_grad(params, buffers, data, targets) - 2.90 ms + 2.88 ms 1 measurement, 100 runs , 1 thread - Performance delta: 1966.2484 percent improvement with vmap + Performance delta: 1904.7805 percent improvement with vmap @@ -419,7 +419,7 @@ at on GitHub. .. 
rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 7.458 seconds) + **Total running time of the script:** ( 0 minutes 7.223 seconds) .. _sphx_glr_download_intermediate_per_sample_grads.py: diff --git a/_sources/intermediate/pruning_tutorial.rst.txt b/_sources/intermediate/pruning_tutorial.rst.txt index ec0a793071..64dc2ac572 100644 --- a/_sources/intermediate/pruning_tutorial.rst.txt +++ b/_sources/intermediate/pruning_tutorial.rst.txt @@ -459,7 +459,7 @@ present. .. code-block:: none - OrderedDict([(3, )]) + OrderedDict([(3, )]) @@ -652,7 +652,7 @@ module attributes, and the module will now have two ``forward_pre_hooks``. .. code-block:: none - OrderedDict([(3, ), (4, )]) + OrderedDict([(3, ), (4, )]) @@ -762,7 +762,7 @@ pruning applied to the ``weight`` parameter. .. code-block:: none - [, ] + [, ] @@ -1359,7 +1359,7 @@ Let's try it out! .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.215 seconds) + **Total running time of the script:** ( 0 minutes 0.218 seconds) .. _sphx_glr_download_intermediate_pruning_tutorial.py: diff --git a/_sources/intermediate/reinforcement_ppo.rst.txt b/_sources/intermediate/reinforcement_ppo.rst.txt index c20902bdac..cc55e6c1c4 100644 --- a/_sources/intermediate/reinforcement_ppo.rst.txt +++ b/_sources/intermediate/reinforcement_ppo.rst.txt @@ -1046,106 +1046,106 @@ The steps include: 0%| | 0/50000 [00:00 + @@ -124,7 +124,7 @@ network. Downloading https://ossci-datasets.s3.amazonaws.com/mnist/train-images-idx3-ubyte.gz to ./MNIST/raw/train-images-idx3-ubyte.gz 0%| | 0/9912422 [00:00 ([l__self___features_pool0], 1) {} + call_function concated_features ([l__self___features_pool0], 1) {} call_module l__self___features_denseblock1_denselayer1_norm1 L__self___features_denseblock1_denselayer1_norm1 (concated_features,) {} call_module l__self___features_denseblock1_denselayer1_relu1 L__self___features_denseblock1_denselayer1_relu1 (l__self___features_denseblock1_denselayer1_norm1,) {} call_module bottleneck_output L__self___features_denseblock1_denselayer1_conv1 (l__self___features_denseblock1_denselayer1_relu1,) {} call_module l__self___features_denseblock1_denselayer1_norm2 L__self___features_denseblock1_denselayer1_norm2 (bottleneck_output,) {} call_module l__self___features_denseblock1_denselayer1_relu2 L__self___features_denseblock1_denselayer1_relu2 (l__self___features_denseblock1_denselayer1_norm2,) {} call_module new_features L__self___features_denseblock1_denselayer1_conv2 (l__self___features_denseblock1_denselayer1_relu2,) {} - call_function concated_features_1 ([l__self___features_pool0, new_features], 1) {} + call_function concated_features_1 ([l__self___features_pool0, new_features], 1) {} call_module l__self___features_denseblock1_denselayer2_norm1 L__self___features_denseblock1_denselayer2_norm1 (concated_features_1,) {} call_module l__self___features_denseblock1_denselayer2_relu1 L__self___features_denseblock1_denselayer2_relu1 (l__self___features_denseblock1_denselayer2_norm1,) {} call_module bottleneck_output_2 L__self___features_denseblock1_denselayer2_conv1 (l__self___features_denseblock1_denselayer2_relu1,) {} call_module l__self___features_denseblock1_denselayer2_norm2 L__self___features_denseblock1_denselayer2_norm2 (bottleneck_output_2,) {} call_module l__self___features_denseblock1_denselayer2_relu2 L__self___features_denseblock1_denselayer2_relu2 (l__self___features_denseblock1_denselayer2_norm2,) {} call_module new_features_2 L__self___features_denseblock1_denselayer2_conv2 
(l__self___features_denseblock1_denselayer2_relu2,) {} - call_function concated_features_2 ([l__self___features_pool0, new_features, new_features_2], 1) {} + call_function concated_features_2 ([l__self___features_pool0, new_features, new_features_2], 1) {} call_module l__self___features_denseblock1_denselayer3_norm1 L__self___features_denseblock1_denselayer3_norm1 (concated_features_2,) {} call_module l__self___features_denseblock1_denselayer3_relu1 L__self___features_denseblock1_denselayer3_relu1 (l__self___features_denseblock1_denselayer3_norm1,) {} call_module bottleneck_output_4 L__self___features_denseblock1_denselayer3_conv1 (l__self___features_denseblock1_denselayer3_relu1,) {} call_module l__self___features_denseblock1_denselayer3_norm2 L__self___features_denseblock1_denselayer3_norm2 (bottleneck_output_4,) {} call_module l__self___features_denseblock1_denselayer3_relu2 L__self___features_denseblock1_denselayer3_relu2 (l__self___features_denseblock1_denselayer3_norm2,) {} call_module new_features_4 L__self___features_denseblock1_denselayer3_conv2 (l__self___features_denseblock1_denselayer3_relu2,) {} - call_function concated_features_3 ([l__self___features_pool0, new_features, new_features_2, new_features_4], 1) {} + call_function concated_features_3 ([l__self___features_pool0, new_features, new_features_2, new_features_4], 1) {} call_module l__self___features_denseblock1_denselayer4_norm1 L__self___features_denseblock1_denselayer4_norm1 (concated_features_3,) {} call_module l__self___features_denseblock1_denselayer4_relu1 L__self___features_denseblock1_denselayer4_relu1 (l__self___features_denseblock1_denselayer4_norm1,) {} call_module bottleneck_output_6 L__self___features_denseblock1_denselayer4_conv1 (l__self___features_denseblock1_denselayer4_relu1,) {} call_module l__self___features_denseblock1_denselayer4_norm2 L__self___features_denseblock1_denselayer4_norm2 (bottleneck_output_6,) {} call_module l__self___features_denseblock1_denselayer4_relu2 L__self___features_denseblock1_denselayer4_relu2 (l__self___features_denseblock1_denselayer4_norm2,) {} call_module new_features_6 L__self___features_denseblock1_denselayer4_conv2 (l__self___features_denseblock1_denselayer4_relu2,) {} - call_function concated_features_4 ([l__self___features_pool0, new_features, new_features_2, new_features_4, new_features_6], 1) {} + call_function concated_features_4 ([l__self___features_pool0, new_features, new_features_2, new_features_4, new_features_6], 1) {} call_module l__self___features_denseblock1_denselayer5_norm1 L__self___features_denseblock1_denselayer5_norm1 (concated_features_4,) {} call_module l__self___features_denseblock1_denselayer5_relu1 L__self___features_denseblock1_denselayer5_relu1 (l__self___features_denseblock1_denselayer5_norm1,) {} call_module bottleneck_output_8 L__self___features_denseblock1_denselayer5_conv1 (l__self___features_denseblock1_denselayer5_relu1,) {} call_module l__self___features_denseblock1_denselayer5_norm2 L__self___features_denseblock1_denselayer5_norm2 (bottleneck_output_8,) {} call_module l__self___features_denseblock1_denselayer5_relu2 L__self___features_denseblock1_denselayer5_relu2 (l__self___features_denseblock1_denselayer5_norm2,) {} call_module new_features_8 L__self___features_denseblock1_denselayer5_conv2 (l__self___features_denseblock1_denselayer5_relu2,) {} - call_function concated_features_5 ([l__self___features_pool0, new_features, new_features_2, new_features_4, new_features_6, new_features_8], 1) {} + call_function concated_features_5 
([l__self___features_pool0, new_features, new_features_2, new_features_4, new_features_6, new_features_8], 1) {} call_module l__self___features_denseblock1_denselayer6_norm1 L__self___features_denseblock1_denselayer6_norm1 (concated_features_5,) {} call_module l__self___features_denseblock1_denselayer6_relu1 L__self___features_denseblock1_denselayer6_relu1 (l__self___features_denseblock1_denselayer6_norm1,) {} call_module bottleneck_output_10 L__self___features_denseblock1_denselayer6_conv1 (l__self___features_denseblock1_denselayer6_relu1,) {} call_module l__self___features_denseblock1_denselayer6_norm2 L__self___features_denseblock1_denselayer6_norm2 (bottleneck_output_10,) {} call_module l__self___features_denseblock1_denselayer6_relu2 L__self___features_denseblock1_denselayer6_relu2 (l__self___features_denseblock1_denselayer6_norm2,) {} call_module new_features_10 L__self___features_denseblock1_denselayer6_conv2 (l__self___features_denseblock1_denselayer6_relu2,) {} - call_function cat_6 ([l__self___features_pool0, new_features, new_features_2, new_features_4, new_features_6, new_features_8, new_features_10], 1) {} + call_function cat_6 ([l__self___features_pool0, new_features, new_features_2, new_features_4, new_features_6, new_features_8, new_features_10], 1) {} call_module l__self___features_transition1_norm L__self___features_transition1_norm (cat_6,) {} call_module l__self___features_transition1_relu L__self___features_transition1_relu (l__self___features_transition1_norm,) {} call_module l__self___features_transition1_conv L__self___features_transition1_conv (l__self___features_transition1_relu,) {} call_module l__self___features_transition1_pool L__self___features_transition1_pool (l__self___features_transition1_conv,) {} - call_function concated_features_6 ([l__self___features_transition1_pool], 1) {} + call_function concated_features_6 ([l__self___features_transition1_pool], 1) {} call_module l__self___features_denseblock2_denselayer1_norm1 L__self___features_denseblock2_denselayer1_norm1 (concated_features_6,) {} call_module l__self___features_denseblock2_denselayer1_relu1 L__self___features_denseblock2_denselayer1_relu1 (l__self___features_denseblock2_denselayer1_norm1,) {} call_module bottleneck_output_12 L__self___features_denseblock2_denselayer1_conv1 (l__self___features_denseblock2_denselayer1_relu1,) {} call_module l__self___features_denseblock2_denselayer1_norm2 L__self___features_denseblock2_denselayer1_norm2 (bottleneck_output_12,) {} call_module l__self___features_denseblock2_denselayer1_relu2 L__self___features_denseblock2_denselayer1_relu2 (l__self___features_denseblock2_denselayer1_norm2,) {} call_module new_features_12 L__self___features_denseblock2_denselayer1_conv2 (l__self___features_denseblock2_denselayer1_relu2,) {} - call_function concated_features_7 ([l__self___features_transition1_pool, new_features_12], 1) {} + call_function concated_features_7 ([l__self___features_transition1_pool, new_features_12], 1) {} call_module l__self___features_denseblock2_denselayer2_norm1 L__self___features_denseblock2_denselayer2_norm1 (concated_features_7,) {} call_module l__self___features_denseblock2_denselayer2_relu1 L__self___features_denseblock2_denselayer2_relu1 (l__self___features_denseblock2_denselayer2_norm1,) {} call_module bottleneck_output_14 L__self___features_denseblock2_denselayer2_conv1 (l__self___features_denseblock2_denselayer2_relu1,) {} call_module l__self___features_denseblock2_denselayer2_norm2 L__self___features_denseblock2_denselayer2_norm2 
(bottleneck_output_14,) {} call_module l__self___features_denseblock2_denselayer2_relu2 L__self___features_denseblock2_denselayer2_relu2 (l__self___features_denseblock2_denselayer2_norm2,) {} call_module new_features_14 L__self___features_denseblock2_denselayer2_conv2 (l__self___features_denseblock2_denselayer2_relu2,) {} - call_function concated_features_8 ([l__self___features_transition1_pool, new_features_12, new_features_14], 1) {} + call_function concated_features_8 ([l__self___features_transition1_pool, new_features_12, new_features_14], 1) {} call_module l__self___features_denseblock2_denselayer3_norm1 L__self___features_denseblock2_denselayer3_norm1 (concated_features_8,) {} call_module l__self___features_denseblock2_denselayer3_relu1 L__self___features_denseblock2_denselayer3_relu1 (l__self___features_denseblock2_denselayer3_norm1,) {} call_module bottleneck_output_16 L__self___features_denseblock2_denselayer3_conv1 (l__self___features_denseblock2_denselayer3_relu1,) {} call_module l__self___features_denseblock2_denselayer3_norm2 L__self___features_denseblock2_denselayer3_norm2 (bottleneck_output_16,) {} call_module l__self___features_denseblock2_denselayer3_relu2 L__self___features_denseblock2_denselayer3_relu2 (l__self___features_denseblock2_denselayer3_norm2,) {} call_module new_features_16 L__self___features_denseblock2_denselayer3_conv2 (l__self___features_denseblock2_denselayer3_relu2,) {} - call_function concated_features_9 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16], 1) {} + call_function concated_features_9 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16], 1) {} call_module l__self___features_denseblock2_denselayer4_norm1 L__self___features_denseblock2_denselayer4_norm1 (concated_features_9,) {} call_module l__self___features_denseblock2_denselayer4_relu1 L__self___features_denseblock2_denselayer4_relu1 (l__self___features_denseblock2_denselayer4_norm1,) {} call_module bottleneck_output_18 L__self___features_denseblock2_denselayer4_conv1 (l__self___features_denseblock2_denselayer4_relu1,) {} call_module l__self___features_denseblock2_denselayer4_norm2 L__self___features_denseblock2_denselayer4_norm2 (bottleneck_output_18,) {} call_module l__self___features_denseblock2_denselayer4_relu2 L__self___features_denseblock2_denselayer4_relu2 (l__self___features_denseblock2_denselayer4_norm2,) {} call_module new_features_18 L__self___features_denseblock2_denselayer4_conv2 (l__self___features_denseblock2_denselayer4_relu2,) {} - call_function concated_features_10 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18], 1) {} + call_function concated_features_10 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18], 1) {} call_module l__self___features_denseblock2_denselayer5_norm1 L__self___features_denseblock2_denselayer5_norm1 (concated_features_10,) {} call_module l__self___features_denseblock2_denselayer5_relu1 L__self___features_denseblock2_denselayer5_relu1 (l__self___features_denseblock2_denselayer5_norm1,) {} call_module bottleneck_output_20 L__self___features_denseblock2_denselayer5_conv1 (l__self___features_denseblock2_denselayer5_relu1,) {} call_module l__self___features_denseblock2_denselayer5_norm2 L__self___features_denseblock2_denselayer5_norm2 (bottleneck_output_20,) {} call_module l__self___features_denseblock2_denselayer5_relu2 L__self___features_denseblock2_denselayer5_relu2 
(l__self___features_denseblock2_denselayer5_norm2,) {} call_module new_features_20 L__self___features_denseblock2_denselayer5_conv2 (l__self___features_denseblock2_denselayer5_relu2,) {} - call_function concated_features_11 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20], 1) {} + call_function concated_features_11 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20], 1) {} call_module l__self___features_denseblock2_denselayer6_norm1 L__self___features_denseblock2_denselayer6_norm1 (concated_features_11,) {} call_module l__self___features_denseblock2_denselayer6_relu1 L__self___features_denseblock2_denselayer6_relu1 (l__self___features_denseblock2_denselayer6_norm1,) {} call_module bottleneck_output_22 L__self___features_denseblock2_denselayer6_conv1 (l__self___features_denseblock2_denselayer6_relu1,) {} call_module l__self___features_denseblock2_denselayer6_norm2 L__self___features_denseblock2_denselayer6_norm2 (bottleneck_output_22,) {} call_module l__self___features_denseblock2_denselayer6_relu2 L__self___features_denseblock2_denselayer6_relu2 (l__self___features_denseblock2_denselayer6_norm2,) {} call_module new_features_22 L__self___features_denseblock2_denselayer6_conv2 (l__self___features_denseblock2_denselayer6_relu2,) {} - call_function concated_features_12 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22], 1) {} + call_function concated_features_12 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22], 1) {} call_module l__self___features_denseblock2_denselayer7_norm1 L__self___features_denseblock2_denselayer7_norm1 (concated_features_12,) {} call_module l__self___features_denseblock2_denselayer7_relu1 L__self___features_denseblock2_denselayer7_relu1 (l__self___features_denseblock2_denselayer7_norm1,) {} call_module bottleneck_output_24 L__self___features_denseblock2_denselayer7_conv1 (l__self___features_denseblock2_denselayer7_relu1,) {} call_module l__self___features_denseblock2_denselayer7_norm2 L__self___features_denseblock2_denselayer7_norm2 (bottleneck_output_24,) {} call_module l__self___features_denseblock2_denselayer7_relu2 L__self___features_denseblock2_denselayer7_relu2 (l__self___features_denseblock2_denselayer7_norm2,) {} call_module new_features_24 L__self___features_denseblock2_denselayer7_conv2 (l__self___features_denseblock2_denselayer7_relu2,) {} - call_function concated_features_13 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24], 1) {} + call_function concated_features_13 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24], 1) {} call_module l__self___features_denseblock2_denselayer8_norm1 L__self___features_denseblock2_denselayer8_norm1 (concated_features_13,) {} call_module l__self___features_denseblock2_denselayer8_relu1 L__self___features_denseblock2_denselayer8_relu1 (l__self___features_denseblock2_denselayer8_norm1,) {} call_module bottleneck_output_26 L__self___features_denseblock2_denselayer8_conv1 (l__self___features_denseblock2_denselayer8_relu1,) {} call_module l__self___features_denseblock2_denselayer8_norm2 
L__self___features_denseblock2_denselayer8_norm2 (bottleneck_output_26,) {} call_module l__self___features_denseblock2_denselayer8_relu2 L__self___features_denseblock2_denselayer8_relu2 (l__self___features_denseblock2_denselayer8_norm2,) {} call_module new_features_26 L__self___features_denseblock2_denselayer8_conv2 (l__self___features_denseblock2_denselayer8_relu2,) {} - call_function concated_features_14 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24, new_features_26], 1) {} + call_function concated_features_14 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24, new_features_26], 1) {} call_module l__self___features_denseblock2_denselayer9_norm1 L__self___features_denseblock2_denselayer9_norm1 (concated_features_14,) {} call_module l__self___features_denseblock2_denselayer9_relu1 L__self___features_denseblock2_denselayer9_relu1 (l__self___features_denseblock2_denselayer9_norm1,) {} call_module bottleneck_output_28 L__self___features_denseblock2_denselayer9_conv1 (l__self___features_denseblock2_denselayer9_relu1,) {} call_module l__self___features_denseblock2_denselayer9_norm2 L__self___features_denseblock2_denselayer9_norm2 (bottleneck_output_28,) {} call_module l__self___features_denseblock2_denselayer9_relu2 L__self___features_denseblock2_denselayer9_relu2 (l__self___features_denseblock2_denselayer9_norm2,) {} call_module new_features_28 L__self___features_denseblock2_denselayer9_conv2 (l__self___features_denseblock2_denselayer9_relu2,) {} - call_function concated_features_15 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24, new_features_26, new_features_28], 1) {} + call_function concated_features_15 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24, new_features_26, new_features_28], 1) {} call_module l__self___features_denseblock2_denselayer10_norm1 L__self___features_denseblock2_denselayer10_norm1 (concated_features_15,) {} call_module l__self___features_denseblock2_denselayer10_relu1 L__self___features_denseblock2_denselayer10_relu1 (l__self___features_denseblock2_denselayer10_norm1,) {} call_module bottleneck_output_30 L__self___features_denseblock2_denselayer10_conv1 (l__self___features_denseblock2_denselayer10_relu1,) {} call_module l__self___features_denseblock2_denselayer10_norm2 L__self___features_denseblock2_denselayer10_norm2 (bottleneck_output_30,) {} call_module l__self___features_denseblock2_denselayer10_relu2 L__self___features_denseblock2_denselayer10_relu2 (l__self___features_denseblock2_denselayer10_norm2,) {} call_module new_features_30 L__self___features_denseblock2_denselayer10_conv2 (l__self___features_denseblock2_denselayer10_relu2,) {} - call_function concated_features_16 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24, new_features_26, new_features_28, new_features_30], 1) {} + call_function concated_features_16 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24, new_features_26, new_features_28, new_features_30], 1) {} 
call_module l__self___features_denseblock2_denselayer11_norm1 L__self___features_denseblock2_denselayer11_norm1 (concated_features_16,) {} call_module l__self___features_denseblock2_denselayer11_relu1 L__self___features_denseblock2_denselayer11_relu1 (l__self___features_denseblock2_denselayer11_norm1,) {} call_module bottleneck_output_32 L__self___features_denseblock2_denselayer11_conv1 (l__self___features_denseblock2_denselayer11_relu1,) {} call_module l__self___features_denseblock2_denselayer11_norm2 L__self___features_denseblock2_denselayer11_norm2 (bottleneck_output_32,) {} call_module l__self___features_denseblock2_denselayer11_relu2 L__self___features_denseblock2_denselayer11_relu2 (l__self___features_denseblock2_denselayer11_norm2,) {} call_module new_features_32 L__self___features_denseblock2_denselayer11_conv2 (l__self___features_denseblock2_denselayer11_relu2,) {} - call_function concated_features_17 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24, new_features_26, new_features_28, new_features_30, new_features_32], 1) {} + call_function concated_features_17 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24, new_features_26, new_features_28, new_features_30, new_features_32], 1) {} call_module l__self___features_denseblock2_denselayer12_norm1 L__self___features_denseblock2_denselayer12_norm1 (concated_features_17,) {} call_module l__self___features_denseblock2_denselayer12_relu1 L__self___features_denseblock2_denselayer12_relu1 (l__self___features_denseblock2_denselayer12_norm1,) {} call_module bottleneck_output_34 L__self___features_denseblock2_denselayer12_conv1 (l__self___features_denseblock2_denselayer12_relu1,) {} call_module l__self___features_denseblock2_denselayer12_norm2 L__self___features_denseblock2_denselayer12_norm2 (bottleneck_output_34,) {} call_module l__self___features_denseblock2_denselayer12_relu2 L__self___features_denseblock2_denselayer12_relu2 (l__self___features_denseblock2_denselayer12_norm2,) {} call_module new_features_34 L__self___features_denseblock2_denselayer12_conv2 (l__self___features_denseblock2_denselayer12_relu2,) {} - call_function cat_19 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24, new_features_26, new_features_28, new_features_30, new_features_32, new_features_34], 1) {} + call_function cat_19 ([l__self___features_transition1_pool, new_features_12, new_features_14, new_features_16, new_features_18, new_features_20, new_features_22, new_features_24, new_features_26, new_features_28, new_features_30, new_features_32, new_features_34], 1) {} call_module l__self___features_transition2_norm L__self___features_transition2_norm (cat_19,) {} call_module l__self___features_transition2_relu L__self___features_transition2_relu (l__self___features_transition2_norm,) {} call_module l__self___features_transition2_conv L__self___features_transition2_conv (l__self___features_transition2_relu,) {} call_module l__self___features_transition2_pool L__self___features_transition2_pool (l__self___features_transition2_conv,) {} - call_function concated_features_18 ([l__self___features_transition2_pool], 1) {} + call_function concated_features_18 ([l__self___features_transition2_pool], 1) {} call_module 
l__self___features_denseblock3_denselayer1_norm1 L__self___features_denseblock3_denselayer1_norm1 (concated_features_18,) {} call_module l__self___features_denseblock3_denselayer1_relu1 L__self___features_denseblock3_denselayer1_relu1 (l__self___features_denseblock3_denselayer1_norm1,) {} call_module bottleneck_output_36 L__self___features_denseblock3_denselayer1_conv1 (l__self___features_denseblock3_denselayer1_relu1,) {} call_module l__self___features_denseblock3_denselayer1_norm2 L__self___features_denseblock3_denselayer1_norm2 (bottleneck_output_36,) {} call_module l__self___features_denseblock3_denselayer1_relu2 L__self___features_denseblock3_denselayer1_relu2 (l__self___features_denseblock3_denselayer1_norm2,) {} call_module new_features_36 L__self___features_denseblock3_denselayer1_conv2 (l__self___features_denseblock3_denselayer1_relu2,) {} - call_function concated_features_19 ([l__self___features_transition2_pool, new_features_36], 1) {} + call_function concated_features_19 ([l__self___features_transition2_pool, new_features_36], 1) {} call_module l__self___features_denseblock3_denselayer2_norm1 L__self___features_denseblock3_denselayer2_norm1 (concated_features_19,) {} call_module l__self___features_denseblock3_denselayer2_relu1 L__self___features_denseblock3_denselayer2_relu1 (l__self___features_denseblock3_denselayer2_norm1,) {} call_module bottleneck_output_38 L__self___features_denseblock3_denselayer2_conv1 (l__self___features_denseblock3_denselayer2_relu1,) {} call_module l__self___features_denseblock3_denselayer2_norm2 L__self___features_denseblock3_denselayer2_norm2 (bottleneck_output_38,) {} call_module l__self___features_denseblock3_denselayer2_relu2 L__self___features_denseblock3_denselayer2_relu2 (l__self___features_denseblock3_denselayer2_norm2,) {} call_module new_features_38 L__self___features_denseblock3_denselayer2_conv2 (l__self___features_denseblock3_denselayer2_relu2,) {} - call_function concated_features_20 ([l__self___features_transition2_pool, new_features_36, new_features_38], 1) {} + call_function concated_features_20 ([l__self___features_transition2_pool, new_features_36, new_features_38], 1) {} call_module l__self___features_denseblock3_denselayer3_norm1 L__self___features_denseblock3_denselayer3_norm1 (concated_features_20,) {} call_module l__self___features_denseblock3_denselayer3_relu1 L__self___features_denseblock3_denselayer3_relu1 (l__self___features_denseblock3_denselayer3_norm1,) {} call_module bottleneck_output_40 L__self___features_denseblock3_denselayer3_conv1 (l__self___features_denseblock3_denselayer3_relu1,) {} call_module l__self___features_denseblock3_denselayer3_norm2 L__self___features_denseblock3_denselayer3_norm2 (bottleneck_output_40,) {} call_module l__self___features_denseblock3_denselayer3_relu2 L__self___features_denseblock3_denselayer3_relu2 (l__self___features_denseblock3_denselayer3_norm2,) {} call_module new_features_40 L__self___features_denseblock3_denselayer3_conv2 (l__self___features_denseblock3_denselayer3_relu2,) {} - call_function concated_features_21 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40], 1) {} + call_function concated_features_21 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40], 1) {} call_module l__self___features_denseblock3_denselayer4_norm1 L__self___features_denseblock3_denselayer4_norm1 (concated_features_21,) {} call_module l__self___features_denseblock3_denselayer4_relu1 
L__self___features_denseblock3_denselayer4_relu1 (l__self___features_denseblock3_denselayer4_norm1,) {} call_module bottleneck_output_42 L__self___features_denseblock3_denselayer4_conv1 (l__self___features_denseblock3_denselayer4_relu1,) {} call_module l__self___features_denseblock3_denselayer4_norm2 L__self___features_denseblock3_denselayer4_norm2 (bottleneck_output_42,) {} call_module l__self___features_denseblock3_denselayer4_relu2 L__self___features_denseblock3_denselayer4_relu2 (l__self___features_denseblock3_denselayer4_norm2,) {} call_module new_features_42 L__self___features_denseblock3_denselayer4_conv2 (l__self___features_denseblock3_denselayer4_relu2,) {} - call_function concated_features_22 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42], 1) {} + call_function concated_features_22 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42], 1) {} call_module l__self___features_denseblock3_denselayer5_norm1 L__self___features_denseblock3_denselayer5_norm1 (concated_features_22,) {} call_module l__self___features_denseblock3_denselayer5_relu1 L__self___features_denseblock3_denselayer5_relu1 (l__self___features_denseblock3_denselayer5_norm1,) {} call_module bottleneck_output_44 L__self___features_denseblock3_denselayer5_conv1 (l__self___features_denseblock3_denselayer5_relu1,) {} call_module l__self___features_denseblock3_denselayer5_norm2 L__self___features_denseblock3_denselayer5_norm2 (bottleneck_output_44,) {} call_module l__self___features_denseblock3_denselayer5_relu2 L__self___features_denseblock3_denselayer5_relu2 (l__self___features_denseblock3_denselayer5_norm2,) {} call_module new_features_44 L__self___features_denseblock3_denselayer5_conv2 (l__self___features_denseblock3_denselayer5_relu2,) {} - call_function concated_features_23 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44], 1) {} + call_function concated_features_23 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44], 1) {} call_module l__self___features_denseblock3_denselayer6_norm1 L__self___features_denseblock3_denselayer6_norm1 (concated_features_23,) {} call_module l__self___features_denseblock3_denselayer6_relu1 L__self___features_denseblock3_denselayer6_relu1 (l__self___features_denseblock3_denselayer6_norm1,) {} call_module bottleneck_output_46 L__self___features_denseblock3_denselayer6_conv1 (l__self___features_denseblock3_denselayer6_relu1,) {} call_module l__self___features_denseblock3_denselayer6_norm2 L__self___features_denseblock3_denselayer6_norm2 (bottleneck_output_46,) {} call_module l__self___features_denseblock3_denselayer6_relu2 L__self___features_denseblock3_denselayer6_relu2 (l__self___features_denseblock3_denselayer6_norm2,) {} call_module new_features_46 L__self___features_denseblock3_denselayer6_conv2 (l__self___features_denseblock3_denselayer6_relu2,) {} - call_function concated_features_24 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46], 1) {} + call_function concated_features_24 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46], 1) {} call_module l__self___features_denseblock3_denselayer7_norm1 L__self___features_denseblock3_denselayer7_norm1 
(concated_features_24,) {} call_module l__self___features_denseblock3_denselayer7_relu1 L__self___features_denseblock3_denselayer7_relu1 (l__self___features_denseblock3_denselayer7_norm1,) {} call_module bottleneck_output_48 L__self___features_denseblock3_denselayer7_conv1 (l__self___features_denseblock3_denselayer7_relu1,) {} call_module l__self___features_denseblock3_denselayer7_norm2 L__self___features_denseblock3_denselayer7_norm2 (bottleneck_output_48,) {} call_module l__self___features_denseblock3_denselayer7_relu2 L__self___features_denseblock3_denselayer7_relu2 (l__self___features_denseblock3_denselayer7_norm2,) {} call_module new_features_48 L__self___features_denseblock3_denselayer7_conv2 (l__self___features_denseblock3_denselayer7_relu2,) {} - call_function concated_features_25 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48], 1) {} + call_function concated_features_25 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48], 1) {} call_module l__self___features_denseblock3_denselayer8_norm1 L__self___features_denseblock3_denselayer8_norm1 (concated_features_25,) {} call_module l__self___features_denseblock3_denselayer8_relu1 L__self___features_denseblock3_denselayer8_relu1 (l__self___features_denseblock3_denselayer8_norm1,) {} call_module bottleneck_output_50 L__self___features_denseblock3_denselayer8_conv1 (l__self___features_denseblock3_denselayer8_relu1,) {} call_module l__self___features_denseblock3_denselayer8_norm2 L__self___features_denseblock3_denselayer8_norm2 (bottleneck_output_50,) {} call_module l__self___features_denseblock3_denselayer8_relu2 L__self___features_denseblock3_denselayer8_relu2 (l__self___features_denseblock3_denselayer8_norm2,) {} call_module new_features_50 L__self___features_denseblock3_denselayer8_conv2 (l__self___features_denseblock3_denselayer8_relu2,) {} - call_function concated_features_26 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50], 1) {} + call_function concated_features_26 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50], 1) {} call_module l__self___features_denseblock3_denselayer9_norm1 L__self___features_denseblock3_denselayer9_norm1 (concated_features_26,) {} call_module l__self___features_denseblock3_denselayer9_relu1 L__self___features_denseblock3_denselayer9_relu1 (l__self___features_denseblock3_denselayer9_norm1,) {} call_module bottleneck_output_52 L__self___features_denseblock3_denselayer9_conv1 (l__self___features_denseblock3_denselayer9_relu1,) {} call_module l__self___features_denseblock3_denselayer9_norm2 L__self___features_denseblock3_denselayer9_norm2 (bottleneck_output_52,) {} call_module l__self___features_denseblock3_denselayer9_relu2 L__self___features_denseblock3_denselayer9_relu2 (l__self___features_denseblock3_denselayer9_norm2,) {} call_module new_features_52 L__self___features_denseblock3_denselayer9_conv2 (l__self___features_denseblock3_denselayer9_relu2,) {} - call_function concated_features_27 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, 
new_features_50, new_features_52], 1) {} + call_function concated_features_27 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52], 1) {} call_module l__self___features_denseblock3_denselayer10_norm1 L__self___features_denseblock3_denselayer10_norm1 (concated_features_27,) {} call_module l__self___features_denseblock3_denselayer10_relu1 L__self___features_denseblock3_denselayer10_relu1 (l__self___features_denseblock3_denselayer10_norm1,) {} call_module bottleneck_output_54 L__self___features_denseblock3_denselayer10_conv1 (l__self___features_denseblock3_denselayer10_relu1,) {} call_module l__self___features_denseblock3_denselayer10_norm2 L__self___features_denseblock3_denselayer10_norm2 (bottleneck_output_54,) {} call_module l__self___features_denseblock3_denselayer10_relu2 L__self___features_denseblock3_denselayer10_relu2 (l__self___features_denseblock3_denselayer10_norm2,) {} call_module new_features_54 L__self___features_denseblock3_denselayer10_conv2 (l__self___features_denseblock3_denselayer10_relu2,) {} - call_function concated_features_28 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54], 1) {} + call_function concated_features_28 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54], 1) {} call_module l__self___features_denseblock3_denselayer11_norm1 L__self___features_denseblock3_denselayer11_norm1 (concated_features_28,) {} call_module l__self___features_denseblock3_denselayer11_relu1 L__self___features_denseblock3_denselayer11_relu1 (l__self___features_denseblock3_denselayer11_norm1,) {} call_module bottleneck_output_56 L__self___features_denseblock3_denselayer11_conv1 (l__self___features_denseblock3_denselayer11_relu1,) {} call_module l__self___features_denseblock3_denselayer11_norm2 L__self___features_denseblock3_denselayer11_norm2 (bottleneck_output_56,) {} call_module l__self___features_denseblock3_denselayer11_relu2 L__self___features_denseblock3_denselayer11_relu2 (l__self___features_denseblock3_denselayer11_norm2,) {} call_module new_features_56 L__self___features_denseblock3_denselayer11_conv2 (l__self___features_denseblock3_denselayer11_relu2,) {} - call_function concated_features_29 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56], 1) {} + call_function concated_features_29 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56], 1) {} call_module l__self___features_denseblock3_denselayer12_norm1 L__self___features_denseblock3_denselayer12_norm1 (concated_features_29,) {} call_module l__self___features_denseblock3_denselayer12_relu1 L__self___features_denseblock3_denselayer12_relu1 (l__self___features_denseblock3_denselayer12_norm1,) {} call_module bottleneck_output_58 L__self___features_denseblock3_denselayer12_conv1 (l__self___features_denseblock3_denselayer12_relu1,) {} call_module 
l__self___features_denseblock3_denselayer12_norm2 L__self___features_denseblock3_denselayer12_norm2 (bottleneck_output_58,) {} call_module l__self___features_denseblock3_denselayer12_relu2 L__self___features_denseblock3_denselayer12_relu2 (l__self___features_denseblock3_denselayer12_norm2,) {} call_module new_features_58 L__self___features_denseblock3_denselayer12_conv2 (l__self___features_denseblock3_denselayer12_relu2,) {} - call_function concated_features_30 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58], 1) {} + call_function concated_features_30 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58], 1) {} call_module l__self___features_denseblock3_denselayer13_norm1 L__self___features_denseblock3_denselayer13_norm1 (concated_features_30,) {} call_module l__self___features_denseblock3_denselayer13_relu1 L__self___features_denseblock3_denselayer13_relu1 (l__self___features_denseblock3_denselayer13_norm1,) {} call_module bottleneck_output_60 L__self___features_denseblock3_denselayer13_conv1 (l__self___features_denseblock3_denselayer13_relu1,) {} call_module l__self___features_denseblock3_denselayer13_norm2 L__self___features_denseblock3_denselayer13_norm2 (bottleneck_output_60,) {} call_module l__self___features_denseblock3_denselayer13_relu2 L__self___features_denseblock3_denselayer13_relu2 (l__self___features_denseblock3_denselayer13_norm2,) {} call_module new_features_60 L__self___features_denseblock3_denselayer13_conv2 (l__self___features_denseblock3_denselayer13_relu2,) {} - call_function concated_features_31 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60], 1) {} + call_function concated_features_31 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60], 1) {} call_module l__self___features_denseblock3_denselayer14_norm1 L__self___features_denseblock3_denselayer14_norm1 (concated_features_31,) {} call_module l__self___features_denseblock3_denselayer14_relu1 L__self___features_denseblock3_denselayer14_relu1 (l__self___features_denseblock3_denselayer14_norm1,) {} call_module bottleneck_output_62 L__self___features_denseblock3_denselayer14_conv1 (l__self___features_denseblock3_denselayer14_relu1,) {} call_module l__self___features_denseblock3_denselayer14_norm2 L__self___features_denseblock3_denselayer14_norm2 (bottleneck_output_62,) {} call_module l__self___features_denseblock3_denselayer14_relu2 L__self___features_denseblock3_denselayer14_relu2 (l__self___features_denseblock3_denselayer14_norm2,) {} call_module new_features_62 L__self___features_denseblock3_denselayer14_conv2 (l__self___features_denseblock3_denselayer14_relu2,) {} - call_function concated_features_32 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, 
new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62], 1) {} + call_function concated_features_32 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62], 1) {} call_module l__self___features_denseblock3_denselayer15_norm1 L__self___features_denseblock3_denselayer15_norm1 (concated_features_32,) {} call_module l__self___features_denseblock3_denselayer15_relu1 L__self___features_denseblock3_denselayer15_relu1 (l__self___features_denseblock3_denselayer15_norm1,) {} call_module bottleneck_output_64 L__self___features_denseblock3_denselayer15_conv1 (l__self___features_denseblock3_denselayer15_relu1,) {} call_module l__self___features_denseblock3_denselayer15_norm2 L__self___features_denseblock3_denselayer15_norm2 (bottleneck_output_64,) {} call_module l__self___features_denseblock3_denselayer15_relu2 L__self___features_denseblock3_denselayer15_relu2 (l__self___features_denseblock3_denselayer15_norm2,) {} call_module new_features_64 L__self___features_denseblock3_denselayer15_conv2 (l__self___features_denseblock3_denselayer15_relu2,) {} - call_function concated_features_33 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64], 1) {} + call_function concated_features_33 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64], 1) {} call_module l__self___features_denseblock3_denselayer16_norm1 L__self___features_denseblock3_denselayer16_norm1 (concated_features_33,) {} call_module l__self___features_denseblock3_denselayer16_relu1 L__self___features_denseblock3_denselayer16_relu1 (l__self___features_denseblock3_denselayer16_norm1,) {} call_module bottleneck_output_66 L__self___features_denseblock3_denselayer16_conv1 (l__self___features_denseblock3_denselayer16_relu1,) {} call_module l__self___features_denseblock3_denselayer16_norm2 L__self___features_denseblock3_denselayer16_norm2 (bottleneck_output_66,) {} call_module l__self___features_denseblock3_denselayer16_relu2 L__self___features_denseblock3_denselayer16_relu2 (l__self___features_denseblock3_denselayer16_norm2,) {} call_module new_features_66 L__self___features_denseblock3_denselayer16_conv2 (l__self___features_denseblock3_denselayer16_relu2,) {} - call_function concated_features_34 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66], 1) {} + call_function concated_features_34 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, 
new_features_58, new_features_60, new_features_62, new_features_64, new_features_66], 1) {} call_module l__self___features_denseblock3_denselayer17_norm1 L__self___features_denseblock3_denselayer17_norm1 (concated_features_34,) {} call_module l__self___features_denseblock3_denselayer17_relu1 L__self___features_denseblock3_denselayer17_relu1 (l__self___features_denseblock3_denselayer17_norm1,) {} call_module bottleneck_output_68 L__self___features_denseblock3_denselayer17_conv1 (l__self___features_denseblock3_denselayer17_relu1,) {} call_module l__self___features_denseblock3_denselayer17_norm2 L__self___features_denseblock3_denselayer17_norm2 (bottleneck_output_68,) {} call_module l__self___features_denseblock3_denselayer17_relu2 L__self___features_denseblock3_denselayer17_relu2 (l__self___features_denseblock3_denselayer17_norm2,) {} call_module new_features_68 L__self___features_denseblock3_denselayer17_conv2 (l__self___features_denseblock3_denselayer17_relu2,) {} - call_function concated_features_35 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68], 1) {} + call_function concated_features_35 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68], 1) {} call_module l__self___features_denseblock3_denselayer18_norm1 L__self___features_denseblock3_denselayer18_norm1 (concated_features_35,) {} call_module l__self___features_denseblock3_denselayer18_relu1 L__self___features_denseblock3_denselayer18_relu1 (l__self___features_denseblock3_denselayer18_norm1,) {} call_module bottleneck_output_70 L__self___features_denseblock3_denselayer18_conv1 (l__self___features_denseblock3_denselayer18_relu1,) {} call_module l__self___features_denseblock3_denselayer18_norm2 L__self___features_denseblock3_denselayer18_norm2 (bottleneck_output_70,) {} call_module l__self___features_denseblock3_denselayer18_relu2 L__self___features_denseblock3_denselayer18_relu2 (l__self___features_denseblock3_denselayer18_norm2,) {} call_module new_features_70 L__self___features_denseblock3_denselayer18_conv2 (l__self___features_denseblock3_denselayer18_relu2,) {} - call_function concated_features_36 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70], 1) {} + call_function concated_features_36 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70], 1) {} call_module l__self___features_denseblock3_denselayer19_norm1 L__self___features_denseblock3_denselayer19_norm1 (concated_features_36,) {} call_module 
l__self___features_denseblock3_denselayer19_relu1 L__self___features_denseblock3_denselayer19_relu1 (l__self___features_denseblock3_denselayer19_norm1,) {} call_module bottleneck_output_72 L__self___features_denseblock3_denselayer19_conv1 (l__self___features_denseblock3_denselayer19_relu1,) {} call_module l__self___features_denseblock3_denselayer19_norm2 L__self___features_denseblock3_denselayer19_norm2 (bottleneck_output_72,) {} call_module l__self___features_denseblock3_denselayer19_relu2 L__self___features_denseblock3_denselayer19_relu2 (l__self___features_denseblock3_denselayer19_norm2,) {} call_module new_features_72 L__self___features_denseblock3_denselayer19_conv2 (l__self___features_denseblock3_denselayer19_relu2,) {} - call_function concated_features_37 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72], 1) {} + call_function concated_features_37 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72], 1) {} call_module l__self___features_denseblock3_denselayer20_norm1 L__self___features_denseblock3_denselayer20_norm1 (concated_features_37,) {} call_module l__self___features_denseblock3_denselayer20_relu1 L__self___features_denseblock3_denselayer20_relu1 (l__self___features_denseblock3_denselayer20_norm1,) {} call_module bottleneck_output_74 L__self___features_denseblock3_denselayer20_conv1 (l__self___features_denseblock3_denselayer20_relu1,) {} call_module l__self___features_denseblock3_denselayer20_norm2 L__self___features_denseblock3_denselayer20_norm2 (bottleneck_output_74,) {} call_module l__self___features_denseblock3_denselayer20_relu2 L__self___features_denseblock3_denselayer20_relu2 (l__self___features_denseblock3_denselayer20_norm2,) {} call_module new_features_74 L__self___features_denseblock3_denselayer20_conv2 (l__self___features_denseblock3_denselayer20_relu2,) {} - call_function concated_features_38 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72, new_features_74], 1) {} + call_function concated_features_38 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72, new_features_74], 1) {} call_module l__self___features_denseblock3_denselayer21_norm1 L__self___features_denseblock3_denselayer21_norm1 (concated_features_38,) {} call_module l__self___features_denseblock3_denselayer21_relu1 L__self___features_denseblock3_denselayer21_relu1 
(l__self___features_denseblock3_denselayer21_norm1,) {} call_module bottleneck_output_76 L__self___features_denseblock3_denselayer21_conv1 (l__self___features_denseblock3_denselayer21_relu1,) {} call_module l__self___features_denseblock3_denselayer21_norm2 L__self___features_denseblock3_denselayer21_norm2 (bottleneck_output_76,) {} call_module l__self___features_denseblock3_denselayer21_relu2 L__self___features_denseblock3_denselayer21_relu2 (l__self___features_denseblock3_denselayer21_norm2,) {} call_module new_features_76 L__self___features_denseblock3_denselayer21_conv2 (l__self___features_denseblock3_denselayer21_relu2,) {} - call_function concated_features_39 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72, new_features_74, new_features_76], 1) {} + call_function concated_features_39 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72, new_features_74, new_features_76], 1) {} call_module l__self___features_denseblock3_denselayer22_norm1 L__self___features_denseblock3_denselayer22_norm1 (concated_features_39,) {} call_module l__self___features_denseblock3_denselayer22_relu1 L__self___features_denseblock3_denselayer22_relu1 (l__self___features_denseblock3_denselayer22_norm1,) {} call_module bottleneck_output_78 L__self___features_denseblock3_denselayer22_conv1 (l__self___features_denseblock3_denselayer22_relu1,) {} call_module l__self___features_denseblock3_denselayer22_norm2 L__self___features_denseblock3_denselayer22_norm2 (bottleneck_output_78,) {} call_module l__self___features_denseblock3_denselayer22_relu2 L__self___features_denseblock3_denselayer22_relu2 (l__self___features_denseblock3_denselayer22_norm2,) {} call_module new_features_78 L__self___features_denseblock3_denselayer22_conv2 (l__self___features_denseblock3_denselayer22_relu2,) {} - call_function concated_features_40 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72, new_features_74, new_features_76, new_features_78], 1) {} + call_function concated_features_40 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72, new_features_74, new_features_76, new_features_78], 1) {} call_module l__self___features_denseblock3_denselayer23_norm1 L__self___features_denseblock3_denselayer23_norm1 (concated_features_40,) {} call_module l__self___features_denseblock3_denselayer23_relu1 
L__self___features_denseblock3_denselayer23_relu1 (l__self___features_denseblock3_denselayer23_norm1,) {} call_module bottleneck_output_80 L__self___features_denseblock3_denselayer23_conv1 (l__self___features_denseblock3_denselayer23_relu1,) {} call_module l__self___features_denseblock3_denselayer23_norm2 L__self___features_denseblock3_denselayer23_norm2 (bottleneck_output_80,) {} call_module l__self___features_denseblock3_denselayer23_relu2 L__self___features_denseblock3_denselayer23_relu2 (l__self___features_denseblock3_denselayer23_norm2,) {} call_module new_features_80 L__self___features_denseblock3_denselayer23_conv2 (l__self___features_denseblock3_denselayer23_relu2,) {} - call_function concated_features_41 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72, new_features_74, new_features_76, new_features_78, new_features_80], 1) {} + call_function concated_features_41 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72, new_features_74, new_features_76, new_features_78, new_features_80], 1) {} call_module l__self___features_denseblock3_denselayer24_norm1 L__self___features_denseblock3_denselayer24_norm1 (concated_features_41,) {} call_module l__self___features_denseblock3_denselayer24_relu1 L__self___features_denseblock3_denselayer24_relu1 (l__self___features_denseblock3_denselayer24_norm1,) {} call_module bottleneck_output_82 L__self___features_denseblock3_denselayer24_conv1 (l__self___features_denseblock3_denselayer24_relu1,) {} call_module l__self___features_denseblock3_denselayer24_norm2 L__self___features_denseblock3_denselayer24_norm2 (bottleneck_output_82,) {} call_module l__self___features_denseblock3_denselayer24_relu2 L__self___features_denseblock3_denselayer24_relu2 (l__self___features_denseblock3_denselayer24_norm2,) {} call_module new_features_82 L__self___features_denseblock3_denselayer24_conv2 (l__self___features_denseblock3_denselayer24_relu2,) {} - call_function cat_44 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72, new_features_74, new_features_76, new_features_78, new_features_80, new_features_82], 1) {} + call_function cat_44 ([l__self___features_transition2_pool, new_features_36, new_features_38, new_features_40, new_features_42, new_features_44, new_features_46, new_features_48, new_features_50, new_features_52, new_features_54, new_features_56, new_features_58, new_features_60, new_features_62, new_features_64, new_features_66, new_features_68, new_features_70, new_features_72, new_features_74, new_features_76, new_features_78, new_features_80, new_features_82], 1) {} call_module l__self___features_transition3_norm 
L__self___features_transition3_norm (cat_44,) {} call_module l__self___features_transition3_relu L__self___features_transition3_relu (l__self___features_transition3_norm,) {} call_module l__self___features_transition3_conv L__self___features_transition3_conv (l__self___features_transition3_relu,) {} call_module l__self___features_transition3_pool L__self___features_transition3_pool (l__self___features_transition3_conv,) {} - call_function concated_features_42 ([l__self___features_transition3_pool], 1) {} + call_function concated_features_42 ([l__self___features_transition3_pool], 1) {} call_module l__self___features_denseblock4_denselayer1_norm1 L__self___features_denseblock4_denselayer1_norm1 (concated_features_42,) {} call_module l__self___features_denseblock4_denselayer1_relu1 L__self___features_denseblock4_denselayer1_relu1 (l__self___features_denseblock4_denselayer1_norm1,) {} call_module bottleneck_output_84 L__self___features_denseblock4_denselayer1_conv1 (l__self___features_denseblock4_denselayer1_relu1,) {} call_module l__self___features_denseblock4_denselayer1_norm2 L__self___features_denseblock4_denselayer1_norm2 (bottleneck_output_84,) {} call_module l__self___features_denseblock4_denselayer1_relu2 L__self___features_denseblock4_denselayer1_relu2 (l__self___features_denseblock4_denselayer1_norm2,) {} call_module new_features_84 L__self___features_denseblock4_denselayer1_conv2 (l__self___features_denseblock4_denselayer1_relu2,) {} - call_function concated_features_43 ([l__self___features_transition3_pool, new_features_84], 1) {} + call_function concated_features_43 ([l__self___features_transition3_pool, new_features_84], 1) {} call_module l__self___features_denseblock4_denselayer2_norm1 L__self___features_denseblock4_denselayer2_norm1 (concated_features_43,) {} call_module l__self___features_denseblock4_denselayer2_relu1 L__self___features_denseblock4_denselayer2_relu1 (l__self___features_denseblock4_denselayer2_norm1,) {} call_module bottleneck_output_86 L__self___features_denseblock4_denselayer2_conv1 (l__self___features_denseblock4_denselayer2_relu1,) {} call_module l__self___features_denseblock4_denselayer2_norm2 L__self___features_denseblock4_denselayer2_norm2 (bottleneck_output_86,) {} call_module l__self___features_denseblock4_denselayer2_relu2 L__self___features_denseblock4_denselayer2_relu2 (l__self___features_denseblock4_denselayer2_norm2,) {} call_module new_features_86 L__self___features_denseblock4_denselayer2_conv2 (l__self___features_denseblock4_denselayer2_relu2,) {} - call_function concated_features_44 ([l__self___features_transition3_pool, new_features_84, new_features_86], 1) {} + call_function concated_features_44 ([l__self___features_transition3_pool, new_features_84, new_features_86], 1) {} call_module l__self___features_denseblock4_denselayer3_norm1 L__self___features_denseblock4_denselayer3_norm1 (concated_features_44,) {} call_module l__self___features_denseblock4_denselayer3_relu1 L__self___features_denseblock4_denselayer3_relu1 (l__self___features_denseblock4_denselayer3_norm1,) {} call_module bottleneck_output_88 L__self___features_denseblock4_denselayer3_conv1 (l__self___features_denseblock4_denselayer3_relu1,) {} call_module l__self___features_denseblock4_denselayer3_norm2 L__self___features_denseblock4_denselayer3_norm2 (bottleneck_output_88,) {} call_module l__self___features_denseblock4_denselayer3_relu2 L__self___features_denseblock4_denselayer3_relu2 (l__self___features_denseblock4_denselayer3_norm2,) {} call_module new_features_88 
L__self___features_denseblock4_denselayer3_conv2 (l__self___features_denseblock4_denselayer3_relu2,) {} - call_function concated_features_45 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88], 1) {} + call_function concated_features_45 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88], 1) {} call_module l__self___features_denseblock4_denselayer4_norm1 L__self___features_denseblock4_denselayer4_norm1 (concated_features_45,) {} call_module l__self___features_denseblock4_denselayer4_relu1 L__self___features_denseblock4_denselayer4_relu1 (l__self___features_denseblock4_denselayer4_norm1,) {} call_module bottleneck_output_90 L__self___features_denseblock4_denselayer4_conv1 (l__self___features_denseblock4_denselayer4_relu1,) {} call_module l__self___features_denseblock4_denselayer4_norm2 L__self___features_denseblock4_denselayer4_norm2 (bottleneck_output_90,) {} call_module l__self___features_denseblock4_denselayer4_relu2 L__self___features_denseblock4_denselayer4_relu2 (l__self___features_denseblock4_denselayer4_norm2,) {} call_module new_features_90 L__self___features_denseblock4_denselayer4_conv2 (l__self___features_denseblock4_denselayer4_relu2,) {} - call_function concated_features_46 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90], 1) {} + call_function concated_features_46 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90], 1) {} call_module l__self___features_denseblock4_denselayer5_norm1 L__self___features_denseblock4_denselayer5_norm1 (concated_features_46,) {} call_module l__self___features_denseblock4_denselayer5_relu1 L__self___features_denseblock4_denselayer5_relu1 (l__self___features_denseblock4_denselayer5_norm1,) {} call_module bottleneck_output_92 L__self___features_denseblock4_denselayer5_conv1 (l__self___features_denseblock4_denselayer5_relu1,) {} call_module l__self___features_denseblock4_denselayer5_norm2 L__self___features_denseblock4_denselayer5_norm2 (bottleneck_output_92,) {} call_module l__self___features_denseblock4_denselayer5_relu2 L__self___features_denseblock4_denselayer5_relu2 (l__self___features_denseblock4_denselayer5_norm2,) {} call_module new_features_92 L__self___features_denseblock4_denselayer5_conv2 (l__self___features_denseblock4_denselayer5_relu2,) {} - call_function concated_features_47 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92], 1) {} + call_function concated_features_47 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92], 1) {} call_module l__self___features_denseblock4_denselayer6_norm1 L__self___features_denseblock4_denselayer6_norm1 (concated_features_47,) {} call_module l__self___features_denseblock4_denselayer6_relu1 L__self___features_denseblock4_denselayer6_relu1 (l__self___features_denseblock4_denselayer6_norm1,) {} call_module bottleneck_output_94 L__self___features_denseblock4_denselayer6_conv1 (l__self___features_denseblock4_denselayer6_relu1,) {} call_module l__self___features_denseblock4_denselayer6_norm2 L__self___features_denseblock4_denselayer6_norm2 (bottleneck_output_94,) {} call_module l__self___features_denseblock4_denselayer6_relu2 L__self___features_denseblock4_denselayer6_relu2 (l__self___features_denseblock4_denselayer6_norm2,) {} call_module new_features_94 
L__self___features_denseblock4_denselayer6_conv2 (l__self___features_denseblock4_denselayer6_relu2,) {} - call_function concated_features_48 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94], 1) {} + call_function concated_features_48 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94], 1) {} call_module l__self___features_denseblock4_denselayer7_norm1 L__self___features_denseblock4_denselayer7_norm1 (concated_features_48,) {} call_module l__self___features_denseblock4_denselayer7_relu1 L__self___features_denseblock4_denselayer7_relu1 (l__self___features_denseblock4_denselayer7_norm1,) {} call_module bottleneck_output_96 L__self___features_denseblock4_denselayer7_conv1 (l__self___features_denseblock4_denselayer7_relu1,) {} call_module l__self___features_denseblock4_denselayer7_norm2 L__self___features_denseblock4_denselayer7_norm2 (bottleneck_output_96,) {} call_module l__self___features_denseblock4_denselayer7_relu2 L__self___features_denseblock4_denselayer7_relu2 (l__self___features_denseblock4_denselayer7_norm2,) {} call_module new_features_96 L__self___features_denseblock4_denselayer7_conv2 (l__self___features_denseblock4_denselayer7_relu2,) {} - call_function concated_features_49 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96], 1) {} + call_function concated_features_49 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96], 1) {} call_module l__self___features_denseblock4_denselayer8_norm1 L__self___features_denseblock4_denselayer8_norm1 (concated_features_49,) {} call_module l__self___features_denseblock4_denselayer8_relu1 L__self___features_denseblock4_denselayer8_relu1 (l__self___features_denseblock4_denselayer8_norm1,) {} call_module bottleneck_output_98 L__self___features_denseblock4_denselayer8_conv1 (l__self___features_denseblock4_denselayer8_relu1,) {} call_module l__self___features_denseblock4_denselayer8_norm2 L__self___features_denseblock4_denselayer8_norm2 (bottleneck_output_98,) {} call_module l__self___features_denseblock4_denselayer8_relu2 L__self___features_denseblock4_denselayer8_relu2 (l__self___features_denseblock4_denselayer8_norm2,) {} call_module new_features_98 L__self___features_denseblock4_denselayer8_conv2 (l__self___features_denseblock4_denselayer8_relu2,) {} - call_function concated_features_50 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98], 1) {} + call_function concated_features_50 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98], 1) {} call_module l__self___features_denseblock4_denselayer9_norm1 L__self___features_denseblock4_denselayer9_norm1 (concated_features_50,) {} call_module l__self___features_denseblock4_denselayer9_relu1 L__self___features_denseblock4_denselayer9_relu1 (l__self___features_denseblock4_denselayer9_norm1,) {} call_module bottleneck_output_100 L__self___features_denseblock4_denselayer9_conv1 (l__self___features_denseblock4_denselayer9_relu1,) {} call_module 
l__self___features_denseblock4_denselayer9_norm2 L__self___features_denseblock4_denselayer9_norm2 (bottleneck_output_100,) {} call_module l__self___features_denseblock4_denselayer9_relu2 L__self___features_denseblock4_denselayer9_relu2 (l__self___features_denseblock4_denselayer9_norm2,) {} call_module new_features_100 L__self___features_denseblock4_denselayer9_conv2 (l__self___features_denseblock4_denselayer9_relu2,) {} - call_function concated_features_51 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100], 1) {} + call_function concated_features_51 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100], 1) {} call_module l__self___features_denseblock4_denselayer10_norm1 L__self___features_denseblock4_denselayer10_norm1 (concated_features_51,) {} call_module l__self___features_denseblock4_denselayer10_relu1 L__self___features_denseblock4_denselayer10_relu1 (l__self___features_denseblock4_denselayer10_norm1,) {} call_module bottleneck_output_102 L__self___features_denseblock4_denselayer10_conv1 (l__self___features_denseblock4_denselayer10_relu1,) {} call_module l__self___features_denseblock4_denselayer10_norm2 L__self___features_denseblock4_denselayer10_norm2 (bottleneck_output_102,) {} call_module l__self___features_denseblock4_denselayer10_relu2 L__self___features_denseblock4_denselayer10_relu2 (l__self___features_denseblock4_denselayer10_norm2,) {} call_module new_features_102 L__self___features_denseblock4_denselayer10_conv2 (l__self___features_denseblock4_denselayer10_relu2,) {} - call_function concated_features_52 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102], 1) {} + call_function concated_features_52 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102], 1) {} call_module l__self___features_denseblock4_denselayer11_norm1 L__self___features_denseblock4_denselayer11_norm1 (concated_features_52,) {} call_module l__self___features_denseblock4_denselayer11_relu1 L__self___features_denseblock4_denselayer11_relu1 (l__self___features_denseblock4_denselayer11_norm1,) {} call_module bottleneck_output_104 L__self___features_denseblock4_denselayer11_conv1 (l__self___features_denseblock4_denselayer11_relu1,) {} call_module l__self___features_denseblock4_denselayer11_norm2 L__self___features_denseblock4_denselayer11_norm2 (bottleneck_output_104,) {} call_module l__self___features_denseblock4_denselayer11_relu2 L__self___features_denseblock4_denselayer11_relu2 (l__self___features_denseblock4_denselayer11_norm2,) {} call_module new_features_104 L__self___features_denseblock4_denselayer11_conv2 (l__self___features_denseblock4_denselayer11_relu2,) {} - call_function concated_features_53 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104], 1) {} + call_function concated_features_53 ([l__self___features_transition3_pool, new_features_84, 
new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104], 1) {} call_module l__self___features_denseblock4_denselayer12_norm1 L__self___features_denseblock4_denselayer12_norm1 (concated_features_53,) {} call_module l__self___features_denseblock4_denselayer12_relu1 L__self___features_denseblock4_denselayer12_relu1 (l__self___features_denseblock4_denselayer12_norm1,) {} call_module bottleneck_output_106 L__self___features_denseblock4_denselayer12_conv1 (l__self___features_denseblock4_denselayer12_relu1,) {} call_module l__self___features_denseblock4_denselayer12_norm2 L__self___features_denseblock4_denselayer12_norm2 (bottleneck_output_106,) {} call_module l__self___features_denseblock4_denselayer12_relu2 L__self___features_denseblock4_denselayer12_relu2 (l__self___features_denseblock4_denselayer12_norm2,) {} call_module new_features_106 L__self___features_denseblock4_denselayer12_conv2 (l__self___features_denseblock4_denselayer12_relu2,) {} - call_function concated_features_54 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104, new_features_106], 1) {} + call_function concated_features_54 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104, new_features_106], 1) {} call_module l__self___features_denseblock4_denselayer13_norm1 L__self___features_denseblock4_denselayer13_norm1 (concated_features_54,) {} call_module l__self___features_denseblock4_denselayer13_relu1 L__self___features_denseblock4_denselayer13_relu1 (l__self___features_denseblock4_denselayer13_norm1,) {} call_module bottleneck_output_108 L__self___features_denseblock4_denselayer13_conv1 (l__self___features_denseblock4_denselayer13_relu1,) {} call_module l__self___features_denseblock4_denselayer13_norm2 L__self___features_denseblock4_denselayer13_norm2 (bottleneck_output_108,) {} call_module l__self___features_denseblock4_denselayer13_relu2 L__self___features_denseblock4_denselayer13_relu2 (l__self___features_denseblock4_denselayer13_norm2,) {} call_module new_features_108 L__self___features_denseblock4_denselayer13_conv2 (l__self___features_denseblock4_denselayer13_relu2,) {} - call_function concated_features_55 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104, new_features_106, new_features_108], 1) {} + call_function concated_features_55 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104, new_features_106, new_features_108], 1) {} call_module l__self___features_denseblock4_denselayer14_norm1 L__self___features_denseblock4_denselayer14_norm1 (concated_features_55,) {} call_module l__self___features_denseblock4_denselayer14_relu1 L__self___features_denseblock4_denselayer14_relu1 (l__self___features_denseblock4_denselayer14_norm1,) {} call_module bottleneck_output_110 L__self___features_denseblock4_denselayer14_conv1 
(l__self___features_denseblock4_denselayer14_relu1,) {} call_module l__self___features_denseblock4_denselayer14_norm2 L__self___features_denseblock4_denselayer14_norm2 (bottleneck_output_110,) {} call_module l__self___features_denseblock4_denselayer14_relu2 L__self___features_denseblock4_denselayer14_relu2 (l__self___features_denseblock4_denselayer14_norm2,) {} call_module new_features_110 L__self___features_denseblock4_denselayer14_conv2 (l__self___features_denseblock4_denselayer14_relu2,) {} - call_function concated_features_56 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104, new_features_106, new_features_108, new_features_110], 1) {} + call_function concated_features_56 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104, new_features_106, new_features_108, new_features_110], 1) {} call_module l__self___features_denseblock4_denselayer15_norm1 L__self___features_denseblock4_denselayer15_norm1 (concated_features_56,) {} call_module l__self___features_denseblock4_denselayer15_relu1 L__self___features_denseblock4_denselayer15_relu1 (l__self___features_denseblock4_denselayer15_norm1,) {} call_module bottleneck_output_112 L__self___features_denseblock4_denselayer15_conv1 (l__self___features_denseblock4_denselayer15_relu1,) {} call_module l__self___features_denseblock4_denselayer15_norm2 L__self___features_denseblock4_denselayer15_norm2 (bottleneck_output_112,) {} call_module l__self___features_denseblock4_denselayer15_relu2 L__self___features_denseblock4_denselayer15_relu2 (l__self___features_denseblock4_denselayer15_norm2,) {} call_module new_features_112 L__self___features_denseblock4_denselayer15_conv2 (l__self___features_denseblock4_denselayer15_relu2,) {} - call_function concated_features_57 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104, new_features_106, new_features_108, new_features_110, new_features_112], 1) {} + call_function concated_features_57 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104, new_features_106, new_features_108, new_features_110, new_features_112], 1) {} call_module l__self___features_denseblock4_denselayer16_norm1 L__self___features_denseblock4_denselayer16_norm1 (concated_features_57,) {} call_module l__self___features_denseblock4_denselayer16_relu1 L__self___features_denseblock4_denselayer16_relu1 (l__self___features_denseblock4_denselayer16_norm1,) {} call_module bottleneck_output_114 L__self___features_denseblock4_denselayer16_conv1 (l__self___features_denseblock4_denselayer16_relu1,) {} call_module l__self___features_denseblock4_denselayer16_norm2 L__self___features_denseblock4_denselayer16_norm2 (bottleneck_output_114,) {} call_module l__self___features_denseblock4_denselayer16_relu2 L__self___features_denseblock4_denselayer16_relu2 (l__self___features_denseblock4_denselayer16_norm2,) {} call_module new_features_114 
L__self___features_denseblock4_denselayer16_conv2 (l__self___features_denseblock4_denselayer16_relu2,) {} - call_function cat_61 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104, new_features_106, new_features_108, new_features_110, new_features_112, new_features_114], 1) {} + call_function cat_61 ([l__self___features_transition3_pool, new_features_84, new_features_86, new_features_88, new_features_90, new_features_92, new_features_94, new_features_96, new_features_98, new_features_100, new_features_102, new_features_104, new_features_106, new_features_108, new_features_110, new_features_112, new_features_114], 1) {} call_module features L__self___features_norm5 (cat_61,) {} - call_function out (features,) {'inplace': True} - call_function out_1 (out, (1, 1)) {} - call_function out_2 (out_1, 1) {} + call_function out (features,) {'inplace': True} + call_function out_1 (out, (1, 1)) {} + call_function out_2 (out_1, 1) {} call_module out_3 L__self___classifier (out_2,) {} output output output ((out_3,),) {} @@ -1483,7 +1483,7 @@ data-dependent control flow. Consider the function below, where the line ------------- ------ ------------------------------------------------------ ----------- -------- placeholder l_a_ L_a_ () {} placeholder l_b_ L_b_ () {} - call_function abs_1 (l_a_,) {} + call_function abs_1 (l_a_,) {} call_function add (abs_1, 1) {} call_function x (l_a_, add) {} call_method sum_1 sum (l_b_,) {} @@ -1565,7 +1565,7 @@ We can see where TorchDynamo breaks the graph by using ``torch._dynamo.explain`` Ops per Graph: Ops 1: - + @@ -1574,30 +1574,38 @@ We can see where TorchDynamo breaks the graph by using ``torch._dynamo.explain`` Out Guards: Guard 1: + Name: "L['b']" + Source: local + Create Function: TENSOR_MATCH + Guard Types: ['TENSOR_MATCH'] + Code List: ["hasattr(L['b'], '_dynamo_dynamic_indices') == False"] + Object Weakref: + Guarded Class Weakref: + Guard 2: Name: '' Source: global - Create Function: DEFAULT_DEVICE - Guard Types: ['DEFAULT_DEVICE'] - Code List: ['utils_device.CURRENT_DEVICE == None'] + Create Function: DETERMINISTIC_ALGORITHMS + Guard Types: None + Code List: None Object Weakref: None Guarded Class Weakref: None - Guard 2: - Name: "G['torch'].abs" + Guard 3: + Name: "G['torch']" Source: global Create Function: FUNCTION_MATCH Guard Types: None Code List: None Object Weakref: None Guarded Class Weakref: None - Guard 3: - Name: "L['b']" - Source: local - Create Function: TENSOR_MATCH - Guard Types: ['TENSOR_MATCH'] - Code List: ["hasattr(L['b'], '_dynamo_dynamic_indices') == False"] - Object Weakref: - Guarded Class Weakref: Guard 4: + Name: '' + Source: global + Create Function: TORCH_FUNCTION_STATE + Guard Types: None + Code List: None + Object Weakref: None + Guarded Class Weakref: None + Guard 5: Name: '' Source: shape_env Create Function: SHAPE_ENV @@ -1605,63 +1613,71 @@ We can see where TorchDynamo breaks the graph by using ``torch._dynamo.explain`` Code List: None Object Weakref: None Guarded Class Weakref: None - Guard 5: + Guard 6: Name: '' Source: global - Create Function: BACKEND_MATCH - Guard Types: ['BACKEND_MATCH'] - Code List: ['___check_current_backend(140158500761472)'] + Create Function: DEFAULT_DEVICE + Guard Types: ['DEFAULT_DEVICE'] + Code List: ['utils_device.CURRENT_DEVICE == None'] Object Weakref: None Guarded Class Weakref: None - Guard 6: - Name: "G['torch']" 
+ Guard 7: + Name: "G['torch'].abs" Source: global Create Function: FUNCTION_MATCH Guard Types: None Code List: None Object Weakref: None Guarded Class Weakref: None - Guard 7: + Guard 8: Name: '' Source: global - Create Function: GRAD_MODE - Guard Types: None - Code List: None + Create Function: BACKEND_MATCH + Guard Types: ['BACKEND_MATCH'] + Code List: ['___check_current_backend(139670084077168)'] Object Weakref: None Guarded Class Weakref: None - Guard 8: + Guard 9: Name: "L['a']" Source: local Create Function: TENSOR_MATCH Guard Types: ['TENSOR_MATCH'] Code List: ["hasattr(L['a'], '_dynamo_dynamic_indices') == False"] - Object Weakref: - Guarded Class Weakref: - Guard 9: + Object Weakref: + Guarded Class Weakref: + Guard 10: Name: '' Source: global - Create Function: DETERMINISTIC_ALGORITHMS + Create Function: GRAD_MODE Guard Types: None Code List: None Object Weakref: None Guarded Class Weakref: None - Guard 10: + Guard 11: + Name: "L['b']" + Source: local + Create Function: TENSOR_MATCH + Guard Types: ['TENSOR_MATCH'] + Code List: ["hasattr(L['b'], '_dynamo_dynamic_indices') == False"] + Object Weakref: + Guarded Class Weakref: + Guard 12: Name: '' Source: global - Create Function: TORCH_FUNCTION_STATE + Create Function: DETERMINISTIC_ALGORITHMS Guard Types: None Code List: None Object Weakref: None Guarded Class Weakref: None - Guard 11: + Guard 13: Name: '' Source: global - Create Function: DEFAULT_DEVICE - Guard Types: ['DEFAULT_DEVICE'] - Code List: ['utils_device.CURRENT_DEVICE == None'] + Create Function: TORCH_FUNCTION_STATE + Guard Types: None + Code List: None Object Weakref: None Guarded Class Weakref: None - Guard 12: + Guard 14: Name: '' Source: shape_env Create Function: SHAPE_ENV @@ -1669,50 +1685,34 @@ We can see where TorchDynamo breaks the graph by using ``torch._dynamo.explain`` Code List: None Object Weakref: None Guarded Class Weakref: None - Guard 13: + Guard 15: + Name: '' + Source: global + Create Function: DEFAULT_DEVICE + Guard Types: ['DEFAULT_DEVICE'] + Code List: ['utils_device.CURRENT_DEVICE == None'] + Object Weakref: None + Guarded Class Weakref: None + Guard 16: Name: '' Source: global Create Function: BACKEND_MATCH Guard Types: ['BACKEND_MATCH'] - Code List: ['___check_current_backend(140158500761472)'] + Code List: ['___check_current_backend(139670084077168)'] Object Weakref: None Guarded Class Weakref: None - Guard 14: + Guard 17: Name: "L['x']" Source: local Create Function: TENSOR_MATCH Guard Types: ['TENSOR_MATCH'] Code List: ["hasattr(L['x'], '_dynamo_dynamic_indices') == False"] - Object Weakref: - Guarded Class Weakref: - Guard 15: - Name: "L['b']" - Source: local - Create Function: TENSOR_MATCH - Guard Types: ['TENSOR_MATCH'] - Code List: ["hasattr(L['b'], '_dynamo_dynamic_indices') == False"] - Object Weakref: - Guarded Class Weakref: - Guard 16: - Name: '' - Source: global - Create Function: GRAD_MODE - Guard Types: None - Code List: None - Object Weakref: None - Guarded Class Weakref: None - Guard 17: - Name: '' - Source: global - Create Function: DETERMINISTIC_ALGORITHMS - Guard Types: None - Code List: None - Object Weakref: None - Guarded Class Weakref: None + Object Weakref: + Guarded Class Weakref: Guard 18: Name: '' Source: global - Create Function: TORCH_FUNCTION_STATE + Create Function: GRAD_MODE Guard Types: None Code List: None Object Weakref: None @@ -1720,8 +1720,8 @@ We can see where TorchDynamo breaks the graph by using ``torch._dynamo.explain`` Compile Times: TorchDynamo compilation metrics: Function Runtimes (s) 
------------------------------- -------------- - _compile..compile_inner 0.0111, 0.0068 - OutputGraph.call_user_compiler 0.0001, 0.0001 + _compile..compile_inner 0.0111, 0.0067 + OutputGraph.call_user_compiler 0.0001, 0.0000 @@ -1821,13 +1821,13 @@ the model we used above for demonstrating speedups. .. code-block:: none - tensor([[ 0.1343, 0.1977, -0.2055, ..., 0.1862, -0.1632, -0.1640], - [ 0.2683, 0.2349, -0.1905, ..., 0.2167, -0.0055, 0.0311], - [ 0.0162, 0.2181, -0.1115, ..., 0.1708, -0.1679, -0.0636], + tensor([[ 0.1344, 0.1976, -0.2056, ..., 0.1861, -0.1632, -0.1642], + [ 0.2680, 0.2349, -0.1903, ..., 0.2170, -0.0055, 0.0309], + [ 0.0164, 0.2180, -0.1115, ..., 0.1709, -0.1681, -0.0635], ..., - [ 0.0805, 0.2680, -0.1888, ..., 0.0382, -0.2072, -0.1445], - [-0.0210, 0.0859, -0.2458, ..., 0.1863, -0.1280, -0.0282], - [-0.0387, 0.0729, -0.1960, ..., 0.0863, -0.2200, -0.1486]], + [ 0.0807, 0.2680, -0.1889, ..., 0.0381, -0.2073, -0.1444], + [-0.0209, 0.0858, -0.2457, ..., 0.1862, -0.1280, -0.0282], + [-0.0386, 0.0730, -0.1961, ..., 0.0864, -0.2199, -0.1485]], device='cuda:0', grad_fn=) @@ -1855,7 +1855,7 @@ with FX graphs. We hope that you will give ``torch.compile`` a try! .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 6 minutes 12.784 seconds) + **Total running time of the script:** ( 6 minutes 11.441 seconds) .. _sphx_glr_download_intermediate_torch_compile_tutorial.py: diff --git a/_sources/intermediate/torch_export_tutorial.rst.txt b/_sources/intermediate/torch_export_tutorial.rst.txt index 8270aedeb2..e5dee147a5 100644 --- a/_sources/intermediate/torch_export_tutorial.rst.txt +++ b/_sources/intermediate/torch_export_tutorial.rst.txt @@ -655,15 +655,15 @@ operation is not recorded in the graph. def forward(self, arg0_1: "f32[3, 3]"): # No stacktrace found for following nodes add: "f32[3, 3]" = torch.ops.aten.add.Tensor(arg0_1, 1); arg0_1 = None - add_1: "f32[3, 3]" = torch.ops.aten.add.Tensor(add, 140156818500448); add = None + add_1: "f32[3, 3]" = torch.ops.aten.add.Tensor(add, 139668425840400); add = None return (add_1,) Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=, arg=TensorArgument(name='add_1'), target=None)]) Range constraints: {} - tensor([[1.4016e+14, 1.4016e+14, 1.4016e+14], - [1.4016e+14, 1.4016e+14, 1.4016e+14], - [1.4016e+14, 1.4016e+14, 1.4016e+14]]) + tensor([[1.3967e+14, 1.3967e+14, 1.3967e+14], + [1.3967e+14, 1.3967e+14, 1.3967e+14], + [1.3967e+14, 1.3967e+14, 1.3967e+14]]) @@ -1397,7 +1397,7 @@ error out. guards = output_graph.shape_env.produce_guards( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 3308, in produce_guards raise ConstraintViolationError( - torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (inp5_dim0, inp4_dim0, inp5_dim1)! For more information, run with TORCH_LOGS="+dynamic". + torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (inp4_dim0, inp5_dim0, inp5_dim1)! For more information, run with TORCH_LOGS="+dynamic". - The values of inp5_dim0 = L['y'].size()[0] and inp4_dim1 = L['x'].size()[1] must always be equal. - Not all values of inp5_dim1 = L['y'].size()[1] in the specified range satisfy the generated guard Ne(L['y'].size()[1], 16). 
- Not all values of inp4_dim0 = L['x'].size()[0] in the specified range satisfy the generated guard 2 <= L['x'].size()[0] and L['x'].size()[0] <= 16 @@ -1427,7 +1427,7 @@ error out. gm_torch_level = _export_to_torch_ir( File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/export/_trace.py", line 359, in _export_to_torch_ir raise UserError(UserErrorType.CONSTRAINT_VIOLATION, str(e)) # noqa: TRY200 - torch._dynamo.exc.UserError: Constraints violated (inp5_dim0, inp4_dim0, inp5_dim1)! For more information, run with TORCH_LOGS="+dynamic". + torch._dynamo.exc.UserError: Constraints violated (inp4_dim0, inp5_dim0, inp5_dim1)! For more information, run with TORCH_LOGS="+dynamic". - The values of inp5_dim0 = L['y'].size()[0] and inp4_dim1 = L['x'].size()[1] must always be equal. - Not all values of inp5_dim1 = L['y'].size()[1] in the specified range satisfy the generated guard Ne(L['y'].size()[1], 16). - Not all values of inp4_dim0 = L['x'].size()[0] in the specified range satisfy the generated guard 2 <= L['x'].size()[0] and L['x'].size()[0] <= 16 @@ -1524,40 +1524,40 @@ or use ``torch._logging.set_logs``. .. code-block:: none - I0624 21:50:47.454000 140165439955584 torch/_dynamo/logging.py:55] [16/0] Step 1: torchdynamo start tracing forward /var/lib/workspace/intermediate_source/torch_export_tutorial.py:481 - I0624 21:50:47.457000 140165439955584 torch/fx/experimental/symbolic_shapes.py:2724] [16/0] create_symbol s0 = 8 for L['x'].size()[0] [2, 16] (_dynamo/variables/builder.py:1881 in ) - I0624 21:50:47.457000 140165439955584 torch/fx/experimental/symbolic_shapes.py:2724] [16/0] create_symbol s1 = 16 for L['x'].size()[1] [2, 9223372036854775806] (_dynamo/variables/builder.py:1881 in ) - I0624 21:50:47.459000 140165439955584 torch/fx/experimental/symbolic_shapes.py:2724] [16/0] create_symbol s2 = 16 for L['y'].size()[0] [2, 9223372036854775806] (_dynamo/variables/builder.py:1881 in ) - I0624 21:50:47.460000 140165439955584 torch/fx/experimental/symbolic_shapes.py:2724] [16/0] create_symbol s3 = 32 for L['y'].size()[1] [17, 9223372036854775806] (_dynamo/variables/builder.py:1881 in ) - I0624 21:50:47.468000 140165439955584 torch/fx/experimental/symbolic_shapes.py:3809] [16/0] set_replacement s2 = s1 (solve_backed) ValueRanges(lower=2, upper=9223372036854775806, is_bool=False) - I0624 21:50:47.469000 140165439955584 torch/fx/experimental/symbolic_shapes.py:4035] [16/0] eval Eq(s1, s2) [guard added] at ar/lib/workspace/intermediate_source/torch_export_tutorial.py:483 in forward (_meta_registrations.py:2014 in meta_mm) - I0624 21:50:47.469000 140165439955584 torch/_dynamo/logging.py:55] [16/0] Step 1: torchdynamo done tracing forward (RETURN_VALUE) - I0624 21:50:47.471000 140165439955584 torch/fx/experimental/symbolic_shapes.py:3809] [16/0] set_replacement s2 = s1 (find) ValueRanges(lower=2, upper=9223372036854775806, is_bool=False) - I0624 21:50:47.472000 140165439955584 torch/_dynamo/logging.py:55] [16/0] Step 2: calling compiler function dynamo_normalization_capturing_compiler - I0624 21:50:47.472000 140165439955584 torch/_dynamo/logging.py:55] [16/0] Step 2: done compiler function dynamo_normalization_capturing_compiler - I0624 21:50:47.474000 140165439955584 torch/fx/experimental/symbolic_shapes.py:2806] [16/0] produce_guards - I0624 21:50:47.502000 140165439955584 torch/_dynamo/eval_frame.py:1339] Summary of dimension constraints: - I0624 21:50:47.502000 140165439955584 torch/_dynamo/eval_frame.py:1339] Suggested fixes: - I0624 21:50:47.502000 140165439955584 
torch/_dynamo/eval_frame.py:1339] inp4_dim0 = Dim('inp4_dim0', max=16) - I0624 21:50:47.502000 140165439955584 torch/_dynamo/eval_frame.py:1339] inp5_dim1 = Dim('inp5_dim1', min=17) - I0624 21:50:47.502000 140165439955584 torch/_dynamo/eval_frame.py:1339] shared_dim = Dim('shared_dim') - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] Dynamo captured graph: - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] class GraphModule(torch.nn.Module): - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] def forward(self, L_x_ : torch.Tensor, L_y_ : torch.Tensor): - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] l_x_ = L_x_ - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] l_y_ = L_y_ - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] # File: /var/lib/workspace/intermediate_source/torch_export_tutorial.py:482 in forward, code: if x.shape[0] <= 16: - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] size = l_x_.size() - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] getitem = size[0]; size = None - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] le = getitem <= 16; getitem = None - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] # File: /var/lib/workspace/intermediate_source/torch_export_tutorial.py:483 in forward, code: return x @ y[:, :16] - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] getitem_2 = l_y_[(slice(None, None, None), slice(None, 16, None))]; l_y_ = None - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] matmul = l_x_ @ getitem_2; l_x_ = getitem_2 = None - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] return (matmul,) - I0624 21:50:47.503000 140165439955584 torch/_dynamo/eval_frame.py:1363] + I0625 21:38:32.787000 139677001331328 torch/_dynamo/logging.py:55] [16/0] Step 1: torchdynamo start tracing forward /var/lib/workspace/intermediate_source/torch_export_tutorial.py:481 + I0625 21:38:32.790000 139677001331328 torch/fx/experimental/symbolic_shapes.py:2724] [16/0] create_symbol s0 = 8 for L['x'].size()[0] [2, 16] (_dynamo/variables/builder.py:1881 in ) + I0625 21:38:32.791000 139677001331328 torch/fx/experimental/symbolic_shapes.py:2724] [16/0] create_symbol s1 = 16 for L['x'].size()[1] [2, 9223372036854775806] (_dynamo/variables/builder.py:1881 in ) + I0625 21:38:32.793000 139677001331328 torch/fx/experimental/symbolic_shapes.py:2724] [16/0] create_symbol s2 = 16 for L['y'].size()[0] [2, 9223372036854775806] (_dynamo/variables/builder.py:1881 in ) + I0625 21:38:32.794000 139677001331328 torch/fx/experimental/symbolic_shapes.py:2724] [16/0] create_symbol s3 = 32 for L['y'].size()[1] [17, 9223372036854775806] (_dynamo/variables/builder.py:1881 in ) + I0625 21:38:32.802000 139677001331328 torch/fx/experimental/symbolic_shapes.py:3809] [16/0] set_replacement s2 = s1 (solve_backed) ValueRanges(lower=2, upper=9223372036854775806, is_bool=False) + I0625 21:38:32.802000 139677001331328 torch/fx/experimental/symbolic_shapes.py:4035] [16/0] eval Eq(s1, s2) [guard added] at ar/lib/workspace/intermediate_source/torch_export_tutorial.py:483 in forward 
(_meta_registrations.py:2014 in meta_mm) + I0625 21:38:32.803000 139677001331328 torch/_dynamo/logging.py:55] [16/0] Step 1: torchdynamo done tracing forward (RETURN_VALUE) + I0625 21:38:32.805000 139677001331328 torch/fx/experimental/symbolic_shapes.py:3809] [16/0] set_replacement s2 = s1 (find) ValueRanges(lower=2, upper=9223372036854775806, is_bool=False) + I0625 21:38:32.805000 139677001331328 torch/_dynamo/logging.py:55] [16/0] Step 2: calling compiler function dynamo_normalization_capturing_compiler + I0625 21:38:32.805000 139677001331328 torch/_dynamo/logging.py:55] [16/0] Step 2: done compiler function dynamo_normalization_capturing_compiler + I0625 21:38:32.807000 139677001331328 torch/fx/experimental/symbolic_shapes.py:2806] [16/0] produce_guards + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1339] Summary of dimension constraints: + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1339] Suggested fixes: + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1339] inp4_dim0 = Dim('inp4_dim0', max=16) + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1339] inp5_dim1 = Dim('inp5_dim1', min=17) + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1339] shared_dim = Dim('shared_dim') + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] Dynamo captured graph: + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] class GraphModule(torch.nn.Module): + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] def forward(self, L_x_ : torch.Tensor, L_y_ : torch.Tensor): + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] l_x_ = L_x_ + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] l_y_ = L_y_ + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] # File: /var/lib/workspace/intermediate_source/torch_export_tutorial.py:482 in forward, code: if x.shape[0] <= 16: + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] size = l_x_.size() + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] getitem = size[0]; size = None + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] le = getitem <= 16; getitem = None + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] # File: /var/lib/workspace/intermediate_source/torch_export_tutorial.py:483 in forward, code: return x @ y[:, :16] + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] getitem_2 = l_y_[(slice(None, None, None), slice(None, 16, None))]; l_y_ = None + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] matmul = l_x_ @ getitem_2; l_x_ = getitem_2 = None + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] return (matmul,) + I0625 21:38:32.834000 139677001331328 torch/_dynamo/eval_frame.py:1363] @@ -2026,7 +2026,7 @@ and considerations (control flow ops, constraints, etc.) that need to be made in .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 1.787 seconds) + **Total running time of the script:** ( 0 minutes 1.776 seconds) .. 
_sphx_glr_download_intermediate_torch_export_tutorial.py: diff --git a/_sources/intermediate/torchvision_tutorial.rst.txt b/_sources/intermediate/torchvision_tutorial.rst.txt index f3071dd654..b387bda675 100644 --- a/_sources/intermediate/torchvision_tutorial.rst.txt +++ b/_sources/intermediate/torchvision_tutorial.rst.txt @@ -157,7 +157,7 @@ Here is one example of a pair of images and segmentation masks .. code-block:: none - + @@ -321,10 +321,10 @@ way of doing it: Downloading: "https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth 0%| | 0.00/160M [00:00 + @@ -848,7 +851,7 @@ the torchvision repository. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 49.578 seconds) + **Total running time of the script:** ( 0 minutes 49.792 seconds) .. _sphx_glr_download_intermediate_torchvision_tutorial.py: diff --git a/_sources/recipes/compiling_optimizer_lr_scheduler.rst.txt b/_sources/recipes/compiling_optimizer_lr_scheduler.rst.txt index f7782e7f9f..e95d8cb824 100644 --- a/_sources/recipes/compiling_optimizer_lr_scheduler.rst.txt +++ b/_sources/recipes/compiling_optimizer_lr_scheduler.rst.txt @@ -167,31 +167,31 @@ LR in a tensor. .. code-block:: none - V0624 22:12:16.459000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] Recompiling function step in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/optim/adam.py:135 - V0624 22:12:16.459000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] triggered by the following guard failure(s): - V0624 22:12:16.459000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - ___key_to_id(L['self'].state) == [140157035258528,140157035257008,140157035255728,140157035252208,140157035253328,140157035255648,140157035256128,140157035259088,140157035254608,140157035255968] - V0624 22:12:19.776000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] Recompiling function step in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/optim/adam.py:135 - V0624 22:12:19.776000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] triggered by the following guard failure(s): - V0624 22:12:19.776000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.003333333333333333 - V0624 22:12:19.776000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - G['__optimizer_140156798886016_140157020364352_c79']() is not None - V0624 22:12:22.340000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] Recompiling function step in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/optim/adam.py:135 - V0624 22:12:22.340000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] triggered by the following guard failure(s): - V0624 22:12:22.340000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.004666666666666667 - V0624 22:12:22.340000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.003333333333333333 - V0624 22:12:22.340000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - G['__optimizer_140156798886016_140157020364352_c79']() is not None - V0624 22:12:24.889000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] Recompiling function step in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/optim/adam.py:135 - V0624 22:12:24.889000 140165439955584 torch/_dynamo/guards.py:1425] 
[__recompiles] triggered by the following guard failure(s): - V0624 22:12:24.889000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.006000000000000001 - V0624 22:12:24.889000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.004666666666666667 - V0624 22:12:24.889000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.003333333333333333 - V0624 22:12:24.889000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - G['__optimizer_140156798886016_140157020364352_c79']() is not None - V0624 22:12:27.449000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] Recompiling function step in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/optim/adam.py:135 - V0624 22:12:27.449000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] triggered by the following guard failure(s): - V0624 22:12:27.449000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.007333333333333335 - V0624 22:12:27.449000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.006000000000000001 - V0624 22:12:27.449000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.004666666666666667 - V0624 22:12:27.449000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.003333333333333333 - V0624 22:12:27.449000 140165439955584 torch/_dynamo/guards.py:1425] [__recompiles] - G['__optimizer_140156798886016_140157020364352_c79']() is not None + V0625 22:03:41.119000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] Recompiling function step in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/optim/adam.py:135 + V0625 22:03:41.119000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] triggered by the following guard failure(s): + V0625 22:03:41.119000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - ___key_to_id(L['self'].state) == [139670086608880,139670086608960,139670086601920,139670086603360,139670086600320,139670086599760,139670086606880,139670086609760,139670086612080,139670086602480] + V0625 22:03:44.430000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] Recompiling function step in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/optim/adam.py:135 + V0625 22:03:44.430000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] triggered by the following guard failure(s): + V0625 22:03:44.430000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.003333333333333333 + V0625 22:03:44.430000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - G['__optimizer_139668517417792_139670031288176_c79']() is not None + V0625 22:03:46.979000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] Recompiling function step in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/optim/adam.py:135 + V0625 22:03:46.979000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] triggered by the following guard failure(s): + V0625 22:03:46.979000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.004666666666666667 + V0625 22:03:46.979000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.003333333333333333 + V0625 22:03:46.979000 
139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - G['__optimizer_139668517417792_139670031288176_c79']() is not None + V0625 22:03:49.513000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] Recompiling function step in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/optim/adam.py:135 + V0625 22:03:49.513000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] triggered by the following guard failure(s): + V0625 22:03:49.513000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.006000000000000001 + V0625 22:03:49.513000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.004666666666666667 + V0625 22:03:49.513000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.003333333333333333 + V0625 22:03:49.513000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - G['__optimizer_139668517417792_139670031288176_c79']() is not None + V0625 22:03:52.060000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] Recompiling function step in /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/optim/adam.py:135 + V0625 22:03:52.060000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] triggered by the following guard failure(s): + V0625 22:03:52.060000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.007333333333333335 + V0625 22:03:52.060000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.006000000000000001 + V0625 22:03:52.060000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.004666666666666667 + V0625 22:03:52.060000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - L['self'].param_groups[0]['lr'] == 0.003333333333333333 + V0625 22:03:52.060000 139677001331328 torch/_dynamo/guards.py:1425] [__recompiles] - G['__optimizer_139668517417792_139670031288176_c79']() is not None @@ -219,7 +219,7 @@ See also: .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 17.012 seconds) + **Total running time of the script:** ( 0 minutes 16.936 seconds) .. _sphx_glr_download_recipes_compiling_optimizer_lr_scheduler.py: diff --git a/_sources/recipes/recipes/changing_default_device.rst.txt b/_sources/recipes/recipes/changing_default_device.rst.txt index 01bd47c4ec..19d81b73b7 100644 --- a/_sources/recipes/recipes/changing_default_device.rst.txt +++ b/_sources/recipes/recipes/changing_default_device.rst.txt @@ -128,7 +128,7 @@ is causing problems for you, please comment on .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.093 seconds) + **Total running time of the script:** ( 0 minutes 0.099 seconds) .. _sphx_glr_download_recipes_recipes_changing_default_device.py: diff --git a/_sources/recipes/recipes/module_load_state_dict_tips.rst.txt b/_sources/recipes/recipes/module_load_state_dict_tips.rst.txt index 5e56ff42e5..9b6abf59a9 100644 --- a/_sources/recipes/recipes/module_load_state_dict_tips.rst.txt +++ b/_sources/recipes/recipes/module_load_state_dict_tips.rst.txt @@ -157,7 +157,7 @@ loaded into CPU RAM, which can be undesirable when: .. code-block:: none - loading time without mmap=0.019269943237304688 + loading time without mmap=0.021975040435791016 @@ -189,7 +189,7 @@ storages will be memory-mapped. .. 
code-block:: none - loading time with mmap=0.0016238689422607422 + loading time with mmap=0.0016951560974121094 @@ -352,7 +352,7 @@ be used to aid when loading a model from a checkpoint. .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.381 seconds) + **Total running time of the script:** ( 0 minutes 0.387 seconds) .. _sphx_glr_download_recipes_recipes_module_load_state_dict_tips.py: diff --git a/_sources/recipes/recipes/reasoning_about_shapes.rst.txt b/_sources/recipes/recipes/reasoning_about_shapes.rst.txt index 4d6469063e..5633207423 100644 --- a/_sources/recipes/recipes/reasoning_about_shapes.rst.txt +++ b/_sources/recipes/recipes/reasoning_about_shapes.rst.txt @@ -60,7 +60,7 @@ of a layer without materializing any data. .. code-block:: none tensor(..., device='meta', size=(2, 5, 9, 9), grad_fn=) - Time taken: 0.00018528500004322268 + Time taken: 0.00018522299978940282 @@ -94,7 +94,7 @@ inputs will not significantly alter the time taken for shape computation. tensor(..., device='meta', size=(1024, 5, 65535, 65535), grad_fn=) - Time taken: 0.0001513819988758769 + Time taken: 0.00011191299927304499 diff --git a/_sources/recipes/recipes/swap_tensors.rst.txt b/_sources/recipes/recipes/swap_tensors.rst.txt index 990b9c9d8d..47c47fbf60 100644 --- a/_sources/recipes/recipes/swap_tensors.rst.txt +++ b/_sources/recipes/recipes/swap_tensors.rst.txt @@ -217,8 +217,8 @@ of the subclass' payload (``elem``) does not change. .. code-block:: none - Before: id(m.weight)=139700413278192, id(m.bias)=139700413272992 - After: id(m.weight)=139700413278192, id(m.bias)=139700413272992 + Before: id(m.weight)=139960653111168, id(m.bias)=139960653111328 + After: id(m.weight)=139960653111168, id(m.bias)=139960653111328 m.weight.dtype: torch.bfloat16 m.weight.elem.dtype: torch.float32 m.bias.dtype: torch.bfloat16 @@ -259,8 +259,8 @@ the ``dtype`` of the payload is properly converted. .. code-block:: none - Before: id(m.weight)=139700413277872, id(m.bias)=139698941485248 - After: id(m.weight)=139700413277872, id(m.bias)=139698941485248 + Before: id(m.weight)=139960653106848, id(m.bias)=139960652981136 + After: id(m.weight)=139960653106848, id(m.bias)=139960652981136 m.weight.dtype: torch.bfloat16 m.weight.elem.dtype: torch.bfloat16 m.bias.dtype: torch.bfloat16 @@ -392,7 +392,7 @@ for biases, we want to preserve the properties of the tensor in the ``state_dict .. code-block:: none - Before: id(weight)=139700413275472, id(bias)=139700413270912 + Before: id(weight)=139960653108048, id(bias)=139960653117168 m.state_dict() before load_state_dict(): OrderedDict([('weight', MyQuantizedLinearWeight(tensor(..., device='meta', size=(5, 3)), scale=0.5)), ('bias', tensor(..., device='meta', size=(5,)))]) state_dict: @@ -401,7 +401,7 @@ for biases, we want to preserve the properties of the tensor in the ``state_dict [ 0.2932, -0.3519, -0.5715], [-0.2231, -0.4428, 0.4737], [ 0.1663, 0.2391, 0.1826]])), ('bias', tensor([-0.0100, 0.4518, -0.4102, 0.0364, -0.3941]))]) - After: id(weight)=139700413275472, id(bias)=139700413270912 + After: id(weight)=139960653108048, id(bias)=139960653117168 m.state_dict() after load_state_dict(): OrderedDict([('weight', MyQuantizedLinearWeight(tensor([[ 0.2430, 0.5155, 0.3337], [-0.2524, 0.3333, 0.1033], @@ -431,7 +431,7 @@ use the two new extension points that are gated by .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 0.020 seconds) + **Total running time of the script:** ( 0 minutes 0.018 seconds) .. 
_sphx_glr_download_recipes_recipes_swap_tensors.py: diff --git a/_sources/recipes/torch_compile_user_defined_triton_kernel_tutorial.rst.txt b/_sources/recipes/torch_compile_user_defined_triton_kernel_tutorial.rst.txt index 681f4dfa1f..87577ee551 100644 --- a/_sources/recipes/torch_compile_user_defined_triton_kernel_tutorial.rst.txt +++ b/_sources/recipes/torch_compile_user_defined_triton_kernel_tutorial.rst.txt @@ -249,7 +249,7 @@ See Also .. rst-class:: sphx-glr-timing - **Total running time of the script:** ( 0 minutes 1.467 seconds) + **Total running time of the script:** ( 0 minutes 1.471 seconds) .. _sphx_glr_download_recipes_torch_compile_user_defined_triton_kernel_tutorial.py: diff --git a/advanced/coding_ddpg.html b/advanced/coding_ddpg.html index b5b776299d..a08919fbaf 100644 --- a/advanced/coding_ddpg.html +++ b/advanced/coding_ddpg.html @@ -1559,26 +1559,26 @@

Time to train the policy
  0%|          | 0/10000 [00:00<?, ?it/s]
-  8%|8         | 800/10000 [00:00<00:08, 1075.72it/s]
- 16%|#6        | 1600/10000 [00:03<00:20, 405.15it/s]
- 24%|##4       | 2400/10000 [00:04<00:13, 550.81it/s]
- 32%|###2      | 3200/10000 [00:05<00:10, 666.48it/s]
- 40%|####      | 4000/10000 [00:06<00:07, 750.44it/s]
- 48%|####8     | 4800/10000 [00:06<00:06, 811.02it/s]
- 56%|#####6    | 5600/10000 [00:07<00:05, 859.06it/s]
-reward: -2.16 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.09/6.32, grad norm= 218.52, loss_value= 380.54, loss_actor= 14.07, target value: -11.24:  56%|#####6    | 5600/10000 [00:08<00:05, 859.06it/s]
-reward: -2.16 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.09/6.32, grad norm= 218.52, loss_value= 380.54, loss_actor= 14.07, target value: -11.24:  64%|######4   | 6400/10000 [00:09<00:05, 638.38it/s]
-reward: -0.11 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.50/6.05, grad norm= 33.89, loss_value= 358.66, loss_actor= 14.75, target value: -15.00:  64%|######4   | 6400/10000 [00:10<00:05, 638.38it/s]
-reward: -0.11 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.50/6.05, grad norm= 33.89, loss_value= 358.66, loss_actor= 14.75, target value: -15.00:  72%|#######2  | 7200/10000 [00:12<00:05, 488.30it/s]
-reward: -2.20 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-1.80/6.42, grad norm= 179.01, loss_value= 451.17, loss_actor= 11.22, target value: -11.76:  72%|#######2  | 7200/10000 [00:12<00:05, 488.30it/s]
-reward: -2.20 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-1.80/6.42, grad norm= 179.01, loss_value= 451.17, loss_actor= 11.22, target value: -11.76:  80%|########  | 8000/10000 [00:14<00:04, 419.11it/s]
-reward: -4.57 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.74/5.57, grad norm= 190.83, loss_value= 283.69, loss_actor= 16.99, target value: -17.47:  80%|########  | 8000/10000 [00:15<00:04, 419.11it/s]
-reward: -4.57 (r0 = -3.21), reward eval: reward: -0.00, reward normalized=-2.74/5.57, grad norm= 190.83, loss_value= 283.69, loss_actor= 16.99, target value: -17.47:  88%|########8 | 8800/10000 [00:17<00:03, 382.68it/s]
-reward: -5.03 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-3.02/5.53, grad norm= 218.52, loss_value= 326.20, loss_actor= 14.51, target value: -20.00:  88%|########8 | 8800/10000 [00:19<00:03, 382.68it/s]
-reward: -5.03 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-3.02/5.53, grad norm= 218.52, loss_value= 326.20, loss_actor= 14.51, target value: -20.00:  96%|#########6| 9600/10000 [00:21<00:01, 285.28it/s]
-reward: -4.58 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-2.86/5.12, grad norm= 213.90, loss_value= 296.15, loss_actor= 13.14, target value: -20.54:  96%|#########6| 9600/10000 [00:22<00:01, 285.28it/s]
-reward: -4.58 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-2.86/5.12, grad norm= 213.90, loss_value= 296.15, loss_actor= 13.14, target value: -20.54: : 10400it [00:25, 264.79it/s]
-reward: -3.55 (r0 = -3.21), reward eval: reward: -2.90, reward normalized=-3.59/4.43, grad norm= 57.50, loss_value= 177.22, loss_actor= 20.78, target value: -23.86: : 10400it [00:25, 264.79it/s]
+  8%|8         | 800/10000 [00:00<00:08, 1065.00it/s]
+ 16%|#6        | 1600/10000 [00:03<00:20, 401.41it/s]
+ 24%|##4       | 2400/10000 [00:04<00:13, 543.48it/s]
+ 32%|###2      | 3200/10000 [00:05<00:10, 655.82it/s]
+ 40%|####      | 4000/10000 [00:06<00:08, 737.72it/s]
+ 48%|####8     | 4800/10000 [00:06<00:06, 799.24it/s]
+ 56%|#####6    | 5600/10000 [00:07<00:05, 846.72it/s]
+reward: -2.56 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.21/6.37, grad norm= 44.83, loss_value= 395.90, loss_actor= 18.20, target value: -18.76:  56%|#####6    | 5600/10000 [00:08<00:05, 846.72it/s]
+reward: -2.56 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.21/6.37, grad norm= 44.83, loss_value= 395.90, loss_actor= 18.20, target value: -18.76:  64%|######4   | 6400/10000 [00:09<00:05, 635.05it/s]
+reward: -0.10 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.70/6.05, grad norm= 81.33, loss_value= 343.52, loss_actor= 14.75, target value: -16.15:  64%|######4   | 6400/10000 [00:10<00:05, 635.05it/s]
+reward: -0.10 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.70/6.05, grad norm= 81.33, loss_value= 343.52, loss_actor= 14.75, target value: -16.15:  72%|#######2  | 7200/10000 [00:12<00:05, 486.42it/s]
+reward: -1.82 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.04/5.69, grad norm= 203.44, loss_value= 302.93, loss_actor= 15.25, target value: -20.36:  72%|#######2  | 7200/10000 [00:13<00:05, 486.42it/s]
+reward: -1.82 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-3.04/5.69, grad norm= 203.44, loss_value= 302.93, loss_actor= 15.25, target value: -20.36:  80%|########  | 8000/10000 [00:14<00:04, 418.73it/s]
+reward: -4.83 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.87/5.11, grad norm= 241.11, loss_value= 259.33, loss_actor= 16.64, target value: -19.13:  80%|########  | 8000/10000 [00:15<00:04, 418.73it/s]
+reward: -4.83 (r0 = -2.20), reward eval: reward: -0.01, reward normalized=-2.87/5.11, grad norm= 241.11, loss_value= 259.33, loss_actor= 16.64, target value: -19.13:  88%|########8 | 8800/10000 [00:17<00:03, 382.18it/s]
+reward: -5.14 (r0 = -2.20), reward eval: reward:  0.53, reward normalized=-2.44/4.86, grad norm= 150.89, loss_value= 188.20, loss_actor= 18.66, target value: -16.22:  88%|########8 | 8800/10000 [00:20<00:03, 382.18it/s]
+reward: -5.14 (r0 = -2.20), reward eval: reward:  0.53, reward normalized=-2.44/4.86, grad norm= 150.89, loss_value= 188.20, loss_actor= 18.66, target value: -16.22:  96%|#########6| 9600/10000 [00:21<00:01, 284.56it/s]
+reward: -5.13 (r0 = -2.20), reward eval: reward:  0.53, reward normalized=-2.81/5.41, grad norm= 125.30, loss_value= 268.30, loss_actor= 16.89, target value: -19.89:  96%|#########6| 9600/10000 [00:22<00:01, 284.56it/s]
+reward: -5.13 (r0 = -2.20), reward eval: reward:  0.53, reward normalized=-2.81/5.41, grad norm= 125.30, loss_value= 268.30, loss_actor= 16.89, target value: -19.89: : 10400it [00:25, 264.83it/s]
+reward: -3.58 (r0 = -2.20), reward eval: reward:  0.53, reward normalized=-3.82/5.60, grad norm= 87.65, loss_value= 267.84, loss_actor= 23.16, target value: -27.36: : 10400it [00:26, 264.83it/s]
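The progress lines above come from the DDPG training loop, where tqdm's bar is combined with a description string carrying the running reward and loss statistics. A minimal, hypothetical sketch of how output in that shape is typically produced (the names and metric values here are illustrative, not the tutorial's actual logging code):

.. code-block:: python

    from tqdm import tqdm

    total_frames = 10_000
    frames_per_batch = 800

    pbar = tqdm(total=total_frames)
    for _ in range(total_frames // frames_per_batch):
        # ... collect a batch and run the optimization steps here ...
        pbar.update(frames_per_batch)
        # Illustrative values; the tutorial derives these from its loss module output.
        reward, loss_value, loss_actor = -2.5, 395.9, 18.2
        pbar.set_description(
            f"reward: {reward: 4.2f}, loss_value={loss_value: 4.2f}, "
            f"loss_actor={loss_actor: 4.2f}"
        )
    pbar.close()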
 
@@ -1622,7 +1622,7 @@

Next Steps

[Feature] Distpatch IQL loss module.)

  • Allowing flexible TensorDict keys.

  •

-    Total running time of the script: ( 0 minutes 41.406 seconds)

+    Total running time of the script: ( 0 minutes 41.549 seconds)
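Several of the ``torch_compile_tutorial`` hunks earlier in this diff are changes to the output of ``torch._dynamo.explain``, which reports the captured graphs, graph-break reasons, and guards for a function with data-dependent control flow. A short sketch of the call that produces that kind of report, assuming a recent PyTorch install; the function body is written to mirror the graph nodes shown above (``abs_1``, ``add``, ``x``, ``sum_1``) and is not copied from the tutorial source:

.. code-block:: python

    import torch
    import torch._dynamo

    def bar(a, b):
        x = a / (torch.abs(a) + 1)
        if b.sum() < 0:  # data-dependent control flow causes a graph break
            b = b * -1
        return x * b

    # explain() traces the function with TorchDynamo and summarizes the resulting
    # graphs, graph breaks, and guards without invoking a backend compiler.
    explanation = torch._dynamo.explain(bar)(torch.randn(10), torch.randn(10))
    print(explanation)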