docs: add colab badges to examples (#1637)
Adds "Open in Colab" badges for all examples in docs
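Every badge added by this commit is the same two-line reStructuredText `image` directive, varying only the framework directory and notebook name in the target URL. For example, the block added under the PyTorch tab of `aim.rst`:

```rst
.. tab:: PyTorch

    .. image:: https://colab.research.google.com/assets/colab-badge.svg
        :target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/aim.ipynb
```

The `pytorch_lightning` and `pytorch_lightning_distributed` tabs follow the same pattern with their respective notebook paths.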
SauravMaheshkar authored Aug 20, 2024
1 parent bc2d199 commit 1c9285a
Showing 21 changed files with 186 additions and 0 deletions.
9 changes: 9 additions & 0 deletions docs/source/examples/aim.rst
Original file line number Diff line number Diff line change
@@ -19,6 +19,9 @@ Reference:
.. tabs::
.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/aim.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/aim.py
@@ -27,6 +30,9 @@ Reference:

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/aim.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/aim.py
@@ -35,6 +41,9 @@ Reference:

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/aim.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/barlowtwins.rst
@@ -14,6 +14,9 @@ Reference:

.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/barlowtwins.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/barlowtwins.py
@@ -22,6 +25,9 @@ Reference:

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/barlowtwins.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/barlowtwins.py
@@ -30,6 +36,9 @@ Reference:

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/barlowtwins.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/byol.rst
@@ -13,6 +13,9 @@ Reference:

.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/byol.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/byol.py
@@ -21,6 +24,9 @@ Reference:

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/byol.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/byol.py
@@ -29,6 +35,9 @@ Reference:

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/byol.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/dcl.rst
@@ -34,6 +34,9 @@ with DCL loss.
.. tabs::
.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/dcl.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/dcl.py
@@ -42,6 +45,9 @@ with DCL loss.

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/dcl.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/dcl.py
@@ -50,6 +56,9 @@ with DCL loss.

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/dcl.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/densecl.rst
@@ -16,6 +16,9 @@ Reference:

.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/densecl.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/densecl.py
@@ -24,6 +27,9 @@ Reference:

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/densecl.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/densecl.py
@@ -32,6 +38,9 @@ Reference:

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/densecl.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/dino.rst
@@ -12,6 +12,9 @@ Reference:
.. tabs::
.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/dino.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/dino.py
@@ -20,6 +23,9 @@ Reference:

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/dino.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/dino.py
@@ -28,6 +34,9 @@ Reference:

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/dino.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/fastsiam.rst
@@ -14,6 +14,9 @@ Reference:
.. tabs::
.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/fastsiam.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/fastsiam.py
@@ -22,6 +25,9 @@ Reference:

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/fastsiam.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/fastsiam.py
@@ -30,6 +36,9 @@ Reference:

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/fastsiam.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/mae.rst
@@ -27,6 +27,9 @@ Reference:
.. tabs::
.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/mae.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/mae.py
@@ -35,6 +38,9 @@ Reference:

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/mae.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/mae.py
@@ -43,6 +49,9 @@ Reference:

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/mae.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/mmcr.rst
@@ -13,6 +13,9 @@ Reference:

.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/mmcr.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/mmcr.py
@@ -21,6 +24,9 @@ Reference:

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/mmcr.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/mmcr.py
@@ -29,6 +35,9 @@ Reference:

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/mmcr.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/moco.rst
@@ -20,6 +20,9 @@ Tutorials:

.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/moco.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/moco.py
@@ -28,6 +31,9 @@ Tutorials:

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/moco.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/moco.py
@@ -36,6 +42,9 @@ Tutorials:

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/moco.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/msn.rst
@@ -20,6 +20,9 @@ See :ref:`PMSN` for a version of MSN for datasets with non-uniform class distrib
.. tabs::
.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/msn.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/msn.py
@@ -28,6 +31,9 @@ See :ref:`PMSN` for a version of MSN for datasets with non-uniform class distrib

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/msn.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/msn.py
@@ -36,6 +42,9 @@ See :ref:`PMSN` for a version of MSN for datasets with non-uniform class distrib

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/msn.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/nnclr.rst
@@ -11,6 +11,9 @@ Reference:
.. tabs::
.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/nnclr.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/nnclr.py
@@ -19,6 +22,9 @@ Reference:

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/nnclr.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/nnclr.py
@@ -27,6 +33,9 @@ Reference:

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/nnclr.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
9 changes: 9 additions & 0 deletions docs/source/examples/pmsn.rst
@@ -37,6 +37,9 @@ For PMSN, you can use the exact same code as for :ref:`msn` but change
.. tabs::
.. tab:: PyTorch

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch/pmsn.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch/pmsn.py
@@ -45,6 +48,9 @@ For PMSN, you can use the exact same code as for :ref:`msn` but change

.. tab:: Lightning

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning/pmsn.ipynb

This example can be run from the command line with::

python lightly/examples/pytorch_lightning/pmsn.py
@@ -53,6 +59,9 @@ For PMSN, you can use the exact same code as for :ref:`msn` but change

.. tab:: Lightning Distributed

.. image:: https://colab.research.google.com/assets/colab-badge.svg
:target: https://colab.research.google.com/github/lightly-ai/lightly/blob/master/examples/notebooks/pytorch_lightning_distributed/pmsn.ipynb

This example runs on multiple GPUs using Distributed Data Parallel (DDP)
training with PyTorch Lightning. At least one GPU must be available on
the system. The example can be run from the command line with::
