Commit cfb9d1b

fix(gen): remove trailing slashes after .html extensions
SamyOubouaziz committed Jul 29, 2024
1 parent: e5398a3 · commit: cfb9d1b
Showing 120 changed files with 182 additions and 182 deletions.
2 changes: 1 addition & 1 deletion bare-metal/dedibox/how-to/connect-to-dedibox.mdx
@@ -55,7 +55,7 @@ To connect to your server from Windows, you will need to use a small application
 
 To connect to your Instance from Windows, you will need to use a small application called **PuTTY**, an SSH client.
 
-1. Download and install PuTTY [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html/)
+1. Download and install PuTTY [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html)
 2. Launch PuTTY on your computer. The main screen of the application displays.
 3. Enter your Instance's IP address in the **Hostname** field.
    <Message type="tip">
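
For readers following the PuTTY steps in this hunk: recent Windows builds also ship a command-line OpenSSH client, so the same connection can be made without extra software. A minimal sketch, where the IP address and key path are placeholders, not values from the docs:

```bash
# Connect as root to the server (203.0.113.10 is a placeholder IP).
ssh root@203.0.113.10

# Point at a specific private key, mirroring PuTTY's Connection > SSH > Auth setting.
ssh -i ~/.ssh/id_ed25519 root@203.0.113.10
```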
2 changes: 1 addition & 1 deletion bare-metal/dedibox/quickstart.mdx
@@ -101,7 +101,7 @@ To connect to your server from Windows, you will need to use a small application
 
 To connect to your Instance from Windows, you will need to use a small application called **PuTTY**, an SSH client.
 
-1. Download and install PuTTY [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html/)
+1. Download and install PuTTY [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html)
 2. Launch PuTTY on your computer. The main screen of the application displays.
 3. Enter your Instance's IP address in the **Hostname** field.
    <Message type="tip">
2 changes: 1 addition & 1 deletion bare-metal/elastic-metal/how-to/connect-to-server.mdx
@@ -51,7 +51,7 @@ This page shows you how to connect to your Scaleway Elastic Metal server via SSH
 
 To connect to your Elastic Metal server from Windows, you will need to use a small application called **PuTTY**, an SSH client.
 
-1. Download and install PuTTY [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html/)
+1. Download and install PuTTY [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html)
 2. Launch PuTTY on your computer. The main screen of the application displays:
    <Lightbox src="scaleway-putty-main.webp" alt="" />
 3. Enter your Elastic Metal server's IP address in the **Hostname** field.
@@ -22,7 +22,7 @@ in the first or second partition. The partition that contains this boot file
 must be formatted as FAT32.
 
 This `boot.itb` file is in fact a [FIT
-Image](https://docs.u-boot.org/en/latest/usage/fit/source_file_format.html/)
+Image](https://docs.u-boot.org/en/latest/usage/fit/source_file_format.html)
 that must contain the following sections:
 
 - **kernel**: A Linux kernel image.
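
As background for the hunk above: a FIT image such as `boot.itb` is normally assembled from an image tree source file with U-Boot's `mkimage` tool. A minimal sketch, where `boot.its` is a hypothetical source file declaring the kernel and the other sections the page lists:

```bash
# Assemble a FIT image from an image tree source (boot.its is a placeholder name).
mkimage -f boot.its boot.itb

# List the sections (kernel, ramdisk, flattened device tree) of the resulting image.
mkimage -l boot.itb
```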
Original file line number Diff line number Diff line change
Expand Up @@ -9,7 +9,7 @@ category: containers
product: kubernetes
---

NVIDIA's [GPU operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/overview.html/) is installed by default on all new GPU pools, automatically bringing required software on your GPU worker nodes.
NVIDIA's [GPU operator](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/overview.html) is installed by default on all new GPU pools, automatically bringing required software on your GPU worker nodes.

Find out how to activate or configure the operator in the [documentation](/containers/kubernetes/how-to/use-nvidia-gpu-operator/).
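
To see the operator described above in action, its components can be inspected with `kubectl`. A sketch, assuming the conventional `gpu-operator` namespace from NVIDIA's documentation:

```bash
# List the GPU operator's pods (the namespace name is an assumption).
kubectl get pods -n gpu-operator

# Check that a GPU worker node advertises GPU resources to the scheduler.
kubectl describe node <node-name> | grep nvidia.com/gpu
```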

2 changes: 1 addition & 1 deletion compute/gpu/how-to/use-mig-with-kubernetes.mdx
@@ -394,4 +394,4 @@ In this guide, we will explore the capabilities of NVIDIA MIG within a Kubernete
    All nodes added by the autoscaler will automatically receive the label `MIG`. Note, that updates to a tag may take up to five minutes to fully propagate.
 </Message>
 
-For more information about NVIDIA MIG, refer to the official [NVIDIA MIG user guide](https://docs.nvidia.com/datacenter/tesla/mig-user-guide/) and the [Kubernetes GPU operator documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/23.6.1/gpu-operator-mig.html/).
+For more information about NVIDIA MIG, refer to the official [NVIDIA MIG user guide](https://docs.nvidia.com/datacenter/tesla/mig-user-guide/) and the [Kubernetes GPU operator documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/23.6.1/gpu-operator-mig.html).
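
The `MIG` label mentioned in the hunk above can be verified directly from the cluster; a short sketch, assuming standard `kubectl` access:

```bash
# Show node labels and filter for MIG-related entries added by the autoscaler.
kubectl get nodes --show-labels | grep -i mig
```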
2 changes: 1 addition & 1 deletion compute/gpu/how-to/use-preinstalled-env.mdx
@@ -33,7 +33,7 @@ Using the latest Ubuntu Focal GPU OS11 image gives you a minimal OS installation
 1. [Connect to your Instance via SSH](/compute/instances/how-to/connect-to-instance/).
 
    You are now directly within the conda `ai` preinstalled environment.
-2. Use the [official conda documentation](https://docs.conda.io/projects/conda/en/latest/commands.html/) if you need any help managing your conda environment.
+2. Use the [official conda documentation](https://docs.conda.io/projects/conda/en/latest/commands.html) if you need any help managing your conda environment.
    <Message type="tip">
      For a full, detailed list of the Python packages and versions preinstalled in this environment, look at the content of the `/root/conda-ai-env-requirements.frozen` file.
    </Message>
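
To complement the steps above, two conda commands that are typically useful once connected (the frozen requirements path comes from the tip in the hunk):

```bash
# List environments; the preinstalled `ai` environment should already be active.
conda env list

# Review the exact Python packages and versions shipped with the image.
cat /root/conda-ai-env-requirements.frozen
```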
@@ -28,7 +28,7 @@ Below, you will find a guide to help you make an informed decision:
 * **Workload requirements:** Identify the nature of your workload. Are you running machine learning, deep learning, high-performance computing (HPC), data analytics, or graphics-intensive applications? Different Instance types are optimized for different types of workloads. For example, the H100 is not designed for graphics rendering. However, other models are. As [stated by Tim Dettmers](https://timdettmers.com/2023/01/30/which-gpu-for-deep-learning/), “Tensor Cores are most important, followed by the memory bandwidth of a GPU, the cache hierarchy, and only then FLOPS of a GPU.”. For more information, refer to the [NVIDIA GPU portfolio](https://www.nvidia.com/content/dam/en-zz/solutions/data-center/data-center-gpu-portfolio-line-card.pdf/).
 * **Performance requirements:** Evaluate the performance specifications you need, such as the number of GPUs, GPU memory, processing power, and network bandwidth. You need a lot of memory and fast storage for demanding tasks like training larger Deep Learning models.
 * **GPU type:** Scaleway offers different GPU types, such as various NVIDIA GPUs. Each GPU has varying levels of performance, memory, and capabilities. Choose a GPU that aligns with your specific workload requirements.
-* **GPU memory:** GPU memory bandwidth is an important criterion influencing overall performance. Then, larger GPU memory (VRAM) is crucial for memory-intensive tasks like training larger deep learning models, especially when using larger batch sizes. Modern GPUs offer specialized data formats designed to optimize deep learning performance. These formats, including Bfloat16, [FP8](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html/), int8 and int4, enable the storage of more data in memory and can enhance performance (for example, moving from FP16 to FP8 can double the number of TFLOPS). To make an informed decision, it is thus crucial to select the appropriate architecture. Options range from Pascal and Ampere to Ada Lovelace and Hopper. Ensuring that the GPU possesses sufficient memory capacity to accommodate your specific workload is essential, preventing any potential memory-related bottlenecks. Equally important, is matching the GPU's memory type to the nature of your workload.
+* **GPU memory:** GPU memory bandwidth is an important criterion influencing overall performance. Then, larger GPU memory (VRAM) is crucial for memory-intensive tasks like training larger deep learning models, especially when using larger batch sizes. Modern GPUs offer specialized data formats designed to optimize deep learning performance. These formats, including Bfloat16, [FP8](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html), int8 and int4, enable the storage of more data in memory and can enhance performance (for example, moving from FP16 to FP8 can double the number of TFLOPS). To make an informed decision, it is thus crucial to select the appropriate architecture. Options range from Pascal and Ampere to Ada Lovelace and Hopper. Ensuring that the GPU possesses sufficient memory capacity to accommodate your specific workload is essential, preventing any potential memory-related bottlenecks. Equally important, is matching the GPU's memory type to the nature of your workload.
 * **CPU and RAM:** A powerful CPU can be beneficial for tasks that involve preprocessing or post-processing. Sufficient system memory is also crucial to prevent memory-related bottlenecks or to cache your data in RAM.
 * **GPU driver and software compatibility:** Ensure that the GPU Instance type you choose supports the GPU drivers and software frameworks you need for your workload. This includes CUDA libraries, machine learning frameworks (TensorFlow, PyTorch, etc.), and other specific software tools. For all [Scaleway GPU OS images](/compute/gpu/reference-content/docker-images/), we offer a driver version that enables the use of all GPUs, from the oldest to the latest models. As is the NGC CLI, `nvidia-docker` is preinstalled, enabling containers to be used with CUDA, cuDNN, and the main deep learning frameworks.
 * **Scaling:** Consider the scalability requirements of your workload. The most efficient way to scale up your workload is by using:
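
As a practical check against the memory criteria listed above, the GPU model and total VRAM of a running Instance can be queried with `nvidia-smi`; a minimal sketch:

```bash
# Report GPU name and total memory in machine-readable CSV form.
nvidia-smi --query-gpu=name,memory.total --format=csv
```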
@@ -47,7 +47,7 @@ Kubernetes GPU time-slicing divides the GPU resources at the container level wit
 While time-slicing facilitates shared GPU access across a broader user spectrum, it comes with a trade-off. It sacrifices the memory and fault isolation advantages inherent to MIG. Additionally, it presents a solution to enable shared GPU access on earlier GPU generations lacking MIG support.
 Combining MIG and time-slicing is feasible to expand the scope of shared access to MIG instances.
 
-For more information and examples about NVIDIA GPUs time-slicing using Kubernetes, refer to the [official documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/23.6.0/gpu-sharing.html/).
+For more information and examples about NVIDIA GPUs time-slicing using Kubernetes, refer to the [official documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/23.6.0/gpu-sharing.html).
 
 <Message type="note">
    Using time-slicing for GPUs with Kubernetes could bring overhead due to context-switching, potentially affecting GPU-intensive operations' performance.
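
To make the time-slicing mechanism above concrete, the GPU operator consumes a sharing configuration of roughly the following shape, applied here from the shell. This is a sketch based on NVIDIA's published examples; the namespace, ConfigMap name, and replica count are assumptions to adapt:

```bash
# Advertise 4 virtual replicas per physical GPU (all names/values are examples).
kubectl apply -n gpu-operator -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config
data:
  any: |-
    version: v1
    sharing:
      timeSlicing:
        resources:
        - name: nvidia.com/gpu
          replicas: 4
EOF
```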
2 changes: 1 addition & 1 deletion compute/gpu/reference-content/understanding-nvidia-fp8.mdx
@@ -27,4 +27,4 @@ The `E5M2` format adapts the IEEE FP16 format, allocating five bits to the expon
 
 The FP8 standard preserves accuracy comparable to 16-bit formats across a wide range of applications, architectures, and networks.
 
-For more information about the FP8 standard, and instructions how to use it with H100 GPU Instances, refer to NVIDIA's [offical FP8 documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html/) and the [code example repository](https://github.com/NVIDIA/TransformerEngine/tree/main/examples/).
+For more information about the FP8 standard, and instructions how to use it with H100 GPU Instances, refer to NVIDIA's [offical FP8 documentation](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html) and the [code example repository](https://github.com/NVIDIA/TransformerEngine/tree/main/examples/).
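
A worked example of the `E5M2` layout this file describes: five exponent bits (bias 15) and two mantissa bits give, with the top exponent code reserved for infinities and NaNs as in IEEE formats, a largest finite value of

```latex
\max_{\mathrm{E5M2}} = \left(1 + \tfrac{2^{2}-1}{2^{2}}\right)\times 2^{30-15} = 1.75 \times 2^{15} = 57344
```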
@@ -52,4 +52,4 @@ NVIDIA NeMo can be used for various applications such as:
 - Building chat bots.
 - Developing natural language understanding models for various applications.
 
-Developers, researchers, and companies interested in developing conversational AI models can benefit from NVIDIA NeMo to speed up the development process and create high-quality models. For more information, refer to the [official NVIDIA NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/index.html/).
+Developers, researchers, and companies interested in developing conversational AI models can benefit from NVIDIA NeMo to speed up the development process and create high-quality models. For more information, refer to the [official NVIDIA NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/index.html).
2 changes: 1 addition & 1 deletion compute/gpu/reference-content/understanding-nvidia-ngc.mdx
@@ -33,4 +33,4 @@ NGC provides a repository of pre-configured containers, models, and software sta
 
 NVIDIA closely collaborates with software developers to optimize leading AI and machine learning frameworks for peak performance on NVIDIA GPUs. This optimization significantly expedites both training and inference tasks. Software hosted on NGC undergoes scans against an aggregated set of common vulnerabilities and exposures (CVEs), crypto, and private keys.
 
-For more information on NGC, refer to the official [NVIDIA NGC documentation](https://docs.nvidia.com/ngc/index.html/).
+For more information on NGC, refer to the official [NVIDIA NGC documentation](https://docs.nvidia.com/ngc/index.html).
4 changes: 2 additions & 2 deletions compute/gpu/troubleshooting/install-nvidia-drivers-ubuntu.mdx
@@ -69,6 +69,6 @@ If you encounter errors or issues during the installation process, consider the
 ## Additional links
 
 - [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/)
-- [Frameworks Support Matrix - NVIDIA Docs](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html/)
+- [Frameworks Support Matrix - NVIDIA Docs](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html)
 - [How to access the GPU using Docker](/compute/gpu/how-to/use-gpu-with-docker/)
-- [NVIDIA Container Toolkit documentation](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/index.html/)
+- [NVIDIA Container Toolkit documentation](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/index.html)
2 changes: 1 addition & 1 deletion compute/instances/api-cli/using-cloud-init.mdx
@@ -88,6 +88,6 @@ Subcommands:
 ````
 
-For detailed information on cloud-init, refer to the official cloud-init [documentation](http://cloudinit.readthedocs.io/en/latest/index.html/).
+For detailed information on cloud-init, refer to the official cloud-init [documentation](http://cloudinit.readthedocs.io/en/latest/index.html).
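
Alongside the subcommand listing this hunk closes, two frequently used invocations, as a sketch (run on the Instance, typically as root):

```bash
# Block until cloud-init has finished applying its configuration, then print its status.
cloud-init status --wait

# Dump all instance metadata available to cloud-init templates.
cloud-init query --all
```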


2 changes: 1 addition & 1 deletion compute/instances/how-to/connect-to-instance.mdx
@@ -55,7 +55,7 @@ This page shows how to connect to your Scaleway Instance via SSH. Thanks to the
 
 To connect to your Instance from Windows, you will need to use a small application called **PuTTY**, an SSH client.
 
-1. [Download and install PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html/).
+1. [Download and install PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
 2. Launch PuTTY on your computer. The main screen of the application displays:
    <Lightbox src="scaleway-putty-main.webp" alt="" />
 3. Enter your Instance's IP address in the **Hostname** field.
2 changes: 1 addition & 1 deletion compute/instances/quickstart.mdx
@@ -75,7 +75,7 @@ You are now connected to your Instance.
 
 To connect to your Instance from Windows, you will need to use a small application called **PuTTY**, an SSH client.
 
-1. Download and install PuTTY [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html/)
+1. Download and install PuTTY [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html)
 2. Launch PuTTY on your computer.
 3. Enter your Instance's IP address in the **Hostname** field.
 4. In the side menu, under **Connection**, navigate to the **Auth** sub-category. (**Connection** > **SSH** > **Auth**).
@@ -86,14 +86,14 @@ To configure securely your DNS server, proceed as follows:
 - If you need recursion, limit the authorized range of IPs that can perform those requests.
   - [BIND](https://kb.isc.org/docs/aa-01316/)
   - [unbound](https://nlnetlabs.nl/documentation/unbound/unbound.conf/)
-  - If you use PowerDNS, you can also use [dnsdist](https://dnsdist.org/index.html/).
+  - If you use PowerDNS, you can also use [dnsdist](https://dnsdist.org/index.html).
 - Enable RateLimiting of queries and answers from your authoritative DNS
   - [BIND](https://kb.isc.org/docs/aa-00994/)
   - [unbound](https://nlnetlabs.nl/documentation/unbound/unbound.conf/)
-  - If you use PowerDNS, you can also use [dnsdist](https://dnsdist.org/index.html/).
+  - If you use PowerDNS, you can also use [dnsdist](https://dnsdist.org/index.html).
 - Set ACL on your remote control if used and limit it to localhost if possible
-  - [rndc for BIND](https://mirror.apps.cam.ac.uk/pub/doc/redhat/redhat7.3/rhl-rg-en-7.3/s1-bind-rndc.html/)
-  - [dnsdist for PowerDNS](https://dnsdist.org/index.html/)
+  - [rndc for BIND](https://mirror.apps.cam.ac.uk/pub/doc/redhat/redhat7.3/rhl-rg-en-7.3/s1-bind-rndc.html)
+  - [dnsdist for PowerDNS](https://dnsdist.org/index.html)
   - [unbound-control for unbound](https://nlnetlabs.nl/documentation/unbound/unbound-control/)
 
 ## Preventing HTTP(s) proxy from being used in a DDoS attack
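
To illustrate the recursion-limiting advice in the hunk above for unbound, a configuration sketch; the file path and IP ranges are placeholders to adapt to your own network:

```bash
# Allow recursion only from trusted ranges, refuse everyone else.
cat >> /etc/unbound/unbound.conf <<'EOF'
server:
    access-control: 127.0.0.0/8 allow
    access-control: 203.0.113.0/24 allow
    access-control: 0.0.0.0/0 refuse
EOF

# Validate the configuration before reloading the daemon.
unbound-checkconf
```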
@@ -29,7 +29,7 @@ This guide outlines the steps to enable DNS resolution on a Scaleway Instance th
 
 When a Scaleway Instance uses routed IP addresses, the IPv6 stack is automatically configured using [`SLAAC`](https://datatracker.ietf.org/doc/html/rfc4862/). With this method, the Instance is periodically advertised with various network configuration details, including the DNS server addresses it should use. The Instance is then free to consume these advertisements or not. By default, the operating system images provided by Scaleway are configured to leverage these advertisements to configure the IPv6 networking and the related DNS servers. The Debian Bullseye image is no exception.
 
-When configuring the network at boot time, the `cloud-init` software detects the appropriate network configuration method used by the system at hand and writes and/or applies the necessary configuration files/parameters. On Debian Bullseye, and because of [`cloud-init`'s built-in order of detection](https://cloudinit.readthedocs.io/en/latest/reference/network-config.html#network-output-policy/), the primary detected method is [ENI](https://cloudinit.readthedocs.io/en/latest/reference/network-config-format-eni.html/), which configures the network through Debian's well known `/etc/network/interfaces` set of files, along with the `ifupdown` toolset.
+When configuring the network at boot time, the `cloud-init` software detects the appropriate network configuration method used by the system at hand and writes and/or applies the necessary configuration files/parameters. On Debian Bullseye, and because of [`cloud-init`'s built-in order of detection](https://cloudinit.readthedocs.io/en/latest/reference/network-config.html#network-output-policy/), the primary detected method is [ENI](https://cloudinit.readthedocs.io/en/latest/reference/network-config-format-eni.html), which configures the network through Debian's well known `/etc/network/interfaces` set of files, along with the `ifupdown` toolset.
 
 This configuration method does not interact well with SLAAC's DNS advertisements. This results in an absence of DNS resolver configuration, thus breaking most of the network activities.
 
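
To observe the behavior described above on an affected Instance, two quick checks as a sketch (both files are either named in the surrounding text or standard Debian locations):

```bash
# The ENI renderer writes its configuration to Debian's classic interfaces file.
cat /etc/network/interfaces

# If SLAAC's DNS advertisements were dropped, this file will lack nameserver entries.
cat /etc/resolv.conf
```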
2 changes: 1 addition & 1 deletion console/billing/api-cli/retrieve-monthly-consumption.mdx
@@ -19,7 +19,7 @@ Follow the procedure below to download your monthly consumption using the Scalew
 
 - A Scaleway account and logged into the [console](https://console.scaleway.com/organization/)
 - Created an [API key](https://www.scaleway.com/en/docs/identity-and-access-management/iam/how-to/create-api-keys/) with sufficient [IAM permissions](https://www.scaleway.com/en/docs/identity-and-access-management/iam/reference-content/permission-sets/) to perform the actions described on this page
-- [Installed `curl`](https://curl.se/download.html/)
+- [Installed `curl`](https://curl.se/download.html)
 - Configured your environment variables.
 
 ## Exporting your environment variables
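
Continuing the prerequisites above, exporting the credentials usually looks like the sketch below; the variable name follows the Scaleway CLI convention, and the billing endpoint path is left as a placeholder rather than guessed:

```bash
# Replace the value with your own API secret key.
export SCW_SECRET_KEY="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# Scaleway APIs authenticate via the X-Auth-Token header.
curl -H "X-Auth-Token: ${SCW_SECRET_KEY}" "https://api.scaleway.com/<billing-endpoint>"
```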
2 changes: 1 addition & 1 deletion console/billing/how-to/add-payment-method.mdx
@@ -54,7 +54,7 @@ Before you can order Scaleway resources, you must add a payment method to your a
 
 <Message type="important">
   * This method requires a successful [KYC verification](/console/account/how-to/verify-identity/).
-  * To add a SEPA mandate, both your postal and bank addresses must be part of the [SEPA zone](https://www.ecb.europa.eu/paym/integration/retail/sepa/html/index.en.html/).
+  * To add a SEPA mandate, both your postal and bank addresses must be part of the [SEPA zone](https://www.ecb.europa.eu/paym/integration/retail/sepa/html/index.en.html).
 </Message>
 
 1. Access the [Scaleway console](https://console.scaleway.com/organization/).