Merge pull request lxc#308 from colireg/main
doc: Fixed typos
stgraber authored Dec 15, 2023
2 parents ede906f + 9b02037 commit c1faad8
Showing 31 changed files with 50 additions and 50 deletions.
12 changes: 6 additions & 6 deletions doc/api-extensions.md
@@ -73,7 +73,7 @@ And adds support for the following HTTP header on PUT requests:

* If-Match (ETag value retrieved through previous GET)

-This makes it possible to GET a Incus object, modify it and PUT it without
+This makes it possible to GET an Incus object, modify it and PUT it without
risking to hit a race condition where Incus or another client modified the
object in the meantime.

@@ -214,7 +214,7 @@ Rules necessary for `dnsmasq` to work (DHCP/DNS) will always be applied if

## `network_routes`

-Introduces `ipv4.routes` and `ipv6.routes` which allow routing additional subnets to a Incus bridge.
+Introduces `ipv4.routes` and `ipv6.routes` which allow routing additional subnets to an Incus bridge.

## `storage`

@@ -408,7 +408,7 @@ and `xfs`.

## `resources`

-This adds support for querying a Incus daemon for the system resources it has
+This adds support for querying an Incus daemon for the system resources it has
available.

## `kernel_limits`
@@ -465,7 +465,7 @@ This makes it possible to retrieve symlinks using the file API.
## `network_leases`

Adds a new `/1.0/networks/NAME/leases` API endpoint to query the lease database on
-bridges which run a Incus-managed DHCP server.
+bridges which run an Incus-managed DHCP server.

## `unix_device_hotplug`

@@ -1004,7 +1004,7 @@ redirect file-system mounts to their fuse implementation. To this end, set e.g.

## `container_disk_ceph`

-This allows for existing a Ceph RBD or CephFS to be directly connected to a Incus container.
+This allows for existing a Ceph RBD or CephFS to be directly connected to an Incus container.

## `virtual-machines`

@@ -2222,7 +2222,7 @@ This adds the possibility to import ISO images as custom storage volumes.
This adds the `--type` flag to [`incus storage volume import`](incus_storage_volume_import.md).

## `network_allocations`
-This adds the possibility to list a Incus deployment's network allocations.
+This adds the possibility to list an Incus deployment's network allocations.

Through the [`incus network list-allocations`](incus_network_list-allocations.md) command and the `--project <PROJECT> | --all-projects` flags,
you can list all the used IP addresses, hardware addresses (for instances), resource URIs and whether it uses NAT for
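
The `etag` mechanism described at the top of this file can be exercised directly against the REST API. A minimal sketch, assuming a trusted client certificate pair (`client.crt`/`client.key`), a server on `127.0.0.1:8443`, and an instance named `c1` — all hypothetical placeholders:

    # GET the instance and note the ETag response header
    curl -k --cert client.crt --key client.key -i https://127.0.0.1:8443/1.0/instances/c1
    # PUT the modified object back; the request fails if the object changed in the meantime
    curl -k --cert client.crt --key client.key -X PUT \
        -H 'If-Match: <etag-from-get>' -d @modified.json \
        https://127.0.0.1:8443/1.0/instances/c1

A stale ETag should be rejected with HTTP 412 (Precondition Failed) rather than silently overwriting a concurrent change.
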
6 changes: 3 additions & 3 deletions doc/authentication.md
@@ -37,7 +37,7 @@ any backward compatibility to broken protocol or ciphers.
(authentication-trusted-clients)=
### Trusted TLS clients

-You can obtain the list of TLS certificates trusted by a Incus server with [`incus config trust list`](incus_config_trust_list.md).
+You can obtain the list of TLS certificates trusted by an Incus server with [`incus config trust list`](incus_config_trust_list.md).

Trusted clients can be added in either of the following ways:

@@ -101,7 +101,7 @@ To enable PKI mode, complete the following steps:
1. Place the certificates issued by the CA on the clients and the server, replacing the automatically generated ones.
1. Restart the server.

-In that mode, any connection to a Incus daemon will be done using the
+In that mode, any connection to an Incus daemon will be done using the
pre-seeded CA certificate.

If the server certificate isn't signed by the CA, the connection will simply go through the normal authentication mechanism.
@@ -122,7 +122,7 @@ Any user that authenticates through the configured OIDC Identity Provider gets f
To configure Incus to use OIDC authentication, set the [`oidc.*`](server-options-oidc) server configuration options.
Your OIDC provider must be configured to enable the [Device Authorization Grant](https://oauth.net/2/device-flow/) type.

-To add a remote pointing to a Incus server configured with OIDC authentication, run [`incus remote add <remote_name> <remote_address>`](incus_remote_add.md).
+To add a remote pointing to an Incus server configured with OIDC authentication, run [`incus remote add <remote_name> <remote_address>`](incus_remote_add.md).
You are then prompted to authenticate through your web browser, where you must confirm the device code that Incus uses.
The Incus client then retrieves and stores the access and refresh tokens and provides those to Incus for all interactions.

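
Putting the OIDC excerpt above into practice, adding such a server as a remote is a single command; the remote name and address here are hypothetical:

    # list TLS certificates the server already trusts
    incus config trust list
    # add the remote; with OIDC enabled this opens a browser-based login
    incus remote add my-remote https://192.0.2.10:8443
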
2 changes: 1 addition & 1 deletion doc/backup.md
@@ -1,5 +1,5 @@
(backups)=
-# How to back up a Incus server
+# How to back up an Incus server

In a production setup, you should always back up the contents of your Incus server.

2 changes: 1 addition & 1 deletion doc/database.md
@@ -9,7 +9,7 @@ With a database, you can run a simple query on the database to retrieve this inf

## Cowsql

-In a Incus cluster, all members of the cluster must share the same database state.
+In an Incus cluster, all members of the cluster must share the same database state.
Therefore, Incus uses [Cowsql](https://github.com/cowsql/cowsql), a distributed version of SQLite.
Cowsql provides replication, fault-tolerance, and automatic failover without the need of external database processes.

4 changes: 2 additions & 2 deletions doc/explanation/clustering.md
@@ -12,7 +12,7 @@ If you want to quickly set up a basic Incus cluster, check out [MicroCloud](http
(clustering-members)=
## Cluster members

-A Incus cluster consists of one bootstrap server and at least two further cluster members.
+An Incus cluster consists of one bootstrap server and at least two further cluster members.
It stores its state in a [distributed database](../database.md), which is a [Cowsql](https://github.com/cowsql/cowsql/) database replicated using the Raft algorithm.

While you could create a cluster with only two members, it is strongly recommended that the number of cluster members be at least three.
@@ -116,7 +116,7 @@ The special value of `-1` can be used to have the image copied to all cluster me
(cluster-groups)=
## Cluster groups

-In a Incus cluster, you can add members to cluster groups.
+In an Incus cluster, you can add members to cluster groups.
You can use these cluster groups to launch instances on a cluster member that belongs to a subset of all available members.
For example, you could create a cluster group for all members that have a GPU and then launch all instances that require a GPU on this cluster group.

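
A short sketch of the cluster-group workflow described above; the group `gpu` and member `server1` are hypothetical, and note that `incus cluster group assign` replaces the member's full set of groups:

    incus cluster group create gpu
    incus cluster group assign server1 gpu
    # target the group rather than a single member when launching
    incus launch images:ubuntu/22.04 c1 --target=@gpu
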
2 changes: 1 addition & 1 deletion doc/explanation/projects.md
@@ -10,7 +10,7 @@ For example, projects can be useful in the following scenarios:
You want to keep these instances separate to make it easier to locate and maintain them, and you might want to reuse the same instance names in each customer project for consistency reasons.
Each instance in a customer project should use the same base configuration (for example, networks and storage), but the configuration might differ between customer projects.

-In this case, you can create a Incus project for each customer project (thus each group of instances) and use different profiles, networks, and storage for each Incus project.
+In this case, you can create an Incus project for each customer project (thus each group of instances) and use different profiles, networks, and storage for each Incus project.
- Your Incus server is shared between multiple users.
Each user runs their own instances, and might want to configure their own profiles.
You want to keep the user instances confined, so that each user can interact only with their own instances and cannot see the instances created by other users.
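
The per-customer project pattern above boils down to a couple of commands; the project name is hypothetical:

    incus project create customer-a
    incus project switch customer-a
    # instances, profiles and networks created from now on live in customer-a
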
6 changes: 3 additions & 3 deletions doc/faq.md
@@ -47,9 +47,9 @@ Use either of the following methods to grant the required permissions:
Privileged containers do not have this issue because all UID/GID in the container are the same as outside.
But that's also the cause of most of the security issues with such privileged containers.

-## How can I run Docker inside a Incus container?
+## How can I run Docker inside an Incus container?

-To run Docker inside a Incus container, set the {config:option}`instance-security:security.nesting` property of the container to `true`:
+To run Docker inside an Incus container, set the {config:option}`instance-security:security.nesting` property of the container to `true`:

incus config set <container> security.nesting true

Expand All @@ -74,7 +74,7 @@ Various configuration files are stored in that directory, for example:
## Why can I not ping my Incus instance from another host?

Many switches do not allow MAC address changes, and will either drop traffic with an incorrect MAC or disable the port totally.
-If you can ping a Incus instance from the host, but are not able to ping it from a different host, this could be the cause.
+If you can ping an Incus instance from the host, but are not able to ping it from a different host, this could be the cause.

The way to diagnose this problem is to run a `tcpdump` on the uplink and you will see either ``ARP Who has `xx.xx.xx.xx` tell `yy.yy.yy.yy` ``, with you sending responses but them not getting acknowledged, or ICMP packets going in and out successfully, but never being received by the other host.

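
For the Docker FAQ above, the nesting option can also be set at launch time instead of afterwards. A minimal sketch with a hypothetical container name:

    incus launch images:ubuntu/22.04 docker-host -c security.nesting=true
    incus exec docker-host -- sh -c 'apt-get update && apt-get install -y docker.io'
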
2 changes: 1 addition & 1 deletion doc/howto/cluster_form.md
@@ -1,7 +1,7 @@
(cluster-form)=
# How to form a cluster

-When forming a Incus cluster, you start with a bootstrap server.
+When forming an Incus cluster, you start with a bootstrap server.
This bootstrap server can be an existing Incus server or a newly installed one.

After initializing the bootstrap server, you can join additional servers to the cluster.
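
The bootstrap-then-join flow described above looks roughly like this; `member2` is a hypothetical name:

    # on the bootstrap server: answer yes to clustering during init
    incus admin init
    # generate a join token for the new member
    incus cluster add member2
    # on member2: run 'incus admin init' and paste the token when prompted
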
2 changes: 1 addition & 1 deletion doc/howto/cluster_manage.md
@@ -129,7 +129,7 @@ When you upgrade the last member, the blocked members will notice that all serve

## Update the cluster certificate

-In a Incus cluster, the API on all servers responds with the same shared certificate, which is usually a standard self-signed certificate with an expiry set to ten years.
+In an Incus cluster, the API on all servers responds with the same shared certificate, which is usually a standard self-signed certificate with an expiry set to ten years.

The certificate is stored at `/var/lib/incus/cluster.crt` and is the same on all cluster members.

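
The excerpt above gives the on-disk location of the shared certificate; its expiry and subject can be inspected with standard OpenSSL tooling before deciding whether to rotate it:

    openssl x509 -in /var/lib/incus/cluster.crt -noout -enddate -subject
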
2 changes: 1 addition & 1 deletion doc/howto/images_copy.md
@@ -57,7 +57,7 @@ See [`incus image import --help`](incus_image_import.md) for all available flags
### Import from a file on a remote web server

You can import image files from a remote web server by URL.
-This method is an alternative to running a Incus server for the sole purpose of distributing an image to users.
+This method is an alternative to running an Incus server for the sole purpose of distributing an image to users.
It only requires a basic web server with support for custom headers (see {ref}`images-copy-http-headers`).

The image files must be provided as unified images (see {ref}`image-format-unified`).
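
A sketch of the URL-based import the excerpt describes; the URL and alias are hypothetical, and the web server is assumed to serve the custom headers referenced above:

    incus image import https://images.example.com/my-image --alias my-image
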
2 changes: 1 addition & 1 deletion doc/howto/images_remote.md
@@ -40,7 +40,7 @@ The URL must use HTTPS.
### Add a remote Incus server

<!-- Include start add remotes -->
-To add a Incus server as a remote, enter the following command:
+To add an Incus server as a remote, enter the following command:

incus remote add <remote_name> <IP|FQDN|URL> [flags]

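
Following the command form shown above, a concrete (hypothetical) invocation plus verification:

    incus remote add my-server https://192.0.2.50:8443
    incus remote list
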
6 changes: 3 additions & 3 deletions doc/howto/import_machines_to_instances.md
@@ -1,10 +1,10 @@
(import-machines-to-instances)=
# How to import physical or virtual machines to Incus instances

-Incus provides a tool (`incus-migrate`) to create a Incus instance based on an existing disk or image.
+Incus provides a tool (`incus-migrate`) to create an Incus instance based on an existing disk or image.

You can run the tool on any Linux machine.
-It connects to a Incus server and creates a blank instance, which you can configure during or after the migration.
+It connects to an Incus server and creates a blank instance, which you can configure during or after the migration.
The tool then copies the data from the disk or image that you provide to the instance.

```{note}
@@ -51,7 +51,7 @@ The tool can create both containers and virtual machines:
</details>
````

-Complete the following steps to migrate an existing machine to a Incus instance:
+Complete the following steps to migrate an existing machine to an Incus instance:

1. Download the `bin.linux.incus-migrate` tool ([`bin.linux.incus-migrate.aarch64`](https://github.com/lxc/incus/releases/latest/download/bin.linux.incus-migrate.aarch64) or [`bin.linux.incus-migrate.x86_64`](https://github.com/lxc/incus/releases/latest/download/bin.linux.incus-migrate.x86_64)) from the **Assets** section of the latest [Incus release](https://github.com/lxc/incus/releases).
1. Place the tool on the machine that you want to use to create the instance.
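
The first steps of the list above amount to fetching the release binary and making it executable; a sketch for x86_64, using the download URL quoted in step 1:

    wget https://github.com/lxc/incus/releases/latest/download/bin.linux.incus-migrate.x86_64
    chmod +x bin.linux.incus-migrate.x86_64
    sudo ./bin.linux.incus-migrate.x86_64   # then follow the interactive prompts
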
4 changes: 2 additions & 2 deletions doc/howto/initialize.md
@@ -1,7 +1,7 @@
(initialize)=
# How to initialize Incus

-Before you can create a Incus instance, you must configure and initialize Incus.
+Before you can create an Incus instance, you must configure and initialize Incus.

## Interactive configuration

@@ -120,7 +120,7 @@ Failure modes when overwriting entities are the same as for the `PUT` requests i

```{note}
The rollback process might potentially fail, although rarely (typically due to backend bugs or limitations).
-You should therefore be careful when trying to reconfigure a Incus daemon via preseed.
+You should therefore be careful when trying to reconfigure an Incus daemon via preseed.
```

### Default profile
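
Preseeding, as cautioned above, replays a whole configuration in one shot. A minimal sketch with a deliberately tiny, hypothetical preseed:

    cat <<'EOF' | incus admin init --preseed
    config:
      core.https_address: 127.0.0.1:8443
    EOF
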
2 changes: 1 addition & 1 deletion doc/howto/instances_create.md
@@ -18,7 +18,7 @@ Image
Unless the image is available locally, you must specify the name of the image server and the name of the image (for example, `images:ubuntu/22.04` for the official 22.04 Ubuntu image).

Instance name
-: Instance names must be unique within a Incus deployment (also within a cluster).
+: Instance names must be unique within an Incus deployment (also within a cluster).
See {ref}`instance-properties` for additional requirements.

Flags
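
Combining the image and instance-name requirements above, a minimal creation sketch:

    incus launch images:ubuntu/22.04 mycontainer
    incus list mycontainer
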
2 changes: 1 addition & 1 deletion doc/howto/network_bgp.md
@@ -11,7 +11,7 @@ If you want to directly route external addresses to specific Incus servers or in
Incus will then act as a BGP peer and advertise relevant routes and next hops to external routers, for example, your network router.
It automatically establishes sessions with upstream BGP routers and announces the addresses and subnets that it's using.

-The BGP server feature can be used to allow a Incus server or cluster to directly use internal/external address space by getting the specific subnets or addresses routed to the correct host.
+The BGP server feature can be used to allow an Incus server or cluster to directly use internal/external address space by getting the specific subnets or addresses routed to the correct host.
This way, traffic can be forwarded to the target instance.

For bridge networks, the following addresses and networks are being advertised:
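
Enabling the BGP server described above comes down to a few `core.bgp_*` server options; the listen address, ASN and router ID below are hypothetical values to adapt:

    incus config set core.bgp_address :179
    incus config set core.bgp_asn 65536
    incus config set core.bgp_routerid 192.0.2.1
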
2 changes: 1 addition & 1 deletion doc/howto/network_bridge_firewalld.md
@@ -117,7 +117,7 @@ There are different ways of working around this problem:

Uninstall Docker
: The easiest way to prevent such issues is to uninstall Docker from the system that runs Incus and restart the system.
-You can run Docker inside a Incus container or virtual machine instead.
+You can run Docker inside an Incus container or virtual machine instead.

Enable IPv4 forwarding
: If uninstalling Docker is not an option, enabling IPv4 forwarding before the Docker service starts will prevent Docker from modifying the global FORWARD policy.
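
The IPv4-forwarding workaround introduced above can be made persistent (and applied before the Docker service starts at boot) with a standard sysctl drop-in:

    echo 'net.ipv4.ip_forward=1' | sudo tee /etc/sysctl.d/99-forwarding.conf
    sudo sysctl --system
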
2 changes: 1 addition & 1 deletion doc/howto/network_bridge_resolved.md
@@ -2,7 +2,7 @@
# How to integrate with `systemd-resolved`

If the system that runs Incus uses `systemd-resolved` to perform DNS lookups, you should notify `resolved` of the domains that Incus can resolve.
-To do so, add the DNS servers and domains provided by a Incus network bridge to the `resolved` configuration.
+To do so, add the DNS servers and domains provided by an Incus network bridge to the `resolved` configuration.

```{note}
The `dns.mode` option (see {ref}`network-bridge-options`) must be set to `managed` or `dynamic` if you want to use this feature.
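
The notification described above is typically done with `resolvectl`; the bridge name and address are hypothetical, and the leading `~` marks a routing-only domain:

    resolvectl dns incusbr0 192.0.2.1
    resolvectl domain incusbr0 '~incus'
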
2 changes: 1 addition & 1 deletion doc/howto/network_create.md
@@ -47,7 +47,7 @@ If you do not specify a `--type` argument, the default type of `bridge` is used.
(network-create-cluster)=
### Create a network in a cluster

-If you are running a Incus cluster and want to create a network, you must create the network for each cluster member separately.
+If you are running an Incus cluster and want to create a network, you must create the network for each cluster member separately.
The reason for this is that the network configuration, for example, the name of the parent network interface, might be different between cluster members.

Therefore, you must first create a pending network on each member with the `--target=<cluster_member>` flag and the appropriate configuration for the member.
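
The per-member flow described above looks roughly like this for a three-member cluster (names hypothetical; member-specific options go on the pending runs):

    incus network create my-bridge --target=member1
    incus network create my-bridge --target=member2
    incus network create my-bridge --target=member3
    # a final run without --target instantiates the pending network everywhere
    incus network create my-bridge
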
2 changes: 1 addition & 1 deletion doc/howto/network_increase_bandwidth.md
@@ -4,7 +4,7 @@
You can increase the network bandwidth of your Incus setup by configuring the transmit queue length (`txqueuelen`).
This change makes sense in the following scenarios:

-- You have a NIC with 1 GbE or higher on a Incus host with a lot of local activity (instance-instance connections or host-instance connections).
+- You have a NIC with 1 GbE or higher on an Incus host with a lot of local activity (instance-instance connections or host-instance connections).
- You have an internet connection with 1 GbE or higher on your Incus host.

The more instances you use, the more you can benefit from this tweak.
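
On the host side, the tweak above is applied with `ip link`; the interface name and queue length are hypothetical values to adapt to your NIC:

    ip link set eth0 txqueuelen 10000
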
4 changes: 2 additions & 2 deletions doc/howto/network_ipam.md
@@ -1,5 +1,5 @@
(network-ipam)=
-# How to display IPAM information of a Incus deployment
+# How to display IPAM information of an Incus deployment

{abbr}`IPAM (IP Address Management)` is a method used to plan, track, and manage the information associated with a computer network's IP address space. In essence, it's a way of organizing, monitoring, and manipulating the IP space in a network.

@@ -33,4 +33,4 @@ The resulting output will look something like this:

Each listed entry lists the IP address (in CIDR notation) of one of the following Incus entities: `network`, `network-forward`, `network-load-balancer`, and `instance`.
An entry contains an IP address using the CIDR notation.
-It also contains a Incus resource URI, the type of the entity, whether it is in NAT mode, and the hardware address (only for the `instance` entity).
+It also contains an Incus resource URI, the type of the entity, whether it is in NAT mode, and the hardware address (only for the `instance` entity).
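
The output described above comes from the allocations command introduced earlier in this diff:

    incus network list-allocations --all-projects
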
6 changes: 3 additions & 3 deletions doc/howto/network_ovn_setup.md
@@ -42,9 +42,9 @@ Complete the following steps to create a standalone OVN network that is connecte
+------+---------+---------------------+----------------------------------------------+-----------+-----------+
```

-## Set up a Incus cluster on OVN
+## Set up an Incus cluster on OVN

-Complete the following steps to set up a Incus cluster that uses an OVN network.
+Complete the following steps to set up an Incus cluster that uses an OVN network.

Just like Incus, the distributed database for OVN must be run on a cluster that consists of an odd number of members.
The following instructions use the minimum of three servers, which run both the distributed database for OVN and the OVN controller.
@@ -119,7 +119,7 @@ In addition, you can add any number of servers to the Incus cluster that run onl
external_ids:ovn-encap-type=geneve \
external_ids:ovn-encap-ip=<local>

-1. Create a Incus cluster by running `incus admin init` on all machines.
+1. Create an Incus cluster by running `incus admin init` on all machines.
On the first machine, create the cluster.
Then join the other machines with tokens by running [`incus cluster add <machine_name>`](incus_cluster_add.md) on the first machine and specifying the token when initializing Incus on the other machine.
1. On the first machine, create and configure the uplink network:
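
Once the uplink exists, the OVN network itself is created on top of it. A rough sketch in which `UPLINK` and `my-ovn` are hypothetical names and the exact options depend on the uplink type:

    incus network create my-ovn --type=ovn network=UPLINK
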
4 changes: 2 additions & 2 deletions doc/howto/network_zones.md
@@ -8,7 +8,7 @@ Network zones are available for the {ref}`network-ovn` and the {ref}`network-bri
Network zones can be used to serve DNS records for Incus networks.

You can use network zones to automatically maintain valid forward and reverse records for all your instances.
-This can be useful if you are operating a Incus cluster with multiple instances across many networks.
+This can be useful if you are operating an Incus cluster with multiple instances across many networks.

Having DNS records for each instance makes it easier to access network services running on an instance.
It is also important when hosting, for example, an outbound SMTP service.
@@ -101,7 +101,7 @@ To make use of network zones, you must enable the built-in DNS server.
To do so, set the {config:option}`server-core:core.dns_address` configuration option to a local address on the Incus server.
To avoid conflicts with an existing DNS we suggest not using the port 53.
This is the address on which the DNS server will listen.
-Note that in a Incus cluster, the address may be different on each cluster member.
+Note that in an Incus cluster, the address may be different on each cluster member.

```{note}
The built-in DNS server supports only zone transfers through AXFR.
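
Enabling the built-in DNS server as described above, on a non-53 port; the address, port and zone name are hypothetical:

    incus config set core.dns_address 192.0.2.1:8853
    # zone content can then be pulled via AXFR, e.g.:
    dig @192.0.2.1 -p 8853 axfr incus.example.net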