diff --git a/docs/dag/kubernetes/concepts_and_definitions.rst b/docs/dag/kubernetes/concepts_and_definitions.rst index 1378d27d7..eca5920eb 100644 --- a/docs/dag/kubernetes/concepts_and_definitions.rst +++ b/docs/dag/kubernetes/concepts_and_definitions.rst @@ -39,7 +39,7 @@ At a minimum, the PV must contain these parameters: * Read Write Once (RWO) - Supported by all PVs * Read Only Many (ROX) - Supported primarily by file and file-like protocols, e.g. NFS and CephFS. However, some block protocols are supported, such as iSCSI * Read Write Many (RWX) - Supported by file and file-like protocols such as NFS and also supported by iSCSI raw block - devices + devices * Protocol - Type of protocol (e.g. "iSCSI" or "NFS") to use and additional information needed to access the storage. For example, an NFS PV will need the NFS server and a mount path. * Reclaim policy - It describes the Kubernetes action when the PV is released. Three reclaim policy options are available: diff --git a/docs/dag/kubernetes/deploying_trident.rst b/docs/dag/kubernetes/deploying_trident.rst index 154306b63..718169f6f 100644 --- a/docs/dag/kubernetes/deploying_trident.rst +++ b/docs/dag/kubernetes/deploying_trident.rst @@ -30,22 +30,52 @@ Three ways to install Trident are discussed in this chapter. **Normal install mode** -Normal installation involves running the ``tridentctl install -n trident`` command which deploys the Trident pod on the Kubernetes cluster. Trident installation is quite a straightforward process. For more information on installation and provisioning of volumes, refer to the :ref:`Deploying documentation `. +Installing Trident on a Kubernetes cluster will result in the Trident +installer: + +1. Fetching the container images over the Internet. + +2. Creating a deployment and/or node daemonset which spin up Trident pods + on all eligible nodes in the Kubernetes cluster. + +A standard installation such as this can be performed in two different +ways: + +1. Using ``tridentctl install`` to install Trident. + +2. Using the Trident Operator. + +This mode of installing is the easiest way to install Trident and +works for most environments that do not impose network restrictions. The +:ref:`Deploying ` guide will help you get started. **Offline install mode** In many organizations, production and development environments do not have access to public repositories for pulling and posting images as these environments are completely secured and restricted. Such environments only allow pulling images from trusted private repositories. To perform an air-gapped installation of Trident, you can use the ``--image-registry`` flag -when invoking ``tridentctl install`` to point to a private image registry. This registry must -contain the Trident image (obtained `here `_) and the -CSI sidecar images as required by your Kubernetes version. The -:ref:`Customized Installation ` section talks about the options available +when invoking ``tridentctl install`` to point to a private image registry. If installing with +the Trident Operator, you can alternatively specify ``spec.imageRegistry`` in your +TridentProvisioner. This registry must contain the Trident image +(obtained `here `_) +and the CSI sidecar images as required by your Kubernetes version. + +To customize your installation further, you can use ``tridentctl`` to generate the manifests +for Trident's resources. This includes the deployment, daemonset, service account and the cluster +role that Trident creates as part of its installation. 
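+
+As a quick sketch, an air-gapped install of this kind could look as follows
+(the registry host shown is illustrative; substitute your own private registry):
+
+.. code-block:: console
+
+   # Generate the manifests in the installer's setup directory for review
+   ./tridentctl install --generate-custom-yaml -n trident
+
+   # Install Trident, pulling all images from a private registry
+   ./tridentctl install -n trident --image-registry registry.example.internal:5000
+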
+The :ref:`Customized Installation ` section talks about the options available for performing a custom Trident install. **Remote install mode** -Trident can be installed on a Kubernetes cluster from a remote machine. To do a remote install, install the appropriate version of ``kubectl`` on the remote machine from where you would be running the ``tridentctl install`` command. Copy the configuration files from the Kubernetes cluster and set the KUBECONFIG environment variable on the remote machine. Initiate a ``kubectl get nodes`` command to verify you can connect to the required Kubernetes cluster. Complete the Trident deployment from the remote machine using the normal installation steps. +Trident can be installed on a Kubernetes cluster from a remote machine. +To do a remote install, install the appropriate version of ``kubectl`` +on the remote machine from where you would be installing Trident. Copy +the configuration files from the Kubernetes cluster and set the KUBECONFIG +environment variable on the remote machine. Initiate a ``kubectl get nodes`` +command to verify you can connect to the required Kubernetes cluster. +Complete the Trident deployment from the remote machine using the normal +installation steps. Trident Installation on Docker UCP 3.1 ====================================== @@ -63,7 +93,7 @@ The Trident 19.10 release is built on a production-ready CSI 1.1 provisioner imp Trident to absorb standardized features like snapshots, while still retaining its ability to innovate on the storage model. To setup Trident as a CSI provisioner, refer to the :ref:`Deployment Guide `. Ensure -that the required :ref:`Feature Gates ` are enabled. +that the required :ref:`Feature Gates ` are enabled. After deploying, you should consider :ref:`Upgrading existing PVs to CSI volumes ` if you would like to use new features such as :ref:`On-demand snapshots `. diff --git a/docs/dag/kubernetes/index.rst b/docs/dag/kubernetes/index.rst index 0f630aab1..8b7feafdb 100644 --- a/docs/dag/kubernetes/index.rst +++ b/docs/dag/kubernetes/index.rst @@ -17,4 +17,3 @@ Design and Architecture Guide integrating_trident backup_disaster_recovery security_recommendations - frequently_asked_questions diff --git a/docs/dag/kubernetes/integrating_trident.rst b/docs/dag/kubernetes/integrating_trident.rst index 18973aeea..44809225b 100644 --- a/docs/dag/kubernetes/integrating_trident.rst +++ b/docs/dag/kubernetes/integrating_trident.rst @@ -31,15 +31,15 @@ Note that, in the tables below, not all of the capabilities are exposed through .. 
table:: ONTAP NAS driver capabilities :align: left - +-----------------------------+---------------+-----------------+--------------+---------------+--------+---------------+ - | ONTAP NFS Drivers | Snapshots | Clones | Multi-attach | QoS | Resize | Replication | - +=============================+===============+=================+==============+===============+========+===============+ - | ``ontap-nas`` | Yes | Yes | Yes | Yes\ :sup:`1` | Yes | Yes\ :sup:`1` | - +-----------------------------+---------------+-----------------+--------------+---------------+--------+---------------+ - | ``ontap-nas-economy`` | Yes\ :sup:`12`| Yes\ :sup:`12` | Yes | Yes\ :sup:`12`| Yes | Yes\ :sup:`12`| - +-----------------------------+---------------+-----------------+--------------+---------------+--------+---------------+ - | ``ontap-nas-flexgroup`` | Yes\ :sup:`1` | No | Yes | Yes\ :sup:`1` | Yes | Yes\ :sup:`1` | - +-----------------------------+---------------+-----------------+--------------+---------------+--------+---------------+ + +-----------------------------+---------------+-----------------+-------------------------+--------------+---------------+--------+---------------+ + | ONTAP NFS Drivers | Snapshots | Clones | Dynamic Export Policies | Multi-attach | QoS | Resize | Replication | + +=============================+===============+=================+=========================+==============+===============+========+===============+ + | ``ontap-nas`` | Yes | Yes | Yes\ :sup:`4` | Yes | Yes\ :sup:`1` | Yes | Yes\ :sup:`1` | + +-----------------------------+---------------+-----------------+-------------------------+--------------+---------------+--------+---------------+ + | ``ontap-nas-economy`` | Yes\ :sup:`12`| Yes\ :sup:`12` | Yes\ :sup:`4` | Yes | Yes\ :sup:`12`| Yes | Yes\ :sup:`12`| + +-----------------------------+---------------+-----------------+-------------------------+--------------+---------------+--------+---------------+ + | ``ontap-nas-flexgroup`` | Yes\ :sup:`1` | No | Yes\ :sup:`4` | Yes | Yes\ :sup:`1` | Yes | Yes\ :sup:`1` | + +-----------------------------+---------------+-----------------+-------------------------+--------------+---------------+--------+---------------+ Trident offers 2 SAN drivers for ONTAP, whose capabilities are shown below. @@ -48,19 +48,20 @@ Trident offers 2 SAN drivers for ONTAP, whose capabilities are shown below. 
:align: left - +-----------------------------+-----------+--------+--------------+---------------+---------------+---------------+ - | ONTAP SAN Driver | Snapshots | Clones | Multi-attach | QoS | Resize | Replication | - +=============================+===========+========+==============+===============+===============+===============+ - | ``ontap-san`` | Yes | Yes | Yes\ :sup:`3`| Yes\ :sup:`1` | Yes | Yes\ :sup:`1` | - +-----------------------------+-----------+--------+--------------+---------------+---------------+---------------+ - | ``ontap-san-economy`` | Yes | Yes | Yes\ :sup:`3`| Yes\ :sup:`12`| Yes\ :sup:`1` | Yes\ :sup:`12`| - +-----------------------------+-----------+--------+--------------+---------------+---------------+---------------+ + +-----------------------------+-----------+--------+--------------+--------------------+---------------+---------------+---------------+ + | ONTAP SAN Driver | Snapshots | Clones | Multi-attach | Bidirectional CHAP | QoS | Resize | Replication | + +=============================+===========+========+==============+====================+===============+===============+===============+ + | ``ontap-san`` | Yes | Yes | Yes\ :sup:`3`| Yes | Yes\ :sup:`1` | Yes | Yes\ :sup:`1` | + +-----------------------------+-----------+--------+--------------+--------------------+---------------+---------------+---------------+ + | ``ontap-san-economy`` | Yes | Yes | Yes\ :sup:`3`| Yes | Yes\ :sup:`12`| Yes\ :sup:`1` | Yes\ :sup:`12`| + +-----------------------------+-----------+--------+--------------+--------------------+---------------+---------------+---------------+ | Footnote for the above tables: | Yes\ :sup:`1` : Not Trident managed | Yes\ :sup:`2` : Trident managed, but not PV granular | Yes\ :sup:`12`: Not Trident managed and not PV granular | Yes\ :sup:`3` : Supported for raw-block volumes +| Yes\ :sup:`4` : Supported by CSI Trident The features that are not PV granular are applied to the entire FlexVolume and all of the PVs (i.e. qtrees or LUNs in shared FlexVols) will share a common schedule. @@ -81,11 +82,11 @@ The ``solidfire-san`` driver used with the HCI/SolidFire platforms, helps the ad .. 
table:: SolidFire SAN driver capabilities :align: left - +-------------------+----------------+--------+--------------+------+--------+---------------+ - | SolidFire Driver | Snapshots | Clones | Multi-attach | QoS | Resize | Replication | - +===================+================+========+==============+======+========+===============+ - | ``solidfire-san`` | Yes | Yes | Yes\ :sup:`2`| Yes | Yes | Yes\ :sup:`1` | - +-------------------+----------------+--------+--------------+------+--------+---------------+ + +-------------------+----------------+--------+--------------+------+------+--------+---------------+ + | SolidFire Driver | Snapshots | Clones | Multi-attach | CHAP | QoS | Resize | Replication | + +===================+================+========+==============+======+======+========+===============+ + | ``solidfire-san`` | Yes | Yes | Yes\ :sup:`2`| Yes | Yes | Yes | Yes\ :sup:`1` | + +-------------------+----------------+--------+--------------+------+------+--------+---------------+ | Footnote: | Yes\ :sup:`1`: Not Trident managed diff --git a/docs/dag/kubernetes/security_recommendations.rst b/docs/dag/kubernetes/security_recommendations.rst index a80612532..a6a98e6be 100644 --- a/docs/dag/kubernetes/security_recommendations.rst +++ b/docs/dag/kubernetes/security_recommendations.rst @@ -5,7 +5,7 @@ Security Recommendations ************************* Run Trident in its own namespace ---------------------------------- +================================ It is important to prevent applications, application admins, users, and management applications from accessing Trident object definitions or the @@ -20,11 +20,33 @@ in the namespaced CRD objects. Allow only administrators access to the Trident namespace and thus access to `tridentctl` application. CHAP authentication -------------------- +=================== + +Trident supports CHAP-based authentication for HCI/SolidFire backends and +ONTAP SAN workloads (using the ``ontap-san`` and ``ontap-san-economy`` +drivers). NetApp recommends using bidirectional CHAP with Trident for +authentication between a host and the storage backend. + +CHAP with ONTAP SAN backends +---------------------------- + +For ONTAP backends that use the SAN storage drivers, Trident can set up +bidirectional CHAP and manage CHAP usernames and secrets through ``tridentctl``. +Refer to :ref:`Using CHAP with ONTAP SAN drivers ` +to understand how Trident configures CHAP on ONTAP backends. + +.. note:: + + CHAP support for ONTAP backends is available with Trident 20.04 and above. + +CHAP with HCI/SolidFire backends +-------------------------------- .. note:: - Trident will only use CHAP when installed as a CSI Provisioner. + For HCI/SolidFire backends, CSI Trident will use CHAP to authenticate + connections. The volumes that are created by CSI Trident will not be + associated with any Volume Access Group. NetApp recommends deploying bi-directional CHAP to ensure authentication between a host and the HCI/SolidFire backend. Trident uses a secret diff --git a/docs/dag/kubernetes/storage_configuration_trident.rst b/docs/dag/kubernetes/storage_configuration_trident.rst index cb3a5dfbd..085cff87a 100644 --- a/docs/dag/kubernetes/storage_configuration_trident.rst +++ b/docs/dag/kubernetes/storage_configuration_trident.rst @@ -88,6 +88,17 @@ To configure the maximum size for volumes that can be created by Trident, use th In addition to controlling the volume size at the storage array, Kubernetes capabilities should also be leveraged as explained in the next chapter. 
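+
+As a point of reference, a minimal sketch of a backend that caps volume size
+might look like the following; the ``limitVolumeSize`` parameter is documented
+under the backend configuration options, and all names and addresses here are
+illustrative:
+
+.. code-block:: json
+
+  {
+    "version": 1,
+    "storageDriverName": "ontap-nas",
+    "managementLIF": "10.0.0.1",
+    "svm": "svm_nfs",
+    "username": "admin",
+    "password": "secret",
+    "limitVolumeSize": "50Gi"
+  }
+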
+Configure Trident to use bidirectional CHAP +------------------------------------------- + +You can specify the CHAP initiator and target usernames and passwords in +your backend definition and have Trident enable CHAP on the SVM. +Using the ``useCHAP`` parameter in your backend configuration, Trident +will authenticate iSCSI connections for ONTAP backends with CHAP. +Bidirectional CHAP support is available with Trident 20.04 and above. Refer to the +:ref:`Using CHAP with ONTAP SAN drivers ` section +to get started. + Create and use an SVM QoS policy -------------------------------- @@ -128,12 +139,24 @@ For volumes where access is desired from both Kubernetes and external hosts, the For deployments which have dedicated infrastructure nodes (e.g. OpenShift), or other nodes which are not schedulable for user applications, separate export policies should be used to further limit access to storage resources. This includes creating an export policy for services which are deployed to those infrastructure nodes, such as, the OpenShift Metrics and Logging services, and standard applications which are deployed to non-infrastructure nodes. -Create an export policy ------------------------ - -Create appropriate export policies for Storage Virtual Machines. Allow only Kubernetes nodes access to the NFS volumes. - -Export policies contain one or more export rules that process each node access request. Use the ``vserver export-policy create`` ONTAP CLI to create the export policy. Add rules to the export policy using the ``vserver export-policy rule create`` ONTAP CLI command. Performing the above commands enables you to restrict which Kubernetes nodes have access to data. +Use a dedicated export policy +----------------------------- + +It is important to ensure that an export policy exists for each backend +that only allows access to the nodes present in the Kubernetes cluster. +Trident can automatically create and manage export policies from the +``20.04`` release. This is covered in detail in the +:ref:`Dynamic Export Policies ` +section of the documentation. This way, Trident limits access to the +volumes it provisions to the nodes in the Kubernetes cluster and simplifies +the addition/deletion of nodes. + +Alternatively, you can also create an export policy manually and populate it +with one or more export rules that process each node access request. +Use the ``vserver export-policy create`` ONTAP CLI to create the export policy. +Add rules to the export policy using the ``vserver export-policy rule create`` +ONTAP CLI command. Performing the above commands enables you to restrict which +Kubernetes nodes have access to data. Disable ``showmount`` for the application SVM --------------------------------------------- diff --git a/docs/dag/kubernetes/frequently_asked_questions.rst b/docs/frequently_asked_questions.rst similarity index 95% rename from docs/dag/kubernetes/frequently_asked_questions.rst rename to docs/frequently_asked_questions.rst index 9ac8a1a5d..9bfbd396a 100644 --- a/docs/dag/kubernetes/frequently_asked_questions.rst +++ b/docs/frequently_asked_questions.rst @@ -20,7 +20,7 @@ This section covers Trident Installation on a Kubernetes cluster. What are the supported versions of etcd? ---------------------------------------- -Trident v19.10 does not require an etcd. It uses CRDs to maintain +From the 19.07 release, Trident no longer needs an etcd. It uses CRDs to maintain state. @@ -112,7 +112,7 @@ What versions of Kubernetes support Trident as an enhanced CSI Provisioner? 
--------------------------------------------------------------------------- Kubernetes versions ``1.13`` and above support running Trident as a CSI Provisioner. Before installing -Trident, ensure the required :ref:`feature gates ` are enabled. +Trident, ensure the required :ref:`feature gates ` are enabled. Refer to :ref:`Requirements ` for a list of supported orchestrators. @@ -130,7 +130,7 @@ How do I install Trident to work as a CSI Provisioner? ------------------------------------------------------ The installation procedure is detailed under the :ref:`Deployment ` section. -Ensure that the :ref:`feature gates ` are enabled. +Ensure that the :ref:`feature gates ` are enabled. How does Trident maintain state if it doesn't use etcd? ------------------------------------------------------- @@ -160,6 +160,25 @@ absolutely mandatory. Refer to :ref:`ONTAP (AFF/FAS/Select/Cloud)` for more information on backend definition files. +Can Trident configure CHAP for ONTAP backends? +---------------------------------------------- + +Yes. Beginning with Trident 20.04, Trident supports bidirectional CHAP for ONTAP backends. This +requires setting ``useCHAP=true`` in your backend configuration. Refer to the +:ref:`Using CHAP with ONTAP SAN drivers ` section +to understand how it works. + +How do I manage export policies with Trident? +--------------------------------------------- + +Trident can dynamically create and manage export policies from 20.04 onwards. +This enables the storage admin to provide one or more CIDR blocks in their +backend config and have Trident add node IPs that fall within these ranges +to an export policy it creates. In this manner, Trident automatically +manages the addition and deletion of rules for nodes with IPs within the +given CIDRs. This feature requires CSI Trident. Refer to +:ref:`Dynamic Export Policies with ONTAP NAS ` for more +information. Can we specify a port in the DataLIF? ------------------------------------- diff --git a/docs/index.rst b/docs/index.rst index 1933ac590..575564f0c 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -11,9 +11,9 @@ Storage Orchestrator for Containers :target: https://goreportcard.com/report/github.com/netapp/trident .. toctree:: - :caption: Introduction + :caption: Introduction - introduction + introduction .. toctree:: :caption: Kubernetes @@ -21,6 +21,12 @@ Storage Orchestrator for Containers kubernetes/index dag/kubernetes/index +.. toctree:: + :maxdepth: 2 + :caption: Frequently Asked Questions + + frequently_asked_questions + .. toctree:: :caption: Docker diff --git a/docs/kubernetes/deploying.rst b/docs/kubernetes/deploying.rst index 98d1f3724..bfb017743 100644 --- a/docs/kubernetes/deploying.rst +++ b/docs/kubernetes/deploying.rst @@ -3,440 +3,146 @@ Deploying ^^^^^^^^^ -This guide will take you through the process of deploying Trident and -provisioning your first volume automatically. If you are a new user, this -is the place to get started with using Trident. +This guide will take you through the process of deploying Trident for the +first time and provisioning your first volume automatically. If you are a +new user, this is the place to get started with using Trident. If you are an existing user looking to upgrade, head on over to the :ref:`Upgrading Trident ` section. -Before you begin -================ +There are two ways you can deploy Trident: -If you have not already familiarized yourself with the -:ref:`basic concepts `, now is a great time to do that. Go -ahead, we'll be here when you get back. +1. 
**Using the Trident Operator:** The ``20.04`` release provides a + `Kubernetes Operator `_ + to deploy Trident. The Trident Operator controls the installation of + Trident, taking care to **self-heal the install and manage changes as + well as upgrades to the Trident installation**. Take a look at + :ref:`Deploying with the Trident Operator `! -To deploy Trident you need: +2. **Deploying Trident with tridentctl:** If you have already deployed + previous releases, this is the method of deployment that you would have + used. :ref:`This page ` explains all the steps + involved in deploying Trident in this manner. -.. sidebar:: Need Kubernetes? +.. important:: - If you do not already have a Kubernetes cluster, you can easily create one for - demonstration purposes using our - :ref:`simple Kubernetes install guide `. + The 20.04 release limits the Trident Operator to + **greenfield installations only**. -* Full privileges to a - :ref:`supported Kubernetes cluster ` -* Access to a - :ref:`supported NetApp storage system ` -* :ref:`Volume mount capability ` from all of the - Kubernetes worker nodes -* A Linux host with ``kubectl`` (or ``oc``, if you're using OpenShift) installed - and configured to manage the Kubernetes cluster you want to use -* Enable the :ref:`Feature Gates ` required by Trident -* If you are using Kubernetes with Docker EE 2.1, `follow their steps - to enable CLI access `_. +Choosing the right option +========================= -Got all that? Great! Let's get started. +To determine which deployment option to use, you must consider the following: -1: Qualify your Kubernetes cluster -================================== +Why should I use the Trident Operator? +************************************** -You made sure that you have everything in hand from the -:ref:`previous section `, right? Right. +If you are a new user testing Trident (or) deploying a fresh installation of +Trident in your cluster, the Trident Operator is a great way to dynamically +manage Trident resources and automate the setup phase. There are some +prerequisites that must be satisfied. Please refer to the :ref:`Requirements ` +section to identify the necessary requirements to deploy with the Trident +Operator. -The first thing you need to do is log into the Linux host and verify that it is -managing a *working*, -:ref:`supported Kubernetes cluster ` that -you have the necessary privileges to. +The Trident Operator offers a number of benefits such as: -.. note:: - With OpenShift, you will use ``oc`` instead of ``kubectl`` in all of the - examples that follow, and you need to login as **system:admin** first by - running ``oc login -u system:admin``. - -.. code-block:: bash - - # Are you running a supported Kubernetes server version? - kubectl version - - # Are you a Kubernetes cluster administrator? - kubectl auth can-i '*' '*' --all-namespaces - - # Can you launch a pod that uses an image from Docker Hub and can reach your - # storage system over the pod network? - kubectl run -i --tty ping --image=busybox --restart=Never --rm -- \ - ping - -Identify your Kubernetes server version. You will be using it when you -:ref:`Install Trident <3: Install Trident>`. - -2: Download & extract the installer -=================================== - -.. note:: - Trident's installer is responsible for creating a Trident pod, configuring - the CRD objects that are used to maintain its state and to - initialize the CSI Sidecars that perform actions such as provisioning and - attaching volumes to the cluster hosts. 
- 
-Download the latest version of the `Trident installer bundle`_ from the
-*Downloads* section and extract it.
-
-For example, if the latest version is 20.01.0:
-
-.. code-block:: console
-
-    wget https://github.com/NetApp/trident/releases/download/v20.01.0/trident-installer-20.01.0.tar.gz
-    tar -xf trident-installer-20.01.0.tar.gz
-    cd trident-installer
-
-.. _Trident installer bundle: https://github.com/NetApp/trident/releases/latest
+Self-Healing
+""""""""""""

-3: Install Trident
-==================
+The biggest advantage that the operator provides is
+the ability to monitor a Trident installation and actively take measures
+to address issues, such as when the Trident deployment is deleted or if
+the installation is modified accidentally. When the operator is set
+up as a deployment, a ``trident-operator-`` pod is created.
+This pod associates a TridentProvisioner CR with a Trident installation and always
+ensures there exists only one active TridentProvisioner. In other words, the
+operator makes sure there's only one instance of Trident in the cluster and
+controls its setup, making sure the installation is idempotent. When changes
+are made to the Trident install [such as deleting the Trident deployment or
+node daemonset], the operator identifies them and fixes them
+individually.

-Install Trident in the desired namespace by executing the
-:ref:`tridentctl install ` command. The installation procedure
-slightly differs depending on the version of Kubernetes being used.
+Updating existing installations
+"""""""""""""""""""""""""""""""

-Installing Trident on Kubernetes 1.13
--------------------------------------
+With the operator it is easy to update an existing Trident deployment. Since
+the Trident install is initiated by the creation of a ``TridentProvisioner``
+CR, you can edit it to make updates to an already created Trident installation.
+Whereas installations done with ``tridentctl`` require an
+uninstall/reinstall to achieve something similar, the operator only requires
+editing the TridentProvisioner CR.

-On Kubernetes ``1.13``, there are a couple of options when installing Trident:
-
-- Install Trident in the desired namespace by executing the
-  ``tridentctl install`` command with the ``--csi`` flag. This is the preferred
-  method of installation and will support all features provided by Trident. The output observed
-  will be similar to that shown :ref:`below `
-
-- If for some reason the :ref:`feature gates ` required by Trident
-  cannot be enabled, you can install Trident without the ``--csi`` flag. This will
-  configure Trident to work in its traditional format without using the CSI
-  specification. Keep in mind that new features introduced by Trident, such as
-  :ref:`On-Demand Volume Snapshots ` will not be available
-  in this installation mode.
-
-Installing Trident on Kubernetes 1.14 and above
------------------------------------------------
-
-Install Trident in the desired namespace by executing the
-``tridentctl install`` command.
+As an example, consider a scenario where you need to enable Trident to generate
+debug logs. To do this, you will need to patch your TridentProvisioner to set
+``spec.debug`` to ``true``.

 .. code-block:: console

-    $ ./tridentctl install -n trident
-    ....
-    INFO Starting Trident installation. namespace=trident
-    INFO Created service account.
-    INFO Created cluster role.
-    INFO Created cluster role binding.
-    INFO Added finalizers to custom resource definitions.
-    INFO Created Trident service.
-    INFO Created Trident secret.
-    INFO Created Trident deployment.
-    INFO Created Trident daemonset.
-    INFO Waiting for Trident pod to start.
-    INFO Trident pod started. namespace=trident pod=trident-csi-679648bd45-cv2mx
-    INFO Waiting for Trident REST interface.
-    INFO Trident REST interface is up. version=20.01.0
-    INFO Trident installation succeeded.
-    ....
-
-It will look like this when the installer is complete. Depending on
-the number of nodes in your Kubernetes cluster, you may observe more pods:
-
-.. code-block:: console
-
-    $ kubectl get pod -n trident
-    NAME                           READY   STATUS    RESTARTS   AGE
-    trident-csi-679648bd45-cv2mx   4/4     Running   0          5m29s
-    trident-csi-vgc8n              2/2     Running   0          5m29s
-
-    $ ./tridentctl -n trident version
-    +----------------+----------------+
-    | SERVER VERSION | CLIENT VERSION |
-    +----------------+----------------+
-    | 20.01.0        | 20.01.0        |
-    +----------------+----------------+
+   kubectl patch tprov -n trident --type=merge -p '{"spec":{"debug":true}}'

-If that's what you see, you're done with this step, but **Trident is not
-yet fully configured.** Go ahead and continue to the next step.
+After the TridentProvisioner is updated, the operator processes the updates and
+patches the existing installation. This may trigger the creation of new pods
+to modify the installation accordingly.

-However, if the installer does not complete successfully or you don't see
-a **Running** ``trident-csi-``, then Trident had a problem and the platform was *not*
-installed.
+Handling Kubernetes upgrades
+""""""""""""""""""""""""""""

-To help figure out what went wrong, you could run the installer again using the ``-d`` argument,
-which will turn on debug mode and help you understand what the problem is:
+When the Kubernetes version of the cluster is upgraded to a
+:ref:`supported version `, the operator
+updates an existing Trident installation automatically and changes it
+to make sure it meets the requirements of the Kubernetes version.

-.. code-block:: console
-
-    ./tridentctl install -n trident -d
-
-After addressing the problem, you can clean up the installation and go back to
-the beginning of this step by first running:
-
-.. code-block:: console
-
-    ./tridentctl uninstall -n trident
-    INFO Deleted Trident deployment.
-    INFO Deleted cluster role binding.
-    INFO Deleted cluster role.
-    INFO Deleted service account.
-    INFO Removed Trident user from security context constraint.
-    INFO Trident uninstallation succeeded.
-
-If you continue to have trouble, visit the
-:ref:`troubleshooting guide ` for more advice.
-
-Customized Installation
------------------------
-
-Trident's installer allows you to customize attributes. For example, if you have
-copied the Trident image to a private repository, you can specify the image name by using
-``--trident-image``. If you have copied the Trident image as well as the needed CSI
-sidecar images to a private repository, it may be preferable to specify the location
-of that repository by using the ``--image-registry`` switch, which takes the form
-``[:port]``.
-
-If you are using a distribution of Kubernetes where kubelet keeps its data on a path
-other than the usual ``/var/lib/kubelet``, you can specify the alternate path by using
-``--kubelet-dir``.
-
-As a last resort, if you need to customize Trident's installation beyond what the
-installer's arguments allow, you can also customize Trident's deployment files. Using
-the ``--generate-custom-yaml`` parameter will create the following YAML files in the
-installer's ``setup`` directory:
-
-- trident-clusterrolebinding.yaml
-- trident-deployment.yaml
-- trident-crds.yaml
-- trident-clusterrole.yaml
-- trident-daemonset.yaml
-- trident-service.yaml
-- trident-namespace.yaml
-- trident-serviceaccount.yaml
-
-Once you have generated these files, you can modify them according to your needs and
-then use the ``--use-custom-yaml`` to install your custom deployment of Trident.
+If the cluster is upgraded to an unsupported version (``1.19 and above``):

-.. code-block:: console
+* the operator prevents installing Trident.

-    ./tridentctl install -n trident --use-custom-yaml
+* If Trident has already been installed with the operator, a warning is
+  displayed to indicate that Trident is installed on an unsupported Kubernetes
+  version.

-4: Create and Verify your first backend
-=======================================
+When should I use tridentctl?
+*****************************

-You can now go ahead and create a backend that will be used by Trident
-to provision volumes. To do this, create a ``backend.json`` file that
-contains the necessary parameters. Sample configuration files for
-different backend types can be found in the ``sample-input`` directory.
+If you have an existing Trident deployment that must be upgraded, or if
+you are looking to highly customize your Trident install, you should take a
+look at using ``tridentctl`` to set up Trident. This is the conventional method
+of installing Trident, as well as the supported method of upgrading your Trident
+deployment to the ``20.04`` release. Take a look at the :ref:`Upgrading `
+page to upgrade Trident.

-Visit the :ref:`backend configuration ` of this
-guide for more details about how to craft the configuration file for
-your backend type.
+Ultimately, the environment in question will determine the choice of deployment.
+**It is important to note that the 20.04 Trident Operator is meant for
+new deployments only**.

 .. note::

-   Many of the backends require some
-   :ref:`basic preparation `, so make sure that's been
-   done before you try to use it. Also, we don't recommend an
-   ontap-nas-economy backend or ontap-nas-flexgroup backend for this step as
-   volumes of these types have specialized and limited capabilities relative to
-   the volumes provisioned on other types of backends.
-
-.. code-block:: bash
+   The ``20.04`` release of the Trident Operator is only meant to be used for
+   greenfield deployments. For existing installations, you must use ``tridentctl``
+   to upgrade to the latest release.

-    cp sample-input/.json backend.json
-    # Fill out the template for your backend
-    vi backend.json
-
-.. code-block:: console
-
-    ./tridentctl -n trident create backend -f backend.json
-    +-------------+----------------+--------------------------------------+--------+---------+
-    |    NAME     | STORAGE DRIVER |                 UUID                 | STATE  | VOLUMES |
-    +-------------+----------------+--------------------------------------+--------+---------+
-    | nas-backend | ontap-nas      | 98e19b74-aec7-4a3d-8dcf-128e5033b214 | online |       0 |
-    +-------------+----------------+--------------------------------------+--------+---------+
-
-If the creation fails, something was wrong with the backend configuration. You
-can view the logs to determine the cause by running:
-
-.. code-block:: console
-
-    ./tridentctl -n trident logs
-
-After addressing the problem, simply go back to the beginning of this step
-and try again. 
If you continue to have trouble, visit the -:ref:`troubleshooting guide ` for more advice on how to -determine what went wrong. - -5: Add your first storage class -=============================== - -Kubernetes users provision volumes using persistent volume claims (PVCs) that -specify a `storage class`_ by name. The details are hidden from users, but a -storage class identifies the provisioner that will be used for that class (in -this case, Trident) and what that class means to the provisioner. - -.. sidebar:: Basic too basic? - - This is just a basic storage class to get you started. There's an art to - :ref:`crafting differentiated storage classes ` - that you should explore further when you're looking at building them for - production. - -Create a storage class Kubernetes users will specify when they want a volume. -The configuration of the class needs to model the backend that you created -in the previous step so that Trident will use it to provision new volumes. - -The simplest storage class to start with is one based on the -``sample-input/storage-class-csi.yaml.templ`` file that comes with the -installer, replacing ``__BACKEND_TYPE__`` with the storage driver name. - -.. code-block:: bash - - ./tridentctl -n trident get backend - +-------------+----------------+--------------------------------------+--------+---------+ - | NAME | STORAGE DRIVER | UUID | STATE | VOLUMES | - +-------------+----------------+--------------------------------------+--------+---------+ - | nas-backend | ontap-nas | 98e19b74-aec7-4a3d-8dcf-128e5033b214 | online | 0 | - +-------------+----------------+--------------------------------------+--------+---------+ - - cp sample-input/storage-class-basic.yaml.templ sample-input/storage-class-basic.yaml - - # Modify __BACKEND_TYPE__ with the storage driver field above (e.g., ontap-nas) - vi sample-input/storage-class-basic.yaml - -This is a Kubernetes object, so you will use ``kubectl`` to create it in -Kubernetes. - -.. code-block:: console - - kubectl create -f sample-input/storage-class-basic.yaml - -You should now see a **basic** storage class in both Kubernetes and Trident, -and Trident should have discovered the pools on the backend. - -.. code-block:: console - - kubectl get sc basic - NAME PROVISIONER AGE - basic csi.trident.netapp.io 15h - - ./tridentctl -n trident get storageclass basic -o json - { - "items": [ - { - "Config": { - "version": "1", - "name": "basic", - "attributes": { - "backendType": "ontap-nas" - }, - "storagePools": null, - "additionalStoragePools": null - }, - "storage": { - "ontapnas_10.0.0.1": [ - "aggr1", - "aggr2", - "aggr3", - "aggr4" - ] - } - } - ] - } - -.. _storage class: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses - -6: Provision your first volume -============================== - -Now you're ready to dynamically provision your first volume. How exciting! This -is done by creating a Kubernetes `persistent volume claim`_ (PVC) object, and -this is exactly how your users will do it too. - -.. _persistent volume claim: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims - -Create a persistent volume claim (PVC) for a volume that uses the storage -class that you just created. - -See ``sample-input/pvc-basic.yaml`` for an example. Make sure the storage -class name matches the one that you created in 6. - -.. 
code-block:: bash
-
-    kubectl create -f sample-input/pvc-basic.yaml
-
-    kubectl get pvc --watch
-    NAME    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
-    basic   Pending                                                                        basic          1s
-    basic   Pending   pvc-3acb0d1c-b1ae-11e9-8d9f-5254004dfdb7   0                         basic          5s
-    basic   Bound     pvc-3acb0d1c-b1ae-11e9-8d9f-5254004dfdb7   1Gi        RWO            basic          7s
-
-7: Mount the volume in a pod
-============================
-
-Now that you have a volume, let's mount it. We'll launch an nginx pod that
-mounts the PV under ``/usr/share/nginx/html``.
-
-.. code-block:: bash
-
-    cat << EOF > task-pv-pod.yaml
-    kind: Pod
-    apiVersion: v1
-    metadata:
-      name: task-pv-pod
-    spec:
-      volumes:
-        - name: task-pv-storage
-          persistentVolumeClaim:
-            claimName: basic
-      containers:
-        - name: task-pv-container
-          image: nginx
-          ports:
-            - containerPort: 80
-              name: "http-server"
-          volumeMounts:
-            - mountPath: "/usr/share/nginx/html"
-              name: task-pv-storage
-    EOF
-    kubectl create -f task-pv-pod.yaml
-
-.. code-block:: bash
-
-    # Wait for the pod to start
-    kubectl get pod --watch
-
-    # Verify that the volume is mounted on /usr/share/nginx/html
-    kubectl exec -it task-pv-pod -- df -h /usr/share/nginx/html
-    Filesystem                                                      Size  Used Avail Use% Mounted on
-    10.xx.xx.xx:/trid_1907_pvc_3acb0d1c_b1ae_11e9_8d9f_5254004dfdb7 1.0G  256K 1.0G  1%   /usr/share/nginx/html
-
-
-    # Delete the pod
-    kubectl delete pod task-pv-pod
-
-At this point the pod (application) no longer exists but the volume is still
-there. You could use it from another pod if you wanted to.
-
-To delete the volume, simply delete the claim:
-
-.. code-block:: console
-
-    kubectl delete pvc basic
-
-**Check you out! You did it!** Now you're dynamically provisioning
-Kubernetes volumes like a boss.
-
-..
-   Where do you go from here? you can do things like:
+Moving between installation methods
+===================================

-   * Configure additional backends
-   * Model additional storage classes
-   * Review considerations for moving this into production
+It is not hard to imagine a scenario where moving between deployment methods is
+desired. Here's what you must know before attempting to move from a ``tridentctl``
+install to an operator-based deployment, or vice versa:
+
+1. Always use the same method for uninstalling Trident. If you have deployed Trident
+   with ``tridentctl``, you must use the appropriate version of the ``tridentctl``
+   binary to uninstall Trident. Similarly, if deploying Trident with the operator,
+   you must edit the ``TridentProvisioner`` CR and set ``spec.uninstall=true``
+   to uninstall Trident.
+
+2. **The 20.04 release limits the Trident Operator to greenfield installations only**.
+   As a result, you can only use the Trident Operator to create a fresh install. For
+   existing Trident installs that must be moved to ``20.04``, you will need to use
+   ``tridentctl`` to uninstall and reinstall Trident. Future releases of the operator are
+   targeted to support upgrades.
+
+3. If you have a Trident Operator deployment that you want to remove and use ``tridentctl``
+   to deploy Trident, you must first edit the ``TridentProvisioner`` and set
+   ``spec.uninstall=true`` to uninstall Trident. You will then have to delete the
+   ``TridentProvisioner`` and the operator deployment.
+   You can then install Trident with ``tridentctl`` (see the sketch below).
+
+NetApp **does not recommend downgrading Trident releases** unless absolutely necessary.
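+
+As a rough sketch of step 3 above (the TridentProvisioner name and the operator
+deployment name are illustrative; adjust them to match your environment):
+
+.. code-block:: console
+
+   # Ask the operator to uninstall Trident
+   kubectl patch tprov trident-provisioner -n trident --type=merge -p '{"spec":{"uninstall":true}}'
+
+   # After the uninstall completes, remove the TridentProvisioner and the operator
+   kubectl delete tprov trident-provisioner -n trident
+   kubectl delete deployment trident-operator -n trident
+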
diff --git a/docs/kubernetes/index.rst b/docs/kubernetes/index.rst index 8461d3956..ffc163760 100644 --- a/docs/kubernetes/index.rst +++ b/docs/kubernetes/index.rst @@ -101,6 +101,8 @@ and is a great way to get started on the Trident journey. upgrading deploying + tridentctl-install + operator-install operations/tasks/index concepts/index known-issues diff --git a/docs/kubernetes/known-issues.rst b/docs/kubernetes/known-issues.rst index 5069cbafa..fdca78282 100644 --- a/docs/kubernetes/known-issues.rst +++ b/docs/kubernetes/known-issues.rst @@ -3,6 +3,12 @@ Known issues This page contains a list of known issues that may be observed when using Trident. +* When installing Trident (using ``tridentctl`` or the Trident Operator) and + using ``tridentctl`` to manage Trident, you must ensure the + ``KUBECONFIG`` environment variable is set. This is necessary to indicate + the Kubernetes cluster that ``tridentctl`` must work against. When working + with multiple Kubernetes environments, care must be taken to ensure the + KUBECONFIG file is sourced accurately. * To perform online space reclamation for iSCSI PVs, the underlying OS on the worker node may require mount options to be passed to the volume. This is true for RHEL/RedHat CoreOS instances, which require the ``discard`` @@ -33,8 +39,9 @@ This page contains a list of known issues that may be observed when using Triden idempotent operation. Thus, if the ``storagePrefix`` or ``TenantName`` does not differ, there is a very slim chance to have name collisions for volumes created on the same backend. -* ONTAP cannot concurrently provision more than one FlexGroup at a time unless the set of aggregates are - unique to each provisioning request. -* The ``ontap-nas-flexgroup`` driver doesn't currently work with ONTAP 9.7. -* When using Trident over IPv6, the ``managementLIF`` option in the backend definition +* ONTAP cannot concurrently provision more than one FlexGroup at a time + unless the set of aggregates are unique to each provisioning request. +* When using Trident over IPv6, the ``managementLIF`` and ``dataLIF`` in the backend definition must be specified within square brackets, like ``[fd20:8b1e:b258:2000:f816:3eff:feec:0]``. +* If using CoreOS or Ubuntu on Kubernetes nodes, you must ensure ``rpc-statd`` is started + at boot time. diff --git a/docs/kubernetes/operations/tasks/backends/ontap.rst b/docs/kubernetes/operations/tasks/backends/ontap.rst index d5c0f4122..e3be0e21c 100644 --- a/docs/kubernetes/operations/tasks/backends/ontap.rst +++ b/docs/kubernetes/operations/tasks/backends/ontap.rst @@ -11,7 +11,7 @@ To create and use an ONTAP backend, you will need: * Credentials to an ONTAP SVM with :ref:`appropriate access ` Choosing a driver ------------------ +================= =================== ======== Driver Protocol @@ -55,10 +55,6 @@ file repositories, etc. Trident uses all aggregates assigned to an SVM when provisioning a FlexGroup Volume. FlexGroup support in Trident also has the following considerations: -.. note:: - - The ``ontap-nas-flexgroup`` driver currently does not work with ONTAP 9.7 - * Requires ONTAP version 9.2 or greater. * As of this writing, FlexGroups only support NFSv3 (required to set ``mountOptions: ["nfsvers=3"]`` in the Kubernetes storage class). @@ -79,7 +75,7 @@ uses the ``ontap-nas-economy`` one. .. _ONTAP backend preparation: Preparation ------------ +=========== For all ONTAP backends, Trident requires at least one `aggregate assigned to the SVM`_. 
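+
+As a sketch, assigning aggregates to an SVM can be done from the ONTAP CLI along
+these lines (the SVM and aggregate names are illustrative):
+
+.. code-block:: console
+
+   # Allow the SVM to place volumes on these aggregates
+   vserver add-aggregates -vserver svm_nfs -aggregates aggr1,aggr2
+
+   # Verify what the SVM can use
+   vserver show-aggregates -vserver svm_nfs
+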
@@ -87,29 +83,191 @@ For all ONTAP backends, Trident requires at least one
 
 .. _aggregate assigned to the SVM: https://library.netapp.com/ecmdocs/ECMP1368404/html/GUID-5255E7D8-F420-4BD3-AEFB-7EF65488C65C.html
 
 ontap-nas, ontap-nas-economy, ontap-nas-flexgroups
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+--------------------------------------------------
 
 All of your Kubernetes worker nodes must have the appropriate NFS tools
 installed. See the :ref:`worker configuration guide ` for more details.
 
 Trident uses NFS `export policies`_ to control access to the volumes that it
-provisions. It uses the ``default`` export policy unless a different export
-policy name is specified in the configuration.
+provisions.
 
 .. _export policies: https://library.netapp.com/ecmdocs/ECMP1196891/html/GUID-9A2B6C3E-C86A-4125-B778-6072A3A19657.html
 
-While Trident associates new volumes (or qtrees) with the configured export
-policy, it does not create or otherwise manage export policies themselves.
+Trident provides two options when working with export policies:
+
+1. Trident can **dynamically manage the export policy itself**; in this mode of
+   operation, the storage admin specifies a list of CIDR blocks that
+   represent admissible IP addresses. Trident adds node IPs that fall in
+   these ranges to the export policy automatically. Alternatively, when no
+   CIDRs are specified, any global-scoped unicast IP found on the nodes will
+   be added to the export policy.
+
+2. Storage admins can create an export policy and add rules manually. Trident uses
+   the ``default`` export policy unless a different export policy name is specified
+   in the configuration.
+
+With (1), Trident automates the management of export policies, creating an export
+policy and taking care of additions and deletions of rules to the export policy based
+on the worker nodes it runs on. As and when nodes are removed or added to the
+Kubernetes cluster, Trident can be set up to permit access to the nodes, thus
+providing a more robust way of managing access to the PVs it creates. Trident
+will create one export policy per backend. **This feature requires CSI Trident**.
+
+With (2), Trident does not create or otherwise manage export policies themselves.
 The export policy must exist before the storage backend is added to Trident,
 and it needs to be configured to allow access to every worker node in the
-Kubernetes cluster.
+Kubernetes cluster. If the export policy is locked down to specific hosts,
+it will need to be updated when new nodes are added to the cluster,
+and that access should be removed when nodes are removed as well.
+
+Dynamic Export Policies with ONTAP NAS
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The 20.04 release of CSI Trident provides the ability to dynamically manage
+export policies for ONTAP backends. This provides the storage administrator
+the ability to specify a permissible address space for worker node
+IPs, rather than defining explicit rules manually. Since Trident automates the
+export policy creation and configuration, it greatly simplifies export policy management for
+the storage admin and the Kubernetes admin; modifications to the export policy no
+longer require manual intervention on the storage cluster. Moreover, this helps
+restrict access to the storage cluster only to worker nodes that have IPs in the
+range specified, supporting fine-grained and automated management.
+
+Prerequisites
+"""""""""""""
+
+.. warning::
+
+   The auto-management of export policies is only available for CSI Trident.
+   It is important to ensure that **the worker nodes are not being NATed**.
+   For Trident to discover the node IPs and add rules to the export policy
+   dynamically, it **must be able to discover the node IPs**.
+
+There are two configuration options that must be used. Here's an example backend
+definition:
+
+.. code::
+
+   {
+       "version": 1,
+       "storageDriverName": "ontap-nas",
+       "backendName": "ontap_nas_auto_export",
+       "managementLIF": "192.168.0.135",
+       "svm": "svm1",
+       "username": "vsadmin",
+       "password": "FaKePaSsWoRd",
+       "autoExportCIDRs": ["192.168.0.0/24"],
+       "autoExportPolicy": true
+   }
+
+.. warning::
+
+   When using auto export policies, you must ensure that the root junction
+   in your SVM has a pre-created export policy with an export rule that
+   permits the node CIDR block (such as the ``default`` export policy). All
+   volumes created by Trident are mounted under the root junction. Always
+   follow NetApp's recommended best practice of dedicating an SVM for Trident.
+
+How it works
+""""""""""""
+
+From the example shown above:
+
+1. ``autoExportPolicy`` is set to ``true``. This indicates that Trident will
+   create an export policy for the ``svm1`` SVM and handle the addition and
+   deletion of rules using the ``autoExportCIDRs`` address blocks. The export
+   policy will be named using the format ``trident-<backendUUID>``. For example, a backend
+   with UUID ``403b5326-8482-40db-96d0-d83fb3f4daec`` and ``autoExportPolicy`` set
+   to ``true`` will see Trident create an export policy named
+   ``trident-403b5326-8482-40db-96d0-d83fb3f4daec`` on the SVM.
+
+2. ``autoExportCIDRs`` contains a list of address blocks. **This field is
+   optional and it defaults to** ``["0.0.0.0/0", "::/0"]``. **If not defined,
+   Trident adds all globally-scoped unicast addresses found on the worker
+   nodes**.
+
+   In this example, the ``192.168.0.0/24`` address space is provided.
+   This indicates that Kubernetes node IPs that fall within this address range
+   will be added by Trident to the export policy it creates in (1).
+   When Trident registers a node it runs on,
+   it retrieves the IP addresses of the node and checks them against the address
+   blocks provided in ``autoExportCIDRs``. After filtering the IPs, Trident creates
+   export policy rules for the client IPs it discovers, with one rule for each node
+   it identifies.
+
+   The ``autoExportPolicy`` and ``autoExportCIDRs`` parameters can be updated for
+   backends after they are created. You can append new CIDRs for a backend that's
+   automatically managed or delete existing CIDRs. Exercise care **when deleting
+   CIDRs to ensure that existing connections are not dropped**. You can also choose to disable
+   ``autoExportPolicy`` for a backend and fall back to a manually created export
+   policy. This will require setting the ``exportPolicy`` parameter in your backend
+   config.
+
+After Trident creates/updates a backend, you can check the backend using ``tridentctl``
+or the corresponding tridentbackend CRD:
+
+.. 
code-block:: bash
+
+   $ ./tridentctl get backends ontap_nas_auto_export -n trident -o yaml
+   items:
+   - backendUUID: 403b5326-8482-40db-96d0-d83fb3f4daec
+     config:
+       aggregate: ""
+       autoExportCIDRs:
+       - 192.168.0.0/24
+       autoExportPolicy: true
+       backendName: ontap_nas_auto_export
+       chapInitiatorSecret: ""
+       chapTargetInitiatorSecret: ""
+       chapTargetUsername: ""
+       chapUsername: ""
+       dataLIF: 192.168.0.135
+       debug: false
+       debugTraceFlags: null
+       defaults:
+         encryption: "false"
+         exportPolicy: 
+         fileSystemType: ext4
+
+Updating your Kubernetes cluster configuration
+""""""""""""""""""""""""""""""""""""""""""""""
+
+As nodes are added to a Kubernetes cluster and registered with the Trident controller,
+export policies of existing backends are updated (provided they fall in the address
+range specified in the ``autoExportCIDRs`` for the backend). The CSI Trident daemonset
+spins up a pod on all available nodes in the Kubernetes cluster.
+Upon registering an eligible node, Trident checks if it contains IP addresses in the
+CIDR block that is allowed on a per-backend basis. Trident then updates the export policies
+of all possible backends, adding a rule for each node that meets the criteria.
+
+A similar workflow is observed when nodes are deregistered from the Kubernetes cluster.
+When a node is removed, Trident checks all backends that are online to remove the access rule
+for the node. By removing this node IP from the export policies of managed backends, Trident
+prevents rogue mounts, unless this IP is reused by a new node in the cluster.
+
+Updating legacy backends
+""""""""""""""""""""""""
+
+For previously existing backends, updating the backend with ``tridentctl update backend``
+will ensure Trident manages the export policies automatically. This will create a new export
+policy named after the backend's UUID, and volumes that are present on the backend will use
+the newly created export policy when they are mounted again.
 
-If the export policy is locked down to specific hosts, it will need to be
-updated when new nodes are added to the cluster, and that access should be
-removed when nodes are removed as well.
+.. note::
+
+   Deleting a backend with auto-managed export policies will delete the dynamically
+   created export policy. If the backend is recreated, it is treated as a new backend
+   and will result in the creation of a new export policy.
+
+.. note::
+
+   If the IP address of a live node is updated, you must restart the Trident pod
+   on the node. Trident will then update the export policy for backends it manages
+   to reflect this IP change.
 
 ontap-san, ontap-san-economy
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+----------------------------
 
 All of your Kubernetes worker nodes must have the appropriate iSCSI tools
 installed. See the :ref:`worker configuration guide ` for more details.
@@ -134,8 +292,126 @@ to contain the iSCSI IQNs from every worker node in the Kubernetes cluster. The
 igroup needs to be updated when new nodes are added to the cluster, and they
 should be removed when nodes are removed as well.
 
+Trident can authenticate iSCSI sessions with bidirectional CHAP beginning with 20.04
+for the ``ontap-san`` and ``ontap-san-economy`` drivers. This requires enabling the
+``useCHAP`` option in your backend definition. When set to ``true``, Trident
+configures the SVM's default initiator security to bidirectional CHAP and sets
+the username and secrets from the backend file. The section below explains this
+in detail.
+
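+
+For the manually maintained igroup described above, a sketch of the ONTAP CLI
+steps might look like this (the SVM name, igroup name and IQN are illustrative):
+
+.. code-block:: console
+
+   # Create an igroup for the Kubernetes worker nodes
+   lun igroup create -vserver svm_iscsi -igroup trident -protocol iscsi -ostype linux
+
+   # Add each worker node's IQN to the igroup
+   lun igroup add -vserver svm_iscsi -igroup trident -initiator iqn.1994-05.com.redhat:worker-1
+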
+Using CHAP with ONTAP SAN drivers
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Trident 20.04 introduces bidirectional CHAP support for the ``ontap-san`` and
+``ontap-san-economy`` drivers. This simplifies the configuration of CHAP on the ONTAP
+cluster and provides a convenient method of creating CHAP credentials and rotating
+them using ``tridentctl``. Enabling CHAP on the ONTAP backend requires adding the
+``useCHAP`` option and the CHAP secrets in your backend configuration as shown below:
+
+Configuration
+"""""""""""""
+
+.. code::
+
+  {
+      "version": 1,
+      "storageDriverName": "ontap-san",
+      "backendName": "ontap_san_chap",
+      "managementLIF": "192.168.0.135",
+      "svm": "ontap_iscsi_svm",
+      "useCHAP": true,
+      "username": "vsadmin",
+      "password": "FaKePaSsWoRd",
+      "igroupName": "trident",
+      "chapInitiatorSecret": "cl9qxIm36DKyawxy",
+      "chapTargetInitiatorSecret": "rqxigXgkesIpwxyz",
+      "chapTargetUsername": "iJF4heBRT0TCwxyz",
+      "chapUsername": "uh2aNCLSd6cNwxyz"
+  }
+
+.. warning::
+
+   The ``useCHAP`` parameter is a Boolean option that can be configured only once.
+   It is set to ``false`` by default. Once set to ``true``, it cannot be set to
+   ``false``. NetApp recommends using bidirectional CHAP to authenticate connections.
+
+In addition to ``useCHAP=true``, the ``chapInitiatorSecret``,
+``chapTargetInitiatorSecret``, ``chapTargetUsername`` and ``chapUsername``
+fields **must be included** in the backend definition. The secrets can
+be changed after a backend is created using ``tridentctl update``.
+
+How it works
+""""""""""""
+
+By setting ``useCHAP`` to ``true``, the storage administrator instructs Trident to
+configure CHAP on the storage backend. This includes:
+
+1. Setting up CHAP on the SVM:
+
+   a. If the SVM's default initiator security type is ``none`` (set by default)
+      **AND** there are no pre-existing LUNs already present in the volume,
+      Trident will set the default security type to ``CHAP`` and proceed to
+      step 2.
+   b. If the SVM contains LUNs, Trident **will not enable CHAP** on the SVM.
+      This ensures that access to LUNs that are already present on the SVM isn't
+      restricted.
+
+2. Configuring the CHAP initiator and target username and secrets; these options must
+   be specified in the backend configuration (as shown above).
+3. Managing the addition of initiators to the ``igroupName`` given in the backend. If
+   unspecified, this defaults to ``trident``.
+
+Once the backend is created, Trident creates a corresponding ``tridentbackend`` CRD
+and stores the CHAP secrets and usernames as Kubernetes secrets. All PVs that are created
+by Trident on this backend will be mounted and attached over CHAP.
+
+Rotating credentials and updating backends
+""""""""""""""""""""""""""""""""""""""""""
+
+The CHAP credentials can be rotated by updating the CHAP parameters in
+the ``backend.json`` file. This will require updating the CHAP secrets
+and using the ``tridentctl update`` command to reflect these changes.
+
+.. warning::
+
+   When updating the CHAP secrets for a backend, you **must use**
+   ``tridentctl`` to update the backend. **Do not** update the credentials
+   on the storage cluster through the CLI/ONTAP UI as Trident
+   will not be able to pick up these changes.
+
+.. 
+
+   $ cat backend-san.json
+   {
+       "version": 1,
+       "storageDriverName": "ontap-san",
+       "backendName": "ontap_san_chap",
+       "managementLIF": "192.168.0.135",
+       "svm": "ontap_iscsi_svm",
+       "useCHAP": true,
+       "username": "vsadmin",
+       "password": "FaKePaSsWoRd",
+       "igroupName": "trident",
+       "chapInitiatorSecret": "cl9qxUpDaTeD",
+       "chapTargetInitiatorSecret": "rqxigXgkeUpDaTeD",
+       "chapTargetUsername": "iJF4heBRT0TCwxyz",
+       "chapUsername": "uh2aNCLSd6cNwxyz"
+   }
+
+   $ ./tridentctl update backend ontap_san_chap -f backend-san.json -n trident
+   +----------------+----------------+--------------------------------------+--------+---------+
+   |      NAME      | STORAGE DRIVER |                 UUID                 | STATE  | VOLUMES |
+   +----------------+----------------+--------------------------------------+--------+---------+
+   | ontap_san_chap | ontap-san      | aa458f3b-ad2d-4378-8a33-1a472ffbeb5c | online |       7 |
+   +----------------+----------------+--------------------------------------+--------+---------+
+
+Existing connections remain active when the credentials are updated by Trident on
+the SVM; new connections immediately use the updated credentials. Disconnecting and
+reconnecting old PVs will result in them using the updated credentials.
+
 Backend configuration options
------------------------------
+=============================
 
 ========================= ========================================================================================= ================================================
 Parameter                 Description                                                                               Default
 ========================= ========================================================================================= ================================================
 version                   Always 1
 storageDriverName         "ontap-nas", "ontap-nas-economy", "ontap-nas-flexgroup", "ontap-san", "ontap-san-economy"
 backendName               Custom name for the storage backend                                                       Driver name + "_" + dataLIF
 managementLIF             IP address of a cluster or SVM management LIF                                             "10.0.0.1", "[2001:1234:abcd::fefe]"
-dataLIF                   IP address of protocol LIF                                                                Derived by the SVM unless specified
+dataLIF                   IP address of protocol LIF. **Use square brackets for IPv6**                              Derived by the SVM unless specified
+useCHAP                   Use CHAP to authenticate iSCSI for ONTAP SAN drivers [Boolean]                            false
+chapInitiatorSecret       CHAP initiator secret. Required if ``useCHAP=true``                                       ""
+chapTargetInitiatorSecret CHAP target initiator secret. Required if ``useCHAP=true``                                ""
+chapUsername              Inbound username. Required if ``useCHAP=true``                                            ""
+chapTargetUsername        Target username. Required if ``useCHAP=true``                                             ""
 svm                       Storage virtual machine to use                                                            Derived if an SVM managementLIF is specified
 igroupName                Name of the igroup for SAN volumes to use                                                 "trident"
+autoExportPolicy          Enable automatic export policy creation and updating [Boolean]                            false
+autoExportCIDRs           List of CIDRs to filter Kubernetes' node IPs against when autoExportPolicy is enabled     ["0.0.0.0/0", "::/0"]
 username                  Username to connect to the cluster/SVM
 password                  Password to connect to the cluster/SVM
 storagePrefix             Prefix used when provisioning new volumes in the SVM                                      "trident"
@@ -160,16 +443,36 @@ option.
 
 For the ``ontap-nas*`` drivers only, a FQDN may also be specified for the ``dataLIF``
 option, in which case the FQDN will be used for the NFS mount operations.
 
-The ``managementLIF`` and ``dataLIF`` options for all ONTAP drivers can
+The ``managementLIF`` for all ONTAP drivers can
 also be set to IPv6 addresses. Make sure to install Trident with the
 ``--use-ipv6`` flag. Care must be taken to define the ``managementLIF``
-IPv6 address **within square brackets** as shown in the example above.
+IPv6 address **within square brackets**.
+
+.. warning::
+
+   When using IPv6 addresses, make sure the ``managementLIF`` and ``dataLIF``
+   (if included in your backend definition) are defined
+   within square brackets, such as ``[28e8:d9fb:a825:b7bf:69a8:d02f:9e7b:3555]``.
+   If the ``dataLIF`` is not provided, Trident will fetch the IPv6 data LIFs
+   from the SVM.
 
 For the ``ontap-san*`` drivers, the default is to use all data LIF IPs from
 the SVM and to use iSCSI multipath. Specifying an IP address for the ``dataLIF``
 for the ``ontap-san*`` drivers forces them to disable multipath and use only the
 specified address.
 
+Using the ``autoExportPolicy`` and ``autoExportCIDRs`` options, CSI Trident can
+manage export policies automatically. This is supported for the ``ontap-nas-*``
+drivers and explained in the
+:ref:`Dynamic Export Policies `
+section.
+
+To enable the ``ontap-san*`` drivers to use CHAP, set the ``useCHAP`` parameter to
+``true`` in your backend definition. Trident will then configure and use
+bidirectional CHAP as the default authentication for the SVM given in the backend.
+The :ref:`CHAP with ONTAP SAN drivers`
+section explains how this works.
+
 For the ``ontap-nas-economy`` and the ``ontap-san-economy`` drivers, the
 ``limitVolumeSize`` option will also restrict the maximum size of the volumes
 it manages for qtrees and LUNs.
@@ -203,11 +506,11 @@ tieringPolicy             Tiering policy to use
 ========================= =============================================================== ================================================
 
 Example configurations
-----------------------
+======================
 
 **Example 1 - Minimal backend configuration for ontap drivers**
 
-**NFS Example for ontap-nas driver**
+**NFS Example for ontap-nas driver with auto export policy**
 
 .. code-block:: json
 
@@ -217,6 +520,8 @@ Example configurations
         "managementLIF": "10.0.0.1",
         "dataLIF": "10.0.0.2",
         "svm": "svm_nfs",
+        "autoExportPolicy": true,
+        "autoExportCIDRs": ["10.0.0.0/24"],
         "username": "admin",
         "password": "secret",
         "nfsMountOptions": "nfsvers=4",
@@ -236,7 +541,19 @@ Example configurations
         "password": "secret",
     }
 
+**NFS Example for ontap-nas driver with IPv6**
+
+.. code-block:: json
+
+    {
+        "version": 1,
+        "storageDriverName": "ontap-nas",
+        "backendName": "nas_ipv6_backend",
+        "managementLIF": "[5c5d:5edf:8f:7657:bef8:109b:1b41:d491]",
+        "svm": "nas_ipv6_svm",
+        "username": "vsadmin",
+        "password": "netapp123"
+    }
 
 **NFS Example for ontap-nas-economy driver**
@@ -262,6 +579,7 @@ Example configurations
         "managementLIF": "10.0.0.1",
         "dataLIF": "10.0.0.3",
         "svm": "svm_iscsi",
+        "useCHAP": true,
         "igroupName": "trident",
         "username": "vsadmin",
         "password": "secret"
@@ -276,6 +594,7 @@ Example configurations
         "storageDriverName": "ontap-san-economy",
         "managementLIF": "10.0.0.1",
         "svm": "svm_iscsi_eco",
+        "useCHAP": true,
         "igroupName": "trident",
         "username": "vsadmin",
         "password": "secret"
@@ -482,6 +801,7 @@ pools are defined in the ``storage`` section. In this example, some of the stora
         "managementLIF": "10.0.0.1",
         "dataLIF": "10.0.0.3",
         "svm": "svm_iscsi",
+        "useCHAP": true,
         "igroupName": "trident",
         "username": "vsadmin",
         "password": "secret",
@@ -529,6 +849,7 @@ pools are defined in the ``storage`` section. In this example, some of the stora
        "storageDriverName": "ontap-san-economy",
        "managementLIF": "10.0.0.1",
        "svm": "svm_iscsi_eco",
+       "useCHAP": true,
        "igroupName": "trident",
        "username": "vsadmin",
        "password": "secret",
@@ -620,7 +941,7 @@ Trident will decide which virtual storage pool is selected and will ensure the s
             selector: "creditpoints=5000"
 
 User permissions
-----------------
+================
 
 Trident expects to be run as either an ONTAP or SVM administrator, typically
 using the ``admin`` cluster user or a ``vsadmin`` SVM user, or a user with a
diff --git a/docs/kubernetes/operations/tasks/managing.rst b/docs/kubernetes/operations/tasks/managing.rst
index 330a4064c..a2f9582b0 100644
--- a/docs/kubernetes/operations/tasks/managing.rst
+++ b/docs/kubernetes/operations/tasks/managing.rst
@@ -34,6 +34,61 @@ ServiceMonitor to obtain Trident's metrics.
 Uninstalling Trident
 --------------------
 
+Depending on how Trident is installed, there are multiple ways to uninstall
+Trident.
+
+Uninstalling with the Trident Operator
+**************************************
+
+If you have installed Trident using the :ref:`operator `,
+you can uninstall Trident by either:
+
+1. **Editing the TridentProvisioner to set the uninstall flag:** You can
+   edit the TridentProvisioner and set ``spec.uninstall=true`` to
+   uninstall Trident.
+
+2. **Deleting the TridentProvisioner:** By removing the ``TridentProvisioner``
+   CR that was used to deploy Trident, you instruct the operator to
+   uninstall Trident. The operator processes the removal of the
+   TridentProvisioner and proceeds to remove the Trident deployment and
+   daemonset, deleting the Trident pods it had created on
+   installation.
+
+To uninstall Trident, edit the ``TridentProvisioner`` and set the
+``uninstall`` flag as shown below:
+
+.. code-block:: bash
+
+   $ kubectl patch tprov <trident-provisioner-name> -n trident --type=merge -p '{"spec":{"uninstall":true}}'
+
+When the ``uninstall`` flag is set to ``true``, the Trident Operator
+uninstalls Trident but doesn't remove the TridentProvisioner itself. You
+must clean up the TridentProvisioner and create a new one if you want to
+install Trident again.
+
+To completely remove Trident (including the CRDs it creates) and effectively
+wipe the slate clean, you can edit the ``TridentProvisioner`` to pass the
+``wipeout`` option.
+
+.. warning::
+
+   You must only consider wiping out the CRDs when performing a complete
+   uninstallation. This will completely uninstall Trident and cannot be
+   undone. **Do not wipe out the CRDs unless you are looking to start over
+   and create a fresh Trident install**.
+
+.. code-block:: bash
+
+   $ kubectl patch tprov <trident-provisioner-name> -n trident --type=merge -p '{"spec":{"wipeout":["crds"],"uninstall":true}}'
+
+This will **completely uninstall Trident and clear all metadata related
+to backends and volumes it manages**. Subsequent installations will
+be treated as a fresh install.
+
+Uninstalling with tridentctl
+****************************
+
 The uninstall command in tridentctl will remove all of the resources associated
 with Trident except for the CRDs and related objects, making it easy to run the
 installer again to update to a more recent version.
diff --git a/docs/kubernetes/operations/tasks/worker.rst b/docs/kubernetes/operations/tasks/worker.rst
index ebbcc5143..1cfe17822 100644
--- a/docs/kubernetes/operations/tasks/worker.rst
+++ b/docs/kubernetes/operations/tasks/worker.rst
@@ -10,7 +10,8 @@ your backends, your workers will need the :ref:`NFS` tools. Otherwise they
 require the :ref:`iSCSI` tools.
 
 .. note::
-   Recent versions of CoreOS have both installed by default.
+   Recent versions of RedHat CoreOS have both installed by default. You must ensure
+   that the NFS and iSCSI services are started at boot time.
 
 .. note::
    When using worker nodes that run RHEL/RedHat CoreOS with iSCSI
diff --git a/docs/kubernetes/operator-install.rst b/docs/kubernetes/operator-install.rst
new file mode 100644
index 000000000..83fcd2e06
--- /dev/null
+++ b/docs/kubernetes/operator-install.rst
@@ -0,0 +1,561 @@
+.. _deploying-with-operator:
+
+Deploying with the Trident Operator
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you are looking to deploy Trident using the Trident Operator, you are
+in the right place. This page contains all the steps required for getting
+started with the Trident Operator to install and manage Trident.
+
+.. important::
+
+   The 20.04 release limits the Trident Operator to
+   **greenfield installations only**.
+
+Prerequisites
+=============
+
+If you have not already familiarized yourself with the
+:ref:`basic concepts `, now is a great time to do that. Go
+ahead, we'll be here when you get back.
+
+To deploy Trident using the operator you need:
+
+.. sidebar:: Need Kubernetes?
+
+   If you do not already have a Kubernetes cluster, you can easily create one for
+   demonstration purposes using our
+   :ref:`simple Kubernetes install guide `.
+
+* Full privileges to a
+  :ref:`supported Kubernetes cluster `
+  running Kubernetes ``1.14`` and above
+* Access to a
+  :ref:`supported NetApp storage system `
+* :ref:`Volume mount capability ` from all of the
+  Kubernetes worker nodes
+* A Linux host with ``kubectl`` (or ``oc``, if you're using OpenShift) installed
+  and configured to manage the Kubernetes cluster you want to use
+* The ``KUBECONFIG`` environment variable set to point to your Kubernetes
+  cluster configuration
+* The :ref:`Feature Gates ` required by Trident enabled
+* If you are using Kubernetes with Docker Enterprise, `follow their steps
+  to enable CLI access `_
+
+Got all that? Great! Let's get started.
+
+1: Qualify your Kubernetes cluster
+==================================
+
+You made sure that you have everything in hand from the
+:ref:`previous section `, right? Right.
+
+The first thing you need to do is log into the Linux host and verify that it is
+managing a *working*,
+:ref:`supported Kubernetes cluster ` to which
+you have the necessary privileges.
+
+.. note::
+   With OpenShift, you will use ``oc`` instead of ``kubectl`` in all of the
+   examples that follow, and you need to log in as **system:admin** first by
+   running ``oc login -u system:admin`` or ``oc login -u kube-admin``.
+
+.. code-block:: bash
+
+   # Is your Kubernetes version 1.14 or above?
+   kubectl version
+
+   # Are you a Kubernetes cluster administrator?
+   kubectl auth can-i '*' '*' --all-namespaces
+
+   # Can you launch a pod that uses an image from Docker Hub and can reach your
+   # storage system over the pod network?
+   kubectl run -i --tty ping --image=busybox --restart=Never --rm -- \
+     ping <management IP>
+
+2: Download & set up the operator
+=================================
+
+.. note::
+
+   Using the Trident Operator to install Trident requires creating the
+   ``TridentProvisioner`` Custom Resource Definition and defining other
+   resources. You will need to perform these steps to set up the operator
+   before you can install Trident.
+
+Download the latest version of the `Trident installer bundle`_ from the
+*Downloads* section and extract it.
+
+For example, if the latest version is 20.04.0:
+
+.. code-block:: console
+
+   wget https://github.com/NetApp/trident/releases/download/v20.04.0/trident-installer-20.04.0.tar.gz
+   tar -xf trident-installer-20.04.0.tar.gz
+   cd trident-installer
+
+.. _Trident installer bundle: https://github.com/NetApp/trident/releases/latest
+
+Use the appropriate CRD manifest to create the ``TridentProvisioner`` Custom
+Resource Definition. You will then create a ``TridentProvisioner`` Custom Resource
+later on to have the operator instantiate a Trident install.
+
+.. code-block:: bash
+
+   # Is your Kubernetes version < 1.16?
+   kubectl create -f deploy/crds/trident.netapp.io_tridentprovisioners_crd_pre1.16.yaml
+
+   # If not, your Kubernetes version must be 1.16 and above
+   kubectl create -f deploy/crds/trident.netapp.io_tridentprovisioners_crd_post1.16.yaml
+
+Once the ``TridentProvisioner`` CRD is created, you will then have to create
+the resources required for the operator deployment, such as:
+
+* a ServiceAccount for the operator.
+* a ClusterRole and ClusterRoleBinding to the ServiceAccount.
+* a dedicated PodSecurityPolicy.
+* the operator itself.
+
+The Trident installer contains manifests for defining these resources.
+If you would like to deploy the operator in a namespace other than
+the default ``trident`` namespace, you will need to update the
+``serviceaccount.yaml``, ``clusterrolebinding.yaml`` and ``operator.yaml``
+manifests and generate your ``bundle.yaml``.
+
+.. code-block:: bash
+
+   # Have you updated the yaml manifests? Generate your bundle.yaml
+   # using the kustomization.yaml
+   kubectl kustomize deploy/ > deploy/bundle.yaml
+
+   # Create the resources and deploy the operator
+   kubectl create -f deploy/bundle.yaml
+
+You can check the status of the operator once it has been deployed:
+
+.. code-block:: console
+
+   $ kubectl get deployment -n <namespace>
+   NAME               READY   UP-TO-DATE   AVAILABLE   AGE
+   trident-operator   1/1     1            1           3m
+
+   $ kubectl get pods -n <namespace>
+   NAME                              READY   STATUS    RESTARTS   AGE
+   trident-operator-54cb664d-lnjxh   1/1     Running   0          3m
+
+The operator deployment successfully creates a pod running on one of the
+worker nodes in your cluster.
+
+.. important::
+
+   There must only be **one instance of the operator in a Kubernetes cluster**.
+   **Do not create multiple deployments of the Trident operator**.
+
+3: Creating a TridentProvisioner CR and installing Trident
+==========================================================
+
+You are now ready to install Trident using the operator! This requires
+creating a ``TridentProvisioner`` CR; the Trident installer comes with
+example definitions you can start from.
+
+.. code-block:: console
+
+   $ kubectl create -f deploy/crds/tridentprovisioner_cr.yaml
+   tridentprovisioner.trident.netapp.io/trident created
+
+   $ kubectl get tprov -n trident
+   NAME      AGE
+   trident   5s
+
+   $ kubectl describe tprov trident -n trident
+   Name:         trident
+   Namespace:    trident
+   Labels:       <none>
+   Annotations:  <none>
+   API Version:  trident.netapp.io/v1
+   Kind:         TridentProvisioner
+   ...
+   Spec:
+     Debug:  true
+   Status:
+     Message:  Successfully installed Trident
+     Status:   Installed
+     Version:  v20.04
+   Events:
+     Type    Reason      Age               From                        Message
+     ----    ------      ----              ----                        -------
+     Normal  Installing  25s               trident-operator.netapp.io  Installing Trident
+     Normal  Installed   1s (x4 over 59s)  trident-operator.netapp.io  Successfully installed Trident
+
+Observing the status of the operator
+""""""""""""""""""""""""""""""""""""
+
+The Status of the TridentProvisioner will indicate if the installation
+was successful and will display the version of Trident installed.
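+For scripting, you can also read this field directly off the CR; the
+``jsonpath`` expression below is a sketch that assumes the ``Status``
+layout shown in the ``describe`` output above:
+
+.. code-block:: console
+
+   $ kubectl get tprov trident -n trident -o jsonpath='{.status.status}'
+   Installed
+
+The table below lists the possible values.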
+
++-----------------+--------------------------------------------------------------------------+
+| Status          | Description                                                              |
++=================+==========================================================================+
+| Installing      | The operator is installing Trident using this ``TridentProvisioner`` CR.|
++-----------------+--------------------------------------------------------------------------+
+| Installed       | Trident was installed successfully.                                      |
++-----------------+--------------------------------------------------------------------------+
+| Uninstalling    | The operator is uninstalling Trident, since ``spec.uninstall=true``.    |
++-----------------+--------------------------------------------------------------------------+
+| Uninstalled     | Trident is uninstalled.                                                  |
++-----------------+--------------------------------------------------------------------------+
+| Failed          | The operator could not install, patch, update or uninstall Trident; the |
+|                 | operator will automatically try to recover from this state. If this     |
+|                 | state persists, you will need to troubleshoot.                          |
++-----------------+--------------------------------------------------------------------------+
+| Updating        | The operator is updating an existing Trident installation.              |
++-----------------+--------------------------------------------------------------------------+
+| Error           | The ``TridentProvisioner`` is not used. Another one already exists.     |
++-----------------+--------------------------------------------------------------------------+
+
+During the installation, the status of the ``TridentProvisioner``
+will change from ``Installing`` to ``Installed``. If you observe
+the ``Failed`` status and the operator is unable to recover by
+itself, you should check the logs of the operator by running
+``tridentctl logs -l trident-operator``.
+
+You can also confirm if the Trident install completed
+by taking a look at the pods that have been created:
+
+.. code-block:: console
+
+   $ kubectl get pod -n trident
+   NAME                                READY   STATUS    RESTARTS   AGE
+   trident-csi-7d466bf5c7-v4cpw        5/5     Running   0          1m
+   trident-csi-mr6zc                   2/2     Running   0          1m
+   trident-csi-xrp7w                   2/2     Running   0          1m
+   trident-csi-zh2jt                   2/2     Running   0          1m
+   trident-operator-766f7b8658-ldzsv   1/1     Running   0          3m
+
+You can also use ``tridentctl`` to check the version of Trident installed.
+
+.. code-block:: console
+
+   $ ./tridentctl -n trident version
+   +----------------+----------------+
+   | SERVER VERSION | CLIENT VERSION |
+   +----------------+----------------+
+   | 20.04.0        | 20.04.0        |
+   +----------------+----------------+
+
+If that's what you see, you're done with this step, but **Trident is not
+yet fully configured.** Go ahead and continue to the
+:ref:`next step <4: Creating a Trident backend>` to create
+a Trident backend using ``tridentctl``.
+
+However, if the installer does not complete successfully or you don't see
+a **Running** ``trident-csi-<generated id>``, then Trident had a problem and the
+platform was *not* installed.
+
+To understand why the installation of Trident was unsuccessful, you should
+first take a look at the ``TridentProvisioner`` status.
+
+.. code-block:: console
+
+   $ kubectl describe tprov trident-2 -n trident
+   Name:         trident-2
+   Namespace:    trident
+   Labels:       <none>
+   Annotations:  <none>
+   API Version:  trident.netapp.io/v1
+   Kind:         TridentProvisioner
+   Status:
+     Message:  Trident is bound to another CR 'trident' in the same namespace
+     Status:   Error
+     Version:
+   Events:
+
+This error indicates that there already exists a TridentProvisioner that was
+used to install Trident. Since each Kubernetes cluster can only have one instance
+of Trident, the operator ensures that only one active TridentProvisioner exists
+at any given time.
+
+You should also check the operator logs; trailing the logs of the
+``trident-operator`` container can point to where the problem lies.
+
+.. code-block:: console
+
+   $ tridentctl logs -l trident-operator
+
+For example, one such issue could be the inability to pull the required container
+images from upstream registries in an air-gapped environment. The logs from the
+operator can help you identify and fix this problem.
+
+In addition, observing the status of the Trident pods can often indicate if
+something is not right.
+
+.. code-block:: console
+
+   $ kubectl get pods -n trident
+
+   NAME                                READY   STATUS             RESTARTS   AGE
+   trident-csi-4p5kq                   1/2     ImagePullBackOff   0          5m18s
+   trident-csi-6f45bfd8b6-vfrkw        4/5     ImagePullBackOff   0          5m19s
+   trident-csi-9q5xc                   1/2     ImagePullBackOff   0          5m18s
+   trident-csi-9v95z                   1/2     ImagePullBackOff   0          5m18s
+   trident-operator-766f7b8658-ldzsv   1/1     Running            0          8m17s
+
+You can clearly see that the pods are not able to initialize completely as one
+or more container images were not fetched.
+
+To address the problem, you must edit the TridentProvisioner CR. Alternatively,
+you can delete the TridentProvisioner and create a new one with the modified,
+accurate definition.
+
+If you continue to have trouble, visit the
+:ref:`troubleshooting guide ` for more advice.
+
+Customizing your deployment
+"""""""""""""""""""""""""""
+
+The Trident operator provides users the ability to customize the manner in which
+Trident is installed, using the following attributes in the TridentProvisioner ``spec``:
+
+========================= ======================================================================== ================================================
+Parameter                 Description                                                              Default
+========================= ======================================================================== ================================================
+debug                     Enable debugging for Trident                                             'false'
+useIPv6                   Install Trident over IPv6                                                'false'
+logFormat                 Trident logging format to be used [text,json]                            "text"
+kubeletDir                Path to the kubelet directory on the host                                "/var/lib/kubelet"
+imageRegistry             Path to an internal registry, of the format ``<registry FQDN>[:port]``  "quay.io"
+tridentImage              Trident image to install                                                 "netapp/trident:20.04"
+imagePullSecrets          Secrets to pull images from an internal registry
+uninstall                 A flag used to uninstall Trident                                         'false'
+wipeout                   A list of resources to delete to perform a complete removal of Trident
+========================= ======================================================================== ================================================
+
+You can use the attributes mentioned above when defining a TridentProvisioner to
+customize your Trident installation. Here's an example:
+
+.. code-block:: console
+
+   $ cat deploy/crds/tridentprovisioner_cr_imagepullsecrets.yaml
+   apiVersion: trident.netapp.io/v1
+   kind: TridentProvisioner
+   metadata:
+     name: trident
+     namespace: trident
+   spec:
+     debug: true
+     tridentImage: netapp/trident:20.04.0
+     imagePullSecrets:
+     - thisisasecret
+
+If you are looking to customize Trident's installation beyond what the TridentProvisioner's
+arguments allow, you should consider using ``tridentctl`` to generate custom
+yaml manifests that you can modify as desired. Head on over to the
+:ref:`deployment guide for tridentctl ` to learn
+how this works.
+
+4: Creating a Trident backend
+=============================
+
+You can now go ahead and create a backend that will be used by Trident
+to provision volumes. To do this, create a ``backend.json`` file that
+contains the necessary parameters. Sample configuration files for
+different backend types can be found in the ``sample-input`` directory.
+
+Visit the :ref:`backend configuration guide `
+for more details about how to craft the configuration file for
+your backend type.
+
+.. code-block:: bash
+
+   cp sample-input/<backend template>.json backend.json
+   # Fill out the template for your backend
+   vi backend.json
+
+.. code-block:: console
+
+   ./tridentctl -n trident create backend -f backend.json
+   +-------------+----------------+--------------------------------------+--------+---------+
+   |    NAME     | STORAGE DRIVER |                 UUID                 | STATE  | VOLUMES |
+   +-------------+----------------+--------------------------------------+--------+---------+
+   | nas-backend | ontap-nas      | 98e19b74-aec7-4a3d-8dcf-128e5033b214 | online |       0 |
+   +-------------+----------------+--------------------------------------+--------+---------+
+
+If the creation fails, something was wrong with the backend configuration. You
+can view the logs to determine the cause by running:
+
+.. code-block:: console
+
+   ./tridentctl -n trident logs
+
+After addressing the problem, simply go back to the beginning of this step
+and try again. If you continue to have trouble, visit the
+:ref:`troubleshooting guide ` for more advice on how to
+determine what went wrong.
+
+5: Creating a Storage Class
+===========================
+
+Kubernetes users provision volumes using persistent volume claims (PVCs) that
+specify a `storage class`_ by name. The details are hidden from users, but a
+storage class identifies the provisioner that will be used for that class (in
+this case, Trident) and what that class means to the provisioner.
+
+.. sidebar:: Basic too basic?
+
+   This is just a basic storage class to get you started. There's an art to
+   :ref:`crafting differentiated storage classes `
+   that you should explore further when you're looking at building them for
+   production.
+
+Create a storage class Kubernetes users will specify when they want a volume.
+The configuration of the class needs to model the backend that you created
+in the previous step so that Trident will use it to provision new volumes.
+
+The simplest storage class to start with is one based on the
+``sample-input/storage-class-csi.yaml.templ`` file that comes with the
+installer, replacing ``__BACKEND_TYPE__`` with the storage driver name.
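+Rendered for the ``ontap-nas`` backend created in the previous step, the
+result would look roughly like the sketch below (the shipped template may
+differ slightly; the provisioner name and ``backendType`` parameter match
+the outputs shown further down on this page):
+
+.. code-block:: yaml
+
+   # Sketch of sample-input/storage-class-basic.yaml after substitution
+   apiVersion: storage.k8s.io/v1
+   kind: StorageClass
+   metadata:
+     name: basic
+   provisioner: csi.trident.netapp.io
+   parameters:
+     backendType: "ontap-nas"
+
+To look up the storage driver name for your backend and prepare the file
+from the template: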
+
+.. code-block:: bash
+
+   ./tridentctl -n trident get backend
+   +-------------+----------------+--------------------------------------+--------+---------+
+   |    NAME     | STORAGE DRIVER |                 UUID                 | STATE  | VOLUMES |
+   +-------------+----------------+--------------------------------------+--------+---------+
+   | nas-backend | ontap-nas      | 98e19b74-aec7-4a3d-8dcf-128e5033b214 | online |       0 |
+   +-------------+----------------+--------------------------------------+--------+---------+
+
+   cp sample-input/storage-class-csi.yaml.templ sample-input/storage-class-basic.yaml
+
+   # Modify __BACKEND_TYPE__ with the storage driver field above (e.g., ontap-nas)
+   vi sample-input/storage-class-basic.yaml
+
+This is a Kubernetes object, so you will use ``kubectl`` to create it in
+Kubernetes.
+
+.. code-block:: console
+
+   kubectl create -f sample-input/storage-class-basic.yaml
+
+You should now see a **basic** storage class in both Kubernetes and Trident,
+and Trident should have discovered the pools on the backend.
+
+.. code-block:: console
+
+   kubectl get sc basic
+   NAME    PROVISIONER             AGE
+   basic   csi.trident.netapp.io   15h
+
+   ./tridentctl -n trident get storageclass basic -o json
+   {
+     "items": [
+       {
+         "Config": {
+           "version": "1",
+           "name": "basic",
+           "attributes": {
+             "backendType": "ontap-nas"
+           },
+           "storagePools": null,
+           "additionalStoragePools": null
+         },
+         "storage": {
+           "ontapnas_10.0.0.1": [
+             "aggr1",
+             "aggr2",
+             "aggr3",
+             "aggr4"
+           ]
+         }
+       }
+     ]
+   }
+
+.. _storage class: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses
+
+6: Provision your first volume
+==============================
+
+Now you're ready to dynamically provision your first volume. How exciting! This
+is done by creating a Kubernetes `persistent volume claim`_ (PVC) object, and
+this is exactly how your users will do it too.
+
+.. _persistent volume claim: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+Create a persistent volume claim (PVC) for a volume that uses the storage
+class that you just created.
+
+See ``sample-input/pvc-basic.yaml`` for an example. Make sure the storage
+class name matches the one that you created in the previous step.
+
+.. code-block:: bash
+
+   kubectl create -f sample-input/pvc-basic.yaml
+
+   kubectl get pvc --watch
+   NAME    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+   basic   Pending                                                                        basic          1s
+   basic   Pending   pvc-3acb0d1c-b1ae-11e9-8d9f-5254004dfdb7   0                         basic          5s
+   basic   Bound     pvc-3acb0d1c-b1ae-11e9-8d9f-5254004dfdb7   1Gi        RWO            basic          7s
+
+7: Mount the volume in a pod
+============================
+
+Now that you have a volume, let's mount it. We'll launch an nginx pod that
+mounts the PV under ``/usr/share/nginx/html``.
+
+.. code-block:: bash
+
+   cat << EOF > task-pv-pod.yaml
+   kind: Pod
+   apiVersion: v1
+   metadata:
+     name: task-pv-pod
+   spec:
+     volumes:
+       - name: task-pv-storage
+         persistentVolumeClaim:
+           claimName: basic
+     containers:
+       - name: task-pv-container
+         image: nginx
+         ports:
+           - containerPort: 80
+             name: "http-server"
+         volumeMounts:
+           - mountPath: "/usr/share/nginx/html"
+             name: task-pv-storage
+   EOF
+   kubectl create -f task-pv-pod.yaml
+
+.. code-block:: bash
+
+   # Wait for the pod to start
+   kubectl get pod --watch
+
+   # Verify that the volume is mounted on /usr/share/nginx/html
+   kubectl exec -it task-pv-pod -- df -h /usr/share/nginx/html
+   Filesystem                                                      Size  Used Avail Use% Mounted on
+   10.xx.xx.xx:/trident_pvc_3acb0d1c_b1ae_11e9_8d9f_5254004dfdb7   1.0G  256K  1.0G   1% /usr/share/nginx/html
+
+   # Delete the pod
+   kubectl delete pod task-pv-pod
+
+At this point the pod (application) no longer exists but the volume is still
+there. You could use it from another pod if you wanted to.
+
+To delete the volume, simply delete the claim:
+
+.. code-block:: console
+
+   kubectl delete pvc basic
+
+Where do you go from here? You can do things like:
+
+ * :ref:`Configure additional backends `.
+ * :ref:`Model additional storage classes `.
+ * Review considerations for moving this into production.
diff --git a/docs/kubernetes/tridentctl-install.rst b/docs/kubernetes/tridentctl-install.rst
new file mode 100644
index 000000000..15a533f6a
--- /dev/null
+++ b/docs/kubernetes/tridentctl-install.rst
@@ -0,0 +1,431 @@
+.. _deploying-with-tridentctl:
+
+Deploying with tridentctl
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Welcome to the Deployment Guide for installing Trident using
+``tridentctl``! This page explains the various steps involved
+in deploying Trident in your Kubernetes cluster using ``tridentctl``.
+
+Before you begin
+================
+
+If you have not already familiarized yourself with the
+:ref:`basic concepts `, now is a great time to do that. Go
+ahead, we'll be here when you get back.
+
+To deploy Trident you need:
+
+.. sidebar:: Need Kubernetes?
+
+   If you do not already have a Kubernetes cluster, you can easily create one for
+   demonstration purposes using our
+   :ref:`simple Kubernetes install guide `.
+
+* Full privileges to a
+  :ref:`supported Kubernetes cluster `
+* Access to a
+  :ref:`supported NetApp storage system `
+* :ref:`Volume mount capability ` from all of the
+  Kubernetes worker nodes
+* A Linux host with ``kubectl`` (or ``oc``, if you're using OpenShift) installed
+  and configured to manage the Kubernetes cluster you want to use
+* The ``KUBECONFIG`` environment variable set to point to your Kubernetes
+  cluster configuration
+* The :ref:`Feature Gates ` required by Trident enabled
+* If you are using Kubernetes with Docker Enterprise, `follow their steps
+  to enable CLI access `_
+
+Got all that? Great! Let's get started.
+
+1: Qualify your Kubernetes cluster
+==================================
+
+You made sure that you have everything in hand from the
+:ref:`previous section `, right? Right.
+
+The first thing you need to do is log into the Linux host and verify that it is
+managing a *working*,
+:ref:`supported Kubernetes cluster ` to which
+you have the necessary privileges.
+
+.. note::
+   With OpenShift, you will use ``oc`` instead of ``kubectl`` in all of the
+   examples that follow, and you need to log in as **system:admin** first by
+   running ``oc login -u system:admin`` or ``oc login -u kube-admin``.
+
+.. code-block:: bash
+
+   # Are you running a supported Kubernetes server version?
+   kubectl version
+
+   # Are you a Kubernetes cluster administrator?
+   kubectl auth can-i '*' '*' --all-namespaces
+
+   # Can you launch a pod that uses an image from Docker Hub and can reach your
+   # storage system over the pod network?
+   kubectl run -i --tty ping --image=busybox --restart=Never --rm -- \
+     ping <management IP>
+
+Identify your Kubernetes server version. You will be using it when you
+:ref:`Install Trident <3: Install Trident>`.
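+For example, a quick way to check is shown below (a sketch; the versions
+in the output are illustrative and yours will differ):
+
+.. code-block:: console
+
+   $ kubectl version --short
+   Client Version: v1.17.4
+   Server Version: v1.17.4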
+
+2: Download & extract the installer
+===================================
+
+.. note::
+   Trident's installer is responsible for creating a Trident pod, configuring
+   the CRD objects that are used to maintain its state, and initializing the
+   CSI sidecars that perform actions such as provisioning and attaching
+   volumes to the cluster hosts.
+
+Download the latest version of the `Trident installer bundle`_ from the
+*Downloads* section and extract it.
+
+For example, if the latest version is 20.04.0:
+
+.. code-block:: console
+
+   wget https://github.com/NetApp/trident/releases/download/v20.04.0/trident-installer-20.04.0.tar.gz
+   tar -xf trident-installer-20.04.0.tar.gz
+   cd trident-installer
+
+.. _Trident installer bundle: https://github.com/NetApp/trident/releases/latest
+
+3: Install Trident
+==================
+
+Install Trident in the desired namespace by executing the
+:ref:`tridentctl install ` command. The installation procedure
+differs slightly depending on the version of Kubernetes being used.
+
+Installing Trident on Kubernetes 1.13
+-------------------------------------
+
+On Kubernetes ``1.13``, there are a couple of options when installing Trident:
+
+- Install Trident in the desired namespace by executing the
+  ``tridentctl install`` command with the ``--csi`` flag. The CSI interface is
+  `first included in Kubernetes 1.13 `_
+  and requires activating :ref:`feature gates `.
+  The output observed when installing will be similar to that shown
+  :ref:`below `.
+
+- If for some reason the :ref:`feature gates ` required by Trident
+  cannot be enabled, you can install Trident without the ``--csi`` flag. This will
+  configure Trident to work in its traditional format without using the CSI
+  specification.
+
+Installing Trident on Kubernetes 1.14 and above
+-----------------------------------------------
+
+Install Trident in the desired namespace by executing the
+``tridentctl install`` command.
+
+.. code-block:: console
+
+   $ ./tridentctl install -n trident
+   ....
+   INFO Starting Trident installation.                namespace=trident
+   INFO Created service account.
+   INFO Created cluster role.
+   INFO Created cluster role binding.
+   INFO Added finalizers to custom resource definitions.
+   INFO Created Trident service.
+   INFO Created Trident secret.
+   INFO Created Trident deployment.
+   INFO Created Trident daemonset.
+   INFO Waiting for Trident pod to start.
+   INFO Trident pod started.                          namespace=trident pod=trident-csi-679648bd45-cv2mx
+   INFO Waiting for Trident REST interface.
+   INFO Trident REST interface is up.                 version=20.04.0
+   INFO Trident installation succeeded.
+   ....
+
+It will look like this when the installer is complete. Depending on
+the number of nodes in your Kubernetes cluster, you may observe more pods:
+
+.. code-block:: console
+
+   $ kubectl get pod -n trident
+   NAME                           READY   STATUS    RESTARTS   AGE
+   trident-csi-679648bd45-cv2mx   4/4     Running   0          5m29s
+   trident-csi-vgc8n              2/2     Running   0          5m29s
+
+   $ ./tridentctl -n trident version
+   +----------------+----------------+
+   | SERVER VERSION | CLIENT VERSION |
+   +----------------+----------------+
+   | 20.04.0        | 20.04.0        |
+   +----------------+----------------+
+
+If that's what you see, you're done with this step, but **Trident is not
+yet fully configured.** Go ahead and continue to the next step.
+
+However, if the installer does not complete successfully or you don't see
+a **Running** ``trident-csi-<generated id>``, then Trident had a problem and the
+platform was *not* installed.
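+A quick look at the pods in the namespace will confirm this; the output
+below is purely illustrative (your pod names and states will differ):
+
+.. code-block:: console
+
+   $ kubectl get pod -n trident
+   NAME                           READY   STATUS             RESTARTS   AGE
+   trident-csi-679648bd45-cv2mx   3/4     ImagePullBackOff   0          2m15s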
+
+To help figure out what went wrong, you could run the installer again using the ``-d`` argument,
+which will turn on debug mode and help you understand what the problem is:
+
+.. code-block:: console
+
+   ./tridentctl install -n trident -d
+
+After addressing the problem, you can clean up the installation and go back to
+the beginning of this step by first running:
+
+.. code-block:: console
+
+   ./tridentctl uninstall -n trident
+   INFO Deleted Trident deployment.
+   INFO Deleted cluster role binding.
+   INFO Deleted cluster role.
+   INFO Deleted service account.
+   INFO Removed Trident user from security context constraint.
+   INFO Trident uninstallation succeeded.
+
+If you continue to have trouble, visit the
+:ref:`troubleshooting guide ` for more advice.
+
+Customized Installation
+-----------------------
+
+Trident's installer allows you to customize attributes. For example, if you have
+copied the Trident image to a private repository, you can specify the image name by using
+``--trident-image``. If you have copied the Trident image as well as the needed CSI
+sidecar images to a private repository, it may be preferable to specify the location
+of that repository by using the ``--image-registry`` switch, which takes the form
+``<registry FQDN>[:port]``.
+
+If you are using a distribution of Kubernetes where kubelet keeps its data on a path
+other than the usual ``/var/lib/kubelet``, you can specify the alternate path by using
+``--kubelet-dir``.
+
+As a last resort, if you need to customize Trident's installation beyond what the
+installer's arguments allow, you can also customize Trident's deployment files. Using
+the ``--generate-custom-yaml`` parameter will create the following YAML files in the
+installer's ``setup`` directory:
+
+- trident-clusterrolebinding.yaml
+- trident-deployment.yaml
+- trident-crds.yaml
+- trident-clusterrole.yaml
+- trident-daemonset.yaml
+- trident-service.yaml
+- trident-namespace.yaml
+- trident-serviceaccount.yaml
+
+Once you have generated these files, you can modify them according to your needs and
+then use the ``--use-custom-yaml`` flag to install your custom deployment of Trident.
+
+.. code-block:: console
+
+   ./tridentctl install -n trident --use-custom-yaml
+
+4: Create and Verify your first backend
+=======================================
+
+You can now go ahead and create a backend that will be used by Trident
+to provision volumes. To do this, create a ``backend.json`` file that
+contains the necessary parameters. Sample configuration files for
+different backend types can be found in the ``sample-input`` directory.
+
+Visit the :ref:`backend configuration guide `
+for more details about how to craft the configuration file for
+your backend type.
+
+.. code-block:: bash
+
+   cp sample-input/<backend template>.json backend.json
+   # Fill out the template for your backend
+   vi backend.json
+
+.. code-block:: console
+
+   ./tridentctl -n trident create backend -f backend.json
+   +-------------+----------------+--------------------------------------+--------+---------+
+   |    NAME     | STORAGE DRIVER |                 UUID                 | STATE  | VOLUMES |
+   +-------------+----------------+--------------------------------------+--------+---------+
+   | nas-backend | ontap-nas      | 98e19b74-aec7-4a3d-8dcf-128e5033b214 | online |       0 |
+   +-------------+----------------+--------------------------------------+--------+---------+
+
+If the creation fails, something was wrong with the backend configuration. You
+can view the logs to determine the cause by running:
+
+.. code-block:: console
+
+   ./tridentctl -n trident logs
+
+After addressing the problem, simply go back to the beginning of this step
+and try again. If you continue to have trouble, visit the
+:ref:`troubleshooting guide ` for more advice on how to
+determine what went wrong.
+
+5: Add your first storage class
+===============================
+
+Kubernetes users provision volumes using persistent volume claims (PVCs) that
+specify a `storage class`_ by name. The details are hidden from users, but a
+storage class identifies the provisioner that will be used for that class (in
+this case, Trident) and what that class means to the provisioner.
+
+.. sidebar:: Basic too basic?
+
+   This is just a basic storage class to get you started. There's an art to
+   :ref:`crafting differentiated storage classes `
+   that you should explore further when you're looking at building them for
+   production.
+
+Create a storage class Kubernetes users will specify when they want a volume.
+The configuration of the class needs to model the backend that you created
+in the previous step so that Trident will use it to provision new volumes.
+
+The simplest storage class to start with is one based on the
+``sample-input/storage-class-csi.yaml.templ`` file that comes with the
+installer, replacing ``__BACKEND_TYPE__`` with the storage driver name.
+
+.. code-block:: bash
+
+   ./tridentctl -n trident get backend
+   +-------------+----------------+--------------------------------------+--------+---------+
+   |    NAME     | STORAGE DRIVER |                 UUID                 | STATE  | VOLUMES |
+   +-------------+----------------+--------------------------------------+--------+---------+
+   | nas-backend | ontap-nas      | 98e19b74-aec7-4a3d-8dcf-128e5033b214 | online |       0 |
+   +-------------+----------------+--------------------------------------+--------+---------+
+
+   cp sample-input/storage-class-csi.yaml.templ sample-input/storage-class-basic.yaml
+
+   # Modify __BACKEND_TYPE__ with the storage driver field above (e.g., ontap-nas)
+   vi sample-input/storage-class-basic.yaml
+
+This is a Kubernetes object, so you will use ``kubectl`` to create it in
+Kubernetes.
+
+.. code-block:: console
+
+   kubectl create -f sample-input/storage-class-basic.yaml
+
+You should now see a **basic** storage class in both Kubernetes and Trident,
+and Trident should have discovered the pools on the backend.
+
+.. code-block:: console
+
+   kubectl get sc basic
+   NAME    PROVISIONER             AGE
+   basic   csi.trident.netapp.io   15h
+
+   ./tridentctl -n trident get storageclass basic -o json
+   {
+     "items": [
+       {
+         "Config": {
+           "version": "1",
+           "name": "basic",
+           "attributes": {
+             "backendType": "ontap-nas"
+           },
+           "storagePools": null,
+           "additionalStoragePools": null
+         },
+         "storage": {
+           "ontapnas_10.0.0.1": [
+             "aggr1",
+             "aggr2",
+             "aggr3",
+             "aggr4"
+           ]
+         }
+       }
+     ]
+   }
+
+.. _storage class: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses
+
+6: Provision your first volume
+==============================
+
+Now you're ready to dynamically provision your first volume. How exciting! This
+is done by creating a Kubernetes `persistent volume claim`_ (PVC) object, and
+this is exactly how your users will do it too.
+
+.. _persistent volume claim: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
+
+Create a persistent volume claim (PVC) for a volume that uses the storage
+class that you just created.
+
+See ``sample-input/pvc-basic.yaml`` for an example. Make sure the storage
+class name matches the one that you created in the previous step.
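+A minimal claim along these lines would do (a sketch; the sample file is the
+canonical version, and the sizes match the watch output shown below):
+
+.. code-block:: yaml
+
+   # Sketch of a 1Gi RWO claim against the basic storage class
+   kind: PersistentVolumeClaim
+   apiVersion: v1
+   metadata:
+     name: basic
+   spec:
+     accessModes:
+       - ReadWriteOnce
+     resources:
+       requests:
+         storage: 1Gi
+     storageClassName: basic
+
+Create the claim and watch it get bound: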
+
+.. code-block:: bash
+
+   kubectl create -f sample-input/pvc-basic.yaml
+
+   kubectl get pvc --watch
+   NAME    STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+   basic   Pending                                                                        basic          1s
+   basic   Pending   pvc-3acb0d1c-b1ae-11e9-8d9f-5254004dfdb7   0                         basic          5s
+   basic   Bound     pvc-3acb0d1c-b1ae-11e9-8d9f-5254004dfdb7   1Gi        RWO            basic          7s
+
+7: Mount the volume in a pod
+============================
+
+Now that you have a volume, let's mount it. We'll launch an nginx pod that
+mounts the PV under ``/usr/share/nginx/html``.
+
+.. code-block:: bash
+
+   cat << EOF > task-pv-pod.yaml
+   kind: Pod
+   apiVersion: v1
+   metadata:
+     name: task-pv-pod
+   spec:
+     volumes:
+       - name: task-pv-storage
+         persistentVolumeClaim:
+           claimName: basic
+     containers:
+       - name: task-pv-container
+         image: nginx
+         ports:
+           - containerPort: 80
+             name: "http-server"
+         volumeMounts:
+           - mountPath: "/usr/share/nginx/html"
+             name: task-pv-storage
+   EOF
+   kubectl create -f task-pv-pod.yaml
+
+.. code-block:: bash
+
+   # Wait for the pod to start
+   kubectl get pod --watch
+
+   # Verify that the volume is mounted on /usr/share/nginx/html
+   kubectl exec -it task-pv-pod -- df -h /usr/share/nginx/html
+   Filesystem                                                      Size  Used Avail Use% Mounted on
+   10.xx.xx.xx:/trident_pvc_3acb0d1c_b1ae_11e9_8d9f_5254004dfdb7   1.0G  256K  1.0G   1% /usr/share/nginx/html
+
+   # Delete the pod
+   kubectl delete pod task-pv-pod
+
+At this point the pod (application) no longer exists but the volume is still
+there. You could use it from another pod if you wanted to.
+
+To delete the volume, simply delete the claim:
+
+.. code-block:: console
+
+   kubectl delete pvc basic
+
+**Check you out! You did it!** Now you're dynamically provisioning
+Kubernetes volumes like a boss.
+
+Where do you go from here? You can do things like:
+
+ * :ref:`Configure additional backends `.
+ * :ref:`Model additional storage classes `.
+ * Review considerations for moving this into production.
diff --git a/docs/kubernetes/upgrading.rst b/docs/kubernetes/upgrading.rst
index 1f71b655f..263d3135c 100644
--- a/docs/kubernetes/upgrading.rst
+++ b/docs/kubernetes/upgrading.rst
@@ -13,13 +13,13 @@ Initiate the upgrade
 
 .. note::
 
-   Before upgrading Trident, ensure that the required :ref:`feature gates `
+   Before upgrading Trident, ensure that the required :ref:`feature gates `
    are enabled.
 
 .. warning::
 
-   Trident 20.01 only supports the beta feature release of Volume Snapshots. When upgrading
-   to Trident 20.01, all previous alpha snapshot CRs and CRDs (Volume Snapshot Classes,
+   Trident only supports the beta feature release of Volume Snapshots. When upgrading
+   Trident, all previous alpha snapshot CRs and CRDs (Volume Snapshot Classes,
    Volume Snapshots and Volume Snapshot Contents) must be removed before the upgrade is
    performed. Refer to `this blog `_ to understand the steps
    involved in migrating alpha snapshots to the beta spec.
@@ -51,15 +51,13 @@ On Kubernetes ``1.13``, there are a couple of options when upgrading Trident:
 
 - Install Trident in the desired namespace by executing the
   ``tridentctl install`` command with the ``--csi`` flag. This configures Trident
-  to function as an enhanced CSI provisioner and is the preferred way to upgrade if using
-  ``1.13``.
+  to function as an enhanced CSI provisioner. This will require enabling
+  some :ref:`feature gates `.
 
-- If for some reason the :ref:`feature gates ` required by Trident
+- If for some reason the :ref:`feature gates ` required by Trident
   cannot be enabled, you can install Trident without the ``--csi`` flag. This will
  configure Trident to work in its traditional format without using the CSI
-  specification. Keep in mind that new features introduced by Trident, such as
-  :ref:`On-Demand Volume Snapshots ` will not be available
-  in this installation mode.
+  specification.
 
 Upgrading Trident on Kubernetes 1.14 and above
 ----------------------------------------------
diff --git a/docs/reference/tridentctl.rst b/docs/reference/tridentctl.rst
index 9060fb1ce..1411e9e38 100644
--- a/docs/reference/tridentctl.rst
+++ b/docs/reference/tridentctl.rst
@@ -62,6 +62,7 @@ Remove one or more resources from Trident
 
 Available Commands:
   backend      Delete one or more storage backends from Trident
+  node         Delete one or more csi nodes from Trident
   snapshot     Delete one or more volume snapshots from Trident
   storageclass Delete one or more storage classes from Trident
   volume       Delete one or more storage volumes from Trident
@@ -138,7 +139,7 @@ Print the logs from Trident
 Flags:
   -a, --archive       Create a support archive with all logs unless otherwise specified.
   -h, --help          help for logs
-  -l, --log string    Trident log to display. One of trident|auto|all (default "auto")
+  -l, --log string    Trident log to display. One of trident|operator|auto|all (default "auto")
       --node string   The kubernetes node name to gather node pod logs from.
   -p, --previous      Get the logs for the previous container instance if it exists.
       --sidecars      Get the logs for the sidecar containers as well.
@@ -190,5 +191,9 @@ Print the version of tridentctl and the running Trident service
 
 .. code-block:: console
 
-  Usage:
-    tridentctl version
+  Usage:
+    tridentctl version [flags]
+
+  Flags:
+        --client   Client version only (no server required).
+    -h, --help     help for version
diff --git a/docs/support/requirements.rst b/docs/support/requirements.rst
index 1f0dfe353..61c16b17d 100644
--- a/docs/support/requirements.rst
+++ b/docs/support/requirements.rst
@@ -7,10 +7,15 @@ Supported frontends (orchestrators)
 
 Trident supports multiple container engines and orchestrators, including:
 
-* Kubernetes 1.11 or later (latest: 1.17)
+* Kubernetes 1.11 or later (latest: 1.18)
 * OpenShift 3.11, 4.2 and 4.3
 * Docker Enterprise 2.1 or 3.0
 
+The Trident Operator is supported with these releases:
+
+* Kubernetes 1.14 or later (latest: 1.18)
+* OpenShift 4.2 and 4.3
+
 In addition, Trident should work with any distribution of Docker or
 Kubernetes that uses one of the supported versions as a base, such as Rancher
 or Tectonic.
@@ -27,30 +32,30 @@ To use Trident, you need one or more of the following supported backends:
 * Cloud Volumes Service for AWS
 * Cloud Volumes Service for GCP
 
-Feature Gates
-=============
+Feature Requirements
+====================
 
 Trident requires some feature gates to be enabled for certain features to
 work. Refer to the table shown below to determine if you need to enable
 feature gates, based on your version of Trident and Kubernetes.
-================================ =============== ========================== - Feature Trident version Kubernetes version -================================ =============== ========================== -CSI Trident 19.07 and above 1.13\ :sup:`1` and above -Volume Snapshots (beta) 20.01 and above 1.17 and above -PVC from Volume Snapshots (beta) 20.01 and above 1.17 and above -iSCSI PV resize 19.10 and above 1.16 and above -================================ =============== ========================== +================================ =============== ========================== =============================== + Feature Trident version Kubernetes version Feature Gates Required? +================================ =============== ========================== =============================== +CSI Trident 19.07 and above 1.13\ :sup:`1` and above Yes for ``1.13``\ :sup:`1` +Volume Snapshots (beta) 20.01 and above 1.17 and above No +PVC from Volume Snapshots (beta) 20.01 and above 1.17 and above No +iSCSI PV resize 19.10 and above 1.16 and above No +ONTAP Bidirectional CHAP 20.04 and above 1.11 and above No +Dynamic Export Policies 20.04 and above 1.13\ :sup:`1` and above Requires CSI Trident\ :sup:`1` +Trident Operator 20.04 and above 1.14 and above No +================================ =============== ========================== =============================== | Footnote: | `1`: Requires enabling ``CSIDriverRegistry`` and ``CSINodeInfo`` for Kubernetes 1.13. Install CSI Trident on Kubernetes 1.13 using the ``--csi`` switch when invoking ``tridentctl install``. -.. note:: - All features mentioned in the table above require CSI Trident. - Check with your Kubernetes vendor to determine the appropriate procedure for enabling feature gates. diff --git a/trident-installer/sample-input/backend-aws-cvs.json b/trident-installer/sample-input/backend-aws-cvs.json index 310b2790b..75e79b5bb 100644 --- a/trident-installer/sample-input/backend-aws-cvs.json +++ b/trident-installer/sample-input/backend-aws-cvs.json @@ -2,7 +2,7 @@ "version": 1, "storageDriverName": "aws-cvs", "apiRegion": "us-east-1", - "apiURL": "https://cds-aws-bundles.netapp.com:8080/v1 for us-east-1", + "apiURL": "https://cds-aws-bundles.netapp.com:8080/v1", "apiKey": "znHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE", "secretKey": "rR0rUmWXfNioN1KhtHisiSAnoTherboGuskey6pU" }