From b7d4a037fe251a62fbc7542737e3e90eb3934e84 Mon Sep 17 00:00:00 2001 From: Allda Date: Mon, 11 Nov 2024 13:53:46 +0000 Subject: [PATCH] deploy: 5977b6134d918faf4663ffc207999996e94b3e90 --- search/search_index.json | 2 +- users/operator-ci-yaml/index.html | 60 ++++++++++++++++++++++++++++--- 2 files changed, 56 insertions(+), 6 deletions(-) diff --git a/search/search_index.json b/search/search_index.json index b5846696..4cd4c6f8 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Openshift Operators","text":""},{"location":"#about-this-repository","title":"About this repository","text":"

This repo is the canonical source for Kubernetes Operators that appear on OpenShift Container Platform and OKD.

NOTE The index catalogs:

are built from this repository and consumed by OpenShift and OKD to create their sources and build their catalogs. To learn more about how OpenShift catalogs are built, see the documentation.

See our documentation to find out more about Community, Certified, and Marketplace operators and how to contribute.

"},{"location":"#add-your-operator","title":"Add your Operator","text":"

We would love to see your Operator added to this collection. We currently use automated vetting via continuous integration plus manual review to curate a list of high-quality, well-documented Operators. If you are new to Kubernetes Operators, start here.

If you have an existing Operator, read our contribution guidelines on how to open a PR. The community operator pipeline will then be triggered to test your Operator and merge the Pull Request.

"},{"location":"#contributing-guide","title":"Contributing Guide","text":""},{"location":"#test-and-release-process-for-the-operator","title":"Test and release process for the Operator","text":"

Refer to the operator pipeline documentation.

"},{"location":"#important-notice","title":"IMPORTANT NOTICE","text":"

Some API versions are deprecated and either are, or soon will be, no longer served on Kubernetes 1.22/1.25/1.26 and consequently on vendors like OpenShift 4.9/4.12/4.13.

What does it mean for you?

Operator bundle versions using the removed APIs cannot work successfully from the respective releases onward. It is therefore recommended to check whether your solutions fail in these scenarios and to stop distributing the affected versions, OR to set "olm.properties": '[{"type": "olm.maxOpenShiftVersion", "value": "<OCP version>"}]' to block cluster admins from upgrading when they have Operator versions installed that cannot work in OCP versions higher than the value informed. Also, define a valid OCP range via the com.redhat.openshift.versions annotation in metadata/annotations.yaml so that your solution does not end up shipped on OCP/OKD versions where it cannot be installed.
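
For illustration, a sketch of the annotation approach (the version range below is a placeholder only; metadata/annotations.yaml normally holds a top-level annotations: block, so the appended line is indented to land inside it):

cat << EOF >> metadata/annotations.yaml\n  # Example range only: ship this bundle to OCP 4.10 through 4.12\n  com.redhat.openshift.versions: \"v4.10-v4.12\"\nEOF\n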

WARNING: olm.maxOpenShiftVersion should ONLY be used if you are 100% sure that your Operator bundle version cannot work in later releases. Otherwise, you might provide a bad user experience. Be aware that cluster admins will be unable to upgrade their clusters while your solution is installed. Suppose you then do not provide a newer version with a valid upgrade path, so that those who have your Operator installed can upgrade it and consequently be allowed to upgrade their cluster version (i.e. from OCP 4.10 to 4.11). In that case, cluster admins might choose to uninstall your Operator and no longer use it so that they can move forward and upgrade their cluster version without it.

Please make sure you check the following announcements: - How to deal with the removal of v1beta1 CRDs in Kubernetes 1.22 / OpenShift 4.9 - Kubernetes API removals in 1.25/1.26 and OpenShift 4.12/4.13 might impact your Operator. How to deal with it?

"},{"location":"#reporting-bugs","title":"Reporting Bugs","text":"

Use the issue tracker in this repository to report bugs.

"},{"location":"ci-cd/","title":"CI/CD","text":"

This project uses GitHub Actions and Ansible for CI (tests, linters) and CD (deployment).

"},{"location":"ci-cd/#secrets","title":"Secrets","text":"

Both deployment and integration tests need GitHub secrets to work properly. The following secrets should be kept in the repository:

Secret name | Secret value | Purpose
----------- | ------------ | -------
VAULT_PASSWORD | Password to the preprod Ansible Vault stored in the repository | Deployment of preprod and integration test environments
VAULT_PASSWORD_PROD | Password to the prod Ansible Vault stored in the repository | Deployment of the production environment
REGISTRY_USERNAME | Username for authentication to the container registry | Building images
REGISTRY_PASSWORD | Password for authentication to the container registry | Building images
GITHUB_TOKEN | GitHub authentication token | Creation of GitHub tags and releases
"},{"location":"ci-cd/#run-order","title":"Run order","text":""},{"location":"ci-cd/#integration-tests","title":"Integration tests","text":""},{"location":"ci-cd/#when-do-they-run","title":"When do they run?","text":"

Integration tests are a stage of the CI/CD that runs in only two cases: - On merge to main, before deployment - On demand (a manual action, by clicking \"run workflow\" in the GitHub UI)

"},{"location":"ci-cd/#running-integration-tests","title":"Running integration tests","text":"

The orchestration of the integration tests is handled by Ansible. A couple of dependencies must be installed to get started:
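
A minimal sketch of installing them, assuming Ansible plus the Kubernetes collection are the dependencies in question:

# Assumed dependency set - adjust to the list documented in the repository\npython3 -m pip install ansible kubernetes\nansible-galaxy collection install kubernetes.core\n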

To execute the integration tests in a custom environment:

ansible-pull \\\n  -U \"https://github.com/redhat-openshift-ecosystem/operator-pipelines.git\" \\\n  -i \"ansible/inventory/operator-pipeline-integration-tests\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/operator-pipeline-integration-tests.yml\n

To manually run the integration tests from a local environment, the prerequisites are: - being logged in to an OC cluster - NAMESPACE exported (a new namespace is created if it does not exist; watch out for duplicates, as this can override existing projects) - an SSH key set up in the GitHub account of the local user (Ansible uses SSH to clone/manipulate repositories) - the Python dependencies (mentioned above) installed globally

ansible-playbook -v \\\n  -i \"ansible/inventory/operator-pipeline-integration-tests\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/operator-pipeline-integration-tests.yml\n

Tags can be used to run select portions of the playbook. For example, the test resources will be cleaned up at the end of every run. Skipping the clean tag will leave the resources behind for debugging.

ansible-pull \\\n  --skip-tags clean \\\n  -U \"https://github.com/redhat-openshift-ecosystem/operator-pipelines.git\" \\\n  -i \"ansible/inventory/operator-pipeline-integration-tests\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/operator-pipeline-integration-tests.yml\n

It may be necessary to provide your own project and bundle to test certain aspects of the pipelines. This can be accomplished with the addition of a few extra vars (and proper configuration of the project).

ansible-pull \\\n  -U \"https://github.com/redhat-openshift-ecosystem/operator-pipelines.git\" \\\n  -i \"ansible/inventory/operator-pipeline-integration-tests\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  -e \"src_operator_git_branch=$SRC_BRANCH\" \\\n  -e \"src_operator_bundle_version=$SRC_VERSION\" \\\n  -e \"operator_package_name=$PACKAGE_NAME\" \\\n  -e \"operator_bundle_version=$NEW_VERSION\" \\\n  -e \"ci_pipeline_pyxis_api_key=$API_KEY\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/operator-pipeline-integration-tests.yml\n
"},{"location":"cluster-config/","title":"Cluster Configuration","text":"

All OpenShift clusters should share a common configuration for our pipelines. There are cluster-wide resources which require modification, such as the TektonConfig. But there is also a custom EventListener which reports PipelineRun events to Slack and a pipeline that uploads the metrics of other pipelines for monitoring purposes. This configuration must be applied manually for now.

To apply these cluster-wide configurations, run the Ansible playbook. To only apply the cluster-wide resources, the following command will suffice.

ansible-playbook \\\n    -i inventory/clusters \\\n    -e \"clusters={INSERT ANSIBLE HOST LIST}\" \\\n    -e \"ocp_token={INSERT TOKEN}\" \\\n    -e \"k8s_validate_certs={yes|no}\" \\\n    --vault-password-file \"{INSERT FILE}\" \\\n    playbooks/config-ocp-cluster.yml\n

If you want to deploy the metrics pipeline, add --tags metrics to the above command. To deploy the Chat Webhook, add --tags chat. If you wish to deploy both, add --tags metrics,chat.
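
For example, a run that deploys both extras on top of the cluster-wide resources might look like this (same placeholders as above):

ansible-playbook \\\n    -i inventory/clusters \\\n    -e \"clusters={INSERT ANSIBLE HOST LIST}\" \\\n    -e \"ocp_token={INSERT TOKEN}\" \\\n    -e \"k8s_validate_certs={yes|no}\" \\\n    --vault-password-file \"{INSERT FILE}\" \\\n    --tags metrics,chat \\\n    playbooks/config-ocp-cluster.yml\n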

"},{"location":"developer-guide/","title":"Developer Guide","text":""},{"location":"developer-guide/#workflow","title":"Workflow","text":"
  1. Run through the setup at least once.
  2. Make changes to the pipeline image, if desired.
  3. Make changes to the Tekton pipelines and/or tasks, if desired.
  4. Test all impacted pipelines.
  5. Override the pipeline image as necessary.
  6. Submit a pull request with your changes.
"},{"location":"developer-guide/#setup","title":"Setup","text":"
  1. Git leaks detection
  2. Prepare a development environment
  3. Prepare a certification project
  4. Prepare an Operator bundle
  5. Prepare your ci.yaml
  6. Create a bundle pull request (optional)
     Required for testing hosted or release pipelines
  7. Create an API key (optional)
     Required for testing submission with the CI pipeline
  8. Prepare the CI to run from your fork (optional)
     Required to run integration testing on forks of this repo.
"},{"location":"developer-guide/#git-leaks-detection","title":"Git leaks detection","text":"

Since the repository contains secret information in the form of an encrypted Ansible Vault, there is a high chance that a developer may push a commit with decrypted secrets by mistake. To avoid this problem we recommend using the Gitleaks tool, which prevents you from committing secrets into the git history.

The repository is already pre-configured, but each developer has to make the final config changes in their own environment.

Follow the documentation to configure Gitleaks on your computer.
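
As a quick local check, a sketch assuming a Gitleaks v8 release:

# scan the repository history for leaked secrets\ngitleaks detect --source . --verbose\n\n# scan only staged changes before committing\ngitleaks protect --staged --verbose\n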

"},{"location":"developer-guide/#prepare-a-development-environment","title":"Prepare a Development Environment","text":"

You may use any OpenShift 4.7+ cluster (including CodeReady Containers).

The hosted and release pipelines require a considerable number of dependencies which are tedious to configure manually. Luckily these steps have been automated and can be executed by anyone with access to the Ansible vault password.

Before running this you should ensure you're logged into the correct OpenShift cluster using oc. If already logged into the OpenShift console, an oc login command can be obtained by clicking on your username in the upper right corner and selecting copy login command.

ansible-playbook -v \\\n  -i \"ansible/inventory/operator-pipeline-$ENV\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  -e \"ocp_token=`oc whoami -t`\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/deploy.yml\n

:warning: Conflicts may occur if the project already contains some resources. They may need to be removed first.

Cleanup can be performed by specifying the absent state for some of the resources.

ansible-playbook -v \\\n  -i \"ansible/inventory/operator-pipeline-$ENV\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  -e \"ocp_token=`oc whoami -t`\" \\\n  -e \"namespace_state=absent\" \\\n  -e \"github_webhook_state=absent\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/deploy.yml\n
"},{"location":"developer-guide/#integration-tests","title":"Integration tests","text":"

See the integration tests section in ci-cd.md.

"},{"location":"developer-guide/#install-tkn","title":"Install tkn","text":"

You should install the tkn CLI version that corresponds to the version of the cluster you're using.
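
One way to get it, assuming Homebrew is available (the same package manager used later in this guide); otherwise grab a binary from the tektoncd/cli releases page:

brew install tektoncd-cli\ntkn version\n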

"},{"location":"developer-guide/#using-codeready-containers","title":"Using CodeReady Containers","text":"

It's possible to deploy and test the pipelines from a CodeReady Containers (CRC) cluster for development/testing purposes.

  1. Install CodeReady Containers

  2. Install OpenShift Pipelines

  3. Login to your cluster with oc CLI.

    You can run crc console --credentials to get the admin login command.

  4. Create a test project in your cluster

    oc new-project playground

  5. Grant the privileged SCC to the default pipeline service account.

    The buildah task requires the privileged security context constraint in order to call newuidmap/newgidmap. This is only necessary because runAsUser:0 is defined in templates/crc-pod-template.yml.

    oc adm policy add-scc-to-user privileged -z pipeline

"},{"location":"developer-guide/#running-a-pipeline-with-crc","title":"Running a Pipeline with CRC","text":"

It may be necessary to pass the following tkn CLI arg to avoid permission issues with the default CRC PersistentVolumes.

--pod-template templates/crc-pod-template.yml\n
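
For instance, a pipeline start might then look like the sketch below; the pipeline name and parameter are placeholders for whichever pipeline and values you are running:

tkn pipeline start operator-ci-pipeline \\\n  --param git_repo_url=<your-fork-url> \\\n  --pod-template templates/crc-pod-template.yml \\\n  --showlog\n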
"},{"location":"developer-guide/#prepare-a-certification-project","title":"Prepare a Certification Project","text":"

A certification project is required for executing all pipelines. In order to avoid collisions with other developers, it's best to create a new one in the corresponding Pyxis environment.

The pipelines depend on the following certification project fields:

{\n  \"project_status\": \"active\",\n  \"type\": \"Containers\",\n\n  // Arbitrary name for the project - can be almost anything\n  \"name\": \"<insert-project-name>\",\n\n  /*\n   Either \"connect\", \"marketplace\" or \"undistributed\".\n   This maps to the `organization` field in the bundle submission repo's config.yaml.\n     connect -> certified-operators\n     marketplace -> redhat-marketplace\n     undistributed -> certified-operators (certified against, but not distributed to)\n  */\n  \"operator_distribution\": \"<insert-distribution>\",\n\n  // Must correspond to a containerVendor record with the same org_id value.\n  \"org_id\": <insert-org-id>,\n\n  \"container\": {\n    \"type\": \"operator bundle image\",\n\n    \"build_catagories\":\"Operator bundle\",\n\n    // Required but always \"rhcc\"\n    \"distribution_method\": \"rhcc\",\n\n    // Always set to true to satisfy the publishing checklist\n    \"distribution_approval\": true,\n\n    // Must match the github user(s) which opened the test pull requests\n    \"github_usernames\": [\"<insert-github-username>\"],\n\n    // Must be unique for the vendor\n    \"repository_name\": \"<insert-repo-name>\"\n  }\n}\n
"},{"location":"developer-guide/#prepare-an-operator-bundle","title":"Prepare an Operator Bundle","text":"

You can use a utility script to copy an existing bundle. By default it will copy a bundle that should avoid common failure conditions such as digest pinning. View all the customization options by passing -h to this command.

./scripts/copy-bundle.sh\n

You may wish to tweak the generated output to influence the behavior of the pipelines. For example, Red Hat Marketplace Operator bundles may require additional annotations. The pipeline should provide sufficient error messages to indicate what is missing. If such errors are unclear, that is likely a bug which should be fixed.

"},{"location":"developer-guide/#prepare-your-ciyaml","title":"Prepare Your ci.yaml","text":"

At the root of your operator package directory (note: not the bundle version directory) there needs to be a ci.yaml file. For development purposes, it should follow this format in most cases.

---\n# Copy this value from the _id field of the certification project in Pyxis.\ncert_project_id: <pyxis-cert-project-id>\n# Set this to true to allow the hosted pipeline to merge pull requests.\nmerge: false\n
"},{"location":"developer-guide/#create-a-bundle-pull-request","title":"Create a Bundle Pull Request","text":"

It's recommended to open bundle pull requests against the operator-pipelines-test repo. The pipeline GitHub bot account has permissions to manage it.

Note: This repository is only configured for testing certified operators, NOT Red Hat Marketplace operators (see config.yaml).

# Checkout the pipelines test repo\ngit clone https://github.com/redhat-openshift-ecosystem/operator-pipelines-test\ncd operator-pipelines-test\n\n# Create a new branch\ngit checkout -b <insert-branch-name>\n\n# Copy your package directory\ncp -R <package-dir>/ operators/\n\n# Commit changes\ngit add -A\n\n# Use this commit pattern so it defaults to the pull request title.\n# This is critical to the success of the pipelines.\ngit commit -m \"operator <operator-package-name> (<bundle-version>)\"\n\n# Push your branch. Open the pull request using the output.\ngit push origin <insert-branch-name>\n

Note: You may need to merge the pull request to use it for testing the release pipeline.

"},{"location":"developer-guide/#create-an-api-key","title":"Create an API Key","text":"

File a ticket with Pyxis admins to assist with this request. It must correspond to the org_id for the certification project under test.

"},{"location":"developer-guide/#making-changes-to-the-pipelines","title":"Making Changes to the Pipelines","text":""},{"location":"developer-guide/#guiding-principles","title":"Guiding Principles","text":""},{"location":"developer-guide/#applying-pipeline-changes","title":"Applying Pipeline Changes","text":"

You can use the following command to apply all local changes to your OCP project. It will add all the Tekton resources used across all the pipelines.

oc apply -R -f ansible/roles/operator-pipeline/templates/openshift\n
"},{"location":"developer-guide/#making-changes-to-the-pipeline-image","title":"Making Changes to the Pipeline Image","text":""},{"location":"developer-guide/#dependency","title":"Dependency","text":"

The Operator pipelines project is configured to automatically manage Python dependencies using the PDM tool. PDM automates the definition, installation, upgrades, and the whole lifecycle of dependencies in a project. All dependencies are stored in the pyproject.toml file, in groups that correspond to the individual applications within the Operator pipelines project.

Adding, removing, and updating dependencies always needs to be done using the pdm CLI.

pdm add -G operator-pipelines gunicorn==20.1.0\n

After a dependency is installed, it is added to the pdm.lock file. The lock file is always part of the git repository.

If you want to install a specific group of dependencies, use the following command:

pdm install -G operator-pipelines\n

Dependencies are installed into a virtual environment (.venv), which is automatically created by pdm install. If .venv wasn't created, configure pdm to create it automatically during installation with pdm config python.use_venv true.
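
Put together, recovering a missing virtual environment looks like this (all three commands appear elsewhere in this guide):

pdm config python.use_venv true\npdm install\nsource .venv/bin/activate\n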

"},{"location":"developer-guide/#run-unit-tests-code-style-checkers-etc","title":"Run Unit Tests, Code Style Checkers, etc.","text":"

Before running the tests locally, the environment needs to be prepared. Choose the preparation process according to your Linux version.

"},{"location":"developer-guide/#preparation-on-rpm-based-linux","title":"Preparation on RPM-based Linux","text":"
sudo dnf -y install hadolint\npython3 -m pip install pdm\npdm venv create 3.12\npdm install\nsource .venv/bin/activate\npython3 -m pip install ansible-lint\n
"},{"location":"developer-guide/#preparation-on-other-linux-systems","title":"Preparation on other Linux systems","text":"

Before starting, make sure you have installed the Homebrew (brew) package manager.

brew install hadolint\npython3 -m pip install pdm\npdm venv create 3.12\npdm install\nsource .venv/bin/activate\npython3 -m pip install ansible-lint\n
"},{"location":"developer-guide/#run-the-local-tests","title":"Run the local tests","text":"

To run unit tests and code style checkers:

tox\n
"},{"location":"developer-guide/#local-development","title":"Local development","text":"

Set up a Python virtual environment using pdm.

pdm venv create 3.12\npdm install\nsource .venv/bin/activate\n
"},{"location":"developer-guide/#build-push","title":"Build & Push","text":"
  1. Ensure you have buildah installed

  2. Build the image

    buildah bud

  3. Push the image to a remote registry, e.g. Quay.io.

    buildah push <image-digest-from-build-step> <remote-repository>

    This step may require a login, e.g.

    buildah login quay.io

"},{"location":"index-signature-verification/","title":"Index Signature Verification","text":"

This repository contains a special Tekton pipeline for checking the signature status of the production index images. For now, it is only intended to be deployed manually on a single cluster. The pipeline is regularly scheduled via a CronJob and runs to completion without sending a direct notification upon success or failure. Instead, it relies on other resources to handle reporting.

The pipeline should be deployed using Ansible.

ansible-playbook \\\n    -i inventory/clusters \\\n    -e \"clusters={INSERT ANSIBLE HOST LIST}\" \\\n    -e \"ocp_token={INSERT TOKEN}\" \\\n    -e \"k8s_validate_certs={yes|no}\" \\\n    --vault-password-file \"{INSERT FILE}\" \\\n    playbooks/deploy-index-signature-verification.yml\n
"},{"location":"ocp-namespace-config/","title":"OpenShift namespaces configuration","text":"

Operator pipelines are deployed and run in OpenShift Dedicated clusters. The deployment of all resources, including pipelines, tasks, secrets, and others, is managed using Ansible playbooks. In order to be able to run the Ansible automation, an initial setup of the OpenShift namespaces needs to be executed. This process is also automated and requires access to a cluster with cluster-admin privileges.

To initially create and configure namespaces for each environment, use the following script:

cd ansible\n# Store Ansible vault password in ./vault-password\necho $VAULT_PASSWD > ./vault-password\n\n# Login to a cluster using oc\noc login --token=$TOKEN --server=$OCP_SERVER\n\n# Trigger an automation\n./init.sh stage\n

This command triggers Ansible, which automates the creation of the OCP namespace for the given environment and stores the admin service account token in the vault.

"},{"location":"pipeline-admin-guide/","title":"Operator pipeline admin guide","text":"

This document aims to provide information needed for maintenance and troubleshooting of operator pipelines.

"},{"location":"pipeline-admin-guide/#operator-repositories","title":"Operator repositories","text":"

Pre-production repositories are used for all pre-prod environments (stage, dev, qa). Each environment has a dedicated git branch. By selecting a target branch you can select an environment where the operator will be tested.

"},{"location":"pipeline-admin-guide/#ocp-environments","title":"OCP environments","text":""},{"location":"pipeline-admin-guide/#pipelines","title":"Pipelines","text":"

Testing and certification of OpenShift operators from ISV and Community sources is handled by OpenShift Pipelines (Tekton).

"},{"location":"pipeline-admin-guide/#isv-pipelines","title":"ISV pipelines","text":""},{"location":"pipeline-admin-guide/#community-pipelines","title":"Community pipelines","text":""},{"location":"pipeline-admin-guide/#troubleshooting","title":"Troubleshooting","text":""},{"location":"pipeline-admin-guide/#pipeline-states","title":"Pipeline states","text":"

After an operator is submitted to any of the repositories mentioned above, an operator pipeline kicks in. The current state of the pipeline is indicated by PR labels. Right after a pipeline starts, an operator-hosted-pipeline/started label is added. Based on the result of the pipeline, one of the following labels is added and the */started label is removed: - operator-hosted-pipeline/passed - operator-hosted-pipeline/failed

If the hosted pipeline finishes successfully and the PR has been approved, the pipeline merges the PR. The merge event is the trigger for the release pipeline. The release pipeline also applies labels based on the current pipeline status: - operator-release-pipeline/started - operator-release-pipeline/passed - operator-release-pipeline/failed

In the best-case scenario, at the end of the process a PR should have both the hosted and release */passed labels.

"},{"location":"pipeline-admin-guide/#re-trigger-mechanism","title":"Re-trigger mechanism","text":"

In case of a pipeline failure, a user or repository owner can re-trigger the pipeline using PR labels. Since labels can't be set by external contributors, a pipeline can also be re-triggered using PR comments. The re-trigger mechanism allows a user to re-trigger a pipeline only when the previous run ended up in a failed state.

The pipeline summary provides a description of the failure and a hint of how to re-trigger the pipeline.

The command that re-triggers a pipeline has the following format:

/pipeline restart <pipeline name>

Depending on which pipeline failed, a matching restart command can be used to re-trigger it.
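
Concretely, the two commands (the same ones listed in the command table of the contribution guide) are:

/pipeline restart operator-hosted-pipeline\n\n/pipeline restart operator-release-pipeline\n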

After a pipeline is re-triggered using the command, a few labels will be added to and removed from the PR. First, a new label, pipeline/trigger-hosted or pipeline/trigger-release, is added. This label kicks off the pipeline, and the pipeline itself then starts adding labels based on its status.

A script called bulk-retrigger is provided in the operator-pipeline container image to help re-trigger a pipeline on multiple PRs: it takes the repository name and a CSV file containing a list of PRs to process, and automates the re-triggering of the pipeline one PR at a time. See its help text for details on how to run it.

"},{"location":"pipeline-admin-guide/#pipeline-logs","title":"Pipeline logs","text":"

Pipelines interact with users through the GitHub pull request interface. There are slight differences between the ISV and community repositories, but the overall concept is the same.

At the end of a pipeline run, the pipeline submits a summary comment with basic pipeline metrics and an overview of the individual tasks.

The community pipeline also directly attaches a link to a GitHub Gist with the pipeline logs. The ISV pipeline uploads logs and artifacts to Pyxis, and the logs are available to partners through Red Hat Connect.

"},{"location":"pipeline-admin-guide/#skip-tests","title":"Skip tests","text":"

In certain corner cases there is a real need to skip a subset of tests and force a pipeline to pass even though not all checks are green. This is usually initiated by an exception request submitted by ISV or community members. Once an exception is reviewed and approved, the pipeline has a mechanism to skip the selected tests.

To skip a static or dynamic test, a repository administrator needs to apply a PR label in the following format:

tests/skip/<name of the test>

So, for example, in case an operator can't be installed with default settings and requires a special environment, we can skip DeployableByOLM by adding the tests/skip/DeployableByOLM label to the PR.
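
One hedged way for an administrator to apply such a label is the GitHub CLI (assuming gh is installed and authenticated; the PR number is a placeholder):

gh pr edit <pr-number> --add-label \"tests/skip/DeployableByOLM\"\n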

"},{"location":"pipeline-env-setup/","title":"Pipeline Environment Setup","text":"

Common for all the pipelines

Only CI Pipeline

Only Hosted Pipeline

Only Release Pipeline

"},{"location":"pipeline-env-setup/#common-for-all-the-pipelines","title":"Common for all the pipelines:","text":""},{"location":"pipeline-env-setup/#red-hat-catalog-imagestreams","title":"Red Hat Catalog Imagestreams","text":"

The pipelines must pull the parent index images through the internal OpenShift registry to take advantage of the built-in credentials for Red Hat's terms-based registry (registry.redhat.io). This saves the user from needing to provide such credentials. The index generation task will always pull published index images through imagestreams of the same name in the current namespace. As a result, there is a one-time configuration for each desired distribution catalog. Replace the --from argument when configuring this for pre-production environments.

# Must be run once before certifying against the certified catalog.\noc --request-timeout 10m import-image certified-operator-index \\\n  --from=registry.redhat.io/redhat/certified-operator-index \\\n  --reference-policy local \\\n  --scheduled \\\n  --confirm \\\n  --all\n\n# Must be run once before certifying against the Red Hat Marketplace catalog.\noc --request-timeout 10m import-image redhat-marketplace-index \\\n  --from=registry.redhat.io/redhat/redhat-marketplace-index \\\n  --reference-policy local \\\n  --scheduled \\\n  --confirm \\\n  --all\n
"},{"location":"pipeline-env-setup/#only-ci-pipeline","title":"Only CI pipeline:","text":""},{"location":"pipeline-env-setup/#registry-credentials","title":"Registry Credentials","text":"

The CI pipeline can optionally be configured to push and pull images to/from a remote private registry. The user must create an auth secret containing the docker config. This secret can then be passed as a workspace named registry-credentials when invoking the pipeline.

oc create secret generic registry-dockerconfig-secret \\\n  --type kubernetes.io/dockerconfigjson \\\n  --from-file .dockerconfigjson=config.json\n
"},{"location":"pipeline-env-setup/#git-ssh-secret","title":"Git SSH Secret","text":"

The pipelines require git SSH credentials with write access to the repository if automatic digest pinning is enabled using the pin_digests param. This is disabled by default. Before executing the pipeline, the user must create a secret in the same namespace as the pipeline.

To create the secret run the following commands (substituting your key):

cat << EOF > ssh-secret.yml\nkind: Secret\napiVersion: v1\nmetadata:\n  name: github-ssh-credentials\ndata:\n  id_rsa: |\n    < PRIVATE SSH KEY >\nEOF\n\noc create -f ssh-secret.yml\n
"},{"location":"pipeline-env-setup/#container-api-access","title":"Container API access","text":"

CI pipelines automatically upload test results, logs, and artifacts using the Red Hat container API. This requires a partner's API key, and the key needs to be created as a secret in the OpenShift cluster before running a Tekton pipeline.

oc create secret generic pyxis-api-secret --from-literal pyxis_api_key=< API KEY >\n
"},{"location":"pipeline-env-setup/#kubeconfig","title":"Kubeconfig","text":"

The CI pipeline requires a kubeconfig with admin credentials. This can be created by logging into said cluster as an admin user.

KUBECONFIG=kubeconfig oc login -u <username> -p <password>\noc create secret generic kubeconfig --from-file=kubeconfig=kubeconfig\n
"},{"location":"pipeline-env-setup/#github-api-token","title":"GitHub API token","text":"

To automatically open the PR with a submission, the pipeline must authenticate to GitHub. A secret containing an API token should be created.

oc create secret generic github-api-token --from-literal GITHUB_TOKEN=< GITHUB TOKEN >\n
"},{"location":"pipeline-env-setup/#only-hosted-pipeline","title":"Only Hosted pipeline:","text":""},{"location":"pipeline-env-setup/#registry-credentials_1","title":"Registry Credentials","text":"

The hosted pipeline requires credentials to push/pull bundle and index images from a pre-release registry (quay.io). A registry auth secret must be created. This secret can then be passed as a workspace named registry-credentials when invoking the pipeline.

oc create secret generic hosted-pipeline-registry-auth-secret \\\n  --type kubernetes.io/dockerconfigjson \\\n  --from-file .dockerconfigjson=config.json\n
"},{"location":"pipeline-env-setup/#container-api-access_1","title":"Container API access","text":"

The hosted pipeline communicates with the internal Container API, which requires a cert + key. The corresponding secret needs to be created before running the pipeline.

oc create secret generic operator-pipeline-api-certs \\\n  --from-file operator-pipeline.pem \\\n  --from-file operator-pipeline.key\n
"},{"location":"pipeline-env-setup/#hydra-credentials","title":"Hydra credentials","text":"

To verify the publishing checklist, the hosted pipeline uses the Hydra API. To authenticate with Hydra over basic auth, a secret containing service account credentials should be created.

oc create secret generic hydra-credentials \\\n  --from-literal username=<username>  \\\n  --from-literal password=<password>\n
"},{"location":"pipeline-env-setup/#github-bot-token","title":"GitHub Bot token","text":"

To automatically merge the PR, the hosted pipeline uses the GitHub API. To authenticate when using this method, a secret containing a bot token should be created.

oc create secret generic github-bot-token --from-literal github_bot_token=< BOT TOKEN >\n
"},{"location":"pipeline-env-setup/#prow-kubeconfig","title":"Prow-kubeconfig","text":"

Hosted preflight tests are run on a separate cluster. To provision a cluster destined for the tests, the pipeline uses a Prowjob. Thus, to start the preflight test, there needs to be a Prow-specific kubeconfig.

oc create secret generic prow-kubeconfig \\\n  --from-literal kubeconfig=<kubeconfig>\n
"},{"location":"pipeline-env-setup/#preflight-decryption-key","title":"Preflight decryption key","text":"

Results of the preflight tests are protected by encryption. In order to retrieve them from the preflight job, a GPG decryption key should be supplied.

oc create secret generic preflight-decryption-key \\\n  --from-literal private=<private gpg key> \\\n  --from-literal public=<public gpg key>\n
"},{"location":"pipeline-env-setup/#quay-oauth-token","title":"Quay OAuth Token","text":"

A Quay OAuth token is required to set repo visibility to public.

oc create secret generic quay-oauth-token --from-literal token=<token>\n
"},{"location":"pipeline-env-setup/#only-release-pipeline","title":"Only Release pipeline:","text":""},{"location":"pipeline-env-setup/#registry-credentials_2","title":"Registry Credentials","text":"

The release pipeline requires credentials to push and pull the bundle image built by the hosted pipeline. Three registry auth secrets must be specified since different credentials may be required for the same registry when copying and serving the image. These secrets can then be passed as workspaces named registry-pull-credentials, registry-push-credentials and registry-serve-credentials when invoking the pipeline.

oc create secret generic release-pipeline-registry-auth-pull-secret \\\n  --type kubernetes.io/dockerconfigjson \\\n  --from-file .dockerconfigjson=pull-config.json\n\noc create secret generic release-pipeline-registry-auth-push-secret \\\n  --type kubernetes.io/dockerconfigjson \\\n  --from-file .dockerconfigjson=push-config.json\n\noc create secret generic release-pipeline-registry-auth-serve-secret \\\n  --type kubernetes.io/dockerconfigjson \\\n  --from-file .dockerconfigjson=serve-config.json\n
"},{"location":"pipeline-env-setup/#kerberos-credentials","title":"Kerberos credentials","text":"

For submitting the IIB build, you need a Kerberos keytab in a secret:

oc create secret generic kerberos-keytab \\\n  --from-file krb5.keytab\n
"},{"location":"pipeline-env-setup/#quay-credentials","title":"Quay credentials","text":"

The release pipeline uses Quay credentials to authenticate the push to an index image during the IIB build.

oc create secret generic iib-quay-credentials \\\n  --from-literal username=<QUAY_USERNAME> \\\n  --from-literal password=<QUAY_PASSWORD>\n
"},{"location":"pipeline-env-setup/#ocp-registry-kubeconfig","title":"OCP-registry-kubeconfig","text":"

OCP clusters contain the public registries for Operator bundle images. To publish an image to this registry, the pipeline connects to the OCP cluster via kubeconfig. To create the secret which contains the OCP cluster kubeconfig:

oc create secret generic ocp-registry-kubeconfig \\\n  --from-literal kubeconfig=<kubeconfig>\n

Additional setup instructions for this cluster are documented here.

"},{"location":"preflight-invalidation/","title":"Preflight invalidation CronJob","text":"

The repository contains a CronJob that updates enabled preflight versions weekly. https://issues.redhat.com/browse/ISV-4964

After changes, the CronJob can be deployed using Ansible.

ansible-playbook \\\n    -i ansible/inventory/clusters \\\n    -e \"clusters=prod-cluster\" \\\n    -e \"ocp_token=[TOKEN]\" \\\n    -e \"env=prod\" \\\n    --vault-password-file [PWD_FILE] \\\n    playbooks/preflight-invalidation.yml\n
"},{"location":"release-and-rollback/","title":"Release Schedule","text":"

Every Monday and Wednesday, except for hotfixes.

"},{"location":"release-and-rollback/#hotfixes","title":"Hotfixes","text":"

Hotfixes are defined as changes that need to be quickly deployed to prod, outside of the regular release schedule, to address major issues that occur in prod. Hotfixes should still follow the release criteria and process, and should be announced on the team chat so that the rest of the team is aware.

"},{"location":"release-and-rollback/#release-criteria","title":"Release Criteria","text":""},{"location":"release-and-rollback/#release-process","title":"Release Process","text":"

Before deployments occur, a new container image will be built, tagged with \"latest\" and the associated git commit SHA, and pushed to quay.io.

Dev and QA deployments happen automatically via GitHub Actions every time a change is merged into the main branch. The commit SHA is passed along to identify the container image used by the pipelines as part of the deployments.

During a scheduled release or hotfix, stage and prod deployment will only happen by manually triggering the \u201cdeploy-stage\u201d and \u201cdeploy-prod\u201d Github Actions respectively. In a scheduled release, changes that were previously deployed to dev and qa will be promoted to stage, and changes that were previously deployed to stage will be promoted to prod. The last container image used in dev and qa (identified by the git commit sha tag) will also be promoted to be used in the stage pipeline, while the container image last used in stage will be used in the prod pipeline.

"},{"location":"release-and-rollback/#rollback-process","title":"Rollback Process","text":""},{"location":"release-and-rollback/#short-term-rollbacks","title":"Short term rollbacks","text":"

For short term rollbacks: Re-run deployment from a previous stable release. Since the container image is identified by the git commit sha, re-running a previous deployment will also roll back the container image that\u2019s used to a previous one.

"},{"location":"release-and-rollback/#longer-term-rollbacks","title":"Longer term rollbacks","text":"

Revert commit(s) that need to be rolled back, then follow the regular release process to deploy.

"},{"location":"users/best-practices/","title":"Operator Best Practices","text":"

Check the Best Practices sections of the OLM and SDK projects to learn more about their best practices and common recommendations, suggestions, and conventions:

"},{"location":"users/contributing-prerequisites/","title":"Before submitting your Operator","text":"

Important: \"First off, thanks for taking the time to contribute your Operator!\"

"},{"location":"users/contributing-prerequisites/#a-primer-to-openshift-community-operators","title":"A primer to Openshift Community Operators","text":"

This project collects Community Operators that work with OpenShift to be displayed in the embedded OperatorHub. If you are new to Operators, start here.

"},{"location":"users/contributing-prerequisites/#sign-your-work","title":"Sign Your Work","text":"

The contribution process works off standard git Pull Requests. Every PR needs to be signed. The sign-off is a simple line at the end of the explanation for a commit. Your signature certifies that you wrote the patch or otherwise have the right to contribute the material. The rules are pretty simple, if you can certify the below (from developercertificate.org):

Developer Certificate of Origin\nVersion 1.1\n\nCopyright (C) 2004, 2006 The Linux Foundation and its contributors.\n1 Letterman Drive\nSuite D4700\nSan Francisco, CA, 94129\n\nEveryone is permitted to copy and distribute verbatim copies of this\nlicense document, but changing it is not allowed.\n\n\nDeveloper's Certificate of Origin 1.1\n\nBy making a contribution to this project, I certify that:\n\n(a) The contribution was created in whole or in part by me and I\n    have the right to submit it under the open source license\n    indicated in the file; or\n\n(b) The contribution is based upon previous work that, to the best\n    of my knowledge, is covered under an appropriate open source\n    license and I have the right under that license to submit that\n    work with modifications, whether created in whole or in part\n    by me, under the same open source license (unless I am\n    permitted to submit under a different license), as indicated\n    in the file; or\n\n(c) The contribution was provided directly to me by some other\n    person who certified (a), (b) or (c) and I have not modified\n    it.\n\n(d) I understand and agree that this project and the contribution\n    are public and that a record of the contribution (including all\n    personal information I submit with it, including my sign-off) is\n    maintained indefinitely and may be redistributed consistent with\n    this project or the open source license(s) involved.\n

Then you just add a line to every git commit message:

Signed-off-by: John Doe <john.doe@example.com>\n

Use your real name (sorry, no pseudonyms or anonymous contributions).

If you set your user.name and user.email git configs, you can sign your commit automatically with git commit -s.
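
Putting that together, using the same example identity shown in this section:

git config user.name \"John Doe\"\ngit config user.email \"john.doe@example.com\"\n\n# -s appends the Signed-off-by trailer automatically\ngit commit -s -m \"Update README\"\n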

Note: If your git config information is set properly then viewing the git log information for your commit will look something like this:

Author: John Doe <john.doe@example.com>\nDate:   Mon Oct 21 12:23:17 2019 -0800\n\n    Update README\n\n    Signed-off-by: John Doe <john.doe@example.com>\n

Notice the Author and Signed-off-by lines must match.

"},{"location":"users/contributing-via-pr/","title":"Submitting your Operator via Pull Requests (PR)","text":""},{"location":"users/contributing-via-pr/#overview","title":"Overview","text":"

To submit an operator, one has to follow these steps:

  1. Fork project based on desired Operator Repository
  2. Place the operator in the target directory. More info
    • operators
  3. Configure ci.yaml file. More info
    • Setup reviewers
    • Enable FBC mode
  4. Make a pull request with a new operator bundle or catalog changes
  5. Verify tests and fix problems, if possible
  6. Ask for help in the PR in case of problems
"},{"location":"users/contributing-via-pr/#pull-request","title":"Pull request","text":"

When a pull request is created, a number of tests are executed via the community hosted pipeline. One can see the results in the comment section of the Conversation tab.

"},{"location":"users/contributing-via-pr/#you-are-done","title":"You are done","text":"

You are done when all tests are green. When the PR is merged, the community release pipeline will be triggered.

"},{"location":"users/contributing-via-pr/#test-results-failed","title":"Test results failed?","text":"

When operator tests fail, one can see the following picture:

In case of failures, please have a look at the logs of specific tests. If an error is not clear to you, please ask in the PR. Maintainers will be happy to help you with it.

"},{"location":"users/contributing-via-pr/#useful-commands-interacting-with-the-pipeline","title":"Useful commands interacting with the pipeline","text":"

You can post the following comment/command:

Command | Functionality
------- | -------------
/pipeline restart operator-hosted-pipeline | The hosted pipeline will be re-triggered and the PR will be merged if possible. The command only works if the previous pipeline failed.
/pipeline restart operator-release-pipeline | The release pipeline will be re-triggered. The command only works if the previous pipeline failed.
/test skip {test_case_name} | The test_case_name test will be skipped. Please note that only a subset of tests (currently only the pruned graph test) can be skipped.
"},{"location":"users/contributing-where-to/","title":"Where to contribute","text":"

Once you have forked the upstream repo, you will need to add your Operator Bundle to the forked repo. The forked repo will have a directory structure similar to the one outlined below.

\u251c\u2500\u2500 config.yaml\n\u251c\u2500\u2500 operators\n\u2502   \u2514\u2500\u2500 new-operator\n\u2502       \u251c\u2500\u2500 0.0.102\n\u2502       \u2502   \u251c\u2500\u2500 manifests\n\u2502       \u2502   \u2502   \u251c\u2500\u2500 new-operator.clusterserviceversion.yaml\n\u2502       \u2502   \u2502   \u251c\u2500\u2500 new-operator-controller-manager-metrics-service_v1_service.yaml\n\u2502       \u2502   \u2502   \u251c\u2500\u2500 new-operator-manager-config_v1_configmap.yaml\n\u2502       \u2502   \u2502   \u251c\u2500\u2500 new-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 tools.opdev.io_demoresources.yaml\n\u2502       \u2502   \u251c\u2500\u2500 metadata\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 annotations.yaml\n\u2502       \u2502   \u2514\u2500\u2500 tests\n\u2502       \u2502       \u2514\u2500\u2500 scorecard\n\u2502       \u2502           \u2514\u2500\u2500 config.yaml\n\u2502       \u251c\u2500\u2500 catalog-templates\n\u2502       \u2502   \u251c\u2500\u2500 v4.14.yaml\n\u2502       \u2502   \u251c\u2500\u2500 v4.15.yaml\n\u2502       \u2502   \u2514\u2500\u2500 v4.16.yaml\n\u2502       \u251c\u2500\u2500 ci.yaml\n\u2502       \u2514\u2500\u2500 Makefile\n\u2514\u2500\u2500 README.md\n

Follow the operators directory in the forked repo. Add your Operator Bundle under this operators directory following the example format.

  1. Under the operators directory, create a new directory with the name of your operator.
  2. Inside of this newly created directory add your ci.yaml and set its content based on doc.
  3. Also, under the new directory create a subdirectory for each version of your Operator.
  4. In each version directory there should be a manifests/ directory containing your OpenShift yaml files, a metadata/ directory containing your annotations.yaml file, and a tests/ directory containing the required config.yaml file for the preflight tests.
  5. Create a catalog-templates/ directory under the operator directory and add a yaml file for each OpenShift version you want to support. The yaml file should contain the catalog template for the operator. More information on how to create the catalog template can be found here.
  6. Download the template Makefile from here and place it in the root of the operator directory.

Note To learn more about preflight tests please follow this link.

For partners and ISVs, certified operators can now be submitted via connect.redhat.com. If you have submitted your Operator there already, please ensure your submission here uses a different package name (refer to the README for more details).

"},{"location":"users/dynamic_checks/","title":"Dynamic checks","text":"

The preflight tests are designed to verify the operator bundle content, the format of the operator bundle, and whether the bundle can be installed on an OCP cluster.

The result link for the logs of the preflight test runs will be posted to the PR as shown below.

In case of failures, please have a look at the logs of specific tests. If an error is not clear to you, please ask in the PR. Maintainers will be happy to help you with it.

Once all of the tests have passed successfully, the PR will be merged automatically, provided the conditions checked by the operator-hosted-pipeline are met.

The PR will not merge automatically in the following cases:

In any of the above cases, the PR needs to be reviewed by the repository maintainers or authorized reviewers. After approval, the PR needs to be merged manually. Once the PR is merged, the operator-release-pipeline will be triggered automatically.

NOTE: The operator hosted pipeline run results will be posted in a GitHub PR comment.

"},{"location":"users/fbc_onboarding/","title":"File Based Catalog onboarding","text":"

Note: The File Based Catalog support is now in an alpha phase. We welcome any feedback you have for this new feature.

Operators in certified, marketplace, or community repositories are defined in a declarative way. This means a user provides all necessary information in advance about the operator bundle and how it should be released in a catalog, and the OPM automation injects the bundle into the correct place in the upgrade path.

This is, however, a very limited solution that doesn't allow any further modification of upgrade paths after a bundle is released. Due to this limitation, the concept of FBC (File-Based Catalog) is now available and allows users to modify the operator upgrade path in a separate step, without the need to release a new bundle.

To enable FBC for a given operator, the operator owner needs to convert the existing operator into the FBC format.

We want to help with this process, so we have prepared tooling that assists with the transition.

"},{"location":"users/fbc_onboarding/#convert-existing-operator-to-fbc","title":"Convert existing operator to FBC","text":"

As a prerequisite to this process, you need to download a Makefile that automates the migration process.

An initial system requirement is to have the following dependencies installed: - podman - make

# Go to the operator repo directory (certified-operators, marketplace-operators, community-operators-prod)\ncd <operator-repo>/operators/<operator-name>\nwget https://raw.githubusercontent.com/redhat-openshift-ecosystem/operator-pipelines/main/fbc/Makefile\n

Now we can convert the existing operator into FBC. The initial run takes a while because a local cache is generated during the run.

Note A user executing the conversion script needs to be authenticated to the registries used by the OLM catalog. Use podman login to log in to all registries. The conversion script assumes you have $(XDG_RUNTIME_DIR)/containers/auth.json or ~/.docker/config.json present with valid registry tokens.
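
For example, to authenticate against the registries referenced in this guide:

podman login registry.redhat.io\npodman login quay.io\n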

To convert an existing operator to the FBC format, execute the following command:

$ make fbc-onboarding\n\n2024-04-24 15:53:05,537 [operator-cert] INFO Generating FBC templates for the following versions: ['4.12', '4.13', '4.14', '4.15', '4.16']\n2024-04-24 15:53:07,632 [operator-cert] INFO Processing catalog: v4.12\n2024-04-24 15:53:07,633 [operator-cert] DEBUG Building cache for registry.redhat.io/redhat/community-operator-index:v4.12\n...\n

[!IMPORTANT] In case an operator isn't shipped to all OCP catalog versions, manually update the OCP_VERSIONS variable in the Makefile and include only the versions supported by the operator.

The Makefile will execute the following steps:

After the script finishes, you should see a template and the generated FBC in the repository.

$ tree operators/aqua\n\noperators/aqua\n\u251c\u2500\u2500 0.0.1\n...\n\u251c\u2500\u2500 catalog-templates\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 v4.12.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 v4.13.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 v4.14.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 v4.15.yaml\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 v4.16.yaml\n\u251c\u2500\u2500 ci.yaml\n

... and the File-Based Catalog in the catalogs directory:

$ tree (repository root)/catalogs\ncatalogs\n\u251c\u2500\u2500 v4.12\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.13\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.14\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.15\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u2514\u2500\u2500 v4.16\n    \u2514\u2500\u2500 aqua\n        \u2514\u2500\u2500 catalog.yaml\n\n
"},{"location":"users/fbc_onboarding/#submit-fbc-changes","title":"Submit FBC changes","text":"

Artifacts generated in the previous step need to be added to git and submitted via a pull request. The operator pipeline validates the content of the catalogs and releases the changes into the OCP catalogs.

$ git add operators/aqua/{catalog-templates,ci.yaml,Makefile}\n\n$ git add catalogs/{v4.12,v4.13,v4.14,v4.15,v4.16}/aqua\n\n$ git commit --signoff -m \"Add FBC resources for aqua operator\"\n
"},{"location":"users/fbc_onboarding/#generating-catalogs-from-templates","title":"Generating catalogs from templates","text":"

Catalog templates are used to simplify a view of a catalog and allow easier manipulation of catalogs. The automated conversion pre-generates a basic template that can be turned into full FBC using the following command:

make catalogs\n

Of course, you can choose any type of template that you prefer by modifying the Makefile target. More information about catalog templates can be found here.

"},{"location":"users/fbc_workflow/","title":"FBC workflow","text":"

If you already have an existing non-FBC operator, please continue with the onboarding documentation to convert it to FBC. Once you have converted your operator, or if you want to introduce a brand new operator, you can start with the FBC workflow.

"},{"location":"users/fbc_workflow/#fbc-operator-config","title":"FBC operator config","text":"

To indicate that an operator uses the FBC workflow, the operator owner needs to state this in the ci.yaml file.

Example of the ci.yaml with FBC config:

---\nfbc:\n  enabled: true\n
"},{"location":"users/fbc_workflow/#fbc-templates","title":"FBC templates","text":"

File-based catalog templates serve as a simplified view of a catalog that can be updated by the user. OPM currently supports two types of templates, and it is up to the user which template the operator will use.

More information about each template can be found in the opm doc.

The recommended template from the maintainability point of view is SemVer.

"},{"location":"users/fbc_workflow/#generate-catalogs-using-templates","title":"Generate catalogs using templates","text":"

To generate the final catalog for an operator, a user needs to execute different opm commands based on the template type. As the operator pipeline maintainers, we want to simplify this process, so we have prepared a Makefile with all targets pre-configured.

To get the Makefile, follow these steps (in case you converted an existing operator and followed the onboarding guide, the Makefile should already be in your operator directory and you can skip this step):

cd <operator-repo>/operators/<operator-name>\nwget https://raw.githubusercontent.com/redhat-openshift-ecosystem/operator-pipelines/main/fbc/Makefile\n

The right place for the Makefile is in the operator's root directory:

.\n\u251c\u2500\u2500 0.0.1\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 manifests\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 metadata\n\u251c\u2500\u2500 catalog-templates\n\u251c\u2500\u2500 ci.yaml\n\u2514\u2500\u2500 Makefile\n\n

You can modify the Makefile based on your needs and use it to generate catalogs by running make catalogs.

[!IMPORTANT] In case an operator isn't shipped to all OCP catalog versions, manually update the OCP_VERSIONS variable in the Makefile and include only the versions supported by the operator.

The command uses opm to convert the templates into catalogs. The generated catalogs can be submitted as a PR on GitHub, and once the PR is processed, the changes will be released to the OCP index.

$ tree (repository-root)/catalogs\ncatalogs\n\u251c\u2500\u2500 v4.12\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.13\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.14\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.15\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u2514\u2500\u2500 v4.16\n    \u2514\u2500\u2500 aqua\n        \u2514\u2500\u2500 catalog.yaml\n\n
"},{"location":"users/fbc_workflow/#adding-new-bundle-to-catalog","title":"Adding new bundle to Catalog","text":"

To add a bundle to the catalog, you first need to submit the new version of the operator using the traditional PR workflow. The operator pipeline builds, tests, and releases the bundle into the registry. At this point, the operator is not yet available in the catalog. To add the bundle to the catalog, update the catalog templates with the bundle pullspec given in the pull request comment and open a new pull request with the catalog changes.

[!NOTE] Currently the workflow requires a 2-step process to release a new bundle into the catalog. In the first step, the operator bundle is released, and in the second step, the catalog is updated with the new bundle. We are working on a solution to automate this process and make it a single step. However, this will require the usage of the SemVer catalog template. In case you would like to use this feature once it is available, please consider using the SemVer template.

"},{"location":"users/fbc_workflow/#semver","title":"SemVer","text":"

For example, to add the v1.1.0 bundle to the Fast channel of a specific catalog, add it as shown in the example below:

---\nSchema: olm.semver\nGenerateMajorChannels: true\nGenerateMinorChannels: true\nCandidate:\n  Bundles:\n  - Image: quay.io/foo/olm:testoperator.v0.1.0\n  - Image: quay.io/foo/olm:testoperator.v0.1.1\n  - Image: quay.io/foo/olm:testoperator.v0.1.2\n  - Image: quay.io/foo/olm:testoperator.v0.1.3\n  - Image: quay.io/foo/olm:testoperator.v0.2.0\n  - Image: quay.io/foo/olm:testoperator.v0.2.1\n  - Image: quay.io/foo/olm:testoperator.v0.2.2\n  - Image: quay.io/foo/olm:testoperator.v0.3.0\n  - Image: quay.io/foo/olm:testoperator.v1.0.0\n  - Image: quay.io/foo/olm:testoperator.v1.0.1\n  - Image: quay.io/foo/olm:testoperator.v1.1.0\nFast:\n  Bundles:\n  - Image: quay.io/foo/olm:testoperator.v0.2.1\n  - Image: quay.io/foo/olm:testoperator.v0.2.2\n  - Image: quay.io/foo/olm:testoperator.v0.3.0\n  - Image: quay.io/foo/olm:testoperator.v1.0.0\n  - Image: quay.io/foo/olm:testoperator.v1.1.0 # <-- Add new bundle into fast channel\nStable:\n  Bundles:\n  - Image: quay.io/foo/olm:testoperator.v1.0.0\n

Also see the opm documentation for a step that can be automated.
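
As a sketch of what the Makefile target presumably runs for a SemVer template (file names and paths are illustrative):

opm alpha render-template semver -o yaml catalog-templates/v4.16.yaml > catalogs/v4.16/<operator-name>/catalog.yaml\n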

"},{"location":"users/fbc_workflow/#basic","title":"Basic","text":"

For example, to add the v0.2.0 bundle to the stable channel of a specific catalog, add it as shown in the example below.

  1. Add a new olm.bundle entry with the bundle pullspec
  2. Add the bundle into the stable channel
---\nschema: olm.template.basic\nentries:\n  - schema: olm.package\n    name: example-operator\n    defaultChannel: stable\n\n  - schema: olm.channel\n    package: example-operator\n    name: stable\n    entries:\n      - name: example-operator.v0.1.0\n      - name: example-operator.v0.2.0 # <-- Add bundle into channel\n        replaces: example-operator.v0.1.0\n\n  - schema: olm.bundle\n    image: docker.io/example/example-operator-bundle:0.1.0\n\n  - schema: olm.bundle # <-- Add new bundle entry\n    image: docker.io/example/example-operator-bundle:0.2.0\n

Also see the opm documentation for a step that can be automated.
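
Similarly, a hedged sketch of rendering a basic template into a catalog (paths illustrative):

opm alpha render-template basic -o yaml catalog-templates/v4.16.yaml > catalogs/v4.16/<operator-name>/catalog.yaml\n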

"},{"location":"users/fbc_workflow/#updating-existing-catalogs","title":"Updating existing catalogs","text":"

A great benefit of FBC is that users can update operator update graphs independently of operator releases. This allows any post-release modification of the catalogs. If you want to change the order of updates, remove an invalid bundle, or make any other modification, you are free to do so.

After updating the catalog templates, don't forget to run make catalogs to regenerate the catalogs from the templates, then submit the resulting catalogs using the PR workflow.
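
A typical update loop, assuming the Makefile from the previous section is in place (branch name and paths are illustrative):

# edit catalog-templates/<ocp-version>.yaml, then regenerate the catalogs\nmake catalogs\n# commit and open a PR with the regenerated catalogs\ngit checkout -b update-catalogs\ngit add catalog-templates ../../catalogs\ngit commit -m \"Update catalogs for <operator-name>\"\ngit push origin update-catalogs\n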

"},{"location":"users/isv_pipelines/","title":"ISV operators","text":""},{"location":"users/isv_pipelines/#ciyaml-config","title":"ci.yaml config","text":"

Each operator submitted as a certified or marketplace operator needs to contain a ci.yaml config file that is used during certification.

The correct location of this file is operators/operator-XYZ/ci.yaml and it needs to contain at least the following values:

---\n# The ID of certification component as stated in Red Hat Connect\ncert_project_id: <certification project id>\n\n

Another optional value is merge: false, which prevents a pull request with an operator from being merged automatically even when all tests pass. The default behavior is to merge a PR automatically.
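
Putting both values together, a complete ci.yaml for a certified or marketplace operator might look like this:

---\n# The ID of the certification component as stated in Red Hat Connect\ncert_project_id: <certification project id>\n# Optional: do not merge the PR automatically even when all tests pass\nmerge: false\n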

"},{"location":"users/operator-ci-yaml/","title":"Operator Publishing / Review settings","text":"

Each operator may have a ci.yaml configuration file present in its operator directory (for example operators/aqua/ci.yaml). This configuration file is used by the community-operators pipeline to set up various features like reviewers, FBC mode, or operator versioning.

Note: You can create or modify the ci.yaml file together with a new operator version. This can be done in the same PR as other operator changes.

"},{"location":"users/operator-ci-yaml/#reviewers","title":"Reviewers","text":"

Note: This option is only valid for community operators. The certified and marketplace reviewers are configured using Red Hat Connect.

If you want to accelerate publishing your changes, consider adding yourself and others you trust to the reviewers list. If the author of a PR is on that list, their changes are treated as authorized changes. This is the indicator for our pipeline that the PR is ready to be merged automatically.

Note: If the author of a PR is not in the reviewers list, or not in the ci.yaml on the main branch, the PR will not be merged automatically.

Note: If the author of a PR is not in the reviewers list and reviewers are present in the ci.yaml file, all reviewers will be mentioned in a PR comment and asked to check the upcoming changes.

For this to work, reviewers must be set up in the ci.yaml file. This is done by adding a reviewers key with a list of GitHub usernames. For example:

"},{"location":"users/operator-ci-yaml/#example","title":"Example","text":"
$ cat <path-to-operator>/ci.yaml\n---\nreviewers:\n  - user1\n  - user2\n\n
"},{"location":"users/operator-ci-yaml/#fbc-mode","title":"FBC mode","text":"

The fbc.enabled flag enables the File-Based Catalog (FBC) feature. It is highly recommended to use FBC mode in order to have better control over the operator's catalog.

The fbc.version_promotion_strategy option defines the strategy for promoting the operator to the next OCP version. When a new OCP version becomes available, an automated process will promote the operator from version N to version N+1. The fbc.version_promotion_strategy option can have the following values: - never - the operator will not be promoted to the next OCP version automatically - always - the operator will be promoted to the next OCP version automatically - review-needed - the operator will be promoted to the next OCP version automatically, but a PR will be created and the reviewers will be asked to review the changes

"},{"location":"users/operator-ci-yaml/#example_1","title":"Example","text":"
---\nfbc:\n    enabled: true\n    version_promotion_strategy: never\n
"},{"location":"users/operator-ci-yaml/#operator-versioning","title":"Operator versioning","text":"

NOTE: This option is only available for non-FBC operators, where the user doesn't have direct control over the catalog.

Operators have multiple versions. When a new version is released, OLM can update an operator automatically. There are 2 possible update strategies, which are defined in ci.yaml at the operator's top level.

"},{"location":"users/operator-ci-yaml/#replaces-mode","title":"replaces-mode","text":"

Each next version defines which version will be replaced using the replaces key in the CSV file. This means it is possible to omit some versions from the update graph; the best practice is to put them in a separate channel in that case.
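
A minimal CSV excerpt showing the replaces key (names and versions are illustrative):

apiVersion: operators.coreos.com/v1alpha1\nkind: ClusterServiceVersion\nmetadata:\n  name: example-operator.v0.2.0\nspec:\n  version: 0.2.0\n  replaces: example-operator.v0.1.0\n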

"},{"location":"users/operator-ci-yaml/#semver-mode","title":"semver-mode","text":"

Every version will be replaced by the next higher version according to semantic versioning.

"},{"location":"users/operator-ci-yaml/#restrictions","title":"Restrictions","text":"

A contributor can decide whether semver-mode or replaces-mode will be used for a specific operator. replaces-mode is activated when the ci.yaml file is present and contains updateGraph: replaces-mode. When a contributor decides to switch and use semver-mode, it is either specified in the ci.yaml file or the updateGraph key is simply missing.

"},{"location":"users/operator-ci-yaml/#example_2","title":"Example","text":"
$ cat <path-to-operator>/ci.yaml\n---\n# Use `replaces-mode` or `semver-mode`.\nupdateGraph: replaces-mode\n
"},{"location":"users/operator-ci-yaml/#kubernetes-max-version-in-csv","title":"Kubernetes max version in CSV","text":"

Starting from Kubernetes 1.22, some old APIs were deprecated (see the Deprecated API Migration Guide from v1.22). Users can set operatorhub.io/ui-metadata-max-k8s-version: \"<version>\" in the CSV file to declare the operator's maximum supported Kubernetes version. The following example declares that the operator can handle 1.21 as the maximum Kubernetes version:

$ cat <path-to-operators>/<name>/<version>/.../my.clusterserviceversion.yaml\nmetadata:\n  annotations:\n    operatorhub.io/ui-metadata-max-k8s-version: \"1.21\"\n
"},{"location":"users/packaging-required-criteria-ocp/","title":"OKD/OpenShift Catalogs criteria and options","text":""},{"location":"users/packaging-required-criteria-ocp/#okdopenshift-catalogs-criteria-and-options","title":"OKD/OpenShift Catalogs criteria and options","text":""},{"location":"users/packaging-required-criteria-ocp/#overview","title":"Overview","text":"

To distribute on OpenShift Catalogs, you will need to comply with the same standard criteria defined for OperatorHub.io (see Common recommendations and suggestions). Then, additionally, you have some requirements and options which follow.

IMPORTANT Kubernetes has been deprecating API(s) which are removed and no longer available in 1.22 and in OpenShift version 4.9. Note that your project will be unable to use them on OCP 4.9/K8s 1.22, so it is strongly recommended to check the Deprecated API Migration Guide from v1.22 and ensure that your projects have migrated away from any deprecated API.

Note that an operator using them will not work in 1.22 or in OpenShift version 4.9. OpenShift 4.8 introduces two new alerts that fire when an API that will be removed in the next release is in use. Check the event alerts of your Operators running on 4.8 and ensure that there are no warnings about these API(s) still being used.

Also, to prevent workflow issues, users will need a version of your operator compatible with 4.9 installed in their OCP cluster before they try to upgrade their cluster from any previous version to 4.9 or higher. It is therefore recommended to ensure that your operators are no longer using these API versions. However, if you still need to publish operator bundles with any of these API(s) for use on earlier k8s/OCP versions, ensure that the operator bundle is configured accordingly.

Taking the actions below will help prevent users from installing versions of your operator on an incompatible version of OCP, and also prevent them from upgrading to a newer version of OCP that would be incompatible with the version of your operator that is currently installed on their cluster.

"},{"location":"users/packaging-required-criteria-ocp/#configure-the-max-openshift-version-compatible","title":"Configure the max OpenShift Version compatible","text":"

Use the olm.maxOpenShiftVersion property in the CSV to prevent the user from upgrading their OCP cluster before upgrading the installed operator to a version that is compatible with the newer cluster version:

apiVersion: operators.coreos.com/v1alpha1\nkind: ClusterServiceVersion\nmetadata:\n  annotations:\n    # Prevent cluster upgrades to OpenShift Version 4.9 when this\n    # bundle is installed on the cluster\n    \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"4.8\"}]'\n

The CSV annotation will prevent the user from upgrading their OCP cluster before they have installed a version of your operator which is compatible with 4.9. Note that it is important to make these changes now: users running workloads with deprecated API(s) who are looking to upgrade to OCP 4.9 need to be running operators that have this annotation set, in order to prevent a cluster upgrade that could adversely impact their crucial workloads.

This option is useful when you know that the current version of your project will not work well on some specific Openshift version.

"},{"location":"users/packaging-required-criteria-ocp/#configure-the-openshift-distribution","title":"Configure the Openshift distribution","text":"

Use the annotation com.redhat.openshift.versions in bundle/metadata/annotations.yaml to ensure that the index image is generated with the corresponding OCP label, preventing the bundle from being distributed to 4.9:

com.redhat.openshift.versions: \"v4.6-v4.8\"\n

This option is also useful when you know that the current version of your project will not work well on some specific OpenShift version. By using it you define the OpenShift versions where the Operator should be distributed, and the Operator will not appear in the catalog of an OpenShift version outside that range. You must use it if you are distributing a solution that contains deprecated API(s) which will no longer be available in later versions. For more information see Managing OpenShift Versions.
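
For example, the label sits alongside the other bundle annotations in bundle/metadata/annotations.yaml (excerpt; the package name is illustrative):

annotations:\n  operators.operatorframework.io.bundle.package.v1: example-operator\n  com.redhat.openshift.versions: \"v4.6-v4.8\"\n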

"},{"location":"users/packaging-required-criteria-ocp/#validate-the-bundle-with-the-common-criteria-to-distribute-via-olm-with-sdk","title":"Validate the bundle with the common criteria to distribute via OLM with SDK","text":"

You can also check the bundle via operator-sdk bundle validate against the operatorframework validator suite and the Kubernetes version you intend to publish for:

operator-sdk bundle validate ./bundle --select-optional suite=operatorframework --optional-values=k8s-version=1.22\n

NOTE: The validators only check the manifests which are shipped in the bundle. They are unable to ensure that the project's code does not use API(s) deprecated/removed in 1.22, or that it does not depend on another operator that uses them.

"},{"location":"users/packaging-required-criteria-ocp/#validate-the-bundle-with-the-specific-criteria-to-distribute-in-openshift-catalogs","title":"Validate the bundle with the specific criteria to distribute in Openshift catalogs","text":"

Prerequisite: download the binary. You might want to keep it in your $GOPATH/bin.

Then, we can use the experimental OpenShift OLM Catalog Validator to check your Operator bundle. In this case, we need to provide the bundle and annotations.yaml file paths:

$ ocp-olm-catalog-validator my-bundle-path/bundle  --optional-values=\"file=bundle-path/bundle/metadata/annotations.yaml\"\n

Following is an example of an Operator bundle that uses the removed APIs in 1.22 and is not configured accordingly:

$ ocp-olm-catalog-validator bundle/ --optional-values=\"file=bundle/metadata/annotations.yaml\"\nWARN[0000] Warning: Value memcached-operator.v0.0.1: this bundle is using APIs which were deprecated and removed in v1.22. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22. Migrate the API(s) for CRD: ([\"memcacheds.cache.example.com\"])\nERRO[0000] Error: Value : (memcached-operator.v0.0.1) olm.maxOpenShiftVersion csv.Annotations not specified with an OCP version lower than 4.9. This annotation is required to prevent the user from upgrading their OCP cluster before they have installed a version of their operator which is compatible with 4.9. For further information see https://docs.openshift.com/container-platform/4.8/operators/operator_sdk/osdk-working-bundle-images.html#osdk-control-compat_osdk-working-bundle-images\nERRO[0000] Error: Value : (memcached-operator.v0.0.1) this bundle is using APIs which were deprecated and removed in v1.22. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22. Migrate the APIs for this bundle is using APIs which were deprecated and removed in v1.22. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22. Migrate the API(s) for CRD: ([\"memcacheds.cache.example.com\"]) or provide compatible version(s) via the labels. (e.g. LABEL com.redhat.openshift.versions='4.6-4.8')\n
"},{"location":"users/pipelines_overview/","title":"Overview","text":"

Operator pipelines is a Tekton-based solution that serves as a CI/CD platform for Operators targeting the Red Hat OpenShift platform. The CI/CD process makes sure all operators available in OCP meet certain standards.

The series of pipelines validates the operator, tests it, and makes it available to all OCP users. In combination with a GitHub repository, users and partners are able to submit a new operator via the pull request workflow and release it.

"},{"location":"users/pipelines_overview/#operator-repositories","title":"Operator repositories","text":"

The OpenShift platform includes by default several catalogs from which users can install an operator. This CI/CD solution is aimed at Red Hat partners and community members, so there are 3 separate repositories where operator owners can submit operators. Each repository serves a different purpose and has its own specific rules, but all share the common CI/CD solution.

"},{"location":"users/pipelines_overview/#sequence-diagram","title":"Sequence diagram","text":"
sequenceDiagram\n    Operator Owner->>Github: Submits a PR with operator\n    Github->>Pipeline: Triggers a hosted pipeline\n    Pipeline->>Pipeline: Execute tests and validation\n    Pipeline->>Github: Merge PR\n    Github->>Pipeline: Triggers a release pipeline\n    Pipeline->>Pipeline: Release operator to OCP catalog\n    Pipeline->>Github: Notify user in PR\n\n
"},{"location":"users/static_checks/","title":"Static check","text":"

The operator pipelines want to make sure an operator that will be released to the OpenShift operator catalog follows best practices and meets certain standards that we expect from an operator.

In order to meet these standards, a series of static checks has been created for each stream of operators. The static checks are executed for each operator submission and report warnings or failures with a description of what is wrong and a suggestion on how to fix it.

Here is an example of how the result looks in the PR:

"},{"location":"users/static_checks/#isv-tests","title":"ISV tests","text":""},{"location":"users/static_checks/#check_pruned_graph-warning","title":"check_pruned_graph (Warning)","text":"

This test makes sure the operator update graph is not accidentally pruned by introducing operator configuration that prunes the graph as an unwanted side effect.

Unintentional graph pruning happens when the olm.skipRange annotation is set but the replaces field is not set in the CSV. Such a definition may lead to unintentional pruning of the update graph.
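
An illustrative CSV excerpt of the problematic configuration and its fix (names and ranges are made up):

metadata:\n  annotations:\n    olm.skipRange: '>=1.0.0 <1.2.0'\nspec:\n  # Without 'replaces', older bundles may be pruned from the update graph.\n  # Fix: point to the previous bundle explicitly:\n  replaces: example-operator.v1.1.0\n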

If this is intentional, you can skip the check by adding a /test skip check_pruned_graph comment to the pull request.

"},{"location":"users/static_checks/#check_marketplace_annotation","title":"check_marketplace_annotation","text":"

Marketplace operators require additional metadata in order to be properly displayed in the Marketplace ecosystem.

There are 2 required fields in the operator clusterserviceversion that need to be filled in with specific values:

Where {annotation_package} matches operators.operatorframework.io.bundle.package.v1 from the metadata/annotations.yaml file.

The test is only executed for operators submitted inside the Red Hat marketplace repo.

"},{"location":"users/static_checks/#community-tests","title":"Community tests","text":""},{"location":"users/static_checks/#check_osdk_bundle_validate_operatorhub","title":"check_osdk_bundle_validate_operatorhub","text":"

The test is based on the operator-sdk bundle validate command with the name=operatorhub test suite (link).

"},{"location":"users/static_checks/#check_osdk_bundle_validate_operator_framework","title":"check_osdk_bundle_validate_operator_framework","text":"

The test is based on the operator-sdk bundle validate command with the suite=operatorframework test suite (link).

"},{"location":"users/static_checks/#check_required_fields","title":"check_required_fields","text":"Field name Validation Description spec.displayName .{3,50} A string with 3 - 50 characters spec.description .{20,} A bundle description with at least 20 characters spec.icon media A valid base64 content with a supported media type ({\"base64data\": <b64 content>, \"mediatype\": enum[\"image/png\", \"image/jpeg\", \"image/gif\", \"image/svg+xml\"]}) spec.version SemVer Valid semantic version spec.maintainers At least 1 maintainer contacts. Example: {\"name\": \"User 123\", \"email\": \"user@redhat.com\"} spec.provider.name .{3,} A string with at least 3 characters spec.links At least 1 link. Example: {\"name\": \"Documentation\", \"url\": \"https://redhat.com\"}"},{"location":"users/static_checks/#check_dangling_bundles","title":"check_dangling_bundles","text":"

The test prevents releasing an operator that leaves any previous bundle dangling. A dangling bundle is a bundle that is not referenced by any other bundle and is not the HEAD of a channel.

In the example below, the v1.3 bundle is dangling.

graph LR\n    A(v1.0) -->B(v1.1)\n    B --> C(v1.2)\n    B --> E(v1.3)\n    C --> D(v1.4 - HEAD)\n
"},{"location":"users/static_checks/#check_api_version_constraints","title":"check_api_version_constraints","text":"

The test verifies consistency between the com.redhat.openshift.versions annotation value and spec.minKubeVersion. If an operator targets a specific version of OpenShift and at the same time sets a minimal kube version higher than the one supported by that OCP version, the test raises an error.

Example:

The following combination is not valid since OCP 4.9 is based on Kubernetes 1.22.

spec.minKubeVersion: 1.23\n\ncom.redhat.openshift.versions: 4.9-4.15\n
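
For contrast, a consistent combination (OCP 4.9 ships Kubernetes 1.22):

spec.minKubeVersion: 1.22\n\ncom.redhat.openshift.versions: 4.9-4.15\n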
"},{"location":"users/static_checks/#check_upgrade_graph_loop","title":"check_upgrade_graph_loop","text":"

The purpose of this test is to check whether there are any loops in the upgrade graph.

As shown in the graph below, the edge between v1.2 and v1.0 introduces a loop in the graph.

graph LR\n    A(v1.0) -->B(v1.1)\n    B --> C(v1.2)\n    C --> D(v1.3)\n    C --> A\n
"},{"location":"users/static_checks/#check_replaces_availability","title":"check_replaces_availability","text":"

The test aims to verify that a bundle referenced by the replaces value is available in all catalog versions where the given bundle is going to be released. The list of catalog versions is determined by the com.redhat.openshift.versions annotation if present. If the annotation is not present, the bundle targets all supported OCP versions.

To fix the issue, either change the range of versions where the bundle is going to be released by updating the annotation, or change the replaces value.
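
For example (values are illustrative), the two options might look like this:

# Option 1: in metadata/annotations.yaml, target only catalogs that contain the replaced bundle\ncom.redhat.openshift.versions: \"v4.13\"\n\n# Option 2: in the CSV, reference a bundle that is present in all targeted catalogs\nreplaces: example-operator.v0.9.0\n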

"},{"location":"users/static_checks/#check_operator_name_unique","title":"check_operator_name_unique","text":"

The test makes sure the operator is consistent in its use of operator names as defined in the clusterserviceversion. It is not allowed to have multiple bundle names for a single operator. The source of the value is csv.metadata.name.

"},{"location":"users/static_checks/#check_ci_upgrade_graph","title":"check_ci_upgrade_graph","text":"

The test verifies the content of the ci.yaml file and makes sure only allowed values are used for the updateGraph key. The currently supported values are: [\"replaces-mode\", \"semver-mode\"].

"},{"location":"users/static_checks/#common-tests","title":"Common tests","text":""},{"location":"users/static_checks/#check_operator_name-warning","title":"check_operator_name (Warning)","text":"

The test verifies consistency between the operator name annotation and the operator name in the CSV definition. The sources of these values are:

"},{"location":"users/static_checks/#running-tests-locally","title":"Running tests locally","text":"
# Install the package with static checks\n$ pip install git+https://github.com/redhat-openshift-ecosystem/operator-pipelines.git\n\n# Execute a test suite\n# In this example tests are executed for the aqua operator, version 2022.4.15\n$ python static-tests \\\n    --repo-path ~/community-operators-prod \\\n    --suites operatorcert.static_tests.community \\\n    --output-file /tmp/operator-test.json \\\n    --verbose \\\n    aqua 2022.4.15\n\n
$ cat /tmp/operator-test.json | jq\n\n{\n  \"passed\": false,\n  \"outputs\": [\n    {\n      \"type\": \"error\",\n      \"message\": \"Channel 2022.4.0 has dangling bundles: {Bundle(aqua/2022.4.14)}\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_dangling_bundles\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aquasecurity.github.io/v1alpha1, Kind=ClusterConfigAuditReport: provided API should have an example annotation\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aquasecurity.github.io/v1alpha1, Kind=AquaStarboard: provided API should have an example annotation\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aquasecurity.github.io/v1alpha1, Kind=ConfigAuditReport: provided API should have an example annotation\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value : (aqua-operator.v2022.4.15) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value : (aqua-operator.v2022.4.15) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aqua-operator.v2022.4.15: this bundle is using APIs which were deprecated and removed in v1.25. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-25. Migrate the API(s) for podsecuritypolicies: ([\\\"ClusterServiceVersion.Spec.InstallStrategy.StrategySpec.ClusterPermissions[2].Rules[7]\\\"])\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aqua-operator.v2022.4.15: this bundle is using APIs which were deprecated and removed in v1.25. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-25. Migrate the API(s) for podsecuritypolicies: ([\\\"ClusterServiceVersion.Spec.InstallStrategy.StrategySpec.ClusterPermissions[2].Rules[7]\\\"])\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aqua-operator.v2022.4.15: unable to find the resource requests for the container: (aqua-operator). 
It is recommended to ensure the resource request for CPU and Memory. Be aware that for some clusters configurations it is required to specify requests or limits for those values. Otherwise, the system or quota may reject Pod creation. More info: https://master.sdk.operatorframework.io/docs/best-practices/managing-resources/\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aquasecurity.github.io/v1alpha1, Kind=ConfigAuditReport: provided API should have an example annotation\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operatorhub\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value : (aqua-operator.v2022.4.15) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operatorhub\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value : (aqua-operator.v2022.4.15) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operatorhub\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aqua-operator.v2022.4.15: this bundle is using APIs which were deprecated and removed in v1.25. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-25. Migrate the API(s) for podsecuritypolicies: ([\\\"ClusterServiceVersion.Spec.InstallStrategy.StrategySpec.ClusterPermissions[2].Rules[7]\\\"])\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operatorhub\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value : The \\\"operatorhub\\\" validator is deprecated; for equivalent validation use \\\"operatorhub/v2\\\", \\\"standardcapabilities\\\" and \\\"standardcategories\\\" validators\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operatorhub\"\n    }\n  ]\n}\n\n
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Openshift Operators","text":""},{"location":"#about-this-repository","title":"About this repository","text":"

This repo is the canonical source for Kubernetes Operators that appear on OpenShift Container Platform and OKD.

NOTE The index catalogs:

are built from this repository and it is consumed by Openshift and OKD to create their sources and built their catalog. To know more about how Openshift catalog are built see the documentation.

See our documentation to find out more about Community, Certified and Marketplace operators and contribution.

"},{"location":"#add-your-operator","title":"Add your Operator","text":"

We would love to see your Operator added to this collection. We currently use automated vetting via continuous integration plus manual review to curate a list of high-quality, well-documented Operators. If you are new to Kubernetes Operators start here.

If you have an existing Operator read our contribution guidelines on how to open a PR. Then the community operator pipeline will be triggered to test your Operator and merge a Pull Request.

"},{"location":"#contributing-guide","title":"Contributing Guide","text":""},{"location":"#test-and-release-process-for-the-operator","title":"Test and release process for the Operator","text":"

Refer to the operator pipeline documentation .

"},{"location":"#important-notice","title":"IMPORTANT NOTICE","text":"

Some APIs versions are deprecated and are OR will no longer be served on the Kubernetes version 1.22/1.25/1.26 and consequently on vendors like Openshift 4.9/4.12/4.13.

What does it mean for you?

Operator bundle versions using the removed APIs can not work successfully from the respective releases. Therefore, it is recommended to check if your solutions are failing in these scenarios to stop using these versions OR by setting the \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"<OCP version>\"}]' to block cluster admins upgrades when they have Operator versions installed that can not work well in OCP versions higher than the value informed. Also, by defining a valid OCP range via the annotation com.redhat.openshift.versions into the metadata/annotations.yaml for our solution does not end up shipped on OCP/OKD versions where it cannot be installed.

WARNING: olm.maxOpenShiftVersion should ONLY be used if you are 100% sure that your Operator bundle version cannot work in upper releases. Otherwise, you might provide a bad user experience. Be aware that cluster admins will be unable to upgrade their clusters with your solution installed. Then, suppose you do not provide any upper version and a valid upgrade path for those who have your Operator installed be able to upgrade it and consequently be allowed to upgrade their cluster version (i.e from OCP 4.10 to 4.11). In that case, cluster admins might choose to uninstall your Operator and no longer use it so that they can move forward and upgrade their cluster version without it.

Please make sure you check the following announcements: - How to deal with removal of v1beta1 CRD removals in Kubernetes 1.22 / OpenShift 4.9 - Kubernetes API removals on 1.25/1.26 and Openshift 4.12/4.13 might impact your Operator. How to deal with it?

"},{"location":"#reporting-bugs","title":"Reporting Bugs","text":"

Use the issue tracker in this repository to report bugs.

"},{"location":"ci-cd/","title":"CI/CD","text":"

This project uses GitHub Actions and Ansible for CI (tests, linters) and CD (deployment).

"},{"location":"ci-cd/#secrets","title":"Secrets","text":"

Both deployment and integration tests need GitHub secrets to work properly. The following secrets should be kept in the repository:

Secret name | Secret value | Purpose\nVAULT_PASSWORD | Password to the preprod Ansible Vault stored in the repository | Deployment of preprod and integration test environments\nVAULT_PASSWORD_PROD | Password to the prod Ansible Vault stored in the repository | Deployment of the production environment\nREGISTRY_USERNAME | Username for authentication to the container registry | Building images\nREGISTRY_PASSWORD | Password for authentication to the container registry | Building images\nGITHUB_TOKEN | GitHub authentication token | Creation of GitHub tags and releases"},{"location":"ci-cd/#run-order","title":"Run order","text":""},{"location":"ci-cd/#integration-tests","title":"Integration tests","text":""},{"location":"ci-cd/#when-do-they-run","title":"When do they run?","text":"

Integration tests are a stage of the CI/CD that runs only in two cases: - On merge to main, before deployment - On demand (manual action, by clicking \"run workflow\" via the GitHub UI)

"},{"location":"ci-cd/#running-integration-tests","title":"Running integration tests","text":"

The orchestration of the integration tests is handled by Ansible. A couple of dependencies must be installed to get started:

To execute the integration tests in a custom environment:

ansible-pull \\\n  -U \"https://github.com/redhat-openshift-ecosystem/operator-pipelines.git\" \\\n  -i \"ansible/inventory/operator-pipeline-integration-tests\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/operator-pipeline-integration-tests.yml\n

To manually run the integration tests from the local environment: - prerequisites: - logged in to an OC cluster - export NAMESPACE - a new one is created if it doesn't exist; beware of duplicates (this can override existing projects) - an SSH key needs to be set in the GitHub account of the local user (Ansible uses SSH to clone/manipulate repositories) - the Python dependencies (mentioned above) need to be installed globally

ansible-playbook -v \\\n  -i \"ansible/inventory/operator-pipeline-integration-tests\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/operator-pipeline-integration-tests.yml\n

Tags can be used to run select portions of the playbook. For example, the test resources will be cleaned up at the end of every run. Skipping the clean tag will leave the resources behind for debugging.

ansible-pull \\\n  --skip-tags clean \\\n  -U \"https://github.com/redhat-openshift-ecosystem/operator-pipelines.git\" \\\n  -i \"ansible/inventory/operator-pipeline-integration-tests\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/operator-pipeline-integration-tests.yml\n

It may be necessary to provide your own project and bundle to test certain aspects of the pipelines. This can be accomplished with the addition of a few extra vars (and proper configuration of the project).

ansible-pull \\\n  -U \"https://github.com/redhat-openshift-ecosystem/operator-pipelines.git\" \\\n  -i \"ansible/inventory/operator-pipeline-integration-tests\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  -e \"src_operator_git_branch=$SRC_BRANCH\" \\\n  -e \"src_operator_bundle_version=$SRC_VERSION\" \\\n  -e \"operator_package_name=$PACKAGE_NAME\" \\\n  -e \"operator_bundle_version=$NEW_VERSION\" \\\n  -e \"ci_pipeline_pyxis_api_key=$API_KEY\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/operator-pipeline-integration-tests.yml\n
"},{"location":"cluster-config/","title":"Cluster Configuration","text":"

All OpenShift clusters should share a common configuration for our pipelines. There are cluster-wide resources which require modification, such as the TektonConfig. But there is also a custom EventListener which reports PipelineRun events to Slack and a pipeline that uploads the metrics of other pipelines for monitoring purposes. This configuration must be applied manually for now.

To apply these cluster-wide configurations, run the Ansible playbook. To only apply the cluster-wide resources, the following command will suffice.

ansible-playbook \\\n    -i inventory/clusters \\\n    -e \"clusters={INSERT ANSIBLE HOST LIST}\" \\\n    -e \"ocp_token={INSERT TOKEN}\" \\\n    -e \"k8s_validate_certs={yes|no}\" \\\n    --vault-password-file \"{INSERT FILE}\" \\\n    playbooks/config-ocp-cluster.yml\n

If you want to deploy the metrics pipeline, add --tags metrics to the above command. To deploy the Chat Webhook, add --tags chat. If you wish to deploy both, add --tags metrics,chat.

"},{"location":"developer-guide/","title":"Developer Guide","text":""},{"location":"developer-guide/#workflow","title":"Workflow","text":"
  1. Run through the setup at least once.
  2. Make changes to the pipeline image, if desired.
  3. Make changes to the Tekton pipelines and/or tasks, if desired.
  4. Test all impacted pipelines.
  5. Override the pipeline image as necessary.
  6. Submit a pull request with your changes.
"},{"location":"developer-guide/#setup","title":"Setup","text":"
  1. Git leaks detection
  2. Prepare a development environment
  3. Prepare a certification project
  4. Prepare an Operator bundle
  5. Prepare your ci.yaml
  6. Create a bundle pull request (optional) - required for testing hosted or release pipelines
  7. Create an API key (optional) - required for testing submission with the CI pipeline
  8. Prepare the CI to run from your fork (optional) - required to run integration testing on forks of this repo
"},{"location":"developer-guide/#git-leaks-detection","title":"Git leaks detection","text":"

Since the repository contains secret information in the form of an encrypted Ansible Vault, there is a high chance that a developer may push a commit with decrypted secrets by mistake. To avoid this problem we recommend using the Gitleaks tool, which prevents you from committing secrets into the git history.

The repository is already pre-configured, but each developer has to make the final config changes in their environment.

Follow the documentation to configure Gitleaks on your computer.

"},{"location":"developer-guide/#prepare-a-development-environment","title":"Prepare a Development Environment","text":"

You may use any OpenShift 4.7+ cluster (including CodeReady Containers).

The hosted and release pipelines require a considerable number of dependencies which are tedious to configure manually. Luckily these steps have been automated and can be executed by anyone with access to the Ansible vault password.

Before running this you should ensure you're logged into the correct OpenShift cluster using oc. If already logged into the OpenShift console, an oc login command can be obtained by clicking on your username in the upper right corner and selecting copy login command.

ansible-playbook -v \\\n  -i \"ansible/inventory/operator-pipeline-$ENV\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  -e \"ocp_token=`oc whoami -t`\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/deploy.yml\n

:warning: Conflicts may occur if the project already contains some resources. They may need to be removed first.

Cleanup can be performed by specifying the absent state for some of the resources.

ansible-playbook -v \\\n  -i \"ansible/inventory/operator-pipeline-$ENV\" \\\n  -e \"oc_namespace=$NAMESPACE\" \\\n  -e \"ocp_token=`oc whoami -t`\" \\\n  -e \"namespace_state=absent\" \\\n  -e \"github_webhook_state=absent\" \\\n  --vault-password-file $VAULT_PASSWORD_PATH \\\n  ansible/playbooks/deploy.yml\n
"},{"location":"developer-guide/#integration-tests","title":"Integration tests","text":"

See integration tests section in ci-cd.md

"},{"location":"developer-guide/#install-tkn","title":"Install tkn","text":"

You should install the tkn CLI which corresponds to the version of the cluster you're utilizing.

"},{"location":"developer-guide/#using-codeready-containers","title":"Using CodeReady Containers","text":"

It's possible to deploy and test the pipelines from a CodeReady Containers (CRC) cluster for development/testing purposes.

  1. Install CodeReady Containers

  2. Install OpenShift Pipelines

  3. Login to your cluster with oc CLI.

    You can run crc console --credentials to get the admin login command.

  4. Create a test project in your cluster

    oc new-project playground

  5. Grant the privileged SCC to the default pipeline service account.

    The buildah task requires the privileged security context constraint in order to call newuidmap/newgidmap. This is only necessary because runAsUser:0 is defined in templates/crc-pod-template.yml.

    oc adm policy add-scc-to-user privileged -z pipeline

"},{"location":"developer-guide/#running-a-pipeline-with-crc","title":"Running a Pipeline with CRC","text":"

It may be necessary to pass the following tkn CLI arg to avoid permission issues with the default CRC PersistentVolumes.

--pod-template templates/crc-pod-template.yml\n
"},{"location":"developer-guide/#prepare-a-certification-project","title":"Prepare a Certification Project","text":"

A certification project is required for executing all pipelines. In order to avoid collisions with other developers, it's best to create a new one in the corresponding Pyxis environment.

The pipelines depend on the following certification project fields:

{\n  \"project_status\": \"active\",\n  \"type\": \"Containers\",\n\n  // Arbitrary name for the project - can be almost anything\n  \"name\": \"<insert-project-name>\",\n\n  /*\n   Either \"connect\", \"marketplace\" or \"undistributed\".\n   This maps to the `organization` field in the bundle submission repo's config.yaml.\n     connect -> certified-operators\n     marketplace -> redhat-marketplace\n     undistributed -> certified-operators (certified against, but not distributed to)\n  */\n  \"operator_distribution\": \"<insert-distribution>\",\n\n  // Must correspond to a containerVendor record with the same org_id value.\n  \"org_id\": <insert-org-id>,\n\n  \"container\": {\n    \"type\": \"operator bundle image\",\n\n    \"build_catagories\":\"Operator bundle\",\n\n    // Required but always \"rhcc\"\n    \"distribution_method\": \"rhcc\",\n\n    // Always set to true to satisfy the publishing checklist\n    \"distribution_approval\": true,\n\n    // Must match the github user(s) which opened the test pull requests\n    \"github_usernames\": [\"<insert-github-username>\"],\n\n    // Must be unique for the vendor\n    \"repository_name\": \"<insert-repo-name>\"\n  }\n}\n
"},{"location":"developer-guide/#prepare-an-operator-bundle","title":"Prepare an Operator Bundle","text":"

You can use a utility script to copy an existing bundle. By default it will copy a bundle that should avoid common failure conditions such as digest pinning. View all the customization options by passing -h to this command.

./scripts/copy-bundle.sh\n

You may wish to tweak the generated output to influence the behavior of the pipelines. For example, Red Hat Marketplace Operator bundles may require additional annotations. The pipeline should provide sufficient error messages to indicate what is missing. If such errors are unclear, that is likely a bug which should be fixed.

"},{"location":"developer-guide/#prepare-your-ciyaml","title":"Prepare Your ci.yaml","text":"

At the root of your operator package directory (note: not the bundle version directory) there needs to be a ci.yaml file. For development purposes, it should follow this format in most cases.

---\n# Copy this value from the _id field of the certification project in Pyxis.\ncert_project_id: <pyxis-cert-project-id>\n# Set this to true to allow the hosted pipeline to merge pull requests.\nmerge: false\n
"},{"location":"developer-guide/#create-a-bundle-pull-request","title":"Create a Bundle Pull Request","text":"

It's recommended to open bundle pull requests against the operator-pipelines-test repo. The pipeline GitHub bot account has permissions to manage it.

Note: This repository is only configured for testing certified operators, NOT Red Hat Marketplace operators (see config.yaml).

# Checkout the pipelines test repo\ngit clone https://github.com/redhat-openshift-ecosystem/operator-pipelines-test\ncd operator-pipelines-test\n\n# Create a new branch\ngit checkout -b <insert-branch-name>\n\n# Copy your package directory\ncp -R <package-dir>/ operators/\n\n# Commit changes\ngit add -A\n\n# Use this commit pattern so it defaults to the pull request title.\n# This is critical to the success of the pipelines.\ngit commit -m \"operator <operator-package-name> (<bundle-version>)\"\n\n# Push your branch. Open the pull request using the output.\ngit push origin <insert-branch-name>\n

Note: You may need to merge the pull request to use it for testing the release pipeline.

"},{"location":"developer-guide/#create-an-api-key","title":"Create an API Key","text":"

File a ticket with Pyxis admins to assist with this request. It must correspond to the org_id for the certification project under test.

"},{"location":"developer-guide/#making-changes-to-the-pipelines","title":"Making Changes to the Pipelines","text":""},{"location":"developer-guide/#guiding-principles","title":"Guiding Principles","text":""},{"location":"developer-guide/#applying-pipeline-changes","title":"Applying Pipeline Changes","text":"

You can use the following command to apply all local changes to your OCP project. It will add all the Tekton resources used across all the pipelines.

oc apply -R -f ansible/roles/operator-pipeline/templates/openshift\n
"},{"location":"developer-guide/#making-changes-to-the-pipeline-image","title":"Making Changes to the Pipeline Image","text":""},{"location":"developer-guide/#dependency","title":"Dependency","text":"

The Operator pipelines project is configured to automatically manage Python dependencies using the PDM tool. PDM automates the definition, installation, upgrades, and the whole lifecycle of dependencies in a project. All dependencies are stored in the pyproject.toml file, in groups that correspond to individual applications within the Operator pipelines project.

Adding, removing, and updating dependencies always needs to be done using the pdm CLI.

pdm add -G operator-pipelines gunicorn==20.1.0\n

After a dependency is installed, it is added to the pdm.lock file. The lock file is always part of the git repository.

If you want to install a specific group of dependencies, use the following command:

pdm install -G operator-pipelines\n

Dependencies are installed into a virtual environment (.venv) which is automatically created by pdm install. If .venv wasn't created, configure pdm to create it automatically during installation with pdm config python.use_venv true.

"},{"location":"developer-guide/#run-unit-tests-code-style-checkers-etc","title":"Run Unit Tests, Code Style Checkers, etc.","text":"

Before running the tests locally, the environment needs to be prepared. Choose the preparation process according to your Linux version.

"},{"location":"developer-guide/#preparation-on-rpm-based-linux","title":"Preparation on RPM-based Linux","text":"
sudo dnf -y install hadolint\npython3 -m pip install pdm\npdm venv create 3.12\npdm install\nsource .venv/bin/activate\npython3 -m pip install ansible-lint\n
"},{"location":"developer-guide/#preparation-on-other-linux-systems","title":"Preparation on other Linux systems","text":"

Before starting, make sure you have installed the Brew package manager.

brew install hadolint\npython3 -m pip install pdm\npdm venv create 3.12\npdm install\nsource .venv/bin/activate\npython3 -m pip install ansible-lint\n
"},{"location":"developer-guide/#run-the-local-tests","title":"Run the local tests","text":"

To run unit tests and code style checkers:

tox\n
"},{"location":"developer-guide/#local-development","title":"Local development","text":"

Set up a Python virtual environment using pdm.

pdm venv create 3.12\npdm install\nsource .venv/bin/activate\n
"},{"location":"developer-guide/#build-push","title":"Build & Push","text":"
  1. Ensure you have buildah installed

  2. Build the image

    buildah bud

  3. Push the image to a remote registry, e.g. Quay.io.

    buildah push <image-digest-from-build-step> <remote-repository>

    This step may require login, e.g.

    buildah login quay.io

"},{"location":"index-signature-verification/","title":"Index Signature Verification","text":"

This repository contains a special Tekton pipeline for checking the signature status of the production index images. For now, it is only intended to be deployed manually on a single cluster. The pipeline is regularly scheduled via a CronJob and runs to completion without sending a direct notification upon success or failure. Instead, it relies on other resources to handle reporting.

The pipeline should be deployed using Ansible.

ansible-playbook \\\n    -i inventory/clusters \\\n    -e \"clusters={INSERT ANSIBLE HOST LIST}\" \\\n    -e \"ocp_token={INSERT TOKEN}\" \\\n    -e \"k8s_validate_certs={yes|no}\" \\\n    --vault-password-file \"{INSERT FILE}\" \\\n    playbooks/deploy-index-signature-verification.yml\n
"},{"location":"ocp-namespace-config/","title":"OpenShift namespaces configuration","text":"

Operator pipelines are deployed and run in OpenShift Dedicated clusters. The deployment of all resources, including pipelines, tasks, secrets, and others, is managed using Ansible playbooks. In order to be able to run the Ansible automation, the initial setup of the OpenShift namespaces needs to be executed. This process is also automated and requires access to a cluster with cluster-admin privileges.

To initially create and configure namespaces for each environment, use the following script:

cd ansible\n# Store Ansible vault password in ./vault-password\necho $VAULT_PASSWD > ./vault-password\n\n# Login to a cluster using oc\noc login --token=$TOKEN --server=$OCP_SERVER\n\n# Trigger an automation\n./init.sh stage\n

This command triggers Ansible, which automates the creation of an OCP namespace for the given environment and stores the admin service account token in the vault.

"},{"location":"pipeline-admin-guide/","title":"Operator pipeline admin guide","text":"

This document aims to provide information needed for maintenance and troubleshooting of operator pipelines.

"},{"location":"pipeline-admin-guide/#operator-repositories","title":"Operator repositories","text":"

Pre-production repositories are used for all pre-prod environments (stage, dev, qa). Each environment has a dedicated git branch. By selecting a target branch you can select an environment where the operator will be tested.

"},{"location":"pipeline-admin-guide/#ocp-environments","title":"OCP environments","text":""},{"location":"pipeline-admin-guide/#pipelines","title":"Pipelines","text":"

Testing and certification of OpenShift operators from ISV and Community sources is handled by OpenShift Pipelines (Tekton).

"},{"location":"pipeline-admin-guide/#isv-pipelines","title":"ISV pipelines","text":""},{"location":"pipeline-admin-guide/#community-pipelines","title":"Community pipelines","text":""},{"location":"pipeline-admin-guide/#troubleshooting","title":"Troubleshooting","text":""},{"location":"pipeline-admin-guide/#pipeline-states","title":"Pipeline states","text":"

After an operator is submitted to any of the repositories mentioned above, an operator pipeline kicks in. The current state of the pipeline is indicated by PR labels. Right after a pipeline starts, a label operator-hosted-pipeline/started is added. Based on the result of the pipeline, one of the following labels is added and the */started label is removed: - operator-hosted-pipeline/passed - operator-hosted-pipeline/failed

If the hosted pipeline finishes successfully and the PR has been approved, the pipeline merges the PR. The merge event is the trigger for the release pipeline. The release pipeline also applies labels based on the current pipeline status: - operator-release-pipeline/started - operator-release-pipeline/passed - operator-release-pipeline/failed

In the best-case scenario, at the end of the process a PR should have both the hosted and release */passed labels.

"},{"location":"pipeline-admin-guide/#re-trigger-mechanism","title":"Re-trigger mechanism","text":"

In case of a pipeline failure, the user or repository owner can re-trigger the pipeline using PR labels. Since the labels can't be set by external contributors, a pipeline can also be re-triggered using PR comments. The re-trigger mechanism allows a user to re-trigger a pipeline only when the previous run ended up in a failed state.

The pipeline summary provides a description of the failure and a hint of how to re-trigger the pipeline.

The command that re-triggers a pipeline has the following format:

/pipeline restart <pipeline name>

Based on which pipeline failed, a matching command can be used to re-trigger it.
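
Following the pipeline label prefixes used above, these are presumably:

/pipeline restart operator-hosted-pipeline\n/pipeline restart operator-release-pipeline\n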

After a pipeline is re-triggered using the command, a few labels will be added to and removed from the PR. First, a new label pipeline/trigger-hosted or pipeline/trigger-release is added. This label kicks off the pipeline, and the pipeline itself starts adding labels based on the pipeline status.

A script called bulk-retrigger is provided in the operator-pipeline container image to help re-trigger a pipeline on multiple PRs: it takes the repository name and a CSV file containing a list of PRs to process, and automates re-triggering the pipeline one PR at a time. See the help text for details on how to run it.

"},{"location":"pipeline-admin-guide/#pipeline-logs","title":"Pipeline logs","text":"

Pipelines interact with users through the GitHub pull request interface. There are slight differences between the ISV and community repositories, but the overall concept is the same.

At the end of a pipeline run, the pipeline submits a summary comment with basic pipeline metrics and an overview of the individual tasks.

The community pipeline also directly attaches a link to a GitHub Gist with the pipeline logs. The ISV pipeline uploads logs and artifacts to Pyxis, and the logs are available to partners through Red Hat Connect.

"},{"location":"pipeline-admin-guide/#skip-tests","title":"Skip tests","text":"

In certain corner cases there is a real need to skip a subset of tests and force a pipeline to pass even though not all checks are green. This is usually initiated by an exception submitted by ISV or community members. If an exception is reviewed and approved, the pipeline has a mechanism to skip the selected tests.

To skip a static or dynamic test, a repository administrator needs to apply a PR label in the following format:

tests/skip/<name of the test>

So, for example, if an operator can't be installed with default settings and requires a special environment, we can skip DeployableByOLM by adding the tests/skip/DeployableByOLM label to the PR.

"},{"location":"pipeline-env-setup/","title":"Pipeline Environment Setup","text":"

Common for all the pipelines

Only CI Pipeline

Only Hosted Pipeline

Only Release Pipeline

"},{"location":"pipeline-env-setup/#common-for-all-the-pipelines","title":"Common for all the pipelines:","text":""},{"location":"pipeline-env-setup/#red-hat-catalog-imagestreams","title":"Red Hat Catalog Imagestreams","text":"

The pipelines must pull the parent index images through the internal OpenShift registry to take advantage of the built-in credentials for Red Hat's terms-based registry (registry.redhat.io). This saves the user from needing to provide such credentials. The index generation task will always pull published index images through imagestreams of the same name in the current namespace. As a result, there is a one-time configuration for each desired distribution catalog. Replace the from argument when configuring this for pre-production environments.

# Must be run once before certifying against the certified catalog.\noc --request-timeout 10m import-image certified-operator-index \\\n  --from=registry.redhat.io/redhat/certified-operator-index \\\n  --reference-policy local \\\n  --scheduled \\\n  --confirm \\\n  --all\n\n# Must be run once before certifying against the Red Hat Marketplace catalog.\noc --request-timeout 10m import-image redhat-marketplace-index \\\n  --from=registry.redhat.io/redhat/redhat-marketplace-index \\\n  --reference-policy local \\\n  --scheduled \\\n  --confirm \\\n  --all\n
"},{"location":"pipeline-env-setup/#only-ci-pipeline","title":"Only CI pipeline:","text":""},{"location":"pipeline-env-setup/#registry-credentials","title":"Registry Credentials","text":"

The CI pipeline can optionally be configured to push and pull images to/from a remote private registry. The user must create an auth secret containing the docker config. This secret can then be passed as a workspace named registry-credentials when invoking the pipeline.

oc create secret generic registry-dockerconfig-secret \\\n  --type kubernetes.io/dockerconfigjson \\\n  --from-file .dockerconfigjson=config.json\n
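If you do not already have a Docker-style config.json, one way to produce it (assuming podman is available; the registry host is a placeholder) is to log in with an explicit auth file:

# log in once; podman writes the Docker-style auth file used above\npodman login --authfile config.json <registry>\n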
"},{"location":"pipeline-env-setup/#git-ssh-secret","title":"Git SSH Secret","text":"

The pipeline requires git SSH credentials with write access to the repository if automatic digest pinning is enabled using the pin_digests param. This is disabled by default. Before executing the pipeline, the user must create a secret in the same namespace as the pipeline.

To create the secret run the following commands (substituting your key):

cat << EOF > ssh-secret.yml\nkind: Secret\napiVersion: v1\nmetadata:\n  name: github-ssh-credentials\ndata:\n  id_rsa: |\n    < PRIVATE SSH KEY >\nEOF\n\noc create -f ssh-secret.yml\n
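Note that Kubernetes only accepts base64-encoded values under a Secret's data key (plain text would go under stringData instead). Assuming GNU coreutils, the key can be encoded for the manifest like this:

# produce the base64 one-liner expected under data.id_rsa\nbase64 -w0 ~/.ssh/id_rsa\n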
"},{"location":"pipeline-env-setup/#container-api-access","title":"Container API access","text":"

CI pipelines automatically upload test results, logs and artifacts using the Red Hat container API. This requires a partner's API key, and the key needs to be created as a secret in the OpenShift cluster before running a Tekton pipeline.

oc create secret generic pyxis-api-secret --from-literal pyxis_api_key=< API KEY >\n
"},{"location":"pipeline-env-setup/#kubeconfig","title":"Kubeconfig","text":"

The CI pipeline requires a kubeconfig with admin credentials. This can be created by logging into said cluster as an admin user.

KUBECONFIG=kubeconfig oc login -u <username> -p <password>\noc create secret generic kubeconfig --from-file=kubeconfig=kubeconfig\n
"},{"location":"pipeline-env-setup/#github-api-token","title":"GitHub API token","text":"

To automatically open a PR with the submission, the pipeline must authenticate to GitHub. A secret containing the API token should be created.

oc create secret generic github-api-token --from-literal GITHUB_TOKEN=< GITHUB TOKEN >\n
"},{"location":"pipeline-env-setup/#only-hosted-pipeline","title":"Only Hosted pipeline:","text":""},{"location":"pipeline-env-setup/#registry-credentials_1","title":"Registry Credentials","text":"

The hosted pipeline requires credentials to push/pull bundle and index images from a pre-release registry (quay.io). A registry auth secret must be created. This secret can then be passed as a workspace named registry-credentials when invoking the pipeline.

oc create secret generic hosted-pipeline-registry-auth-secret \\\n  --type kubernetes.io/dockerconfigjson \\\n  --from-file .dockerconfigjson=config.json\n
"},{"location":"pipeline-env-setup/#container-api-access_1","title":"Container API access","text":"

The hosted pipeline communicates with an internal Container API that requires a certificate + key. The corresponding secret needs to be created before running the pipeline.

oc create secret generic operator-pipeline-api-certs \\\n  --from-file operator-pipeline.pem \\\n  --from-file operator-pipeline.key\n
"},{"location":"pipeline-env-setup/#hydra-credentials","title":"Hydra credentials","text":"

To verify the publishing checklist, the hosted pipeline uses the Hydra API. To authenticate with Hydra over basic auth, a secret containing service account credentials should be created.

oc create secret generic hydra-credentials \\\n  --from-literal username=<username>  \\\n  --from-literal password=<password>\n
"},{"location":"pipeline-env-setup/#github-bot-token","title":"GitHub Bot token","text":"

To automatically merge the PR, the hosted pipeline uses the GitHub API. To authenticate when using this method, a secret containing a bot token should be created.

oc create secret generic github-bot-token --from-literal github_bot_token=< BOT TOKEN >\n
"},{"location":"pipeline-env-setup/#prow-kubeconfig","title":"Prow-kubeconfig","text":"

Hosted preflight tests are run on a separate cluster. To provision a cluster destined for the tests, the pipeline uses a Prowjob. Thus, to start the preflight tests, there needs to be a Prow-specific kubeconfig.

oc create secret generic prow-kubeconfig \\\n  --from-literal kubeconfig=<kubeconfig>\n
"},{"location":"pipeline-env-setup/#preflight-decryption-key","title":"Preflight decryption key","text":"

Results of the preflight tests are protected by encryption. In order to retrieve them from the preflight job, a GPG decryption key should be supplied.

oc create secret generic preflight-decryption-key \\\n  --from-literal private=<private gpg key> \\\n  --from-literal public=<public gpg key>\n
"},{"location":"pipeline-env-setup/#quay-oauth-token","title":"Quay OAuth Token","text":"

A Quay OAuth token is required to set the repo visibility to public.

oc create secret generic quay-oauth-token --from-literal token=<token>\n
"},{"location":"pipeline-env-setup/#only-release-pipeline","title":"Only Release pipeline:","text":""},{"location":"pipeline-env-setup/#registry-credentials_2","title":"Registry Credentials","text":"

The release pipeline requires credentials to push and pull the bundle image built by the hosted pipeline. Three registry auth secrets must be specified since different credentials may be required for the same registry when copying and serving the image. These secrets can then be passed as workspaces named registry-pull-credentials, registry-push-credentials and registry-serve-credentials when invoking the pipeline.

oc create secret generic release-pipeline-registry-auth-pull-secret \\\n  --type kubernetes.io/dockerconfigjson \\\n  --from-file .dockerconfigjson=pull-config.json\n\noc create secret generic release-pipeline-registry-auth-push-secret \\\n  --type kubernetes.io/dockerconfigjson \\\n  --from-file .dockerconfigjson=push-config.json\n\noc create secret generic release-pipeline-registry-auth-serve-secret \\\n  --type kubernetes.io/dockerconfigjson \\\n  --from-file .dockerconfigjson=serve-config.json\n
"},{"location":"pipeline-env-setup/#kerberos-credentials","title":"Kerberos credentials","text":"

For submitting the IIB build, you need a Kerberos keytab in a secret:

oc create secret generic kerberos-keytab \\\n  --from-file krb5.keytab\n
"},{"location":"pipeline-env-setup/#quay-credentials","title":"Quay credentials","text":"

The release pipeline uses Quay credentials to authenticate a push to an index image during the IIB build.

oc create secret generic iib-quay-credentials \\\n  --from-literal username=<QUAY_USERNAME> \\\n  --from-literal password=<QUAY_PASSWORD>\n
"},{"location":"pipeline-env-setup/#ocp-registry-kubeconfig","title":"OCP-registry-kubeconfig","text":"

OCP clusters contain the public registries for Operator Bundle Images. To publish the image to this registry, the pipeline connects to the OCP cluster via kubeconfig. To create the secret which contains the OCP cluster kubeconfig:

oc create secret generic ocp-registry-kubeconfig \\\n  --from-literal kubeconfig=<kubeconfig>\n

Additional setup instructions for this cluster are documented here.

"},{"location":"preflight-invalidation/","title":"Preflight invalidation CronJob","text":"

The repository contains a CronJob that updates enabled preflight versions weekly. https://issues.redhat.com/browse/ISV-4964

After changes, the CronJob can be deployed using Ansible.

ansible-playbook \\\n    -i ansible/inventory/clusters \\\n    -e \"clusters=prod-cluster\" \\\n    -e \"ocp_token=[TOKEN]\" \\\n    -e \"env=prod\" \\\n    --vault-password-file [PWD_FILE] \\\n    playbooks/preflight-invalidation.yml\n
"},{"location":"release-and-rollback/","title":"Release Schedule","text":"

Every Monday and Wednesday, except for hotfixes.

"},{"location":"release-and-rollback/#hotfixes","title":"Hotfixes","text":"

Hotfixes are defined as changes that need to be quickly deployed to prod, outside of the regular release schedule, to address major issues that occur in prod. Hotfixes should still follow the release criteria and process, and should be announced on the team chat so that the rest of the team is aware.

"},{"location":"release-and-rollback/#release-criteria","title":"Release Criteria","text":""},{"location":"release-and-rollback/#release-process","title":"Release Process","text":"

Before deployments occur, a new container image will be built, tagged with \u201clatest\u201d and the associated git commit SHA, and then pushed to quay.io.
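A hypothetical sketch of that tagging scheme (the actual build runs in GitHub Actions; the image name and organization are illustrative placeholders):

# tag the same build with \u201clatest\u201d and the git commit SHA, then push both\nGIT_SHA=$(git rev-parse HEAD)\npodman build -t quay.io/<org>/operator-pipeline:latest .\npodman tag quay.io/<org>/operator-pipeline:latest quay.io/<org>/operator-pipeline:${GIT_SHA}\npodman push quay.io/<org>/operator-pipeline:latest\npodman push quay.io/<org>/operator-pipeline:${GIT_SHA}\n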

Dev and qa deployments will happen automatically via GitHub Actions every time a change is merged into the main branch. The commit SHA will be passed along to identify the container image used by the pipelines as part of the deployments.

During a scheduled release or hotfix, stage and prod deployment will only happen by manually triggering the \u201cdeploy-stage\u201d and \u201cdeploy-prod\u201d Github Actions respectively. In a scheduled release, changes that were previously deployed to dev and qa will be promoted to stage, and changes that were previously deployed to stage will be promoted to prod. The last container image used in dev and qa (identified by the git commit sha tag) will also be promoted to be used in the stage pipeline, while the container image last used in stage will be used in the prod pipeline.

"},{"location":"release-and-rollback/#rollback-process","title":"Rollback Process","text":""},{"location":"release-and-rollback/#short-term-rollbacks","title":"Short term rollbacks","text":"

For short term rollbacks: re-run the deployment from a previous stable release. Since the container image is identified by the git commit SHA, re-running a previous deployment will also roll back the container image that\u2019s used to the previous one.

"},{"location":"release-and-rollback/#longer-term-rollbacks","title":"Longer term rollbacks","text":"

Revert commit(s) that need to be rolled back, then follow the regular release process to deploy.

"},{"location":"users/best-practices/","title":"Operator Best Practices","text":"

Check the Best Practices sections of the OLM and SDK projects to learn more about their best practices and common recommendations, suggestions and conventions:

"},{"location":"users/contributing-prerequisites/","title":"Before submitting your Operator","text":"

Important: \"First off, thanks for taking the time to contribute your Operator!\"

"},{"location":"users/contributing-prerequisites/#a-primer-to-openshift-community-operators","title":"A primer to Openshift Community Operators","text":"

This project collects Community Operators that work with OpenShift to be displayed in the embedded OperatorHub. If you are new to Operators, start here.

"},{"location":"users/contributing-prerequisites/#sign-your-work","title":"Sign Your Work","text":"

The contribution process works off standard git Pull Requests. Every PR needs to be signed. The sign-off is a simple line at the end of the explanation for a commit. Your signature certifies that you wrote the patch or otherwise have the right to contribute the material. The rules are pretty simple if you can certify the below (from developercertificate.org):

Developer Certificate of Origin\nVersion 1.1\n\nCopyright (C) 2004, 2006 The Linux Foundation and its contributors.\n1 Letterman Drive\nSuite D4700\nSan Francisco, CA, 94129\n\nEveryone is permitted to copy and distribute verbatim copies of this\nlicense document, but changing it is not allowed.\n\n\nDeveloper's Certificate of Origin 1.1\n\nBy making a contribution to this project, I certify that:\n\n(a) The contribution was created in whole or in part by me and I\n    have the right to submit it under the open source license\n    indicated in the file; or\n\n(b) The contribution is based upon previous work that, to the best\n    of my knowledge, is covered under an appropriate open source\n    license and I have the right under that license to submit that\n    work with modifications, whether created in whole or in part\n    by me, under the same open source license (unless I am\n    permitted to submit under a different license), as indicated\n    in the file; or\n\n(c) The contribution was provided directly to me by some other\n    person who certified (a), (b) or (c) and I have not modified\n    it.\n\n(d) I understand and agree that this project and the contribution\n    are public and that a record of the contribution (including all\n    personal information I submit with it, including my sign-off) is\n    maintained indefinitely and may be redistributed consistent with\n    this project or the open source license(s) involved.\n

Then you just add a line to every git commit message:

Signed-off-by: John Doe <john.doe@example.com>\n

Use your real name (sorry, no pseudonyms or anonymous contributions.)

If you set your user.name and user.email git configs, you can sign your commit automatically with git commit -s.

Note: If your git config information is set properly then viewing the git log information for your commit will look something like this:

Author: John Doe <john.doe@example.com>\nDate:   Mon Oct 21 12:23:17 2019 -0800\n\n    Update README\n\n    Signed-off-by: John Doe <john.doe@example.com>\n

Notice the Author and Signed-off-by lines must match.

"},{"location":"users/contributing-via-pr/","title":"Submitting your Operator via Pull Requests (PR)","text":""},{"location":"users/contributing-via-pr/#overview","title":"Overview","text":"

To submit an operator, one has to complete the following steps:

  1. Fork project based on desired Operator Repository
  2. Place the operator in the target directory. More info
    • operators
  3. Configure ci.yaml file. More info
    • Setup reviewers
    • Enable FBC mode
  4. Make a pull request with a new operator bundle or catalog changes
  5. Verify tests and fix problems, if possible
  6. Ask for help in the PR in case of problems
"},{"location":"users/contributing-via-pr/#pull-request","title":"Pull request","text":"

When a pull request is created, a number of tests are executed via the community hosted pipeline. One can see the results in the comment section of the conversation tab.

"},{"location":"users/contributing-via-pr/#you-are-done","title":"You are done","text":"

The user is done when all tests are green. When the PR is merged, the community release pipeline will be triggered.

"},{"location":"users/contributing-via-pr/#test-results-failed","title":"Test results failed?","text":"

When operator tests are failing, one can see a picture like the following.

In case of failures, please have a look at the logs of specific tests. If an error is not clear to you, please ask in the PR. Maintainers will be happy to help you with it.

"},{"location":"users/contributing-via-pr/#useful-commands-interacting-with-the-pipeline","title":"Useful commands interacting with the pipeline","text":"

You can post the following comment/command:

| Command | Functionality |
| --- | --- |
| /pipeline restart operator-hosted-pipeline | The hosted pipeline will be re-triggered and the PR will be merged if possible. The command only works if a previous pipeline failed. |
| /pipeline restart operator-release-pipeline | The release pipeline will be re-triggered. The command only works if a previous pipeline failed. |
| /test skip {test_case_name} | The test_case_name test will be skipped. Please consider that only a subset of tests (currently only the pruned graph test) can be skipped. |
"},{"location":"users/contributing-where-to/","title":"Where to contribute","text":"

Once you have forked the upstream repo, you will need to add your Operator Bundle to the forked repo. The forked repo will have a directory structure similar to the structure outlined below.

\u251c\u2500\u2500 config.yaml\n\u251c\u2500\u2500 operators\n\u2502   \u2514\u2500\u2500 new-operator\n\u2502       \u251c\u2500\u2500 0.0.102\n\u2502       \u2502   \u251c\u2500\u2500 manifests\n\u2502       \u2502   \u2502   \u251c\u2500\u2500 new-operator.clusterserviceversion.yaml\n\u2502       \u2502   \u2502   \u251c\u2500\u2500 new-operator-controller-manager-metrics-service_v1_service.yaml\n\u2502       \u2502   \u2502   \u251c\u2500\u2500 new-operator-manager-config_v1_configmap.yaml\n\u2502       \u2502   \u2502   \u251c\u2500\u2500 new-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 tools.opdev.io_demoresources.yaml\n\u2502       \u2502   \u251c\u2500\u2500 metadata\n\u2502       \u2502   \u2502   \u2514\u2500\u2500 annotations.yaml\n\u2502       \u2502   \u2514\u2500\u2500 tests\n\u2502       \u2502       \u2514\u2500\u2500 scorecard\n\u2502       \u2502           \u2514\u2500\u2500 config.yaml\n\u2502       \u251c\u2500\u2500 catalog-templates\n\u2502       \u2502   \u251c\u2500\u2500 v4.14.yaml\n\u2502       \u2502   \u251c\u2500\u2500 v4.15.yaml\n\u2502       \u2502   \u2514\u2500\u2500 v4.16.yaml\n\u2502       \u251c\u2500\u2500 ci.yaml\n\u2502       \u2514\u2500\u2500 Makefile\n\u2514\u2500\u2500 README.md\n

Follow the operators directory in the forked repo. Add your Operator Bundle under this operators directory following the example format.

  1. Under the operators directory, create a new directory with the name of your operator.
  2. Inside of this newly created directory add your ci.yaml and set its content based on the doc.
  3. Also, under the new directory create a subdirectory for each version of your Operator.
  4. In each version directory there should be a manifests/ directory containing your OpenShift yaml files, a metadata/ directory containing your annotations.yaml file, and a tests/ directory containing the required config.yaml file for the preflight tests.
  5. Create a catalog-templates/ directory under the operator directory and add a yaml file for each OpenShift version you want to support. The yaml file should contain the catalog template for the operator. More information on how to create the catalog template can be found here.
  6. Download the template Makefile from the operator-pipelines repository and place it in the root of the operator directory (see the command below).
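The template Makefile can be fetched directly; the URL below is the one used in the FBC onboarding guide:

wget https://raw.githubusercontent.com/redhat-openshift-ecosystem/operator-pipelines/main/fbc/Makefile\n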

Note To learn more about preflight tests please follow this link.

For partners and ISVs, certified operators can now be submitted via connect.redhat.com. If you have submitted your Operator there already, please ensure your submission here uses a different package name (refer to the README for more details).

"},{"location":"users/dynamic_checks/","title":"Dynamic checks","text":"

The preflight tests are designed to test and verify the operator bundle content, the format of the operator bundle, and whether the bundle can be installed on an OCP cluster.

The result link for the logs of the preflight test runs will be posted to the PR as shown below.

In case of failures, please have a look at the logs of specific tests. If an error is not clear to you, please ask in the PR. Maintainers will be happy to help you with it.

Once all of the tests have passed successfully, the PR will be merged automatically, provided the conditions checked by the operator-hosted-pipeline are met.

The PR will not merge automatically in the following cases:

If any of the above cases apply, the PR needs to be reviewed by the repository maintainers or authorized reviewers. After the approval, the PR needs to be merged manually. Once the PR is merged, the operator-release-pipeline will be triggered automatically.

NOTE: The operator hosted pipeline run results will be posted in a GitHub PR comment.

"},{"location":"users/fbc_onboarding/","title":"File Based Catalog onboarding","text":"

Note: The File Based Catalog support is now in an alpha phase. We welcome any feedback you have for this new feature.

Operators in certified, marketplace, or community repositories are defined in a declarative way. This means a user provides all necessary information in advance about the operator bundle and how it should be released in a catalog, and OPM automation injects the bundle into the correct place in the upgrade path.

This is, however, a very limited solution that doesn't allow any further modification of upgrade paths after a bundle is already released. Due to this limitation, the concept of FBC (File-Based Catalog) is now available and allows users to modify the operator upgrade path in a separate step without the need to release a new bundle.

To enable FBC for a given operator, the operator owner needs to convert the existing operator into the FBC format.

We want to help with this process, and we have prepared tooling that helps with this transition.

"},{"location":"users/fbc_onboarding/#convert-existing-operator-to-fbc","title":"Convert existing operator to FBC","text":"

As a prerequisite to this process, you need to download a Makefile that automates the migration process.

An initial system requirement is to have the following dependencies installed: - podman - make

# Go to the operator repo directory (certified-operators, marketplace-operators, community-operators-prod)\ncd <operator-repo>/operators/<operator-name>\nwget https://raw.githubusercontent.com/redhat-openshift-ecosystem/operator-pipelines/main/fbc/Makefile\n

Now we can convert the existing operator into FBC. The initial run takes a while because a local cache is generated during the run.

Note: A user executing the conversion script needs to be authenticated to the registries used by the OLM catalog. Use podman login to log in to all registries. The conversion script assumes you have $(XDG_RUNTIME_DIR)/containers/auth.json or ~/.docker/config.json present with valid registry tokens.

To convert an existing operator to the FBC format, you need to execute the following command:

$ make fbc-onboarding\n\n2024-04-24 15:53:05,537 [operator-cert] INFO Generating FBC templates for the following versions: ['4.12', '4.13', '4.14', '4.15', '4.16']\n2024-04-24 15:53:07,632 [operator-cert] INFO Processing catalog: v4.12\n2024-04-24 15:53:07,633 [operator-cert] DEBUG Building cache for registry.redhat.io/redhat/community-operator-index:v4.12\n...\n

[!IMPORTANT] In case an operator isn't shipped to all OCP catalog versions, manually update the OCP_VERSIONS variable in the Makefile and include only the versions supported by the operator.

The Makefile will execute the following steps:

After the script is finished, you should see a template and the generated FBC in the repository.

$ tree operators/aqua\n\noperators/aqua\n\u251c\u2500\u2500 0.0.1\n...\n\u251c\u2500\u2500 catalog-templates\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 v4.12.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 v4.13.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 v4.14.yaml\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 v4.15.yaml\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 v4.16.yaml\n\u251c\u2500\u2500 ci.yaml\n

... and the File-Based Catalog in the catalogs directory

$ tree (repository root)/catalogs\ncatalogs\n\u251c\u2500\u2500 v4.12\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.13\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.14\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.15\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u2514\u2500\u2500 v4.16\n    \u2514\u2500\u2500 aqua\n        \u2514\u2500\u2500 catalog.yaml\n\n
"},{"location":"users/fbc_onboarding/#submit-fbc-changes","title":"Submit FBC changes","text":"

Artifacts generated in the previous step need to be added to git and submitted via a pull request. The operator pipeline validates the content of the catalogs and releases the changes into the OCP catalogs.

$ git add operators/aqua/{catalog-templates,ci.yaml,Makefile}\n\n$ git add catalogs/{v4.12,v4.13,v4.14,v4.15,v4.16}/aqua\n\n$ git commit --signoff -m \"Add FBC resources for aqua operator\"\n
"},{"location":"users/fbc_onboarding/#generating-catalogs-from-templates","title":"Generating catalogs from templates","text":"

Catalog templates are used to simplify a view of a catalog and allow easier manipulation of catalogs. The automated conversion pre-generates a basic template that can be turned into full FBC using the following command:

make catalogs\n

Of course, you can choose any type of template that you prefer by modifying the Makefile target. More information about catalog templates can be found here.

"},{"location":"users/fbc_workflow/","title":"FBC workflow","text":"

If you already have an existing non-FBC operator please continue with the onboarding documentation to convert it to FBC. Once you have converted your operator, or you want to introduce a brand new operator, you can start with the FBC workflow.

"},{"location":"users/fbc_workflow/#fbc-operator-config","title":"FBC operator config","text":"

To indicate that the operator is using the FBC workflow, the operator owner needs to state this fact in the ci.yaml file.

Example of the ci.yaml with FBC config:

---\nfbc:\n  enabled: true\n
"},{"location":"users/fbc_workflow/#fbc-templates","title":"FBC templates","text":"

File-based catalog templates serve as a simplified view of a catalog that can be updated by the user. The OPM currently supports 2 types of templates, and it is up to the user which template the operator will use.

More information about each template can be found at opm doc.

The recommended template from the maintainability point of view is SemVer.

"},{"location":"users/fbc_workflow/#generate-catalogs-using-templates","title":"Generate catalogs using templates","text":"

To generate a final catalog for an operator, a user needs to execute different opm commands based on the template type. We as operator pipeline maintainers want to simplify this process, so we have prepared a Makefile with all the pre-configured targets.

To get the Makefile, follow these steps. (In case you converted an existing operator and followed the onboarding guide, the Makefile should already be in your operator directory and you can skip this step.)

cd <operator-repo>/operators/<operator-name>\nwget https://raw.githubusercontent.com/redhat-openshift-ecosystem/operator-pipelines/main/fbc/Makefile\n

The right place for the Makefile is the operator's root directory:

.\n\u251c\u2500\u2500 0.0.1\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 manifests\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 metadata\n\u251c\u2500\u2500 catalog-templates\n\u251c\u2500\u2500 ci.yaml\n\u2514\u2500\u2500 Makefile\n\n

You can modify the Makefile based on your needs and use it to generate catalogs by running make catalogs.

[!IMPORTANT] In case an operator isn't shipped to all OCP catalog versions, manually update the OCP_VERSIONS variable in the Makefile and include only the versions supported by the operator.

The command uses opm and converts the templates into catalogs. The generated catalogs can be submitted as a PR in GitHub, and once the PR is processed the changes will be released to the OCP index.

$ tree (repository-root)/catalogs\ncatalogs\n\u251c\u2500\u2500 v4.12\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.13\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.14\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u251c\u2500\u2500 v4.15\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 aqua\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 catalog.yaml\n\u2514\u2500\u2500 v4.16\n    \u2514\u2500\u2500 aqua\n        \u2514\u2500\u2500 catalog.yaml\n\n
"},{"location":"users/fbc_workflow/#adding-new-bundle-to-catalog","title":"Adding new bundle to Catalog","text":"

To add a bundle to the catalog, you need to first submit the new version of the operator using the traditional PR workflow. The operator pipeline builds, tests, and releases the bundle into the registry. At this point, the operator is not available in the catalog yet. To add the bundle to the catalog, you need to update the catalog templates, add the bundle pullspec given by the pull request comment, and open a new pull request with the catalog changes.

[!NOTE] Currently the workflow requires a 2-step process to release a new bundle into the catalog. In the first step, the operator bundle is released, and in the second step, the catalog is updated with the new bundle. We are working on a solution to automate this process and make it a single step. However, this will require the usage of the SemVer catalog template. In case you would like to use this feature once available, please consider using the SemVer template.

"},{"location":"users/fbc_workflow/#semver","title":"SemVer","text":"

For example, if I want to add the v1.1.0 bundle into the Fast channel of a specific catalog, I'll add it as shown in the example below:

---\nSchema: olm.semver\nGenerateMajorChannels: true\nGenerateMinorChannels: true\nCandidate:\n  Bundles:\n  - Image: quay.io/foo/olm:testoperator.v0.1.0\n  - Image: quay.io/foo/olm:testoperator.v0.1.1\n  - Image: quay.io/foo/olm:testoperator.v0.1.2\n  - Image: quay.io/foo/olm:testoperator.v0.1.3\n  - Image: quay.io/foo/olm:testoperator.v0.2.0\n  - Image: quay.io/foo/olm:testoperator.v0.2.1\n  - Image: quay.io/foo/olm:testoperator.v0.2.2\n  - Image: quay.io/foo/olm:testoperator.v0.3.0\n  - Image: quay.io/foo/olm:testoperator.v1.0.0\n  - Image: quay.io/foo/olm:testoperator.v1.0.1\n  - Image: quay.io/foo/olm:testoperator.v1.1.0\nFast:\n  Bundles:\n  - Image: quay.io/foo/olm:testoperator.v0.2.1\n  - Image: quay.io/foo/olm:testoperator.v0.2.2\n  - Image: quay.io/foo/olm:testoperator.v0.3.0\n  - Image: quay.io/foo/olm:testoperator.v1.0.0\n  - Image: quay.io/foo/olm:testoperator.v1.1.0 # <-- Add new bundle into fast channel\nStable:\n  Bundles:\n  - Image: quay.io/foo/olm:testoperator.v1.0.0\n

Also see the opm doc for an automatable version of this step.
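For reference, a sketch of what make catalogs runs under the hood for a SemVer template; the exact opm subcommand layout and flags may differ between opm versions, so treat this as an assumption rather than the pipeline's verbatim invocation:

# render a SemVer template into a full file-based catalog\nopm alpha render-template semver -o yaml catalog-templates/v4.16.yaml \\\n  > <repository-root>/catalogs/v4.16/<operator-name>/catalog.yaml\n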

"},{"location":"users/fbc_workflow/#basic","title":"Basic","text":"

For example, if I want to add the v0.2.0 bundle into the stable channel of a specific catalog, I'll add it as shown in the example below.

  1. Add a new olm.bundle entry with bundle pullspec
  2. Add bundle into the stable channel
---\nschema: olm.template.basic\nentries:\n  - schema: olm.package\n    name: example-operator\n    defaultChannel: stable\n\n  - schema: olm.channel\n    package: example-operator\n    name: stable\n    entries:\n      - name: example-operator.v0.1.0\n      - name: example-operator.v0.2.0 # <-- Add bundle into channel\n        replaces: example-operator.v0.1.0\n\n  - schema: olm.bundle\n    image: docker.io/example/example-operator-bundle:0.1.0\n\n  - schema: olm.bundle # <-- Add new bundle entry\n    image: docker.io/example/example-operator-bundle:0.2.0\n

Also see the opm doc for an automatable version of this step.
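Analogously to the SemVer case, a hedged sketch for rendering a basic template (again, the subcommand layout may vary between opm versions; make catalogs wraps this for you):

# render a basic template into a full file-based catalog\nopm alpha render-template basic -o yaml catalog-templates/v4.16.yaml \\\n  > <repository-root>/catalogs/v4.16/<operator-name>/catalog.yaml\n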

"},{"location":"users/fbc_workflow/#updating-existing-catalogs","title":"Updating existing catalogs","text":"

A great benefit of FBC is that users can update operator update graphs independently of operator releases. This allows any post-release modification of the catalogs. If you want to change the order of updates, remove an invalid bundle, or make any other modification, you are free to do that.

After updating the catalog templates, don't forget to run make catalogs to generate the catalogs from the templates, and submit the resulting catalogs using the PR workflow.
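A minimal sketch of that flow from the repository root, assuming the layout shown earlier (operator under operators/<operator-name>, catalogs directory at the root):

# edit operators/<operator-name>/catalog-templates/*.yaml first\nmake -C operators/<operator-name> catalogs\ngit add operators/<operator-name>/catalog-templates catalogs\ngit commit --signoff -m \"Update <operator-name> update graph\"\n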

"},{"location":"users/isv_pipelines/","title":"ISV operators","text":""},{"location":"users/isv_pipelines/#ciyaml-config","title":"ci.yaml config","text":"

Each operator submitted as a certified or marketplace operator needs to contain a ci.yaml config file that is used during the certification.

The correct location of this file is operators/operator-XYZ/ci.yaml, and it needs to contain at least the following values:

---\n# The ID of certification component as stated in Red Hat Connect\ncert_project_id: <certification project id>\n\n

Another optional value is merge: false, which prevents a pull request with an operator from being merged automatically even if all tests pass. The default behavior is to merge the PR automatically.
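For example, a ci.yaml that opts out of auto-merge could look like this (the cert_project_id value is a placeholder):

---\n# The ID of certification component as stated in Red Hat Connect\ncert_project_id: <certification project id>\n# do not merge the PR automatically even if all tests pass\nmerge: false\n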

"},{"location":"users/operator-ci-yaml/","title":"Operator Publishing / Review settings","text":"

Each operator might have a ci.yaml configuration file present in its operator directory (for example operators/aqua/ci.yaml). This configuration file is used by the pipeline automation to control how the operator will be published and reviewed.

The content of the file depends on the operator source type. There is a different set of options for community operators and certified operators.

Note: One can create or modify the ci.yaml file together with a new operator version. This operation can be done in the same PR as other operator changes.

"},{"location":"users/operator-ci-yaml/#reviewers","title":"Reviewers","text":"

Note: This option is only valid for community operators. The certified or marketplace reviewers are configured using Red Hat Connect.

If you want to accelerate publishing your changes, consider adding yourself and others you trust to the reviewers list. If the author of a PR is in that list, the changes they made will be taken as authorized changes. This is the indicator for our pipeline that the PR is ready to be merged automatically.

Note: If the author of a PR is not in the reviewers list, or not in the ci.yaml on the main branch, the PR will not be merged automatically.

Note: If the author of a PR is not in the reviewers list and reviewers are present in the ci.yaml file, all reviewers will be mentioned in a PR comment to check the upcoming changes.

For this to work, it is required to set up the reviewers in the ci.yaml file. It can be done by adding a reviewers tag with a list of GitHub usernames. For example:

"},{"location":"users/operator-ci-yaml/#example","title":"Example","text":"
$ cat <path-to-operator>/ci.yaml\n---\nreviewers:\n  - user1\n  - user2\n\n
"},{"location":"users/operator-ci-yaml/#fbc-mode","title":"FBC mode","text":""},{"location":"users/operator-ci-yaml/#fbcenabled","title":"fbc.enabled","text":"

The fbc.enabled flag enables the File-Based Catalog feature. It is highly recommended to use the FBC mode in order to have better control over the operator's catalog.

"},{"location":"users/operator-ci-yaml/#fbcversion_promotion_strategy","title":"fbc.version_promotion_strategy","text":"

The fbc.version_promotion_strategy option defines the strategy for promoting the operator into the next OCP version. When a new OCP version becomes available, an automated process will promote the operator from version N to version N+1. The fbc.version_promotion_strategy option can have the following values:

"},{"location":"users/operator-ci-yaml/#example_1","title":"Example","text":"
---\nfbc:\n    enabled: true\n    version_promotion_strategy: never\n
"},{"location":"users/operator-ci-yaml/#operator-versioning","title":"Operator versioning","text":"

NOTE: This option is only available for non-FBC operators, where the user doesn't have direct control over the catalog.

Operators have multiple versions. When a new version is released, OLM can update an operator automatically. There are 2 update strategies possible, which are defined in ci.yaml at the operator top level.

"},{"location":"users/operator-ci-yaml/#replaces-mode","title":"replaces-mode","text":"

Every next version defines which version will be replaced using the replaces key in the CSV file. This means there is a possibility to omit some versions from the update graph. The best practice is then to put them in a separate channel.

"},{"location":"users/operator-ci-yaml/#semver-mode","title":"semver-mode","text":"

Every version will be replaced by the next higher version according to semantic versioning.

"},{"location":"users/operator-ci-yaml/#restrictions","title":"Restrictions","text":"

A contributor can decide if semver-mode or replaces-mode will be used for a specific operator. By default, replaces-mode is activated when the ci.yaml file is present and contains updateGraph: replaces-mode, or when the updateGraph key is missing. When a contributor decides to switch and use semver-mode, it has to be specified in the ci.yaml file.

"},{"location":"users/operator-ci-yaml/#example_2","title":"Example","text":"
$ cat <path-to-operator>/ci.yaml\n---\n# Use `replaces-mode` or `semver-mode`.\nupdateGraph: replaces-mode\n
"},{"location":"users/operator-ci-yaml/#certification-project","title":"Certification project","text":""},{"location":"users/operator-ci-yaml/#cert_project_id","title":"cert_project_id","text":"

The cert_project_id option is required for certified and marketplace operators. It is used to link the operator to the certification project in Red Hat Connect.

"},{"location":"users/operator-ci-yaml/#kubernetes-max-version-in-csv","title":"Kubernetes max version in CSV","text":"

Starting from Kubernetes 1.22, some old APIs were deprecated (see the Deprecated API Migration Guide from v1.22). Users can set operatorhub.io/ui-metadata-max-k8s-version: \"<version>\" in their CSV file to declare the operator's maximum supported Kubernetes version. The following example informs that the operator can handle 1.21 as the maximum Kubernetes version:

$ cat <path-to-operators>/<name>/<version>/.../my.clusterserviceversion.yaml\nmetadata:\n  annotations:\n    operatorhub.io/ui-metadata-max-k8s-version: \"1.21\"\n
"},{"location":"users/packaging-required-criteria-ocp/","title":"OKD/OpenShift Catalogs criteria and options","text":""},{"location":"users/packaging-required-criteria-ocp/#okdopenshift-catalogs-criteria-and-options","title":"OKD/OpenShift Catalogs criteria and options","text":""},{"location":"users/packaging-required-criteria-ocp/#overview","title":"Overview","text":"

To distribute on OpenShift Catalogs, you will need to comply with the same standard criteria defined for OperatorHub.io (see Common recommendations and suggestions). Then, additionally, you have some requirements and options which follow.

IMPORTANT Kubernetes has been deprecating API(s) which will be removed and no longer available in 1.22 and in OpenShift version 4.9. Note that your project will be unable to use them on OCP 4.9/K8s 1.22; it is therefore strongly recommended to check the Deprecated API Migration Guide from v1.22 and ensure that your projects have them migrated and are not using any deprecated API.

Note that an operator using them will not work in 1.22 or in OpenShift version 4.9. OpenShift 4.8 introduces two new alerts that fire when an API that will be removed in the next release is in use. Check the event alerts of your Operators running on 4.8 and ensure that you do not find any warning about these API(s) still being used by them.

Also, to prevent workflow issues, users will need to have a version of your operator compatible with 4.9 installed in their OCP cluster before they try to upgrade their cluster from any previous version to 4.9 or higher. In this way, it is recommended to ensure that your operators are no longer using these API(s) versions. However, if you still need to publish operator bundles with any of these API(s) for use on earlier k8s/OCP versions, ensure that the operator bundle is configured accordingly.

Taking the actions below will help prevent users from installing versions of your operator on an incompatible version of OCP, and also prevent them from upgrading to a newer version of OCP that would be incompatible with the version of your operator that is currently installed on their cluster.

"},{"location":"users/packaging-required-criteria-ocp/#configure-the-max-openshift-version-compatible","title":"Configure the max OpenShift Version compatible","text":"

Use the olm.maxOpenShiftVersion property in the CSV to prevent the user from upgrading their OCP cluster before upgrading the installed operator to a version which is compatible with the newer cluster:

apiVersion: operators.coreos.com/v1alpha1\nkind: ClusterServiceVersion\nmetadata:\n  annotations:\n    # Prevent cluster upgrades to OpenShift Version 4.9 when this\n    # bundle is installed on the cluster\n    \"olm.properties\": '[{\"type\": \"olm.maxOpenShiftVersion\", \"value\": \"4.8\"}]'\n

The CSV annotation will eventually prevent the user from upgrading their OCP cluster before they have installed a version of your operator which is compatible with 4.9. However, note that it is important to make these changes now, as users running workloads with deprecated API(s) that are looking to upgrade to OCP 4.9 will need to be running operators that have this annotation set, in order to block the cluster upgrade and avoid a potentially adverse impact on their crucial workloads.

This option is useful when you know that the current version of your project will not work well on some specific Openshift version.

"},{"location":"users/packaging-required-criteria-ocp/#configure-the-openshift-distribution","title":"Configure the Openshift distribution","text":"

Use the annotation com.redhat.openshift.versions in bundle/metadata/annotations.yaml to ensure that the index image will be generated with its OCP label, to prevent the bundle from being distributed to 4.9:

com.redhat.openshift.versions: \"v4.6-v4.8\"\n

This option is also useful when you know that the current version of your project will not work well on some specific OpenShift version. By using it, you define the OpenShift versions where the Operator should be distributed, and the Operator will not appear in a catalog of an OpenShift version that is outside of the range. You must use it if you are distributing a solution that contains deprecated API(s) which will no longer be available in later versions. For more information see Managing OpenShift Versions.
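For context, the annotation sits alongside the other bundle annotations; an illustrative metadata/annotations.yaml fragment (the package name is hypothetical) might look like:

annotations:\n  operators.operatorframework.io.bundle.package.v1: my-operator\n  # restrict distribution to the OCP versions where the bundle works\n  com.redhat.openshift.versions: \"v4.6-v4.8\"\n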

"},{"location":"users/packaging-required-criteria-ocp/#validate-the-bundle-with-the-common-criteria-to-distribute-via-olm-with-sdk","title":"Validate the bundle with the common criteria to distribute via OLM with SDK","text":"

Also, you can check the bundle via operator-sdk bundle validate against the Community Operators validator suite and the K8s version that you intend to publish for:

operator-sdk bundle validate ./bundle --select-optional suite=operatorframework --optional-values=k8s-version=1.22\n

NOTE: The validators only check the manifests which are shipped in the bundle. They are unable to ensure that the project's code does not use the Deprecated/Removed API(s) in 1.22, and/or that it does not have as a dependency another operator that uses them.

"},{"location":"users/packaging-required-criteria-ocp/#validate-the-bundle-with-the-specific-criteria-to-distribute-in-openshift-catalogs","title":"Validate the bundle with the specific criteria to distribute in Openshift catalogs","text":"

Pre-requirement: Download the binary. You might want to keep it in your $GOPATH/bin.

Then, we can use the experimental OpenShift OLM Catalog Validator to check your Operator bundle. In this case, we need to provide the bundle and the annotations.yaml file paths:

$ ocp-olm-catalog-validator my-bundle-path/bundle  --optional-values=\"file=bundle-path/bundle/metadata/annotations.yaml\"\n

Following is an example of an Operator bundle that uses the removed APIs in 1.22 and is not configured accordingly:

$ ocp-olm-catalog-validator bundle/ --optional-values=\"file=bundle/metadata/annotations.yaml\"\nWARN[0000] Warning: Value memcached-operator.v0.0.1: this bundle is using APIs which were deprecated and removed in v1.22. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22. Migrate the API(s) for CRD: ([\"memcacheds.cache.example.com\"])\nERRO[0000] Error: Value : (memcached-operator.v0.0.1) olm.maxOpenShiftVersion csv.Annotations not specified with an OCP version lower than 4.9. This annotation is required to prevent the user from upgrading their OCP cluster before they have installed a version of their operator which is compatible with 4.9. For further information see https://docs.openshift.com/container-platform/4.8/operators/operator_sdk/osdk-working-bundle-images.html#osdk-control-compat_osdk-working-bundle-images\nERRO[0000] Error: Value : (memcached-operator.v0.0.1) this bundle is using APIs which were deprecated and removed in v1.22. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22. Migrate the APIs for this bundle is using APIs which were deprecated and removed in v1.22. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-22. Migrate the API(s) for CRD: ([\"memcacheds.cache.example.com\"]) or provide compatible version(s) via the labels. (e.g. LABEL com.redhat.openshift.versions='4.6-4.8')\n
"},{"location":"users/pipelines_overview/","title":"Overview","text":"

Operator pipelines is a Tekton-based solution that serves as a CI/CD platform for Operators targeting the Red Hat OpenShift platform. The CI/CD process makes sure all operators available in OCP meet certain standards.

The series of pipelines validates the operator, tests it, and makes it available to all OCP users. In combination with a GitHub repository, users or partners are able to submit a new operator in a pull request workflow and release it.

"},{"location":"users/pipelines_overview/#operator-repositories","title":"Operator repositories","text":"

The OpenShift platform includes by default several catalogs from which users can install an operator. This CI/CD solution is aimed at Red Hat partners and community members. Thus there are 3 separate repositories where operator owners can submit operators. Each repository serves a different purpose and has its own specific rules, but shares the common CI/CD solution.

"},{"location":"users/pipelines_overview/#sequence-diagram","title":"Sequence diagram","text":"
sequenceDiagram\n    Operator Owner->>Github: Submits a PR with operator\n    Github->>Pipeline: Triggers a hosted pipeline\n    Pipeline->>Pipeline: Execute tests and validation\n    Pipeline->>Github: Merge PR\n    Github->>Pipeline: Triggers a release pipeline\n    Pipeline->>Pipeline: Release operator to OCP catalog\n    Pipeline->>Github: Notify user in PR\n\n
"},{"location":"users/static_checks/","title":"Static check","text":"

The operator pipelines want to make sure that an operator released to the OpenShift operator catalog follows best practices and meets certain standards that we expect from an operator.

In order to meet these standards, a series of static checks has been created for each stream of operators. The static checks are executed for each operator submission and report warnings or failures with a description of what is wrong and a suggestion on how to fix it.

Here is an example of how the result will look in the PR:

"},{"location":"users/static_checks/#isv-tests","title":"ISV tests","text":""},{"location":"users/static_checks/#check_pruned_graph-warning","title":"check_pruned_graph (Warning)","text":"

This test makes sure the operator update graph is not accidentally pruned by introducing an operator configuration that prunes the graph as an unwanted side effect.

The unintentional graph pruning happens when the olm.skipRange annotation is set but the replaces field is not set in the CSV. Such a definition may lead to unintentional pruning of the update graph.
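As an illustration, a CSV fragment like the following (the operator name is hypothetical) would trip the check; setting the commented replaces field keeps the older bundles reachable:

metadata:\n  annotations:\n    olm.skipRange: '>=1.0.0 <1.2.0'\nspec:\n  version: 1.2.0\n  # replaces: my-operator.v1.1.0  # set this to avoid pruning the update graph\n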

If this is intentional, you can skip the check by adding a /test skip check_pruned_graph comment to a pull request.

"},{"location":"users/static_checks/#check_marketplace_annotation","title":"check_marketplace_annotation","text":"

Marketplace operators require additional metadata in order to be properly displayed in the Marketplace ecosystem.

There are 2 required fields in the operator clusterserviceversion that need to be filled in and need to have a specific value:

Where {annotation_package} matches operators.operatorframework.io.bundle.package.v1 from the metadata/annotations.yaml file.

The test is only executed for operators submitted inside the Red Hat marketplace repo.

"},{"location":"users/static_checks/#community-tests","title":"Community tests","text":""},{"location":"users/static_checks/#check_osdk_bundle_validate_operatorhub","title":"check_osdk_bundle_validate_operatorhub","text":"

The test is based on operator-sdk bundle validate command with name=operatorhub test suite (link).

"},{"location":"users/static_checks/#check_osdk_bundle_validate_operator_framework","title":"check_osdk_bundle_validate_operator_framework","text":"

The test is based on operator-sdk bundle validate command with suite=operatorframework test suite (link).

"},{"location":"users/static_checks/#check_required_fields","title":"check_required_fields","text":"Field name Validation Description spec.displayName .{3,50} A string with 3 - 50 characters spec.description .{20,} A bundle description with at least 20 characters spec.icon media A valid base64 content with a supported media type ({\"base64data\": <b64 content>, \"mediatype\": enum[\"image/png\", \"image/jpeg\", \"image/gif\", \"image/svg+xml\"]}) spec.version SemVer Valid semantic version spec.maintainers At least 1 maintainer contacts. Example: {\"name\": \"User 123\", \"email\": \"user@redhat.com\"} spec.provider.name .{3,} A string with at least 3 characters spec.links At least 1 link. Example: {\"name\": \"Documentation\", \"url\": \"https://redhat.com\"}"},{"location":"users/static_checks/#check_dangling_bundles","title":"check_dangling_bundles","text":"

The test prevents releasing an operator while leaving any previous bundle dangling. A dangling bundle is a bundle that is not referenced by any other bundle and is not the HEAD of a channel.

In the example below, the v1.3 bundle is dangling.

graph LR\n    A(v1.0) -->B(v1.1)\n    B --> C(v1.2)\n    B --> E(v1.3)\n    C --> D(v1.4 - HEAD)\n
"},{"location":"users/static_checks/#check_api_version_constraints","title":"check_api_version_constraints","text":"

The test verifies the consistency between the com.redhat.openshift.versions annotation value and spec.minKubeVersion. In case an operator targets a specific version of OpenShift and at the same time sets a minimal kube version that is higher than the one supported by that OCP version, the test raises an error.

Example:

The following combination is not valid, since OCP 4.9 is based on Kubernetes 1.22.

spec.minKubeVersion: 1.23\n\ncom.redhat.openshift.versions: 4.9-4.15\n
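For contrast, a consistent combination pairs OCP 4.9, which is based on Kubernetes 1.22, with a matching minimum:

spec.minKubeVersion: 1.22\n\ncom.redhat.openshift.versions: 4.9-4.15\n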
"},{"location":"users/static_checks/#check_upgrade_graph_loop","title":"check_upgrade_graph_loop","text":"

The purpose of this test is to check whether there are any loops in the upgrade graph.

As shown in the graph below, the edge between v1.2 and v1.0 introduces a loop in the graph.

graph LR\n    A(v1.0) -->B(v1.1)\n    B --> C(v1.2)\n    C --> D(v1.3)\n    C --> A\n
"},{"location":"users/static_checks/#check_replaces_availability","title":"check_replaces_availability","text":"

The test aims to verify whether a bundle referenced by the replaces value is available in all catalog versions where the given bundle is going to be released. The list of catalog versions is determined by the com.redhat.openshift.versions annotation, if present. If the annotation is not present, the bundle targets all supported OCP versions.

To fix the issue, either change the range of versions where the bundle is going to be released by updating the annotation, or change the replaces value.

"},{"location":"users/static_checks/#check_operator_name_unique","title":"check_operator_name_unique","text":"

The test makes sure the operator is consistent in using operator names as defined in the clusterserviceversion. It is not allowed to have multiple bundle names for a single operator. The source of the value is csv.metadata.name.

"},{"location":"users/static_checks/#check_ci_upgrade_graph","title":"check_ci_upgrade_graph","text":"

The test verifies the content of the ci.yaml file and makes sure only allowed values are used for the updateGraph key. The currently supported values are: [\"replaces-mode\", \"semver-mode\"].

"},{"location":"users/static_checks/#common-tests","title":"Common tests","text":""},{"location":"users/static_checks/#check_operator_name-warning","title":"check_operator_name (Warning)","text":"

The test verifies the consistency between the operator name annotation and the operator name in the CSV definition. The sources of these values are:

"},{"location":"users/static_checks/#running-tests-locally","title":"Running tests locally","text":"
# Install the package with static checks\n$ pip install git+https://github.com/redhat-openshift-ecosystem/operator-pipelines.git\n\n# Execute a test suite\n# In this example tests are executed for aqua operator with 2022.4.15 version\n$ python static-tests \\\n    --repo-path ~/community-operators-prod \\\n    --suites operatorcert.static_tests.community \\\n    --output-file /tmp/operator-test.json \\\n    --verbose \\\n    aqua 2022.4.15\n\n
$ cat /tmp/operator-test.json | jq\n\n{\n  \"passed\": false,\n  \"outputs\": [\n    {\n      \"type\": \"error\",\n      \"message\": \"Channel 2022.4.0 has dangling bundles: {Bundle(aqua/2022.4.14)}\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_dangling_bundles\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aquasecurity.github.io/v1alpha1, Kind=ClusterConfigAuditReport: provided API should have an example annotation\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aquasecurity.github.io/v1alpha1, Kind=AquaStarboard: provided API should have an example annotation\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aquasecurity.github.io/v1alpha1, Kind=ConfigAuditReport: provided API should have an example annotation\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value : (aqua-operator.v2022.4.15) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value : (aqua-operator.v2022.4.15) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aqua-operator.v2022.4.15: this bundle is using APIs which were deprecated and removed in v1.25. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-25. Migrate the API(s) for podsecuritypolicies: ([\\\"ClusterServiceVersion.Spec.InstallStrategy.StrategySpec.ClusterPermissions[2].Rules[7]\\\"])\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aqua-operator.v2022.4.15: this bundle is using APIs which were deprecated and removed in v1.25. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-25. Migrate the API(s) for podsecuritypolicies: ([\\\"ClusterServiceVersion.Spec.InstallStrategy.StrategySpec.ClusterPermissions[2].Rules[7]\\\"])\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aqua-operator.v2022.4.15: unable to find the resource requests for the container: (aqua-operator). 
It is recommended to ensure the resource request for CPU and Memory. Be aware that for some clusters configurations it is required to specify requests or limits for those values. Otherwise, the system or quota may reject Pod creation. More info: https://master.sdk.operatorframework.io/docs/best-practices/managing-resources/\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operator_framework\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aquasecurity.github.io/v1alpha1, Kind=ConfigAuditReport: provided API should have an example annotation\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operatorhub\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value : (aqua-operator.v2022.4.15) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operatorhub\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value : (aqua-operator.v2022.4.15) csv.Spec.minKubeVersion is not informed. It is recommended you provide this information. Otherwise, it would mean that your operator project can be distributed and installed in any cluster version available, which is not necessarily the case for all projects.\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operatorhub\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value aqua-operator.v2022.4.15: this bundle is using APIs which were deprecated and removed in v1.25. More info: https://kubernetes.io/docs/reference/using-api/deprecation-guide/#v1-25. Migrate the API(s) for podsecuritypolicies: ([\\\"ClusterServiceVersion.Spec.InstallStrategy.StrategySpec.ClusterPermissions[2].Rules[7]\\\"])\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operatorhub\"\n    },\n    {\n      \"type\": \"warning\",\n      \"message\": \"Warning: Value : The \\\"operatorhub\\\" validator is deprecated; for equivalent validation use \\\"operatorhub/v2\\\", \\\"standardcapabilities\\\" and \\\"standardcategories\\\" validators\",\n      \"test_suite\": \"operatorcert.static_tests.community\",\n      \"check\": \"check_osdk_bundle_validate_operatorhub\"\n    }\n  ]\n}\n\n
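In the report above, passed is false because at least one output has type error; the warning entries are informational and do not block the release. To list only the blocking errors, the report can be filtered with jq (a minimal sketch, assuming jq is available):\n\n$ jq '.outputs[] | select(.type == \"error\")' /tmp/operator-test.json\n\n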
"}]} \ No newline at end of file diff --git a/users/operator-ci-yaml/index.html b/users/operator-ci-yaml/index.html index e5f9a2d1..57344a45 100644 --- a/users/operator-ci-yaml/index.html +++ b/users/operator-ci-yaml/index.html @@ -524,6 +524,24 @@ + + +
  • + + + Certification project + + + + +
  • @@ -969,7 +1011,8 @@

    Operator Publishing / Review settings

    -

    Each operator might have ci.yaml configuration file to be present in an operator directory (for example operators/aqua/ci.yaml). This configuration file is used by community-operators pipeline to setup various features like reviewers, fbc mode or operator versioning.

    +

    Each operator may have a ci.yaml configuration file present in its operator directory (for example operators/aqua/ci.yaml). This configuration file is used by the pipeline automation to control how the operator is published and reviewed.

    +

    The content of the file depends on the operator source type. There is a different set of options for community operators and certified operators.

    Note: One can create or modify the ci.yaml file when adding a new operator version. This can be done in the same PR as other operator changes.

    @@ -996,11 +1039,15 @@

    Example

    FBC mode

    +

    fbc.enabled

    The fbc.enabled flag enables the File-Based Catalog (FBC) feature. Using FBC mode is highly recommended, as it gives better control over the operator's catalog.
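    As an illustration, a minimal ci.yaml fragment that turns the feature on could look like this (a sketch; only the fbc.enabled key is documented here, and real files may carry further options):

    ---
    fbc:
      enabled: true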

    -

    The fbc.version_promotion_strategy option defines the strategy for promoting the operator into a next OCP version. When a new OCP version becomes available an automated process will promote the operator from a version N to a version N+1. The fbc.version_promotion_strategy option can have the following values: -- never - the operator will not be promoted to the next OCP version automatically -- always - the operator will be promoted to the next OCP version automatically -- review-needed - the operator will be promoted to the next OCP version automatically, but the PR will be created and the reviewers will be asked to review the changes

    +

    fbc.version_promotion_strategy

    +

    The fbc.version_promotion_strategy option defines the strategy for promoting the operator to the next OCP version. When a new OCP version becomes available, an automated process promotes the operator from version N to version N+1. The fbc.version_promotion_strategy option can have the following values:

    +
      +
    • never - the operator will not be promoted to the next OCP version automatically
    • +
    • always - the operator will be promoted to the next OCP version automatically
    • +
    • review-needed - the operator will be promoted to the next OCP version automatically, but a PR will be created and reviewers will be asked to review the changes
    • +

    Example

    ---
     fbc:
    @@ -1024,6 +1071,9 @@ 

    Example

    # Use `replaces-mode` or `semver-mode`.
    updateGraph: replaces-mode
    +

    Certification project

    +

    cert_project_id

    +

    The cert_project_id option is required for certified and marketplace operators. It is used to link the operator to the certification project in Red Hat Connect.
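    For illustration, the option sits at the top level of ci.yaml (a sketch; the ID below is a made-up placeholder, and the real value comes from your certification project in Red Hat Connect):

    ---
    # Placeholder ID; replace with the ID of your Red Hat Connect project
    cert_project_id: 5e9873ac12345678abcdef00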

    Kubernetes max version in CSV

    Starting from Kubernetes 1.22 some old APIs were deprecated (see the Deprecated API Migration Guide for v1.22). Users can set operatorhub.io/ui-metadata-max-k8s-version: "<version>" in their CSV file to declare the operator's maximum supported Kubernetes version. The following example declares that the operator supports 1.21 as the maximum Kubernetes version:

    $ cat <path-to-operators>/<name>/<version>/.../my.clusterserviceversion.yaml
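    A sketch of what the relevant CSV fragment could look like, assuming the annotation lives under metadata.annotations (the surrounding keys are standard ClusterServiceVersion fields):

    apiVersion: operators.coreos.com/v1alpha1
    kind: ClusterServiceVersion
    metadata:
      annotations:
        # Declares 1.21 as the maximum supported Kubernetes version
        operatorhub.io/ui-metadata-max-k8s-version: "1.21"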