diff --git a/.github/workflows/markdown-link-check-pr.yaml b/.github/workflows/markdown-link-check-pr.yaml
new file mode 100644
index 000000000..33b020562
--- /dev/null
+++ b/.github/workflows/markdown-link-check-pr.yaml
@@ -0,0 +1,38 @@
+name: PR check Markdown links
+
+# Only run this workflow on PRs
+on:
+  pull_request:
+    types: [opened, synchronize, reopened, ready_for_review]
+
+# Disable permissions for all available scopes for the GITHUB_TOKEN
+# except for the metadata scope.
+permissions: {}
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+jobs:
+  pr-markdown-link-check:
+    # Only run the job on non-draft PRs
+    if: "github.event.pull_request.draft == false"
+    name: Check PR Markdown links
+    runs-on: ubuntu-latest
+    steps:
+      - name: Check out code under the $GITHUB_WORKSPACE directory
+        uses: actions/checkout@v3
+      - name: Check markdown links
+        uses: gaurav-nelson/github-action-markdown-link-check@1.0.15
+        with:
+          use-quiet-mode: 'no'
+          use-verbose-mode: 'yes'
+          config-file: './.markdown-link-check/markdown-link-check-config.json'
+          check-modified-files-only: 'no'
+          file-extension: '.md'
+          # Unfortunately there's currently no way to ignore specific
+          # directories/paths in the GitHub action, so we explicitly
+          # list the directories/paths that we want to check in order
+          # to avoid checking undesired ones
+          folder-path: './docs, ./pkg, ./templates, ./.github, ./cmd, ./internal, ./openapi, ./docker, ./scripts'
+          file-path: './README.md, ./CONTRIBUTING.md'
diff --git a/.github/workflows/markdown-link-check-weekly.yaml b/.github/workflows/markdown-link-check-weekly.yaml
new file mode 100644
index 000000000..3638b876f
--- /dev/null
+++ b/.github/workflows/markdown-link-check-weekly.yaml
@@ -0,0 +1,32 @@
+name: Weekly check Markdown links
+
+# Run this workflow every Monday at 07:00 UTC
+on:
+  schedule:
+    - cron: "0 7 * * 1"
+
+# Disable permissions for all available scopes for the GITHUB_TOKEN
+# except for the metadata scope.
+permissions: {}
+
+jobs:
+  weekly-markdown-link-check:
+    name: Weekly Check Markdown links
+    runs-on: ubuntu-latest
+    steps:
+      - name: Check out code under the $GITHUB_WORKSPACE directory
+        uses: actions/checkout@v3
+      - name: Check markdown links
+        uses: gaurav-nelson/github-action-markdown-link-check@1.0.15
+        with:
+          use-quiet-mode: 'no'
+          use-verbose-mode: 'yes'
+          config-file: './.markdown-link-check/markdown-link-check-config.json'
+          check-modified-files-only: 'no'
+          file-extension: '.md'
+          # Unfortunately there's currently no way to ignore specific
+          # directories/paths in the GitHub action, so we explicitly
+          # list the directories/paths that we want to check in order
+          # to avoid checking undesired ones
+          folder-path: './docs, ./pkg, ./templates, ./.github, ./cmd, ./internal, ./openapi, ./docker, ./scripts'
+          file-path: './README.md, ./CONTRIBUTING.md'
diff --git a/.markdown-link-check/markdown-link-check-config.json b/.markdown-link-check/markdown-link-check-config.json
new file mode 100644
index 000000000..6691d5eb5
--- /dev/null
+++ b/.markdown-link-check/markdown-link-check-config.json
@@ -0,0 +1,10 @@
+{
+  "ignorePatterns": [
+    {
+      "pattern": "^https://gitlab.cee.redhat.com"
+    },
+    {
+      "pattern": "^https://github.com/bf2fc6cc711aee1a0c2a/observability-resources-mk"
+    }
+  ]
+}
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index dc936d769..556fd4294 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -26,7 +26,7 @@ file in the root directory of the repository.
 * Changes have been verified by one additional reviewer against:
   * each required environment
   * each supported upgrade path
-* If the changes could have an impact on the clients (either UI or CLI), a JIRA should be created for making the required changes on the client side and acknowledged by one of the client side team members. 
+* If the changes could have an impact on the clients (either UI or CLI), a JIRA should be created for making the required changes on the client side and acknowledged by one of the client side team members.
 
 * PR has been merged
 
@@ -55,7 +55,7 @@ $GOPATH
 ```
 
 ## Set up Git Hooks
-Run the following command to set up git hooks for the project. 
+Run the following command to set up git hooks for the project.
 
 ```
 make setup/git/hooks
@@ -87,7 +87,10 @@ Set the following configuration in your **Launch.json** file.
 }
 ```
 ## Modifying the API definition
-The services' OpenAPI specification is located in `openapi/kas-fleet-manager.yaml`. It can be modified using Apicurio Studio, Swagger or manually. The OpenAPI files follows the [RHOAS API standards](https://api.appservices.tech/docs/api-standards.html), each modification has to adhere to it.
+The services' OpenAPI specification is located in `openapi/kas-fleet-manager.yaml`.
+It can be modified using Apicurio Studio, Swagger, or manually. The OpenAPI
+files follow the [RHOAS API standards](https://github.com/redhat-developer/app-services-api-guidelines/blob/main/docs/api-standards.md);
+each modification has to adhere to them.
 
 Once you've made your changes, the second step is to validate it:
 
@@ -101,16 +104,16 @@ Once the schema is valid, the remaining step is to generate the openapi modules
 make openapi/generate
 ```
 ## Adding a new ConfigModule
-See the [Adding a new Config Module](./docs/adding-a-new-ConfigModule.md) documentation for more information.
+See the [Adding a new Config Module](docs/adding-a-new-config-module.md) documentation for more information.
 
 ## Adding a new endpoint
-See the [adding-a-new-endpoint](./docs/adding-a-new-endpoint.md) documentation.
+See the [adding-a-new-endpoint](docs/adding-a-new-endpoint.md) documentation.
 
 ## Adding New Serve Command Flags
-See the [Adding Flags to KAS Fleet Manager](./docs/adding-new-flags.md) documentation for more information.
+See the [Adding Flags to KAS Fleet Manager](docs/adding-new-flags.md) documentation for more information.
 
 ## Testing
-See the [automated testing](./docs/automated-testing.md) documentation.
+See the [automated testing](docs/automated-testing.md) documentation.
 
 ## Logging Standards & Best Practices
 * Log only actionable information, which will be read by a human or a machine for auditing or debugging purposes
@@ -131,7 +134,7 @@ Log to this level any error based information that might want to be brought to s
 
 #### Error
 Log to this level any error that is fatal to the given transaction and affects expected user operation. This may or may not include failed connections, missing expected data, or other unrecoverable outcomes.
-Error handling should be implemented by following these best practices as laid out in [this guide](docs/error-handing.md).
+Error handling should be implemented by following these best practices as laid out in [this guide](docs/best-practices/error-handling.md).
 
 #### Fatal
 Log to this level any error that is fatal to the service and requires the service to be immediately shutdown in order to prevent data loss or other unrecoverable states. This should be limited to scripts and fail-fast scenarios in service startup *only* and *never* because of a user operation in an otherwise healthy servce.
@@ -160,17 +163,17 @@ glog.V(10).Info("biz")
 
 ### Sentry Logging
 Sentry monitors errors/exceptions in a real-time environment. It provides detailed information about captured errors. See [sentry](https://sentry.io/welcome/) for more details.
- 
+
 Logging can be enabled by importing the sentry-go package: "github.com/getsentry/sentry-go
 
 Following are possible ways of logging events via Sentry:
 ```go
 sentry.CaptureMessage(message) // for logging message
-sentry.CaptureEvent(event) // capture the events 
+sentry.CaptureEvent(event) // capture the events
 sentry.CaptureException(error) // capture the exception
-``` 
-Example : 
+```
+Example:
 
 ```go
 func check(err error, msg string) {
 	if err != nil && err != http.ErrServerClosed {
@@ -200,4 +203,4 @@ make code/fix
 
 ## Writing Docs
 
-Please see the [README](./docs/README.md) in `docs` directory.
+Please see the [README](docs/README.md) in the `docs` directory.
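For local verification, the CI checks above can be reproduced with the new Make target added below. This is a usage sketch under stated assumptions: `markdown-link-check` gets installed under the repository-local `bin/` directory (the `LOCAL_BIN_PATH` default), and `README.md` is just an arbitrary sample file; the `-q`/`-c` flags mirror the ones the target itself passes.

```
# lint all Markdown files using the shared ignore-pattern config
make lint/markdown-link-check

# or run the installed binary against a single file
./bin/markdown-link-check -q -c ./.markdown-link-check/markdown-link-check-config.json README.md
```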
diff --git a/Makefile b/Makefile
index 2ca860ea0..3bc0f0ea8 100644
--- a/Makefile
+++ b/Makefile
@@ -161,6 +161,22 @@ ifeq (, $(shell which ${LOCAL_BIN_PATH}/rhoasapi-installation/spectral 2> /dev/n
 	}
 endif
 
+MARKDOWN_LINK_CHECK ?= ${LOCAL_BIN_PATH}/markdown-link-check
+markdown-link-check-install:
+ifeq (, $(shell which ${NPM} 2> /dev/null))
+	@echo "npm is not available, please install it to be able to install markdown-link-check"
+	exit 1
+endif
+ifeq (, $(shell which ${LOCAL_BIN_PATH}/markdown-link-check 2> /dev/null))
+	@{ \
+	set -e ;\
+	mkdir -p ${LOCAL_BIN_PATH} ;\
+	mkdir -p ${LOCAL_BIN_PATH}/markdown-link-check-installation ;\
+	cd ${LOCAL_BIN_PATH} ;\
+	${NPM} install --prefix ${LOCAL_BIN_PATH}/markdown-link-check-installation markdown-link-check@3.10.3 ;\
+	ln -s markdown-link-check-installation/node_modules/.bin/markdown-link-check markdown-link-check ;\
+	}
+endif
 
 .PHONY: openapi/spec/validate
 openapi/spec/validate: specinstall
 
@@ -177,6 +193,7 @@ help:
 	@echo ""
 	@echo "make verify                     verify source code"
 	@echo "make lint                       lint go files and .yaml templates"
+	@echo "make lint/markdown-link-check   check for broken links in markdown files"
 	@echo "make binary                     compile binaries"
 	@echo "make install                    compile binaries and install in GOPATH bin"
 	@echo "make run                        run the application"
@@ -263,6 +280,10 @@ lint/templates: specinstall
 	$(SPECTRAL) lint templates/*.yml templates/*.yaml --ignore-unknown-format --ruleset .validate-templates.yaml
 .PHONY: lint/templates
 
+MARKDOWN_LINK_CHECK_CONFIG_DIR ?= $(PROJECT_PATH)/.markdown-link-check
+MARKDOWN_LINK_CHECK_CONFIG ?= $(MARKDOWN_LINK_CHECK_CONFIG_DIR)/markdown-link-check-config.json
+lint/markdown-link-check: markdown-link-check-install
+	find $(PROJECT_PATH) -type f -not -path '$(PROJECT_PATH)/bin*' -name '*.md' -exec $(MARKDOWN_LINK_CHECK) -q -c $(MARKDOWN_LINK_CHECK_CONFIG) {} \;
 
 # Runs linter against go files and .y(a)ml files in the templates directory
 # Requires golangci-lint to be installed @ $(go env GOPATH)/bin/golangci-lint
diff --git a/docs/adding-a-new-config-module.md b/docs/adding-a-new-config-module.md
index ad1b4c4ab..aee168423 100644
--- a/docs/adding-a-new-config-module.md
+++ b/docs/adding-a-new-config-module.md
@@ -1,22 +1,22 @@
 # Adding a new Config Module
 This document covers the steps on how to add a new config module to the service.
-1. Add an example configuration file in the /config folder. Ensure that the parameters are documented, see [provider-configuration.yaml](/config/provider-configuration.yaml) as example.
+1. Add an example configuration file in the /config folder. Ensure that the parameters are documented, see [provider-configuration.yaml](../config/provider-configuration.yaml) as an example.
 
 2. Create a new config file with a filename format of `<config_name>.go` (see [kafka.go](../internal/kafka/internal/config/kafka.go) as an example).
- > **NOTE**: If this configuration is an extension of an already existing configuration, please prefix the filename with the name of the config module its extending i.e. [kafka_supported_sizes.go](../internal/kafka/internal/config/kafka_supported_sizes.go) is an extension of the [kafka.go](../internal/kafka/internal/config/kafka.go) config.
+ > **NOTE**: If this configuration is an extension of an already existing configuration, please prefix the filename with the name of the config module it's extending, i.e. [kafka_supported_instance_types.go](../internal/kafka/internal/config/kafka_supported_instance_types.go) is an extension of the [kafka.go](../internal/kafka/internal/config/kafka.go) config.
 
 These files should be created in the following places based on usage :
 - `/pkg/...` - If it's a common configuration that can be shared across all services
 - `/internal/kafka/internal/config` - for any Kafka specific configuration
 - `/internal/connector/internal/config` - for any Connector specific configuration
 
-3. The new config file has to implement the `ConfigModule` [interface](/pkg/environments/interfaces.go). This should be added into one of the following providers file:
+3. The new config file has to implement the `ConfigModule` [interface](../pkg/environments/interfaces.go). This should be added into one of the following provider files:
    - `CoreConfigProviders()` inside the [core providers file](../pkg/providers/core.go): For any global configuration that is not specific to a particular service (i.e. kafka or connector).
    - `ConfigProviders()` inside [kafka providers](../internal/kafka/providers.go): For any kafka specific configuration.
    - `ConfigProviders()` inside [connector providers](../internal/connector/providers.go): For any connector specific configuration.
 
- > **NOTE**: If your ConfigModule also implements the ServiceValidator [interface](/pkg/environments/interfaces.go), please ensure to also specify `di.As(new(environments2.ServiceValidator))` when providing the dependency in one of the ConfigProviders listed above. Otherwise, the validation for your configuration will not be called.
+ > **NOTE**: If your ConfigModule also implements the ServiceValidator [interface](../pkg/environments/interfaces.go), please ensure to also specify `di.As(new(environments2.ServiceValidator))` when providing the dependency in one of the ConfigProviders listed above. Otherwise, the validation for your configuration will not be called.
 
-4. Create/edit tests for the configuration file if needed with a filename format of `<config_name>_test.go` in the same directory the config file was created. 
+4. Create/edit tests for the configuration file if needed with a filename format of `<config_name>_test.go` in the same directory the config file was created.
 
 5. Ensure the [service-template](../templates/service-template.yml) is updated. See this [pr](https://github.com/bf2fc6cc711aee1a0c2a/kas-fleet-manager/pull/817) as an example.
diff --git a/docs/adding-new-flags.md b/docs/adding-new-flags.md
index ade7ffc91..cc5031d77 100644
--- a/docs/adding-new-flags.md
+++ b/docs/adding-new-flags.md
@@ -48,7 +48,7 @@ func (c *Config) AddFlags(fs *pflag.FlagSet) {
 
 ### Adding a New Config File
 If the new configuration flag doesn't fit in any of the existing config file, a new one should be created.
-1. See the [Adding a New Config File](/docs/adding-a-new-config-module.md) documentation.
+1. See the [Adding a New Config File](adding-a-new-config-module.md) documentation.
 2. Define any new flags in the `AddFlags()` function.
 
 ### Verify Addition of New Flags
diff --git a/docs/architecture/data-plane-osd-cluster-dynamic-scaling.md b/docs/architecture/data-plane-osd-cluster-dynamic-scaling.md
index 8e9dbf264..3f87a4677 100644
--- a/docs/architecture/data-plane-osd-cluster-dynamic-scaling.md
+++ b/docs/architecture/data-plane-osd-cluster-dynamic-scaling.md
@@ -12,7 +12,7 @@ Data Plane OSD Cluster Dynamic Scaling functionality currently deals with:
 
 ## Autoscaling of worker nodes of an OSD cluster
 
-Autoscaling of worker nodes of an OSD cluster is done by leveraging the [Cluster Autoscaler](https://docs.openshift.com/container-platform/4.9/machine_management/applying-autoscaling.html) as described in [AP-15: Dynamic Scaling of Data Plane Clusters](https://architecture.appservices.tech/ap/15/#autoscaling-of-nodes). For Manual clusters, this has to be enabled manually. Worker node autoscaling is enabled by default for all clusters that are created dynamically by the Fleet manager
+Autoscaling of worker nodes of an OSD cluster is done by leveraging the [Cluster Autoscaler](https://docs.openshift.com/container-platform/4.9/machine_management/applying-autoscaling.html) as described in [AP-15: Dynamic Scaling of Data Plane Clusters](https://architecture.bf2.dev/ap/15/). For Manual clusters, this has to be enabled manually. Worker node autoscaling is enabled by default for all clusters that are created dynamically by the Fleet Manager.
 
 ## Prewarming of worker nodes
 
diff --git a/docs/automated-testing.md b/docs/automated-testing.md
index 9f9f8e95b..5bc80593c 100644
--- a/docs/automated-testing.md
+++ b/docs/automated-testing.md
@@ -29,7 +29,7 @@ A new `InterfaceNameMock` struct will be generated and can be used in tests.
 
 For more information on using `moq`, see:
 - [The moq repository](https://github.com/matryer/moq)
-- The IDGenerator [interface](../pkg/client/ocm/id.go) and [mock](../pkg/client/ocm/idgenerator_moq_test.go) in this repository
+- The IDGenerator [interface](../pkg/client/ocm/id.go) and [mock](../pkg/client/ocm/idgenerator_moq.go) in this repository
 
 For mocking the OCM client, [this wrapper interface](../pkg/client/ocm/client.go). If using the OCM
 SDK in any component of the system, please ensure the OCM SDK logic is placed inside this mock-able
@@ -60,7 +60,7 @@ defer teardown()
 
 See [TestKafkaPost integration test](../internal/kafka/test/integration/kafkas_test.go) as an example of this.
 
->NOTE: the `teardown` function is responsible for performing post test cleanups e.g of service accounts that are provisioned 
+>NOTE: the `teardown` function is responsible for performing post-test cleanups, e.g. of service accounts that are provisioned
 for the Fleetshard authentication or Kafka canary service account. Ensure that if the integration test of a new features provision external resources, then these are properly cleanedup.
 
 Integration tests in this service can take advantage of running in an "emulated OCM API". This
@@ -129,15 +129,15 @@ import (
 func MyTestFunction(t *testing.T) {
 	err := utils.NewPollerBuilder().
 		// test output should go to `t.Logf` instead of `fmt.Printf`
-		OutputFunction(t.Logf). 
+		OutputFunction(t.Logf).
 		// sets number of retries and interval between each retry
 		IntervalAndRetries(10 * time.Second, 10).
 		OnRetry(func(attempt int, maxAttempts int) (done bool, err error) {
 			// function executed on each retry
-			// put your logic here 
+			// put your logic here
 			return true, nil
 		}).Build().Poll()
-	if err != nil { 
+	if err != nil {
 		// ...
 	}
 	// ...
@@ -148,9 +148,9 @@ func MyTestFunction(t *testing.T) {
 ### Mock KAS Fleetshard Sync
 The mock [KAS Fleetshard Sync](../internal/kafka/test/mocks/kasfleetshardsync/kas-fleetshard-sync.go) is used to reconcile data plane and Kafka cluster status during integration tests. This also needs
 to be used even when running integration tests against a real OCM environment as the KAS Fleetshard Sync in the data plane cluster cannot
-communicate to the KAS Fleet Manager during integration test runs. 
+communicate to the KAS Fleet Manager during integration test runs.
 
-The mock KAS Fleetshard Sync needs to be started at the start of a test that requires updates to a data plane or Kafka cluster status. 
+The mock KAS Fleetshard Sync needs to be started at the beginning of a test that requires updates to a data plane or Kafka cluster status.
 
 **Example Usage:**
 ```go
@@ -233,4 +233,4 @@ Some endpoints will act differently depending on privileges of the entity callin
 
 ### Avoid setting up Kafka_SRE Identity Provider for Dataplane clusters
 
-The KafkaSRE identity provider is automatically setup for each cluster in `cluster_provisioned` state and it is reconciled every time for all the clusters in `ready` state. This step is not done if the cluster IDP has already been configured i.e the `identity_provider_id` column is set. When it is not required to set up the IDP, you just have to make sure that the dataplane cluster in under test has the `IdentityProviderID` field / `identity_provider_id` column set to a dummy value. 
+The KafkaSRE identity provider is automatically set up for each cluster in `cluster_provisioned` state and it is reconciled every time for all the clusters in `ready` state. This step is not done if the cluster IDP has already been configured, i.e. the `identity_provider_id` column is set. When it is not required to set up the IDP, you just have to make sure that the dataplane cluster under test has the `IdentityProviderID` field / `identity_provider_id` column set to a dummy value.
diff --git a/docs/best-practices/error-handling.md b/docs/best-practices/error-handling.md
index df85d091e..af50bd53b 100644
--- a/docs/best-practices/error-handling.md
+++ b/docs/best-practices/error-handling.md
@@ -54,7 +54,7 @@ err := fmt.Errorf("wrap error: %w", err)
 
 ### Capture the original error when creating a new ServiceError
 
-When a new [ServiceError](../pkg/errors/errors.go) is created and it is caused by the another error, use the `NewWithCause` function to create the new one, but retain the original error value.
+When a new [ServiceError](../../pkg/errors/errors.go) is created and it is caused by another error, use the `NewWithCause` function to create the new one, but retain the original error value.
 Make sure the message for the `Reason` field of the new `ServiceError` does not leak any internal information.
 
 #### Do
@@ -82,8 +82,8 @@ if err := somefunc(); err != nil {
 
 There is no need to log an error or forward it to Sentry at the place where it occurs. We have central places to handle errors. The main places where errors handled are:
 
-1. The [handError](../pkg/handlers/framework.go#L42) function. All errors for HTTP requested should be handled here, and it will log the error to logs and forward the error to Sentry.
-2. The [runReconcile](../pkg/workers/reconciler.go#L87) function. All errors occur in the background workers should be handled here.
+1. The [errorHandler](../../pkg/handlers/framework.go) function. All errors for HTTP requests should be handled here, and it will log the error to logs and forward the error to Sentry.
+2. The [runReconcile](../../pkg/workers/reconciler.go) function. All errors that occur in the background workers should be handled here.
 3. If the error is not returned to the caller, we should use an instance of the `UHCLogger` to log the error which will make sure it is captured by Sentry as well.
 
 #### Do
diff --git a/docs/feature-flags.md b/docs/feature-flags.md
index c7cc6e17e..7632f8f73 100644
--- a/docs/feature-flags.md
+++ b/docs/feature-flags.md
@@ -44,7 +44,7 @@ This lists the feature flags and their sub-configurations to enable/disable and
   - `kafka-tls-key-file` [Required]: The path to the file containing the Kafka TLS private key (default: `'secrets/kafka-tls.key'`).
 - **enable-developer-instance**: Enable the creation of one kafka developer instances per user
 - **quota-type**: Sets the quota service to be used for access control when requesting Kafka instances (options: `ams` or `quota-management-list`, default: `quota-management-list`).
-  > For more information on the quota service implementation, see the [quota service architecture](./architecture/quota-service-implementation) architecture documentation.
+  > For more information on the quota service implementation, see the [quota service architecture](./architecture/quota-service-implementation.md) architecture documentation.
   - If this is set to `quota-management-list`, quotas will be managed via the quota management list configuration.
     > See [quota control](./quota-management-list-configuration.md) documentation for more information about the quota management list.
   - `enable-instance-limit-control` [Required]: Enables enforcement of limits on how much Kafka instances a user can create (default: `false`).
diff --git a/docs/utilities/arrays.md b/docs/utilities/arrays.md
index 417a0e1c6..4bf0e3d3c 100644
--- a/docs/utilities/arrays.md
+++ b/docs/utilities/arrays.md
@@ -13,7 +13,7 @@ Returned values are:
 * The index of the value into the list of passed-in values (-1 if not found)
 * The found value or the _zero value_ for the `T` type
 
-Some example usage can be found [here](../pkg/shared/utils/arrays/generic_array_utils_test.go)
+Some example usage can be found [here](../../pkg/shared/utils/arrays/generic_array_utils_test.go)
 
 ### Filter
 
diff --git a/pkg/db/README.md b/pkg/db/README.md
index 3ecf024e4..ccb1c6abf 100644
--- a/pkg/db/README.md
+++ b/pkg/db/README.md
@@ -2,7 +2,7 @@
 
 Database migrations are handled by this package. All migrations should be created in separate files, following a starndard naming convetion
 
-The `migrations.go` file defines an array of migrate functions that are called by the [gormigrate](https://github.com/go-gormigrate/gormigrate/v2) helper. Each migration function should perform a specific migration.
+The `migrations.go` file defines an array of migrate functions that are called by the [Gormigrate](https://github.com/go-gormigrate/gormigrate) helper. Each migration function should perform a specific migration.
 
 ## Creating a new migration
 
@@ -10,7 +10,7 @@ Create a migration ID based on the time using the YYYYMMDDHHMM format. Example:
 
 Your migration's name should be used in the file name and in the function name and should adequately represent the actions your migration is taking. If your migration is doing too much to fit in a name, you should consider creating multiple migrations.
 
-Create a separate file in `pkg/db/` following the naming schema in place: `<migration_id>_<migration_name>.go`. In the file, you'll create a function that returns a [gormmigrate.Migration](https://github.com/go-gormigrate/gormigrate/v2/blob/master/gormigrate.go#L37) object with `gormigrate.Migrate` and `gormigrate.Rollback` functions defined.
+Create a separate file in `pkg/db/` following the naming schema in place: `<migration_id>_<migration_name>.go`. In the file, you'll create a function that returns a [gormigrate.Migration](https://github.com/go-gormigrate/gormigrate/blob/master/gormigrate.go) object with `gormigrate.Migrate` and `gormigrate.Rollback` functions defined.
 
 Add the function you created in the separate file to the `migrations` list in `pkg/db/migrations.go`.
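To illustrate the migration layout described in the README above, a minimal sketch of a new migration file follows. It is hedged: `addFooColumn`, the local `KafkaRequest` model with its `Foo` field, and the ID value are hypothetical placeholders; only the `gormigrate.Migration` shape (`ID`, `Migrate`, `Rollback`) comes from the Gormigrate v2 API.

```go
package db

import (
	gormigrate "github.com/go-gormigrate/gormigrate/v2"
	"gorm.io/gorm"
)

// addFooColumn is a hypothetical migration; its ID is the YYYYMMDDHHMM
// timestamp of when the migration was created.
func addFooColumn() *gormigrate.Migration {
	// declare the model locally so the migration stays stable even if the
	// live model changes later
	type KafkaRequest struct {
		Foo string
	}
	return &gormigrate.Migration{
		ID: "202301010000",
		Migrate: func(tx *gorm.DB) error {
			// apply the schema change: add the new column
			return tx.AutoMigrate(&KafkaRequest{})
		},
		Rollback: func(tx *gorm.DB) error {
			// undo the schema change so the migration can be reverted
			return tx.Migrator().DropColumn(&KafkaRequest{}, "foo")
		},
	}
}
```

The returned value would then be appended to the `migrations` list in `pkg/db/migrations.go`, keeping `Migrate` and `Rollback` symmetric so the migration can be rolled back cleanly.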