feat: add markdown link checking #1713

Open · wants to merge 3 commits into base: main
38 changes: 38 additions & 0 deletions .github/workflows/markdown-link-check-pr.yaml
@@ -0,0 +1,38 @@
name: PR check Markdown links
Comment (Contributor Author):

Separating this into its own workflow is intentional: we want to take advantage of parallelism and restrict the set of events that are considered. The same applies to the workflow for the periodic execution.


# Only run this workflow on PRs
on:
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]

# Disable permissions for all available scopes for the GITHUB_TOKEN
# except for the metadata scope.
permissions: {}

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  pr-markdown-link-check:
    # Only run the job on non-draft PRs
    if: "github.event.pull_request.draft == false"
    name: Check PR Markdown links
    runs-on: ubuntu-latest
    steps:
      - name: Check out code under the $GITHUB_WORKSPACE directory
        uses: actions/checkout@v3
      - name: Check markdown links
        uses: gaurav-nelson/[email protected]
        with:
          use-quiet-mode: 'no'
          use-verbose-mode: 'yes'
          config-file: './.markdown-link-check/markdown-link-check-config.json'
          check-modified-files-only: 'no'
Comment (Contributor Author):

This is intentional: all of the specified .md files and directories in the workflows are checked regardless of whether they have been modified. Even if a given .md file has not been modified, its links can become broken. For example, if a PR moves .md files between directories and other .md files contain relative links referencing the old paths of the moved files, those links become broken even though the files containing the references were not modified.
          file-extension: '.md'
          # Unfortunately there's currently not a way to ignore specific
          # directories/paths in the GitHub action so we need to specify
          # the desired set of directories/paths that we want to check
          # in order to avoid checking undesired directories/paths
          folder-path: './docs, ./pkg, ./templates, ./.github, ./cmd, ./internal, ./openapi, ./docker, ./scripts'
          file-path: './README.md, ./CONTRIBUTING.md'
32 changes: 32 additions & 0 deletions .github/workflows/markdown-link-check-weekly.yaml
@@ -0,0 +1,32 @@
name: Weekly check Markdown links

# Run this workflow every Monday at 07:00 UTC
on:
  schedule:
    - cron: "0 7 * * 1"

# Disable permissions for all available scopes for the GITHUB_TOKEN
# except for the metadata scope.
permissions: {}

jobs:
  weekly-markdown-link-check:
    name: Weekly Check Markdown links
    runs-on: ubuntu-latest
    steps:
      - name: Check out code under the $GITHUB_WORKSPACE directory
        uses: actions/checkout@v3
      - name: Check markdown links
        uses: gaurav-nelson/[email protected]
        with:
          use-quiet-mode: 'no'
          use-verbose-mode: 'yes'
          config-file: './.markdown-link-check/markdown-link-check-config.json'
          check-modified-files-only: 'no'
          file-extension: '.md'
          # Unfortunately there's currently not a way to ignore specific
          # directories/paths in the GitHub action so we need to specify
          # the desired set of directories/paths that we want to check
          # in order to avoid checking undesired directories/paths
          folder-path: './docs, ./pkg, ./templates, ./.github, ./cmd, ./internal, ./openapi, ./docker, ./scripts'
          file-path: './README.md, ./CONTRIBUTING.md'
10 changes: 10 additions & 0 deletions .markdown-link-check/markdown-link-check-config.json
@@ -0,0 +1,10 @@
{
  "ignorePatterns": [
    {
      "pattern": "^https://gitlab.cee.redhat.com"
    },
    {
      "pattern": "^https://github.com/bf2fc6cc711aee1a0c2a/observability-resources-mk"
    }
  ]
}
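For local troubleshooting, the same ignore-pattern configuration can be exercised directly with the markdown-link-check CLI against a single file. The snippet below is only a sketch: it assumes the tool was installed via the `markdown-link-check-install` Makefile target added in this PR and that `LOCAL_BIN_PATH` resolves to `./bin`; verify the flag names against the installed version.

```sh
# Sketch only: the binary path is an assumption, not part of this PR.
# Check a single markdown file against the shared config; links matching
# the ignorePatterns above are skipped instead of being reported as dead.
./bin/markdown-link-check \
  -c .markdown-link-check/markdown-link-check-config.json \
  -v \
  docs/automated-testing.md
```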
29 changes: 16 additions & 13 deletions CONTRIBUTING.md
@@ -26,7 +26,7 @@ file in the root directory of the repository.
* Changes have been verified by one additional reviewer against:
* each required environment
* each supported upgrade path
* If the changes could have an impact on the clients (either UI or CLI), a JIRA should be created for making the required changes on the client side and acknowledged by one of the client side team members.
* If the changes could have an impact on the clients (either UI or CLI), a JIRA should be created for making the required changes on the client side and acknowledged by one of the client side team members.
* PR has been merged


@@ -55,7 +55,7 @@ $GOPATH
```

## Set up Git Hooks
Run the following command to set up git hooks for the project.
Run the following command to set up git hooks for the project.

```
make setup/git/hooks
@@ -87,7 +87,10 @@ Set the following configuration in your **Launch.json** file.
}
```
## Modifying the API definition
The services' OpenAPI specification is located in `openapi/kas-fleet-manager.yaml`. It can be modified using Apicurio Studio, Swagger or manually. The OpenAPI files follows the [RHOAS API standards](https://api.appservices.tech/docs/api-standards.html), each modification has to adhere to it.
The services' OpenAPI specification is located in `openapi/kas-fleet-manager.yaml`.
It can be modified using Apicurio Studio, Swagger or manually. The OpenAPI
files follow the [RHOAS API standards](https://github.com/redhat-developer/app-services-api-guidelines/blob/main/docs/api-standards.md),
each modification has to adhere to it.

Once you've made your changes, the second step is to validate it:

@@ -101,16 +104,16 @@ Once the schema is valid, the remaining step is to generate the openapi modules
make openapi/generate
```
## Adding a new ConfigModule
See the [Adding a new Config Module](./docs/adding-a-new-ConfigModule.md) documentation for more information.
See the [Adding a new Config Module](docs/adding-a-new-config-module.md) documentation for more information.

## Adding a new endpoint
See the [adding-a-new-endpoint](./docs/adding-a-new-endpoint.md) documentation.
See the [adding-a-new-endpoint](docs/adding-a-new-endpoint.md) documentation.

## Adding New Serve Command Flags
See the [Adding Flags to KAS Fleet Manager](./docs/adding-new-flags.md) documentation for more information.
See the [Adding Flags to KAS Fleet Manager](docs/adding-new-flags.md) documentation for more information.

## Testing
See the [automated testing](./docs/automated-testing.md) documentation.
See the [automated testing](docs/automated-testing.md) documentation.

## Logging Standards & Best Practices
* Log only actionable information, which will be read by a human or a machine for auditing or debugging purposes
@@ -131,7 +134,7 @@ Log to this level any error based information that might want to be brought to s

#### Error
Log to this level any error that is fatal to the given transaction and affects expected user operation. This may or may not include failed connections, missing expected data, or other unrecoverable outcomes.
Error handling should be implemented by following these best practices as laid out in [this guide](docs/error-handing.md).
Error handling should be implemented by following these best practices as laid out in [this guide](docs/best-practices/error-handling.md).

#### Fatal
Log to this level any error that is fatal to the service and requires the service to be immediately shutdown in order to prevent data loss or other unrecoverable states. This should be limited to scripts and fail-fast scenarios in service startup *only* and *never* because of a user operation in an otherwise healthy servce.
@@ -160,17 +163,17 @@ glog.V(10).Info("biz")

### Sentry Logging
Sentry monitors errors/exceptions in a real-time environment. It provides detailed information about captured errors. See [sentry](https://sentry.io/welcome/) for more details.

Logging can be enabled by importing the sentry-go package: "github.com/getsentry/sentry-go

Following are possible ways of logging events via Sentry:

```go
sentry.CaptureMessage(message) // for logging message
sentry.CaptureEvent(event) // capture the events
sentry.CaptureEvent(event) // capture the events
sentry.CaptureException(error) // capture the exception
```
Example :
```
Example :
```go
func check(err error, msg string) {
if err != nil && err != http.ErrServerClosed {
@@ -200,4 +203,4 @@ make code/fix

## Writing Docs

Please see the [README](./docs/README.md) in `docs` directory.
Please see the [README](docs/README.md) in `docs` directory.
21 changes: 21 additions & 0 deletions Makefile
@@ -161,6 +161,22 @@ ifeq (, $(shell which ${LOCAL_BIN_PATH}/rhoasapi-installation/spectral 2> /dev/n
}
endif

MARKDOWN_LINK_CHECK ?= ${LOCAL_BIN_PATH}/markdown-link-check
markdown-link-check-install:
ifeq (, $(shell which ${NPM} 2> /dev/null))
	@echo "npm is not available please install it to be able to install markdown-link-check"
	exit 1
endif
ifeq (, $(shell which ${LOCAL_BIN_PATH}/markdown-link-check 2> /dev/null))
	@{ \
	set -e ;\
	mkdir -p ${LOCAL_BIN_PATH} ;\
	mkdir -p ${LOCAL_BIN_PATH}/markdown-link-check-installation ;\
	cd ${LOCAL_BIN_PATH} ;\
	${NPM} install --prefix ${LOCAL_BIN_PATH}/markdown-link-check-installation [email protected] ;\
	ln -s markdown-link-check-installation/node_modules/.bin/markdown-link-check markdown-link-check ;\
	}
endif

.PHONY: openapi/spec/validate
openapi/spec/validate: specinstall
@@ -177,6 +193,7 @@ help:
@echo ""
@echo "make verify verify source code"
@echo "make lint lint go files and .yaml templates"
@echo "make lint/markdown-link-check check for broken links in markdown files"
@echo "make binary compile binaries"
@echo "make install compile binaries and install in GOPATH bin"
@echo "make run run the application"
@@ -263,6 +280,10 @@ lint/templates: specinstall
$(SPECTRAL) lint templates/*.yml templates/*.yaml --ignore-unknown-format --ruleset .validate-templates.yaml
.PHONY: lint/templates

MARKDOWN_LINK_CHECK_CONFIG_DIR ?= $(PROJECT_PATH)/.markdown-link-check
MARKDOWN_LINK_CHECK_CONFIG ?= $(MARKDOWN_LINK_CHECK_CONFIG_DIR)/markdown-link-check-config.json
lint/markdown-link-check: markdown-link-check-install
Comment (Contributor Author, @miguelsorianod, Apr 17, 2023):

This hasn't been added to the `lint` task intentionally: as it might be time-consuming, I kept it separate.

	find $(PROJECT_PATH) -type f -not -path '$(PROJECT_PATH)/bin*' -name '*.md' -exec $(MARKDOWN_LINK_CHECK) -q -c $(MARKDOWN_LINK_CHECK_CONFIG) {} \;

# Runs linter against go files and .y(a)ml files in the templates directory
# Requires golangci-lint to be installed @ $(go env GOPATH)/bin/golangci-lint
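As a rough usage sketch of the Makefile additions above (the target names are taken from this diff; behaviour beyond that is assumed):

```sh
# Install a local markdown-link-check under LOCAL_BIN_PATH (requires npm on the PATH).
make markdown-link-check-install

# Check every *.md file outside bin/ against the shared config.
# Intentionally kept out of `make lint` because it can be slow.
make lint/markdown-link-check
```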
10 changes: 5 additions & 5 deletions docs/adding-a-new-config-module.md
@@ -1,22 +1,22 @@
# Adding a new Config Module
This document covers the steps on how to add a new config module to the service.

1. Add an example configuration file in the /config folder. Ensure that the parameters are documented, see [provider-configuration.yaml](/config/provider-configuration.yaml) as example.
1. Add an example configuration file in the /config folder. Ensure that the parameters are documented, see [provider-configuration.yaml](../config/provider-configuration.yaml) as example.

2. Create a new config file with a filename format of `<config>.go` (see [kafka.go](../internal/kafka/internal/config/kafka.go) as an example).
> **NOTE**: If this configuration is an extension of an already existing configuration, please prefix the filename with the name of the config module its extending i.e. [kafka_supported_sizes.go](../internal/kafka/internal/config/kafka_supported_sizes.go) is an extension of the [kafka.go](../internal/kafka/internal/config/kafka.go) config.
> **NOTE**: If this configuration is an extension of an already existing configuration, please prefix the filename with the name of the config module its extending i.e. [kafka_supported_instance_types.go](../internal/kafka/internal/config/kafka_supported_instance_types.go) is an extension of the [kafka.go](../internal/kafka/internal/config/kafka.go) config.

These files should be created in the following places based on usage :
- `/pkg/...` - If it's a common configuration that can be shared across all services
- `/internal/kafka/internal/config` - for any Kafka specific configuration
- `/internal/connector/internal/config` - for any Connector specific configuration

3. The new config file has to implement the `ConfigModule` [interface](/pkg/environments/interfaces.go). This should be added into one of the following providers file:
3. The new config file has to implement the `ConfigModule` [interface](../pkg/environments/interfaces.go). This should be added into one of the following providers file:
- `CoreConfigProviders()` inside the [core providers file](../pkg/providers/core.go): For any global configuration that is not specific to a particular service (i.e. kafka or connector).
- `ConfigProviders()` inside [kafka providers](../internal/kafka/providers.go): For any kafka specific configuration.
- `ConfigProviders()` inside [connector providers](../internal/connector/providers.go): For any connector specific configuration.
> **NOTE**: If your ConfigModule also implements the ServiceValidator [interface](/pkg/environments/interfaces.go), please ensure to also specify `di.As(new(environments2.ServiceValidator))` when providing the dependency in one of the ConfigProviders listed above. Otherwise, the validation for your configuration will not be called.
> **NOTE**: If your ConfigModule also implements the ServiceValidator [interface](../pkg/environments/interfaces.go), please ensure to also specify `di.As(new(environments2.ServiceValidator))` when providing the dependency in one of the ConfigProviders listed above. Otherwise, the validation for your configuration will not be called.

4. Create/edit tests for the configuration file if needed with a filename format of `<config_test>.go` in the same directory the config file was created.
4. Create/edit tests for the configuration file if needed with a filename format of `<config_test>.go` in the same directory the config file was created.

5. Ensure the [service-template](../templates/service-template.yml) is updated. See this [pr](https://github.com/bf2fc6cc711aee1a0c2a/kas-fleet-manager/pull/817) as an example.
2 changes: 1 addition & 1 deletion docs/adding-new-flags.md
@@ -48,7 +48,7 @@ func (c *Config) AddFlags(fs *pflag.FlagSet) {
### Adding a New Config File
If the new configuration flag doesn't fit in any of the existing config file, a new one should be created.

1. See the [Adding a New Config File](/docs/adding-a-new-config-module.md) documentation.
1. See the [Adding a New Config File](adding-a-new-config-module.md) documentation.
2. Define any new flags in the `AddFlags()` function.

### Verify Addition of New Flags
@@ -12,7 +12,7 @@ Data Plane OSD Cluster Dynamic Scaling functionality currently deals with:

## Autoscaling of worker nodes of an OSD cluster

Autoscaling of worker nodes of an OSD cluster is done by leveraging the [Cluster Autoscaler](https://docs.openshift.com/container-platform/4.9/machine_management/applying-autoscaling.html) as described in [AP-15: Dynamic Scaling of Data Plane Clusters](https://architecture.appservices.tech/ap/15/#autoscaling-of-nodes). For Manual clusters, this has to be enabled manually. Worker node autoscaling is enabled by default for all clusters that are created dynamically by the Fleet manager
Autoscaling of worker nodes of an OSD cluster is done by leveraging the [Cluster Autoscaler](https://docs.openshift.com/container-platform/4.9/machine_management/applying-autoscaling.html) as described in [AP-15: Dynamic Scaling of Data Plane Clusters](https://architecture.bf2.dev/ap/15/). For Manual clusters, this has to be enabled manually. Worker node autoscaling is enabled by default for all clusters that are created dynamically by the Fleet manager

## Prewarming of worker nodes

16 changes: 8 additions & 8 deletions docs/automated-testing.md
@@ -29,7 +29,7 @@ A new `InterfaceNameMock` struct will be generated and can be used in tests.
For more information on using `moq`, see:

- [The moq repository](https://github.com/matryer/moq)
- The IDGenerator [interface](../pkg/client/ocm/id.go) and [mock](../pkg/client/ocm/idgenerator_moq_test.go) in this repository
- The IDGenerator [interface](../pkg/client/ocm/id.go) and [mock](../pkg/client/ocm/idgenerator_moq.go) in this repository

For mocking the OCM client, [this wrapper interface](../pkg/client/ocm/client.go). If using the OCM SDK in
any component of the system, please ensure the OCM SDK logic is placed inside this mock-able
@@ -60,7 +60,7 @@ defer teardown()

See [TestKafkaPost integration test](../internal/kafka/test/integration/kafkas_test.go) as an example of this.

>NOTE: the `teardown` function is responsible for performing post test cleanups e.g of service accounts that are provisioned
>NOTE: the `teardown` function is responsible for performing post test cleanups e.g of service accounts that are provisioned
for the Fleetshard authentication or Kafka canary service account. Ensure that if the integration test of a new features provision external resources, then these are properly cleanedup.

Integration tests in this service can take advantage of running in an "emulated OCM API". This
Expand Down Expand Up @@ -129,15 +129,15 @@ import (
func MyTestFunction(t *testing.T) {
err := utils.NewPollerBuilder().
// test output should go to `t.Logf` instead of `fmt.Printf`
OutputFunction(t.Logf).
OutputFunction(t.Logf).
// sets number of retries and interval between each retry
IntervalAndRetries(10 * time.Second, 10).
OnRetry(func(attempt int, maxAttempts int) (done bool, err error) { // function executed on each retry
// put your logic here
// put your logic here
return true, nil
}).
Build().Poll()
if err != nil {
if err != nil {
// ...
}
// ...
@@ -148,9 +148,9 @@ func MyTestFunction(t *testing.T) {

### Mock KAS Fleetshard Sync
The mock [KAS Fleetshard Sync](../internal/kafka/test/mocks/kasfleetshardsync/kas-fleetshard-sync.go) is used to reconcile data plane and Kafka cluster status during integration tests. This also needs to be used even when running integration tests against a real OCM environment as the KAS Fleetshard Sync in the data plane cluster cannot
communicate to the KAS Fleet Manager during integration test runs.
communicate to the KAS Fleet Manager during integration test runs.

The mock KAS Fleetshard Sync needs to be started at the start of a test that requires updates to a data plane or Kafka cluster status.
The mock KAS Fleetshard Sync needs to be started at the start of a test that requires updates to a data plane or Kafka cluster status.

**Example Usage:**
```go
@@ -233,4 +233,4 @@ Some endpoints will act differently depending on privileges of the entity callin

### Avoid setting up Kafka_SRE Identity Provider for Dataplane clusters

The KafkaSRE identity provider is automatically setup for each cluster in `cluster_provisioned` state and it is reconciled every time for all the clusters in `ready` state. This step is not done if the cluster IDP has already been configured i.e the `identity_provider_id` column is set. When it is not required to set up the IDP, you just have to make sure that the dataplane cluster in under test has the `IdentityProviderID` field / `identity_provider_id` column set to a dummy value.
The KafkaSRE identity provider is automatically setup for each cluster in `cluster_provisioned` state and it is reconciled every time for all the clusters in `ready` state. This step is not done if the cluster IDP has already been configured i.e the `identity_provider_id` column is set. When it is not required to set up the IDP, you just have to make sure that the dataplane cluster in under test has the `IdentityProviderID` field / `identity_provider_id` column set to a dummy value.