
Best practices for testing #997

Merged: 12 commits, Nov 15, 2023
1 change: 0 additions & 1 deletion .cspell.json
@@ -317,7 +317,6 @@
"profefe",
"promtail",
"Promtail",
"proselint",
"Pryce's",
"pscore",
"pseudocode",
18 changes: 18 additions & 0 deletions docs/automated-testing/README.md
@@ -59,3 +59,21 @@ The table below maps outcomes -- the results that you may want to achieve in your

- [Using DevTest Pattern for building containers with AzDO](tech-specific-samples/azdo-container-dev-test-release)
- [Using Azurite to run blob storage tests in pipeline](tech-specific-samples/blobstorage-unit-tests/README.md)

## Build for Testing

Testing is a critical part of the development process, so it pays to build your application with testing in mind. Here are some tips to help you build for testing:

- **Parameterize everything.** Rather than hard-coding variables, make everything a configurable parameter with a reasonable default. This allows you to easily change the behavior of your application during testing. Particularly during performance testing, it is common to try different values to see what impact they have on performance. If a group of defaults needs to change together, consider one or more parameters that set "modes", changing the defaults of the whole group at once. The first sketch after this list illustrates this together with startup logging.

- **Document at startup.** When your application starts up, it should log all of its parameters. This ensures that anyone reviewing the logs and application behavior knows exactly how the application is configured.

- **Log to console.** Logging to an external system like Azure Monitor is desirable for traceability across services, but it requires logs to be dispatched from the local system to the external system, and that dispatch is a dependency that can fail. It is important that someone be able to read console logs directly on the local system; the logging sketch after this list shows both targets in use.

- **Log to an external system.** In addition to console logs, logging to an external system like Azure Monitor is desirable for traceability across services and for durability of logs.

- **Log all activity.** If the system is performing some activity (reading data from a database, calling an external service, etc.), it should log that activity. Ideally, one log message says the activity is starting and another says it is complete, so someone reviewing the logs can understand what the application is doing and how long each step takes. Depending on how noisy this is, different messages can be assigned different log levels, but it is important to have the information available when it comes time to debug a deployed system. The activity-logging sketch after this list shows one way to do this.

- **Correlate distributed activities.** If the system is performing some activity that is distributed across multiple systems, it is important to correlate the activity across those systems. This can be done using a Correlation ID that is passed from system to system, allowing someone reviewing the logs to understand the entire flow of activity; see the correlation sketch after this list.

- **Log metadata.** When logging, it is important to include metadata that is relevant to the activity. For example, a Tenant ID, Customer ID, or Order ID. This allows someone reviewing the logs to understand the context of the activity and filter to a manageable set of logs.
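
A minimal Python sketch of parameterizing everything and documenting at startup; the `MODE` and `BATCH_SIZE` parameters are hypothetical examples, not a prescribed configuration scheme:

```python
import logging
import os

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Defaults grouped into "modes"; a single MODE parameter changes a
# whole group of defaults together.
MODE_DEFAULTS = {
    "light": {"batch_size": 10, "worker_count": 1},
    "heavy": {"batch_size": 500, "worker_count": 8},
}

def load_config() -> dict:
    mode = os.environ.get("MODE", "light")
    config = dict(MODE_DEFAULTS[mode])
    config["mode"] = mode
    # Any individual parameter can still be overridden explicitly.
    if "BATCH_SIZE" in os.environ:
        config["batch_size"] = int(os.environ["BATCH_SIZE"])
    return config

if __name__ == "__main__":
    # Document at startup: log every parameter so anyone reviewing the
    # logs knows exactly how the application is configured.
    for key, value in load_config().items():
        logger.info("config %s=%s", key, value)
```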
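
A sketch of logging to console and to an external system at once, using the standard library's generic HTTP handler as a stand-in for a real exporter such as the Azure Monitor SDK; the endpoint is made up:

```python
import logging
import logging.handlers

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Console first: these logs stay readable on the local system even if
# dispatch to the external system fails.
logger.addHandler(logging.StreamHandler())

# External system second: a generic HTTP handler stands in here for a
# real exporter (the host and path are hypothetical).
logger.addHandler(logging.handlers.HTTPHandler(
    "logs.example.com", "/ingest", method="POST"))

logger.info("application started")
```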
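
The activity-logging and metadata tips can be combined in a small helper. This is one possible shape, and the `tenant_id`/`order_id` fields are hypothetical examples of metadata:

```python
import logging
import time
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@contextmanager
def log_activity(name, **metadata):
    # One message when the activity starts and another when it completes,
    # including duration and any metadata (Tenant ID, Order ID, etc.)
    # that helps filter the logs to a manageable set.
    context = " ".join(f"{key}={value}" for key, value in metadata.items())
    logger.info("starting %s %s", name, context)
    start = time.monotonic()
    try:
        yield
    except Exception:
        logger.exception("failed %s after %.3fs %s",
                         name, time.monotonic() - start, context)
        raise
    logger.info("completed %s in %.3fs %s",
                name, time.monotonic() - start, context)

with log_activity("read-orders", tenant_id="t-42", order_id="o-1001"):
    pass  # read from the database here
```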
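
Correlating distributed activities might look like the following; the `x-correlation-id` header name is a common convention rather than a requirement, and the downstream URL is hypothetical:

```python
import logging
import urllib.request
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def handle_request(incoming_headers):
    # Reuse the caller's Correlation ID if one was provided; otherwise
    # mint a new one at the edge of the system.
    correlation_id = incoming_headers.get("x-correlation-id") or str(uuid.uuid4())
    logger.info("processing request correlation_id=%s", correlation_id)

    # Pass the same ID on every downstream call so the entire flow can
    # be reconstructed across systems.
    downstream = urllib.request.Request(
        "https://downstream.example.com/api/orders",  # hypothetical service
        headers={"x-correlation-id": correlation_id},
    )
    return downstream  # urllib.request.urlopen(downstream) would send it

handle_request({})
```
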
1 change: 0 additions & 1 deletion docs/automated-testing/fault-injection-testing/README.md
@@ -29,7 +29,6 @@ Fault injection methods are a way to increase coverage and validate software robustness
* **Error** - That part of the system state that may cause a subsequent failure.
* **Failure** - An event that occurs when the delivered service deviates from correct state.
* **Fault-Error-Failure cycle** - A key mechanism in [dependability](https://en.wikipedia.org/wiki/Dependability): A fault may cause an error. An error may cause further errors within the system boundary; therefore each new error acts as a fault. When error states are observed at the system boundary, they are termed failures.
(Modeled by [Laprie/Avizienis](https://www.nasa.gov/pdf/636745main_day_3-algirdas_avizienis.pdf))

#### Fault Injection Testing Basics

16 changes: 16 additions & 0 deletions docs/automated-testing/performance-testing/README.md
@@ -93,6 +93,22 @@ Developers often implement fallback procedures for service failure. Chaos
testing arbitrarily shuts down different parts of the system to validate that
fallback procedures function correctly.

## Best practices

Consider the following best practices for performance testing:

- **Make one change at a time.** Don't make multiple changes to the system
between tests. If you do, you won't know which change caused the performance
to improve or degrade.

- **Automate testing.** Strive to automate the setup and teardown of resources
for a performance run as much as possible. Manual execution can lead to
misconfigurations; the first sketch after this list shows one approach.

- **Use different IP addresses.** Some systems will throttle requests from a
single IP address. If you are testing a system that has this type of
restriction, you can use different IP addresses to simulate multiple users;
see the second sketch after this list.
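
One possible shape for automated setup and teardown, assuming an Azure
environment with the `az` CLI on the path; the resource group name and region
are hypothetical:

```python
import subprocess

RESOURCE_GROUP = "perf-test-rg"  # hypothetical resource group name

def run(command):
    # Echo each command so the run is reproducible from its logs.
    print("+", " ".join(command))
    subprocess.run(command, check=True)

def main():
    # Set up: create an isolated resource group for this run.
    run(["az", "group", "create",
         "--name", RESOURCE_GROUP, "--location", "eastus"])
    try:
        pass  # drive the load test here
    finally:
        # Tear down even if the test fails, so the next run starts clean.
        run(["az", "group", "delete",
             "--name", RESOURCE_GROUP, "--yes", "--no-wait"])

if __name__ == "__main__":
    main()
```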
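
If the load-generating machine has several IP addresses assigned to it, one
way to spread traffic across them is to pin the source address of each
connection; the addresses below are hypothetical:

```python
import socket
from itertools import cycle

# Source addresses assumed to be assigned to this machine (hypothetical).
SOURCE_IPS = cycle(["10.0.0.5", "10.0.0.6", "10.0.0.7"])

def open_connection(host, port):
    # Bind each outbound connection to the next source IP so the target
    # sees traffic from several addresses rather than a single one.
    return socket.create_connection((host, port),
                                    source_address=(next(SOURCE_IPS), 0))
```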

## Performance monitor metrics

When executing the various types of testing approaches, whether it is stress,
17 changes: 0 additions & 17 deletions docs/code-reviews/recipes/markdown.md
@@ -46,23 +46,6 @@ markdownlint **/*.md --ignore node_modules --fix

A comprehensive list of markdownlint rules is available [here](https://github.com/DavidAnson/markdownlint/blob/main/doc/Rules.md).

### proselint

[`proselint`](http://proselint.com/) is a command line utility that lints the text contents of the document. It checks for jargon, spelling errors, redundancy, corporate speak and other language related issues.

It's available both as a [python package](https://github.com/amperser/proselint/#checks) and a [node package](https://www.npmjs.com/package/proselint).

```bash
pip install proselint
npm install -g proselint
```

Run proselint

```bash
proselint document.md
```

### write-good

[`write-good`](https://github.com/btford/write-good) is a linter for English text that helps you write better documentation.
1 change: 0 additions & 1 deletion docs/documentation/tools/automation.md
@@ -5,7 +5,6 @@ If you want to automate some checks on your Markdown documents, there are several
- Code Analysis / Linting
- [markdownlint](../../code-reviews/recipes/markdown.md#markdownlint) to verify Markdown syntax and enforce rules that make the text more readable.
- [markdown-link-check](https://github.com/tcort/markdown-link-check) to extract links from markdown texts and check whether each link is alive (200 OK) or dead.
- [proselint](../../code-reviews/recipes/markdown.md#proselint) to check for jargon, spelling errors, redundancy, corporate speak and other language related issues.
- [write-good](../../code-reviews/recipes/markdown.md#write-good) to check English prose.
- [Docker image for node-markdown-spellcheck](https://github.com/tmaier/docker-markdown-spellcheck), a lightweight docker image to spellcheck markdown files.
- [static code analysis](../../continuous-integration/dev-sec-ops/static-code-analysis/static_code_analysis.md)
2 changes: 1 addition & 1 deletion docs/observability/profiling.md
@@ -20,4 +20,4 @@ Unfortunately, each profiler tool typically uses its own format for storing profiles
- (Java and Go) [Flame](https://github.com/VerizonMedia/kubectl-flame) - profiling containers in Kubernetes
- (Java, Python, Go) [Datadog Continuous profiler](https://www.datadoghq.com/product/code-profiling/)
- (Go) [profefe](https://github.com/profefe/profefe), which builds `pprof` to provide continuous profiling
- - (Java) [Eclipse Memory Analyzer](https://www.eclipse.org/mat/)
+ - (Java) [Eclipse Memory Analyzer](https://eclipse.dev/mat/)
4 changes: 3 additions & 1 deletion linkcheck.json
@@ -51,7 +51,9 @@
"https://blog.insightdatascience.com",
"https://www.w3.org/",
"https://mtirion.medium.com/",
"https://chrieke.medium.com/"
"https://chrieke.medium.com/",
"https://eclipse.dev/mat/",
"https://cloud.google.com/blog/products/gcp/cre-life-lessons-what-is-a-dark-launch-and-what-does-it-do-for-me"
],
"only_errors": true
}