diff --git a/docs/.pages b/docs/.pages index 4c2ec50e13..7eaae9cf50 100644 --- a/docs/.pages +++ b/docs/.pages @@ -1,4 +1,5 @@ nav: - - ... + - Agile Development: agile-development + - Automated Testing: automated-testing - CI/CD: ci-cd - ... diff --git a/docs/automated-testing/README.md b/docs/automated-testing/README.md index 5dac2e1dae..e672613f3b 100644 --- a/docs/automated-testing/README.md +++ b/docs/automated-testing/README.md @@ -1,20 +1,60 @@ # Testing -## Map of Outcomes to Testing Techniques +## Why testing -The table below maps outcomes -- the results that you may want to achieve in your validation efforts -- to one or more techniques that can be used to accomplish that outcome. +- Tests allow us to find flaws in our software +- Good tests document the code by describing the intent +- Automated tests save time compared to manual tests +- Automated tests allow us to safely change and refactor our code without introducing regressions + +## The fundamentals + +- We consider code to be incomplete if it is not accompanied by tests +- We write unit tests (tests without external dependencies) that can run before every PR merge to validate that we don’t have regressions +- We write Integration tests/E2E tests that test the whole system end to end, and run them regularly +- We write our tests early and block any further code merging if tests fail +- We run load tests/performance tests where appropriate to validate that the system performs under stress + +## Build for testing + +Testing is a critical part of the development process. It is important to build your application with testing in mind. Here are some tips to help you build for testing: + +- **Parameterize everything.** Rather than hard-code any variables, consider making everything a configurable parameter with a reasonable default. This will allow you to easily change the behavior of your application during testing. Particularly during performance testing, it is common to test different values to see what impact that has on performance. If a range of defaults need to change together, consider one or more parameters which set "modes", changing the defaults of a group of parameters together. + +- **Document at startup.** When your application starts up, it should log all parameters. This ensures that the person reviewing the logs and application behavior knows exactly how the application is configured. + +- **Log to console.** Logging to external systems like Azure Monitor is desirable for traceability across services. This requires logs to be dispatched from the local system to the external system, and that is a dependency that can fail. It is important that someone be able to view console logs directly on the local system. + +- **Log to external system.** In addition to console logs, logging to an external system like Azure Monitor is desirable for traceability across services and durability of logs. + +- **Log all activity.** If the system is performing some activity (reading data from a database, calling an external service, etc.), it should log that activity. Ideally, there should be a log message saying the activity is starting and another log message saying the activity is complete. This allows someone reviewing the logs to understand what the application is doing and how long it is taking. Depending on how noisy this is, different messages can be associated with different log levels, but it is important to have the information available when it comes to debugging a deployed system.
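+
+  For example, a minimal sketch of this idea in Python, using only the standard `logging` module (the `load_orders` function and its `customer_id` parameter are hypothetical, purely for illustration):
+
+  ```python
+  import logging
+  import time
+
+  logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
+  logger = logging.getLogger(__name__)
+
+  def load_orders(customer_id: str) -> list:
+      """Hypothetical activity: read a customer's orders from a data store."""
+      logger.info("Loading orders started (customer_id=%s)", customer_id)
+      start = time.perf_counter()
+      orders = []  # placeholder for the real database call
+      elapsed_ms = (time.perf_counter() - start) * 1000
+      logger.info("Loading orders complete (customer_id=%s, count=%d, elapsed_ms=%.1f)", customer_id, len(orders), elapsed_ms)
+      return orders
+  ```
+
+  The same pattern feeds the metadata and performance-metric tips below: each activity log line carries an identifier (`customer_id`) and a duration that can be charted over time.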
+ +- **Correlate distributed activities.** If the system is performing some activity that is distributed across multiple systems, it is important to correlate the activity across those systems. This can be done using a Correlation ID that is passed from system to system. This allows someone reviewing the logs to understand the entire flow of activity. For more information, please see [Observability in Microservices](../observability/microservices.md). + +- **Log metadata.** When logging, it is important to include metadata that is relevant to the activity. For example, a Tenant ID, Customer ID, or Order ID. This allows someone reviewing the logs to understand the context of the activity and filter to a manageable set of logs. + +- **Log performance metrics.** Even if you are using App Insights to capture how long dependency calls are taking, it is often useful to know long certain functions of your application took. It then becomes possible to evaluate the performance characteristics of your application as it is deployed on different compute platforms with different limitations on CPU, memory, and network bandwidth. For more information, please see [Metrics](../observability/pillars/metrics.md). + + +## Map of outcomes to testing techniques + +The table below maps outcomes (the results that you may want to achieve in your validation efforts) to one or more techniques that can be used to accomplish that outcome. | When I am working on... | I want to get this outcome... | ...so I should consider | -|-------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Development | Prove backward compatibility with existing callers and clients | [Shadow testing](shadow-testing/README.md) | Development; [Integration testing](integration-testing/README.md) | Ensure telemetry is sufficiently detailed and complete to trace and diagnose malfunction in [End-to-End testing](e2e-testing/README.md) flows | Distributed Debug challenges ; Orphaned call chain analysis | +| -- | -- | -- | +| Development | Prove backward compatibility with existing callers and clients | [Shadow testing](shadow-testing/README.md) | +| Development | Ensure telemetry is sufficiently detailed and complete to trace and diagnose malfunction in [End-to-End testing](e2e-testing/README.md) flows | Distributed Debug challenges; Orphaned call chain analysis | | Development | Ensure program logic is correct for a variety of expected, mainline, edge and unexpected inputs | [Unit testing](unit-testing/README.md); Functional tests; [Consumer-driven Contract Testing](cdc-testing/README.md); [Integration testing](integration-testing/README.md) | -| Development | Prevent regressions in logical correctness; earlier is better | [Unit testing](unit-testing/README.md); Functional tests; [Consumer-driven Contract Testing](cdc-testing/README.md); [Integration testing](integration-testing/README.md); Rings (each of these are expanding scopes of coverage) | | Development | 
Quickly validate mainline correctness of a point of functionality (e.g. single API), manually | Manual smoke testing Tools: postman, powershell, curl | +| Development | Prevent regressions in logical correctness; earlier is better | [Unit testing](unit-testing/README.md); Functional tests; [Consumer-driven Contract Testing](cdc-testing/README.md); [Integration testing](integration-testing/README.md); Rings (each of these are expanding scopes of coverage) | +| Development | Quickly validate mainline correctness of a point of functionality (e.g. single API), manually | Manual smoke testing Tools: postman, powershell, curl | | Development | Validate interactions between components in isolation, ensuring that consumer and provider components are compatible and conform to a shared understanding documented in a contract | [Consumer-driven Contract Testing](cdc-testing/README.md) | -| Development; [Integration testing](integration-testing/README.md) | Validate that multiple components function together across multiple interfaces in a call chain, incl network hops | [Integration testing](integration-testing/README.md); End-to-end ([End-to-End testing](e2e-testing/README.md)) tests; Segmented end-to-end ([End-to-End testing](e2e-testing/README.md)) | +| Development | Validate that multiple components function together across multiple interfaces in a call chain, incl network hops | [Integration testing](integration-testing/README.md); End-to-end ([End-to-End testing](e2e-testing/README.md)) tests; Segmented end-to-end ([End-to-End testing](e2e-testing/README.md)) | | Development | Prove disaster recoverability – recover from corruption of data | DR drills | -| Development | Find vulnerabilities in service Authentication or Authorization | Scenario (security) | | Development | Prove correct RBAC and claims interpretation of Authorization code | Scenario (security) | | Development | Document and/or enforce valid API usage | [Unit testing](unit-testing/README.md); Functional tests; [Consumer-driven Contract Testing](cdc-testing/README.md) | +| Development | Find vulnerabilities in service Authentication or Authorization | Scenario (security) | +| Development | Prove correct RBAC and claims interpretation of Authorization code | Scenario (security) | +| Development | Document and/or enforce valid API usage | [Unit testing](unit-testing/README.md); Functional tests; [Consumer-driven Contract Testing](cdc-testing/README.md) | | Development | Prove implementation correctness in advance of a dependency or absent a dependency | [Unit testing](unit-testing/README.md) (with mocks); [Unit testing](unit-testing/README.md) (with emulators); [Consumer-driven Contract Testing](cdc-testing/README.md) | -| Development | Ensure that the user interface is accessible | [Accessibility](../accessibility/README.md) | +| Development | Ensure that the user interface is accessible | [Accessibility](../non-functional-requirements/accessibility.md) | | Development | Ensure that users can operate the interface | [UI testing (automated)](ui-testing/README.md) (human usability observation) | | Development | Prevent regression in user experience | UI automation; [End-to-End testing](e2e-testing/README.md) | | Development | Detect and prevent 'noisy neighbor' phenomena | [Load testing](performance-testing/load-testing.md) | @@ -40,42 +80,9 @@ The table below maps outcomes -- the results that you may want to achieve in you | Staging; Operation | Measure behavior under rapid changes in traffic | Spike | | Staging; Optimizing | 
Discover cost metrics per unit load volume (what factors influence cost at what load points, e.g. cost per million concurrent users) | Load (stress) | | Development; Operation | Discover points where a system is not resilient to unpredictable yet inevitable failures (network outage, hardware failure, VM host servicing, rack/switch failures, random acts of the Malevolent Divine, solar flares, sharks that eat undersea cable relays, cosmic radiation, power outages, renegade backhoe operators, wolves chewing on junction boxes, …) | Chaos | -| Development | Perform unit testing on Power platform custom connectors | [Custom Connector Testing](unit-testing/custom-connector.md) | - -## Sections within Testing - -- [Consumer-driven contract (CDC) testing](cdc-testing/README.md) -- [End-to-End testing](e2e-testing/README.md) -- [Fault Injection testing](fault-injection-testing/README.md) -- [Integration testing](integration-testing/README.md) -- [Performance testing](performance-testing/README.md) -- [Shadow testing](shadow-testing/README.md) -- [Smoke testing](smoke-testing/README.md) -- [Synthetic Transaction testing](synthetic-monitoring-tests/README.md) -- [UI testing](ui-testing/README.md) -- [Unit testing](unit-testing/README.md) +| Development | Perform unit testing on Power platform custom connectors | [Custom Connector Testing](unit-testing/custom-connector.md) | -## Technology Specific Testing +## Technology specific testing -- [Using DevTest Pattern for building containers with AzDO](tech-specific-samples/azdo-container-dev-test-release/README.md) +- [Using DevTest Pattern for building containers with AzDO](tech-specific-samples/building-containers-with-azure-devops.md) - [Using Azurite to run blob storage tests in pipeline](tech-specific-samples/blobstorage-unit-tests/README.md) - -## Build for Testing - -Testing is a critical part of the development process. It is important to build your application with testing in mind. Here are some tips to help you build for testing: - -- **Parameterize everything.** Rather than hard-code any variables, consider making everything a configurable parameter with a reasonable default. This will allow you to easily change the behavior of your application during testing. Particularly during performance testing, it is common to test different values to see what impact that has on performance. If a range of defaults need to change together, consider one or more parameters which set "modes", changing the defaults of a group of parameters together. - -- **Document at startup.** When your application starts up, it should log all parameters. This ensures the person reviewing the logs and application behavior know exactly how the application is configured. - -- **Log to console.** Logging to external systems like Azure Monitor is desirable for traceability across services. This requires logs to be dispatched from the local system to the external system and that is a dependency that can fail. It is important that someone be able to console logs directly on the local system. - -- **Log to external system.** In addition to console logs, logging to an external system like Azure Monitor is desirable for traceability across services and durability of logs. - -- **Log all activity.** If the system is performing some activity (reading data from a database, calling an external service, etc.), it should log that activity. Ideally, there should be a log message saying the activity is starting and another log message saying the activity is complete. 
This allows someone reviewing the logs to understand what the application is doing and how long it is taking. Depending on how noisy this is, different messages can be associated with different log levels, but it is important to have the information available when it comes to debugging a deployed system. - -- **Correlate distributed activities.** If the system is performing some activity that is distributed across multiple systems, it is important to correlate the activity across those systems. This can be done using a Correlation ID that is passed from system to system. This allows someone reviewing the logs to understand the entire flow of activity. For more information, please see [Observability in Microservices](../observability/microservices.md). - -- **Log metadata.** When logging, it is important to include metadata that is relevant to the activity. For example, a Tenant ID, Customer ID, or Order ID. This allows someone reviewing the logs to understand the context of the activity and filter to a manageable set of logs. - -- **Log performance metrics.** Even if you are using App Insights to capture how long dependency calls are taking, it is often useful to know long certain functions of your application took. It then becomes possible to evaluate the performance characteristics of your application as it is deployed on different compute platforms with different limitations on CPU, memory, and network bandwidth. For more information, please see [Metrics](../observability/pillars/metrics.md). diff --git a/docs/automated-testing/e2e-testing/recipes/README.md b/docs/automated-testing/e2e-testing/recipes/README.md deleted file mode 100644 index 0d6f0fc1ed..0000000000 --- a/docs/automated-testing/e2e-testing/recipes/README.md +++ /dev/null @@ -1,4 +0,0 @@ -# Templates - -- [Gauge Framework](gauge-framework.md) -- [Postman](postman-testing.md) \ No newline at end of file diff --git a/docs/automated-testing/integration-testing/README.md b/docs/automated-testing/integration-testing/README.md index 01d6a83c63..e3698a4098 100644 --- a/docs/automated-testing/integration-testing/README.md +++ b/docs/automated-testing/integration-testing/README.md @@ -12,7 +12,7 @@ Consider a banking application with three modules: login, transfers, and current Integration testing is done by the developer or QA tester. In the past, integration testing always happened after unit and before system and E2E testing. Compared to unit-tests, integration tests are fewer in quantity, usually run slower, and are more expensive to set up and develop. Now, if a team is following agile principles, integration tests can be performed before or after unit tests, early and often, as there is no need to wait for sequential processes. Additionally, integration tests can utilize mock data in order to simulate a complete system. There is an abundance of language-specific testing frameworks that can be used throughout the entire development lifecycle. -\*\* It is important to note the difference between integration and acceptance testing. Integration testing confirms a group of components work together as intended from a technical perspective, while acceptance testing confirms a group of components work together as intended from a business scenario. +> It is important to note the difference between integration and acceptance testing. 
Integration testing confirms a group of components work together as intended from a technical perspective, while acceptance testing confirms a group of components work together as intended from a business perspective. ## Applying Integration Testing diff --git a/docs/automated-testing/performance-testing/README.md b/docs/automated-testing/performance-testing/README.md index bd068f8040..42ed1a7999 100644 --- a/docs/automated-testing/performance-testing/README.md +++ b/docs/automated-testing/performance-testing/README.md @@ -38,7 +38,7 @@ following: the cost of running the hardware and software infrastructure. - Assess the **system's readiness** for release: - + - Evaluating the system's performance characteristics (response time, throughput) in a production-like environment. The goal is to ensure that performance goals can be achieved upon release. @@ -49,7 +49,7 @@ following: to the values of performance characteristics during previous runs (or baseline values), can provide an indication of performance issues (performance regression) or enhancements introduced due to a change - + ## Key Performance Testing categories Performance testing is a broad topic. There are many areas where you can perform diff --git a/docs/automated-testing/performance-testing/iterative-perf-test-template.md b/docs/automated-testing/performance-testing/iterative-perf-test-template.md index d851d8afb0..4564472da1 100644 --- a/docs/automated-testing/performance-testing/iterative-perf-test-template.md +++ b/docs/automated-testing/performance-testing/iterative-perf-test-template.md @@ -26,7 +26,7 @@ ### Results ```md -In bullet points document the results from the test. +In bullet points document the results from the test. - Attach any documents supporting the test results. - Add links to the dashboard for metrics and logs such as Application Insights. - Capture screenshots for metrics and include it in the results. Good candidate for this is CPU/Memory/Disk usage. diff --git a/docs/automated-testing/performance-testing/load-testing.md b/docs/automated-testing/performance-testing/load-testing.md index 3cccf3a3c9..f6f2825bf4 100644 --- a/docs/automated-testing/performance-testing/load-testing.md +++ b/docs/automated-testing/performance-testing/load-testing.md @@ -51,22 +51,23 @@ Evaluate whether load tests should be run as part of the PR strategy. ### Execution -It is recommended to use an existing testing framework (see below). These tools will provide a method of both specifying the user activity scenarios and how to execute those at load. Depending on the situation, it may be advisable to coordinate testing activities with the platform operations team. +It is recommended to use an existing testing framework (see below). These tools will provide a method of both specifying the user activity scenarios and how to execute those at load. Depending on the situation, it may be advisable to coordinate testing activities with the platform operations team. It is common to slowly ramp up to your desired load to better replicate real-world behavior. Once you have reached your defined workload, maintain this level long enough to see if your system stabilizes. To finish the test, you should also ramp back down and record how the system behaves as the load decreases. You should also consider the origin of your load test traffic. Depending on the scope of the target system, you may want to initiate the load from a different location to better replicate real-world traffic, such as from a different region.
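+
+As one way to express such a ramp-up / hold / ramp-down profile, here is a minimal sketch using Locust (chosen purely for illustration; the host, user counts, and durations are placeholders, and any of the frameworks discussed below could express the same shape):
+
+```python
+from locust import HttpUser, LoadTestShape, task, between
+
+class WebsiteUser(HttpUser):
+    host = "https://my-service.example.com"  # placeholder for the system under test
+    wait_time = between(1, 3)
+
+    @task
+    def get_home(self):
+        self.client.get("/")
+
+class RampUpHoldRampDown(LoadTestShape):
+    # Ramp up to the target load, hold it long enough to see whether the system
+    # stabilizes, then ramp back down to observe behavior as traffic decreases.
+    stages = [
+        {"until": 120, "users": 100, "spawn_rate": 2},  # ramp up over ~2 minutes
+        {"until": 720, "users": 100, "spawn_rate": 2},  # hold for 10 minutes
+        {"until": 840, "users": 0, "spawn_rate": 2},    # ramp down over ~2 minutes
+    ]
+
+    def tick(self):
+        run_time = self.get_run_time()
+        for stage in self.stages:
+            if run_time < stage["until"]:
+                return stage["users"], stage["spawn_rate"]
+        return None  # returning None ends the test
+```
+
+When run headless (`locust --headless -f <file>`), Locust steps through the three stages automatically; the same shape can be reproduced in whichever framework the team chooses.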
-**Note:** Before starting please be aware of any restrictions on your network such as DDOS protection where you may need to notify a network administrator or apply for an exemption. - -**Note:** In general, the preferred approach to load testing would be the usage of a standard test framework such as the ones discussed below. There are cases, however, where a custom test client may be advantageous. Examples include batch oriented workloads that can be run under a single security context and the same test data can be re-used for multiple load tests. In such a scenario it may be beneficial to develop a custom script that can be used interactively as well as non-interactively. +> **Note:** Before starting, please be aware of any restrictions on your network, such as DDoS protection, which may require you to notify a network administrator or apply for an exemption. +> +> **Note:** In general, the preferred approach to load testing is to use a standard test framework such as the ones discussed below. There are cases, however, where a custom test client may be advantageous: for example, batch-oriented workloads that run under a single security context and re-use the same test data across multiple load tests. In such a scenario it may be beneficial to develop a custom script that can be used both interactively and non-interactively. ### Analysis The analysis phase represents the work that brings all previous activities together: -* Set aside time to allow for collection of new test data based on the analysis of the load tests. -* Correlate application metrics and platform metrics to identify potential pitfalls and bottlenecks. -* Include business stakeholders early in the analysis phase to validate application findings. Include platform operations to validate platform findings. + +- Set aside time to allow for collection of new test data based on the analysis of the load tests. +- Correlate application metrics and platform metrics to identify potential pitfalls and bottlenecks. +- Include business stakeholders early in the analysis phase to validate application findings. Include platform operations to validate platform findings. ### Report writing diff --git a/docs/automated-testing/shadow-testing/README.md b/docs/automated-testing/shadow-testing/README.md index 0427424765..cb2c762918 100644 --- a/docs/automated-testing/shadow-testing/README.md +++ b/docs/automated-testing/shadow-testing/README.md @@ -54,7 +54,7 @@ Some advantages of shadow testing are: - We can test real-life scenarios with real-life data. - We can simulate scale with replicated production traffic.
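+
+To make the mirroring idea concrete, here is a deliberately simplified sketch in Python (the hostnames and the use of the `requests` library are illustrative assumptions; real shadow deployments usually mirror traffic at the load balancer, API gateway, or service mesh and compare responses offline, as tools such as Diffy do):
+
+```python
+import requests
+
+PROD_BASE = "https://api.example.com"       # current (baseline) version serving real users
+SHADOW_BASE = "https://shadow.example.com"  # candidate version receiving the mirrored traffic
+
+def mirror_get(path, params=None):
+    """Serve the caller from production, mirror the same call to the shadow
+    deployment, and log any divergence. The shadow call never affects the caller."""
+    prod_resp = requests.get(PROD_BASE + path, params=params, timeout=10)
+    try:
+        shadow_resp = requests.get(SHADOW_BASE + path, params=params, timeout=10)
+        if shadow_resp.status_code != prod_resp.status_code:
+            print(f"Divergence on GET {path}: prod={prod_resp.status_code}, shadow={shadow_resp.status_code}")
+    except requests.RequestException as exc:
+        print(f"Shadow call failed for GET {path}: {exc}")  # logged, never surfaced to the caller
+    return prod_resp
+```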
-## References +## References - [Martin Fowler - Dark Launching](https://martinfowler.com/bliki/DarkLaunching.html) - [Martin Fowler - Feature Toggle](https://martinfowler.com/bliki/FeatureToggle.html) diff --git a/docs/automated-testing/tech-specific-samples/README.md b/docs/automated-testing/tech-specific-samples/README.md deleted file mode 100644 index f6d7e47470..0000000000 --- a/docs/automated-testing/tech-specific-samples/README.md +++ /dev/null @@ -1,4 +0,0 @@ -# Tech specific samples - -- [azdo-container-dev-test-release](azdo-container-dev-test-release/README.md) -- [blobstorage-unit-tests](blobstorage-unit-tests/README.md) \ No newline at end of file diff --git a/docs/automated-testing/tech-specific-samples/azdo-container-dev-test-release/README.md b/docs/automated-testing/tech-specific-samples/building-containers-with-azure-devops.md similarity index 98% rename from docs/automated-testing/tech-specific-samples/azdo-container-dev-test-release/README.md rename to docs/automated-testing/tech-specific-samples/building-containers-with-azure-devops.md index f8faa3bcac..30b3e13048 100644 --- a/docs/automated-testing/tech-specific-samples/azdo-container-dev-test-release/README.md +++ b/docs/automated-testing/tech-specific-samples/building-containers-with-azure-devops.md @@ -8,13 +8,6 @@ We will dive into tools needed to build, test and push a container, our environm Follow this link to dive deeper or revisit the [DevTest pattern](https://learn.microsoft.com/en-us/azure/architecture/solution-ideas/articles/dev-test-paas). -## Table of Contents - -[Build the Container](#build-the-container) -[Test the Container](#test-the-container) -[Push Container](#push-container) -[References](#references) - ## Build the Container The first step in container development, after creating the necessary Dockerfiles and source code, is building the container. Even the Dockerfile itself can include some basic testing. Code tests are performed when pushing the code to the repository origin, where it is then used to build the container. diff --git a/docs/automated-testing/templates/README.md b/docs/automated-testing/templates/README.md deleted file mode 100644 index eea709bfab..0000000000 --- a/docs/automated-testing/templates/README.md +++ /dev/null @@ -1,4 +0,0 @@ -# Templates - -- [case-study-template](./case-study-template.md) -- [test-type-template](./test-type-template.md) \ No newline at end of file diff --git a/docs/automated-testing/templates/case-study-template.md b/docs/automated-testing/templates/case-study-template.md index 14a520c8aa..ddcd44e0b8 100644 --- a/docs/automated-testing/templates/case-study-template.md +++ b/docs/automated-testing/templates/case-study-template.md @@ -1,4 +1,6 @@ -# ~Customer Project~ Case Study +# Case study template + +**[Customer Project] Case Study** ## Background diff --git a/docs/automated-testing/templates/test-type-template.md b/docs/automated-testing/templates/test-type-template.md index 6f313e3383..2b1e6cbc5c 100644 --- a/docs/automated-testing/templates/test-type-template.md +++ b/docs/automated-testing/templates/test-type-template.md @@ -1,4 +1,6 @@ -# Insert Test Technique Name Here +# Test Type Template + +**[Test Technique Name Here]** Put a 2-3 sentence overview about the test technique here. @@ -22,7 +24,7 @@ How much is enough? 
For example, some opine that unit test ROI drops significan - [ ] Build pipelines - [ ] Non-production deployments - [ ] Production deployments - + ## NOTE: If there is great (clear, succinct) documentation for the technique on the web, supply a pointer and skip the rest of this template. No need to re-type content ## How to Use diff --git a/docs/automated-testing/ui-testing/teams-tests.md b/docs/automated-testing/ui-testing/teams-tests.md index 6d6ef5b557..fbe529ec02 100644 --- a/docs/automated-testing/ui-testing/teams-tests.md +++ b/docs/automated-testing/ui-testing/teams-tests.md @@ -16,7 +16,7 @@ This is an overview on how you can implement UI tests for a custom Teams applica The following are learnings from various engagements: -## 1. Web based UI tests +## Web based UI tests To implement web-based UI tests for your Teams application, follow the same approach as you would for testing any other web application with a UI. [UI testing](README.md) provides valuable guidance in this regard. Your starting point for the test would be to automatically launch a browser (using Selenium or similar frameworks) and navigate to [https://teams.microsoft.com](https://teams.microsoft.com). @@ -65,13 +65,13 @@ var buildEdgeDriver = function () { }; ``` -## 2. Mobile based UI tests +## Mobile based UI tests Testing your custom Teams application on mobile devices is a bit more difficult than using the web-based approach as it requires usage of actual or simulated devices. Running such tests in a CI/CD pipeline can be more difficult and resource-intensive. One approach is to use real devices or cloud-based emulators from vendors such as [BrowserStack](https://www.browserstack.com/) which requires a license. Alternatively, you can use virtual devices hosted in Azure Virtual Machines. -### a) Using Android Virtual Devices (AVD) +### Option 1: Using Android Virtual Devices (AVD) This approach enables the creation of Android UI tests using virtual devices. It comes with the advantage of not requiring paid licenses to certain vendors. However, due to the nature of emulators, compared to real devices, it may prove to be less stable. Always choose the solution that best fits your project requirements and resources. @@ -250,7 +250,7 @@ Assuming you are using [webdriverio](https://webdriver.io/) as the client, you w - "appium:appActivity": the activity within Teams that you would like to launch on the device. In our case, we would like just to launch the app. The activity name for launching Teams is called "com.microsoft.skype.teams.Launcher". - "appium:automationName": the name of the driver you are using. Note: Appium can communicate to different platforms. This is achieved by installing a dedicated driver, designed for each platform. In our case, it would be [UiAutomator2](https://github.com/appium/appium-uiautomator2-driver) or [Espresso](https://github.com/appium/appium-espresso-driver), since they are both designed for Android platform. -### b) Using BrowserStack +### Option 2: Using BrowserStack BrowserStack serves as a cloud-based platform that enables developers to test both the web and mobile application across various browsers, operating systems, and real mobile devices. This can be seen as an alternative solution to the approach described earlier. 
The specific insights provided below relate to implementing such tests for a custom Microsoft Teams application: diff --git a/linkcheck.json b/linkcheck.json index 91b1bfcc12..9dd658826b 100755 --- a/linkcheck.json +++ b/linkcheck.json @@ -65,7 +65,8 @@ "https://www.pluralsight.com/courses/", "https://www.gartner.com/en/information-technology/glossary/citizen-developer", "https://www.onetrust.com/blog/principles-of-privacy-by-design/", - "https://docs.github.com/en/rest/commits/statuses" + "https://docs.github.com/en/rest/commits/statuses", + "https://blog.twitter.com/engineering/en_us/a/2015/diffy-testing-services-without-writing-tests.html" ], "only_errors": true, "cache_duration": "24h",