
Fix typos and broken links throughout the repository (microsoft#351)
* fix typos in testing section

* fix typos in code reviews

* fix typos in CICD

* fix bare links and formatting in test section

* fix TOC in design reviews

* fix typos in devex

* clean up observability

* fix typos

* disable rule for duplicate heading as performance testing doc contains duplicate headings

* fix markdown lint

Co-authored-by: Tess Ferrandez <[email protected]>
TessFerrandez and Tess Ferrandez authored Aug 11, 2020
1 parent 03585f1 commit 46e000f
Showing 38 changed files with 160 additions and 185 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -76,7 +76,7 @@ prompted the pull request.
### Merging strategy

The preferred merging strategy for this repo is **linear**.
-You can familiarize yourself with [merging strategies](./source-control/contributing/readme.md#merging-strategies) described in the Source Control section of this repo.
+You can familiarize yourself with [merging strategies](./source-control/contributing/readme.md#merge-strategies) described in the Source Control section of this repo.

## Adding a new section

4 changes: 2 additions & 2 deletions CSE.md
@@ -1,9 +1,9 @@
# Microsoft CSE (Commercial Software Engineering)

-Our team, CSE (Commercial Software Engineering), works side by side with customers to help them tackle their toughest technical problems both in the cloud and on the edge. We meet customers where they are, work in the languages they use, with the open source frameworks they use, on the operating systems they use. We work with enterprises and startups across many industries from financial services to manufacturing. Our work covers a broad spectrum of domains including IoT, machine learning, and high scale compute. Our "super power" is that we work closely with both our customers’ engineering teams and Microsoft’s product engineering teams, developing real-world expertise that we use to help our customers grow their business and help Microsoft improve our products and services.
+Our team, CSE (Commercial Software Engineering), works side by side with customers to help them tackle their toughest technical problems both in the cloud and on the edge. We meet customers where they are, work in the languages they use, with the open source frameworks they use, on the operating systems they use. We work with enterprises and start-ups across many industries from financial services to manufacturing. Our work covers a broad spectrum of domains including IoT, machine learning, and high scale compute. Our "super power" is that we work closely with both our customers’ engineering teams and Microsoft’s product engineering teams, developing real-world expertise that we use to help our customers grow their business and help Microsoft improve our products and services.

We are very community focused in our work, with one foot in Microsoft and one foot in the open source communities that we help. We make pull requests on open source projects to add support for Microsoft platforms and/or improve existing implementations. We build frameworks and other tools to make it easier for developers to use Microsoft platforms. We source all the ideas for this work by maintaining very deep connections with these communities and the customers and partners that use them.

-If you like variety, coding in many languages, using any available tech across our industry, digging in with our customers, hackfests, occasional travel, and telling the story of what you’ve done in [blog posts](https://www.microsoft.com/developerblog/) and at conferences, then come talk to us.
+If you like variety, coding in many languages, using any available tech across our industry, digging in with our customers, hack fests, occasional travel, and telling the story of what you’ve done in [blog posts](https://www.microsoft.com/developerblog/) and at conferences, then come talk to us.

> You can checkout some of our work on our [Developer Blog](https://www.microsoft.com/developerblog/)
2 changes: 1 addition & 1 deletion SPRINT-STRUCTURE.md
@@ -41,7 +41,7 @@ The purpose of this document is to:
- [ ] [Set up Source Control](source-control/readme.md)
- Agree on [best practices for commits](source-control/contributing/readme.md#commit-best-practices)
- [ ] [Set up basic Continuous Integration with linters and automated tests](continuous-integration/readme.md)
-- [ ] [Set up meetings for Daily Standups and decide on a Process Lead](stand-ups/readme.md)
+- [ ] [Set up meetings for Daily Stand-ups and decide on a Process Lead](stand-ups/readme.md)
- Discuss purpose, goals, participants and facilitation guidance
- Discuss timing, and how to run an efficient stand-up
- [ ] [If the project has sub-teams, set up a Scrum of Scrums](scrum-of-scrums/readme.md)
37 changes: 17 additions & 20 deletions automated-testing/e2e-testing/readme.md
@@ -59,7 +59,7 @@ E2E testing is done with the following steps:
- Production like Environment setup for the testing
- Test data setup
- Decide exit criteria
-- Choose the testing methods that most applicable to your system. For the definiton of the various testing methods, please see [Testing Methods](./testing-methods.md) document.
+- Choose the testing methods that most applicable to your system. For the definition of the various testing methods, please see [Testing Methods](./testing-methods.md) document.

### Pre-requisite

@@ -72,7 +72,7 @@ E2E testing is done with the following steps:
- Execute the test cases
- Register the test results and decide on pass and failure
- Report the Bugs in the bug reporting tool
-- Reverify the bug fixes
+- Re-verify the bug fixes

### Test closure

@@ -91,7 +91,7 @@ The tracing the quality metrics gives insight about the current status of testin

## E2E Testing Frameworks and Tools

-### **1) Gauge Framework**
+### 1. Gauge Framework

![Gauge Framework](./images/gauge.jpg)

@@ -105,29 +105,29 @@ Gauge is a free and open source framework for writing and running E2E tests. Som
- Supports Visual Studio Code, Intellij IDEA, IDE Support.
- Supports html, json and XML reporting.

-=> [Gauge Framework Website](https://gauge.org/)
+[Gauge Framework Website](https://gauge.org/)
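Since Gauge specs are plain markdown, a test case can read almost like documentation. A minimal, hypothetical spec sketch (the heading and step names below are invented for illustration; each `*` step maps to a function in the implementation language):

```markdown
# Checkout

## Customer can complete a purchase

* Sign in as a registered customer
* Add "2" items to the cart
* Complete checkout with a valid credit card
* Verify the order confirmation page is shown
```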

-### **2) Robot Framework**
+### 2. Robot Framework

![Robot Framework](./images/robot.jpg)

Robot Framework is a generic open source automation framework. The framework has easy syntax, utilizing human-readable keywords. Its capabilities can be extended by libraries implemented with Python or Java.

Robot shares a lot of the same "pros" as Gauge, with the exception of the developer tooling and the syntax. In our usage, we found the VS Code Intellisense offered with Gauge to be much more stable than the offerings for Robot. We also found the syntax to be less readable than what Gauge offered. While both frameworks allow for markup based test case definitions, the Gauge syntax reads much more like an English sentence than Robot. Finally, Intellisense is baked into the markup files for Gauge test cases, which will create a function stub for the actual test definition if the developer allows it. The same cannot be said of the Robot Framework.

-=> [Robot Framework Website](https://robotframework.org/#introduction)
+[Robot Framework Website](https://robotframework.org/#introduction)
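For a feel of the syntax difference described above, a hypothetical Robot Framework test for the same purchase flow might look like this (keyword names invented for illustration; each keyword is implemented in Python or Java):

```robotframework
*** Test Cases ***
Customer Can Complete A Purchase
    Sign In As Registered Customer
    Add Items To Cart    2
    Complete Checkout With Valid Card
    Order Confirmation Page Should Be Shown
```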

-### **3) TestCraft**
+### 3. TestCraft

![TestCraft](./images/TestCraft-logo.png)

TestCraft is a codeless Selenium test automation platform. Its revolutionary AI technology and unique visual modeling allow for faster test creation and execution while eliminating test maintenance overhead.

The testers create fully automated test scenarios without coding. Customers find bugs faster, release more frequently, integrate with the CI/CD approach and improve the overall quality of their digital products. This all creates a complete end to end testing experience.

-=> [TestCraft Website](https://www.testcraft.io/?utm_campaign=SoftwareTestingHelp%20&utm_source=SoftwareTestingHelp&utm_medium=EndtoEndTestingPage) or get it from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=testcraft.build-release-task)
+[TestCraft Website](https://www.testcraft.io/?utm_campaign=SoftwareTestingHelp%20&utm_source=SoftwareTestingHelp&utm_medium=EndtoEndTestingPage) or get it from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=testcraft.build-release-task)

-### **4) Ranorex Studio**
+### 4. Ranorex Studio

![Ranorex Studio](./images/ranorex-studio2.png)

@@ -137,9 +137,9 @@ Run tests in parallel or on a Selenium Grid with built-in Selenium WebDriver. Ra

**Ranorex Studio** tests also integrate with Azure DevOps (AzDO), which can be run as part of a build pipeline in AzDO.

-=> [Ranorex Studio Website](https://www.ranorex.com/ranorex-studio-test-automation/?utm_source=softwaretestinghelp&utm_medium=cpc&utm_campaign=softwaretestinghelp_what-is-end-to-end-testing) or read about its [integration with AzDO](https://www.ranorex.com/help/latest/interfaces-connectivity/azure-devops-integration/introduction/)
+[Ranorex Studio Website](https://www.ranorex.com/ranorex-studio-test-automation/?utm_source=softwaretestinghelp&utm_medium=cpc&utm_campaign=softwaretestinghelp_what-is-end-to-end-testing) or read about its [integration with AzDO](https://www.ranorex.com/help/latest/interfaces-connectivity/azure-devops-integration/introduction/)

-### **5) Katalon Studio**
+### 5. Katalon Studio

![Katalon](./images/New-Logo-Katalon-Studio.png)

@@ -151,9 +151,9 @@ Built on top of Selenium and Appium, Katalon Studio helps standardize your end-t

Katalon is endorsed by Gartner, IT professionals, and a large testing community.

->Note: At the time of this writing, Katalon Studio extension for AzDO was NOT available for Linux.
+> Note: At the time of this writing, Katalon Studio extension for AzDO was NOT available for Linux.
-=> [Katalon Studio Website](https://www.katalon.com/) or read about its [integration with AzDO](https://docs.katalon.com/katalon-studio/docs/azure-devops-extension.html#installation)
+[Katalon Studio Website](https://www.katalon.com/) or read about its [integration with AzDO](https://docs.katalon.com/katalon-studio/docs/azure-devops-extension.html#installation)

## Conclusion

@@ -165,10 +165,7 @@ Finally, the E2E test is often performed manually as the cost of automating such

## Resources

-[Wikipedia: Software testing](https://en.wikipedia.org/wiki/Software_testing)
-
-[Wikipedia: Unit testing](https://en.wikipedia.org/wiki/Unit_testing)
-
-[Wikipedia: Integration testing](https://en.wikipedia.org/wiki/Integration_testing)
-
-[Wikipedia: System testing](https://en.wikipedia.org/wiki/System_testing)
+- [Wikipedia: Software testing](https://en.wikipedia.org/wiki/Software_testing)
+- [Wikipedia: Unit testing](https://en.wikipedia.org/wiki/Unit_testing)
+- [Wikipedia: Integration testing](https://en.wikipedia.org/wiki/Integration_testing)
+- [Wikipedia: System testing](https://en.wikipedia.org/wiki/System_testing)
2 changes: 1 addition & 1 deletion automated-testing/e2e-testing/test_type_template2.md
Original file line number Diff line number Diff line change
Expand Up @@ -4,7 +4,7 @@ The document should start with a brief overview about the test type and what is

## Why ~Test type~ [The Why]

-Start by describing the problem that this test type addresses, this should focus on the motivation behind the test type to aid the reader corelate this test type to the problem they are trying to resolve.
+Start by describing the problem that this test type addresses, this should focus on the motivation behind the test type to aid the reader correlate this test type to the problem they are trying to resolve.

## ~Test type~ Design Blocks [The What]

4 changes: 2 additions & 2 deletions automated-testing/e2e-testing/testing-methods.md
@@ -6,7 +6,7 @@ This method is used very commonly. It occurs horizontally across the context of

![Horizontal Test](./images/horizontal-e2e-testing.png)

-The inbound data may be injected from various sources, but it then "flatten" into a horizontal processing pipeline that may include various components, such as a gateway API, data transformation, data validation, storage, etc... Throughout the entire Extract-Transform-Load (ETL) processing, the data flow can be tracked and monitored under the horizontal spectrum with little sprinkles of optional, and thus not important for the overal E2E test case, services, like logging, auditing, authentication.
+The inbound data may be injected from various sources, but it then "flatten" into a horizontal processing pipeline that may include various components, such as a gateway API, data transformation, data validation, storage, etc... Throughout the entire Extract-Transform-Load (ETL) processing, the data flow can be tracked and monitored under the horizontal spectrum with little sprinkles of optional, and thus not important for the overall E2E test case, services, like logging, auditing, authentication.
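As a concrete (hypothetical) illustration of a horizontal test, the sketch below drives data through every stage of a toy ETL pipeline and asserts only on the final output; the `extract`, `transform`, and `load` functions stand in for real services such as a gateway API, a validation step, and a storage layer:

```python
def extract(raw_rows):
    """Extract stage: parse raw comma-separated rows into records."""
    return [dict(zip(("id", "value"), row.split(","))) for row in raw_rows]

def transform(records):
    """Transform stage: drop records with empty values, convert values to int."""
    return [{"id": r["id"], "value": int(r["value"])}
            for r in records if r["value"].strip()]

def load(records, store):
    """Load stage: persist records into the target store, keyed by id."""
    for r in records:
        store[r["id"]] = r["value"]
    return store

def test_pipeline_end_to_end():
    """Horizontal E2E check: raw input in one end, final store state out the other."""
    store = load(transform(extract(["a,1", "b,2", "c, "])), {})
    assert store == {"a": 1, "b": 2}  # record "c" is dropped by validation
```

The point of the horizontal style is visible in the assertion: intermediate stages are exercised but never inspected directly.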

## Vertical Test

@@ -16,7 +16,7 @@ In this method, all most critical transactions of any application are verified a

In such case, each layer (tier) is required to be fully tested in conjunction with the "connected" layers above and beneath, in which services "talk" to each other during the end to end data flow. All these complex testing scenarios will require proper validation and dedicated automated testing. Thus this method is much more difficult.

-## E2E Test Cases Design Guidances
+## E2E Test Cases Design Guidelines

Below enlisted are few **guidelines** that should be kept in mind while designing the test cases for performing E2E testing:

16 changes: 9 additions & 7 deletions automated-testing/performance-testing/readme.md
@@ -12,7 +12,7 @@ Performance testing is commonly conducted to accomplish one or more the followin

* To help in assessing whether a **system is ready for Release**:
* Estimating / Predicting the performance characteristics (such as response time, throughput) which an application is likely to have when it is released in to production. The results can help in predicting the satisfaction level of the users when interacting with the system. The predicted values can also be compared with agreed values (success criteria) for the performance characteristics when available.
-  * To help in accessing the adequacy of the infrastructure / managed service sku's to meet the desired performance characteristics of a system
+  * To help in accessing the adequacy of the infrastructure / managed service SKUs to meet the desired performance characteristics of a system
* Identifying bottlenecks and issues with the application at different load levels
* To compare the **performance impact of application changes**
* Comparing the performance characteristics of an application after a change to the values of performance characteristics during previous runs (or baseline values), can provide an indication of performance issues or enhancements introduced due to a change
@@ -21,17 +21,19 @@ Performance testing is commonly conducted to accomplish one or more the followin

## Key Performance Testing categories

-### **Performance Testing**
+<!-- markdownlint-disable no-duplicate-heading -->
+### Performance Testing
+<!-- markdownlint-enable no-duplicate-heading -->

-This category is the super set of all sub categories of performance related testing. It validates/determines the speed, scalability or reliability characteristics of the system under test. Performance testing focusses on achieving the response times, throughput, and resource utilization levels which meet the performance objectives of a system
+This category is the super set of all sub categories of performance related testing. It validates/determines the speed, scalability or reliability characteristics of the system under test. Performance testing focuses on achieving the response times, throughput, and resource utilization levels which meet the performance objectives of a system

-### **Load Testing**
+### Load Testing

-This is the subcategory of performance testing which focusses on validating the performance characteristics of a system, when the system faces load volumes which are expected during production operation. **Endurance Test** or **Soak Test** is a load test carried over a long duration ranging from several hours to days.
+This is the subcategory of performance testing which focuses on validating the performance characteristics of a system, when the system faces load volumes which are expected during production operation. **Endurance Test** or **Soak Test** is a load test carried over a long duration ranging from several hours to days.

-### **Stress Testing**
+### Stress Testing

-This is the subcategory of performance testing which focusses on validating the performance characteristics of a system when the system faces extreme load. The goal is to evaluate how does the system handles being pressured to its limits, does it recover (i.e. scale-out) or does it just break and fail?
+This is the subcategory of performance testing which focuses on validating the performance characteristics of a system when the system faces extreme load. The goal is to evaluate how does the system handles being pressured to its limits, does it recover (i.e. scale-out) or does it just break and fail?
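To make these categories concrete, here is a minimal load-test harness sketch in Python; `fake_request` is an invented stand-in for a real call to the system under test, and the worker/request counts are arbitrary. Raising `workers` far beyond expected production load turns the same harness into a crude stress test:

```python
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real HTTP call; sleeps to simulate service latency."""
    latency = random.uniform(0.001, 0.005)
    time.sleep(latency)
    return latency

def run_load_test(workers, requests_per_worker):
    """Fire concurrent requests and summarize throughput and tail latency."""
    total = workers * requests_per_worker
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(lambda _: fake_request(), range(total)))
    elapsed = time.perf_counter() - start
    return {
        "requests": total,
        "throughput_rps": total / elapsed,
        "p95_s": statistics.quantiles(latencies, n=20)[18],  # 95th percentile
    }
```

Calling `run_load_test(workers=8, requests_per_worker=25)` returns a small report dict that can be compared against agreed success criteria (e.g. a p95 latency budget) or against a baseline from a previous run.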

## Key Performance testing activities

12 changes: 6 additions & 6 deletions automated-testing/readme.md
@@ -17,9 +17,9 @@ Automation testing | It can be manual or automation testing | It can be manual o

## Sections within Testing

-* [Unit testing](unit-testing/readme.md)
-* [Integration testing](integration-testing/readme.md)
-* [End-to-End testing](e2e-testing/readme.md)
-* [UI testing](ui-testing/readme.md)
-* [Synthetic Monitoring testing](synthetic-monitoring-tests/readme.md)
-* [Performance testing](performance-testing/readme.md)
+- [Unit testing](unit-testing/readme.md)
+- [Integration testing](integration-testing/readme.md)
+- [End-to-End testing](e2e-testing/readme.md)
+- [UI testing](ui-testing/readme.md)
+- [Synthetic Monitoring testing](synthetic-monitoring-tests/readme.md)
+- [Performance testing](performance-testing/readme.md)
