From 73d05c83f5da27b6cc333c7246dd62d565a20916 Mon Sep 17 00:00:00 2001 From: Chuck McCallum Date: Fri, 1 Nov 2024 14:04:13 -0400 Subject: [PATCH 1/7] export test plan to .md --- TEST-PLAN.md | 231 +++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 231 insertions(+) create mode 100644 TEST-PLAN.md diff --git a/TEST-PLAN.md b/TEST-PLAN.md new file mode 100644 index 0000000..adf5785 --- /dev/null +++ b/TEST-PLAN.md @@ -0,0 +1,231 @@ + +**Purpose of the Document** +Use the Test Plan document to describe the testing approach and overall framework that will drive the testing of the project. + +***Template Instructions*** +*Note that the information in italics is guidelines for documenting testing efforts and activities. To adopt this template, delete all italicized instructions and modify as appropriate* + +Table of Contents +[1 Introduction 3](#introduction) + +[1.1 Purpose 3](#purpose) + +[1.2 Project Overview 3](#project-overview) + +[2 Scope 3](#scope) + +[2.1 In-Scope 3](#in-scope) + +[2.2 Out-of-Scope 3](#out-of-scope) + +[3 Testing Strategy 3](#testing-strategy) + +[3.1 Test Objectives 3](#test-objectives) + +[3.2 Test Assumptions 4](#test-assumptions) + +[3.3 Data Approach 4](#data-approach) + +[3.4 Automation Strategy 4](#automation-strategy) + +[3.5 Test Case Prioritization 4](#test-case-prioritization) + +[4 Execution Strategy 4](#execution-strategy) + +[4.1 Entry Criteria 4](#entry-criteria) + +[4.2 Exit criteria 5](#exit-criteria) + +[4.3 Validation and Defect Management 5](#validation-and-defect-management) + +[5 Github Workflow 6](#github-workflow) + +[5.1 Issue Submission and Prioritization 6](#issue-submission-and-prioritization) + +[5.2 Pull Request Review and Testing 6](#pull-request-review-and-testing) + +[5.3 Continuous Integration and Deployment 6](#continuous-integration-and-deployment) + +[6 Environment Requirements 7](#environment-requirements) + +[6.1 Dataverse-Internal Environment 7](#dataverse-internal-environment) + +[6.2 AWS-Based Environments 7](#aws-based-environments) + +[7 Significantly Impacted Division/College/Department 7](#significantly-impacted-division/college/department) + +[8 Dependencies 7](#dependencies) + +1. # **Introduction** {#introduction} + + 1. ## **Purpose** {#purpose} + +*Provide a summary of the test strategy, test approach, execution strategy and test management.* + +Automate the end-to-end functionality of the DP-Creator-II application to ensure it meets user requirements and maintains reliability, especially in the context of privacy-preserving data processing. + +2. ## **Project Overview** {#project-overview} + +*A summary of the project, product, solution being tested.* + +2. # **Scope** {#scope} + + 1. ## **In-Scope** {#in-scope} + +*Describes what is being tested, such as all the functions/features of a specific project/product/solution.* + +\- User authentication (login, logout, user session management) +\- Data processing workflows (data upload, processing configurations) +\- UI elements related to data privacy settings + \- Data export functionalities + +2. ## **Out-of-Scope** {#out-of-scope} + +*Identify all features and combinations of features which will not be tested and the reasons.* + + \- Performance testing + \- Non-functional UI aspects not directly related to functionality + \- External API integrations that are mocked in this environment + +3. # **Testing Strategy** {#testing-strategy} + + 1. ## **Test Objectives** {#test-objectives} + +*Describe the objectives. 
Define tasks and responsibilities.* + + \- Ensure reliable execution of user flows across all primary functionalities. + \- Validate critical data inputs and outputs are consistent with user expectations. + \- Verify UI responsiveness and behavior on supported devices. + \- Conduct regression testing to maintain code stability with new updates. + +2. ## **Test Assumptions** {#test-assumptions} + +*List the key assumptions of the project and the test plan.* + +3. ## **Data Approach** {#data-approach} + + *Describe the approach on the test data maintained in QA environments for functional and user acceptance testing.* + + \- Use static datasets for predictable validation. + \- Implement data mocks for complex scenarios. + \- Develop data validation utilities to check outputs. + +4. ## **Automation Strategy** {#automation-strategy} + + \- Smoke Testing: Quick, high-level test cases to ensure critical paths work. + \- Regression Testing: Re-running existing tests to validate the latest changes. + \- E2E Testing: Testing all primary workflows from start to finish. + +5. ## **Test Case Prioritization** {#test-case-prioritization} + + \- High priority for critical features like data processing workflows and privacy configurations. + \- Moderate priority for UI/UX elements like buttons, forms, and navigation. + \- Lower priority for non-critical paths or backend validations covered in unit tests. + +4. # **Execution Strategy** {#execution-strategy} + + 1. ## **Entry Criteria** {#entry-criteria} + +* *The entry criteria refer to the desirable conditions in order to start test execution* +* *Entry criteria are flexible benchmarks. If they are not met, the test team will assess the risk, identify mitigation actions and provide a recommendation.* + +| Entry Criteria | Test Team | Technical Team | Notes | +| ----- | ----- | ----- | ----- | +| *Test environment(s) is available* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | +| *Test data is available* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | +| *Code has been merged successfully* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | +| *Development has completed unit testing* | | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | +| *Test scripts are completed, reviewed and approved by the Project Team* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | + + 2. ## **Exit criteria** {#exit-criteria} + +* *The exit criteria are the desirable conditions that need to be met in order proceed with the implementation.* +* *Exit criteria are flexible benchmarks. 
If they are not met, the test team will assess the risk, identify mitigation actions and provide a recommendation.* + +| Exit Criteria | Test Team | Technical Team | Notes | +| ----- | ----- | ----- | ----- | +| *100% Test Scripts executed* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | +| *90% pass rate of Test Scripts* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | +| *No open Critical and High severity defects* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | +| *All remaining defects are either cancelled or documented as Change Requests for a future release* | | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | +| *All expected and actual results are captured and documented with the test script* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | +| *All test metrics collected based on reports from daily and Weekly Status reports* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | +| *All defects logged in Defect Tracker/Spreadsheet* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | +| *Test environment cleanup completed and a new back up of the environment* | | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | + + 3. ## **Validation and Defect Management** {#validation-and-defect-management} + +* *Specify how test cases/test scenarios should be validated* +* *Specify how defect should be managed* + * *It is expected that the testers execute all the scripts in each of the cycles described above.* + * *The defects will be tracked through Defect Tracker or Spreadsheet.* + * *It is the responsibility of the tester to open the defects, retest and close the defect.* + +Defects found during the Testing should be categorized as below: + +| Severity | Impact | +| ----- | ----- | +| *1 (Critical)* | *Functionality is blocked and no testing can proceed Application/program/feature is unusable in the current state* | +| *2 (High)* | *Functionality is not usable and there is no workaround but testing can proceed* | +| *3 (Medium)* | *Functionality issues but there is workaround for achieving the desired functionality* | +| *4 (Low)* | *Unclear error message or cosmetic error which has minimum impact on product use.* | + +5. # **Github Workflow** {#github-workflow} + + 1. ## **Issue Submission and Prioritization** {#issue-submission-and-prioritization} + +Bugs and feature requests are tracked in GitHub, and issues are prioritized based on their impact and urgency. Issues should be categorized into sprint goals for more efficient tracking. + +\- Process: + \- Review the issue backlog in GitHub. + \- Assign priority based on criticality, risk, and user impact. + \- Assign issues to the current sprint or defer to future sprints if necessary. + +2. 
## **Pull Request Review and Testing** {#pull-request-review-and-testing} + +QA starts with a smoke test on the PR branch. Additional functional and regression testing is done based on the feature or fix included in the PR. +\- PR Testing: + \- Ensure the feature or fix fully addresses the reported issue. + \- Use the documentation to create test cases and validate functionality. + \- Perform boundary testing, testing with incorrect data, and edge cases. + \- Review server logs for any errors during testing. + \- Test both the default and alternate configurations for features requiring setup. + +3. ## **Continuous Integration and Deployment** {#continuous-integration-and-deployment} + +Continuous integration ensures that tests run automatically with each code change. Dataverse’s Jenkins server should be used to run automated tests, while GitHub Actions provide additional automation for builds and deployments. + +\- CI Steps: + \- Merge PRs only when all tests pass. + \- Use the Jenkins build process to deploy to a staging environment (\`dataverse-internal.iq.harvard.edu\`). + \- Validate each PR’s test results through Jenkins' "Test Result" page. + \- If tests fail, report the issue to the developer immediately for resolution + +6. # **Environment Requirements** {#environment-requirements} + + 1. ## **Dataverse-Internal Environment** {#dataverse-internal-environment} + +A staging environment (\`dataverse-internal.iq.harvard.edu\`) is used to deploy and test PRs. This environment replicates production but should be limited to QA purposes. AWS instances may also be utilized for additional testing environments. + +\- \*\*Setup\*\*: + \- Deploy the PR build using Jenkins. + \- Validate deployment success by checking the homepage for the correct build version. + \- Run smoke tests immediately after deployment. + +2. ## **AWS-Based Environments** {#aws-based-environments} + +For complex testing scenarios (e.g., multiple concurrent testers or heavy load testing), spinning up EC2 instances with sample data can be useful. Persistent AWS instances may be configured for this purpose. + +7. # **Significantly Impacted Division/College/Department** {#significantly-impacted-division/college/department} + +| Business Area | Business Manager | Tester(s) | +| ----- | ----- | ----- | +| | | | +| | | | +| | | | + +8. # **Dependencies** {#dependencies} + +*Identify any dependencies on testing, such as test-item availability, testing-resource availability, and deadlines.* + +[image1]: From 97c97131717c11a263418146eaa443d44771e0e5 Mon Sep 17 00:00:00 2001 From: Chuck McCallum Date: Fri, 1 Nov 2024 14:06:10 -0400 Subject: [PATCH 2/7] rm toc: Github handles that --- TEST-PLAN.md | 51 --------------------------------------------------- 1 file changed, 51 deletions(-) diff --git a/TEST-PLAN.md b/TEST-PLAN.md index adf5785..81fca05 100644 --- a/TEST-PLAN.md +++ b/TEST-PLAN.md @@ -5,57 +5,6 @@ Use the Test Plan document to describe the testing approach and overall framewor ***Template Instructions*** *Note that the information in italics is guidelines for documenting testing efforts and activities. 
To adopt this template, delete all italicized instructions and modify as appropriate* -Table of Contents -[1 Introduction 3](#introduction) - -[1.1 Purpose 3](#purpose) - -[1.2 Project Overview 3](#project-overview) - -[2 Scope 3](#scope) - -[2.1 In-Scope 3](#in-scope) - -[2.2 Out-of-Scope 3](#out-of-scope) - -[3 Testing Strategy 3](#testing-strategy) - -[3.1 Test Objectives 3](#test-objectives) - -[3.2 Test Assumptions 4](#test-assumptions) - -[3.3 Data Approach 4](#data-approach) - -[3.4 Automation Strategy 4](#automation-strategy) - -[3.5 Test Case Prioritization 4](#test-case-prioritization) - -[4 Execution Strategy 4](#execution-strategy) - -[4.1 Entry Criteria 4](#entry-criteria) - -[4.2 Exit criteria 5](#exit-criteria) - -[4.3 Validation and Defect Management 5](#validation-and-defect-management) - -[5 Github Workflow 6](#github-workflow) - -[5.1 Issue Submission and Prioritization 6](#issue-submission-and-prioritization) - -[5.2 Pull Request Review and Testing 6](#pull-request-review-and-testing) - -[5.3 Continuous Integration and Deployment 6](#continuous-integration-and-deployment) - -[6 Environment Requirements 7](#environment-requirements) - -[6.1 Dataverse-Internal Environment 7](#dataverse-internal-environment) - -[6.2 AWS-Based Environments 7](#aws-based-environments) - -[7 Significantly Impacted Division/College/Department 7](#significantly-impacted-division/college/department) - -[8 Dependencies 7](#dependencies) - 1. # **Introduction** {#introduction} 1. ## **Purpose** {#purpose} From 2e26aa74656154e5d8e98cd85b5121b6e78b03d8 Mon Sep 17 00:00:00 2001 From: Chuck McCallum Date: Fri, 1 Nov 2024 14:08:46 -0400 Subject: [PATCH 3/7] unnumber headers --- TEST-PLAN.md | 50 +++++++++++++++++++++++++------------------------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/TEST-PLAN.md b/TEST-PLAN.md index 81fca05..30ff7e9 100644 --- a/TEST-PLAN.md +++ b/TEST-PLAN.md @@ -5,21 +5,21 @@ Use the Test Plan document to describe the testing approach and overall framewor ***Template Instructions*** *Note that the information in italics is guidelines for documenting testing efforts and activities. To adopt this template, delete all italicized instructions and modify as appropriate* -1. # **Introduction** {#introduction} +# **Introduction** {#introduction} - 1. ## **Purpose** {#purpose} +## **Purpose** {#purpose} *Provide a summary of the test strategy, test approach, execution strategy and test management.* Automate the end-to-end functionality of the DP-Creator-II application to ensure it meets user requirements and maintains reliability, especially in the context of privacy-preserving data processing. -2. ## **Project Overview** {#project-overview} +## **Project Overview** {#project-overview} *A summary of the project, product, solution being tested.* -2. # **Scope** {#scope} +# **Scope** {#scope} - 1. ## **In-Scope** {#in-scope} +## **In-Scope** {#in-scope} *Describes what is being tested, such as all the functions/features of a specific project/product/solution.* @@ -28,7 +28,7 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure \- UI elements related to data privacy settings \- Data export functionalities -2. 
## **Out-of-Scope** {#out-of-scope} +## **Out-of-Scope** {#out-of-scope} *Identify all features and combinations of features which will not be tested and the reasons.* @@ -36,9 +36,9 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure \- Non-functional UI aspects not directly related to functionality \- External API integrations that are mocked in this environment -3. # **Testing Strategy** {#testing-strategy} +# **Testing Strategy** {#testing-strategy} - 1. ## **Test Objectives** {#test-objectives} +## **Test Objectives** {#test-objectives} *Describe the objectives. Define tasks and responsibilities.* @@ -47,11 +47,11 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure \- Verify UI responsiveness and behavior on supported devices. \- Conduct regression testing to maintain code stability with new updates. -2. ## **Test Assumptions** {#test-assumptions} +## **Test Assumptions** {#test-assumptions} *List the key assumptions of the project and the test plan.* -3. ## **Data Approach** {#data-approach} +## **Data Approach** {#data-approach} *Describe the approach on the test data maintained in QA environments for functional and user acceptance testing.* @@ -59,21 +59,21 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure \- Implement data mocks for complex scenarios. \- Develop data validation utilities to check outputs. -4. ## **Automation Strategy** {#automation-strategy} +## **Automation Strategy** {#automation-strategy} \- Smoke Testing: Quick, high-level test cases to ensure critical paths work. \- Regression Testing: Re-running existing tests to validate the latest changes. \- E2E Testing: Testing all primary workflows from start to finish. -5. ## **Test Case Prioritization** {#test-case-prioritization} +## **Test Case Prioritization** {#test-case-prioritization} \- High priority for critical features like data processing workflows and privacy configurations. \- Moderate priority for UI/UX elements like buttons, forms, and navigation. \- Lower priority for non-critical paths or backend validations covered in unit tests. -4. # **Execution Strategy** {#execution-strategy} +# **Execution Strategy** {#execution-strategy} - 1. ## **Entry Criteria** {#entry-criteria} +## **Entry Criteria** {#entry-criteria} * *The entry criteria refer to the desirable conditions in order to start test execution* * *Entry criteria are flexible benchmarks. If they are not met, the test team will assess the risk, identify mitigation actions and provide a recommendation.* @@ -86,7 +86,7 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure | *Development has completed unit testing* | | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | *Test scripts are completed, reviewed and approved by the Project Team* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | - 2. ## **Exit criteria** {#exit-criteria} +## **Exit criteria** {#exit-criteria} * *The exit criteria are the desirable conditions that need to be met in order proceed with the implementation.* * *Exit criteria are flexible benchmarks. 
If they are not met, the test team will assess the risk, identify mitigation actions and provide a recommendation.* @@ -102,7 +102,7 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure | *All defects logged in Defect Tracker/Spreadsheet* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | | *Test environment cleanup completed and a new back up of the environment* | | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | - 3. ## **Validation and Defect Management** {#validation-and-defect-management} +## **Validation and Defect Management** {#validation-and-defect-management} * *Specify how test cases/test scenarios should be validated* * *Specify how defect should be managed* @@ -119,9 +119,9 @@ Defects found during the Testing should be categorized as below: | *3 (Medium)* | *Functionality issues but there is workaround for achieving the desired functionality* | | *4 (Low)* | *Unclear error message or cosmetic error which has minimum impact on product use.* | -5. # **Github Workflow** {#github-workflow} +# **Github Workflow** {#github-workflow} - 1. ## **Issue Submission and Prioritization** {#issue-submission-and-prioritization} +## **Issue Submission and Prioritization** {#issue-submission-and-prioritization} Bugs and feature requests are tracked in GitHub, and issues are prioritized based on their impact and urgency. Issues should be categorized into sprint goals for more efficient tracking. @@ -130,7 +130,7 @@ Bugs and feature requests are tracked in GitHub, and issues are prioritized base \- Assign priority based on criticality, risk, and user impact. \- Assign issues to the current sprint or defer to future sprints if necessary. -2. ## **Pull Request Review and Testing** {#pull-request-review-and-testing} +## **Pull Request Review and Testing** {#pull-request-review-and-testing} QA starts with a smoke test on the PR branch. Additional functional and regression testing is done based on the feature or fix included in the PR. \- PR Testing: @@ -140,7 +140,7 @@ QA starts with a smoke test on the PR branch. Additional functional and regressi \- Review server logs for any errors during testing. \- Test both the default and alternate configurations for features requiring setup. -3. ## **Continuous Integration and Deployment** {#continuous-integration-and-deployment} +## **Continuous Integration and Deployment** {#continuous-integration-and-deployment} Continuous integration ensures that tests run automatically with each code change. Dataverse’s Jenkins server should be used to run automated tests, while GitHub Actions provide additional automation for builds and deployments. @@ -150,9 +150,9 @@ Continuous integration ensures that tests run automatically with each code chang \- Validate each PR’s test results through Jenkins' "Test Result" page. \- If tests fail, report the issue to the developer immediately for resolution -6. # **Environment Requirements** {#environment-requirements} +# **Environment Requirements** {#environment-requirements} - 1. ## **Dataverse-Internal Environment** {#dataverse-internal-environment} +## **Dataverse-Internal Environment** {#dataverse-internal-environment} A staging environment (\`dataverse-internal.iq.harvard.edu\`) is used to deploy and test PRs. This environment replicates production but should be limited to QA purposes. 
AWS instances may also be utilized for additional testing environments. @@ -161,11 +161,11 @@ A staging environment (\`dataverse-internal.iq.harvard.edu\`) is used to deploy \- Validate deployment success by checking the homepage for the correct build version. \- Run smoke tests immediately after deployment. -2. ## **AWS-Based Environments** {#aws-based-environments} +## **AWS-Based Environments** {#aws-based-environments} For complex testing scenarios (e.g., multiple concurrent testers or heavy load testing), spinning up EC2 instances with sample data can be useful. Persistent AWS instances may be configured for this purpose. -7. # **Significantly Impacted Division/College/Department** {#significantly-impacted-division/college/department} +# **Significantly Impacted Division/College/Department** {#significantly-impacted-division/college/department} | Business Area | Business Manager | Tester(s) | | ----- | ----- | ----- | @@ -173,7 +173,7 @@ For complex testing scenarios (e.g., multiple concurrent testers or heavy load t | | | | | | | | -8. # **Dependencies** {#dependencies} +# **Dependencies** {#dependencies} *Identify any dependencies on testing, such as test-item availability, testing-resource availability, and deadlines.* From 2fd4f511309c9d950629c327eeb6f8ce38ab3918 Mon Sep 17 00:00:00 2001 From: Chuck McCallum Date: Fri, 1 Nov 2024 14:12:05 -0400 Subject: [PATCH 4/7] clean headers --- TEST-PLAN.md | 50 +++++++++++++++++++++++++------------------------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/TEST-PLAN.md b/TEST-PLAN.md index 30ff7e9..f2c6d79 100644 --- a/TEST-PLAN.md +++ b/TEST-PLAN.md @@ -5,21 +5,21 @@ Use the Test Plan document to describe the testing approach and overall framewor ***Template Instructions*** *Note that the information in italics is guidelines for documenting testing efforts and activities. To adopt this template, delete all italicized instructions and modify as appropriate* -# **Introduction** {#introduction} +# Introduction -## **Purpose** {#purpose} +## Purpose *Provide a summary of the test strategy, test approach, execution strategy and test management.* Automate the end-to-end functionality of the DP-Creator-II application to ensure it meets user requirements and maintains reliability, especially in the context of privacy-preserving data processing. -## **Project Overview** {#project-overview} +## Project Overview *A summary of the project, product, solution being tested.* -# **Scope** {#scope} +# Scope -## **In-Scope** {#in-scope} +## In-Scope *Describes what is being tested, such as all the functions/features of a specific project/product/solution.* @@ -28,7 +28,7 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure \- UI elements related to data privacy settings \- Data export functionalities -## **Out-of-Scope** {#out-of-scope} +## Out-of-Scope *Identify all features and combinations of features which will not be tested and the reasons.* @@ -36,9 +36,9 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure \- Non-functional UI aspects not directly related to functionality \- External API integrations that are mocked in this environment -# **Testing Strategy** {#testing-strategy} +# Testing Strategy -## **Test Objectives** {#test-objectives} +## Test Objectives *Describe the objectives. 
Define tasks and responsibilities.* @@ -47,11 +47,11 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure \- Verify UI responsiveness and behavior on supported devices. \- Conduct regression testing to maintain code stability with new updates. -## **Test Assumptions** {#test-assumptions} +## Test Assumptions *List the key assumptions of the project and the test plan.* -## **Data Approach** {#data-approach} +## Data Approach *Describe the approach on the test data maintained in QA environments for functional and user acceptance testing.* @@ -59,21 +59,21 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure \- Implement data mocks for complex scenarios. \- Develop data validation utilities to check outputs. -## **Automation Strategy** {#automation-strategy} +## Automation Strategy \- Smoke Testing: Quick, high-level test cases to ensure critical paths work. \- Regression Testing: Re-running existing tests to validate the latest changes. \- E2E Testing: Testing all primary workflows from start to finish. -## **Test Case Prioritization** {#test-case-prioritization} +## Test Case Prioritization \- High priority for critical features like data processing workflows and privacy configurations. \- Moderate priority for UI/UX elements like buttons, forms, and navigation. \- Lower priority for non-critical paths or backend validations covered in unit tests. -# **Execution Strategy** {#execution-strategy} +# Execution Strategy -## **Entry Criteria** {#entry-criteria} +## Entry Criteria * *The entry criteria refer to the desirable conditions in order to start test execution* * *Entry criteria are flexible benchmarks. If they are not met, the test team will assess the risk, identify mitigation actions and provide a recommendation.* @@ -86,7 +86,7 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure | *Development has completed unit testing* | | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | *Test scripts are completed, reviewed and approved by the Project Team* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | -## **Exit criteria** {#exit-criteria} +## Exit criteria * *The exit criteria are the desirable conditions that need to be met in order proceed with the implementation.* * *Exit criteria are flexible benchmarks. 
If they are not met, the test team will assess the risk, identify mitigation actions and provide a recommendation.* @@ -102,7 +102,7 @@ Automate the end-to-end functionality of the DP-Creator-II application to ensure | *All defects logged in Defect Tracker/Spreadsheet* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | | *Test environment cleanup completed and a new back up of the environment* | | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | -## **Validation and Defect Management** {#validation-and-defect-management} +## Validation and Defect Management * *Specify how test cases/test scenarios should be validated* * *Specify how defect should be managed* @@ -119,9 +119,9 @@ Defects found during the Testing should be categorized as below: | *3 (Medium)* | *Functionality issues but there is workaround for achieving the desired functionality* | | *4 (Low)* | *Unclear error message or cosmetic error which has minimum impact on product use.* | -# **Github Workflow** {#github-workflow} +# Github Workflow -## **Issue Submission and Prioritization** {#issue-submission-and-prioritization} +## Issue Submission and Prioritization** {#issue-submission-and-prioritization} Bugs and feature requests are tracked in GitHub, and issues are prioritized based on their impact and urgency. Issues should be categorized into sprint goals for more efficient tracking. @@ -130,7 +130,7 @@ Bugs and feature requests are tracked in GitHub, and issues are prioritized base \- Assign priority based on criticality, risk, and user impact. \- Assign issues to the current sprint or defer to future sprints if necessary. -## **Pull Request Review and Testing** {#pull-request-review-and-testing} +## Pull Request Review and Testing QA starts with a smoke test on the PR branch. Additional functional and regression testing is done based on the feature or fix included in the PR. \- PR Testing: @@ -140,7 +140,7 @@ QA starts with a smoke test on the PR branch. Additional functional and regressi \- Review server logs for any errors during testing. \- Test both the default and alternate configurations for features requiring setup. -## **Continuous Integration and Deployment** {#continuous-integration-and-deployment} +## Continuous Integration and Deployment Continuous integration ensures that tests run automatically with each code change. Dataverse’s Jenkins server should be used to run automated tests, while GitHub Actions provide additional automation for builds and deployments. @@ -150,9 +150,9 @@ Continuous integration ensures that tests run automatically with each code chang \- Validate each PR’s test results through Jenkins' "Test Result" page. \- If tests fail, report the issue to the developer immediately for resolution -# **Environment Requirements** {#environment-requirements} +# Environment Requirements -## **Dataverse-Internal Environment** {#dataverse-internal-environment} +## Dataverse-Internal Environment A staging environment (\`dataverse-internal.iq.harvard.edu\`) is used to deploy and test PRs. This environment replicates production but should be limited to QA purposes. AWS instances may also be utilized for additional testing environments. @@ -161,11 +161,11 @@ A staging environment (\`dataverse-internal.iq.harvard.edu\`) is used to deploy \- Validate deployment success by checking the homepage for the correct build version. 
\- Run smoke tests immediately after deployment. -## **AWS-Based Environments** {#aws-based-environments} +## AWS-Based Environments For complex testing scenarios (e.g., multiple concurrent testers or heavy load testing), spinning up EC2 instances with sample data can be useful. Persistent AWS instances may be configured for this purpose. -# **Significantly Impacted Division/College/Department** {#significantly-impacted-division/college/department} +# Significantly Impacted Division/College/Department | Business Area | Business Manager | Tester(s) | | ----- | ----- | ----- | @@ -173,7 +173,7 @@ For complex testing scenarios (e.g., multiple concurrent testers or heavy load t | | | | | | | | -# **Dependencies** {#dependencies} +# Dependencies *Identify any dependencies on testing, such as test-item availability, testing-resource availability, and deadlines.* From ce8defa2fb713f62af1dc8ad301550b3230dab06 Mon Sep 17 00:00:00 2001 From: Chuck McCallum Date: Fri, 1 Nov 2024 15:03:07 -0400 Subject: [PATCH 5/7] fill in the blanks --- TEST-PLAN.md | 151 +++++++++++++-------------------------------------- 1 file changed, 38 insertions(+), 113 deletions(-) diff --git a/TEST-PLAN.md b/TEST-PLAN.md index f2c6d79..75c8aa6 100644 --- a/TEST-PLAN.md +++ b/TEST-PLAN.md @@ -1,180 +1,105 @@ - -**Purpose of the Document** -Use the Test Plan document to describe the testing approach and overall framework that will drive the testing of the project. - -***Template Instructions*** -*Note that the information in italics is guidelines for documenting testing efforts and activities. To adopt this template, delete all italicized instructions and modify as appropriate* - # Introduction ## Purpose -*Provide a summary of the test strategy, test approach, execution strategy and test management.* - -Automate the end-to-end functionality of the DP-Creator-II application to ensure it meets user requirements and maintains reliability, especially in the context of privacy-preserving data processing. +Automate the end-to-end testing of the application to ensure it meets user requirements and maintains reliability, especially in the context of privacy-preserving data processing. ## Project Overview -*A summary of the project, product, solution being tested.* - # Scope ## In-Scope -*Describes what is being tested, such as all the functions/features of a specific project/product/solution.* - -\- User authentication (login, logout, user session management) -\- Data processing workflows (data upload, processing configurations) -\- UI elements related to data privacy settings - \- Data export functionalities +- Starting the application with and without `--demo` and other CLI flags. +- Form filling and navigation between tabs. +- Conditionally disabled controls (for example, controls late in the flow when inputs have not been provided, and controls early in the flow after the release has been made). +- Report export. +- Runnability of any generated code. ## Out-of-Scope -*Identify all features and combinations of features which will not be tested and the reasons.* - - \- Performance testing - \- Non-functional UI aspects not directly related to functionality - \- External API integrations that are mocked in this environment +- Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing. 
+- Rendering of preview charts: Might [add results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
+- Testing exact values: The output is randomized, so we cannot test for any particular value in the output.
+- Design and usability.
+- Correctness of DP calculations.
 
 # Testing Strategy
 
 ## Test Objectives
 
-*Describe the objectives. Define tasks and responsibilities.*
-
- \- Ensure reliable execution of user flows across all primary functionalities.
- \- Validate critical data inputs and outputs are consistent with user expectations.
- \- Verify UI responsiveness and behavior on supported devices.
- \- Conduct regression testing to maintain code stability with new updates.
+- Ensure reliable execution of user flows across all primary functionalities.
+- Validate critical data inputs and outputs are consistent with user expectations.
+- Verify UI responsiveness and behavior on supported devices and browsers.
+- Conduct regression testing to maintain code stability with new updates.
 
 ## Test Assumptions
 
-*List the key assumptions of the project and the test plan.*
+- OpenDP will correctly perform the required calculations.
+- Users are able to successfully install the software.
 
 ## Data Approach
 
- *Describe the approach on the test data maintained in QA environments for functional and user acceptance testing.*
-
- \- Use static datasets for predictable validation.
- \- Implement data mocks for complex scenarios.
- \- Develop data validation utilities to check outputs.
+- Any CSVs used for testing will be checked in to the fixtures directory.
+- Outputs will be checked for structure, but we will not expect any particular values.
 
 ## Automation Strategy
 
- \- Smoke Testing: Quick, high-level test cases to ensure critical paths work.
- \- Regression Testing: Re-running existing tests to validate the latest changes.
- \- E2E Testing: Testing all primary workflows from start to finish.
+- Doc tests: For simple functions, use doctests so the developer can just work with one file.
+- For more complex functions and operations, use pytest unit tests.
+- Run linting and type checking tools as part of tests.
+- Use Playwright for end-to-end tests.
+- Use a test matrix on GitHub CI if we think there will be any sensitivity to versions or browsers.
 
 ## Test Case Prioritization
 
- \- High priority for critical features like data processing workflows and privacy configurations.
- \- Moderate priority for UI/UX elements like buttons, forms, and navigation.
- \- Lower priority for non-critical paths or backend validations covered in unit tests.
+- There's just one test suite, which is run on every PR, so there's nothing that needs to be ordered.
 
 # Execution Strategy
 
 ## Entry Criteria
 
-* *The entry criteria refer to the desirable conditions in order to start test execution*
-* *Entry criteria are flexible benchmarks. 
If they are not met, the test team will assess the risk, identify mitigation actions and provide a recommendation.* - -| Entry Criteria | Test Team | Technical Team | Notes | -| ----- | ----- | ----- | ----- | -| *Test environment(s) is available* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | -| *Test data is available* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | -| *Code has been merged successfully* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | -| *Development has completed unit testing* | | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | -| *Test scripts are completed, reviewed and approved by the Project Team* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | +N/A: Tests are run automatically on every PR. ## Exit criteria -* *The exit criteria are the desirable conditions that need to be met in order proceed with the implementation.* -* *Exit criteria are flexible benchmarks. If they are not met, the test team will assess the risk, identify mitigation actions and provide a recommendation.* - -| Exit Criteria | Test Team | Technical Team | Notes | -| ----- | ----- | ----- | ----- | -| *100% Test Scripts executed* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | -| *90% pass rate of Test Scripts* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | -| *No open Critical and High severity defects* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | -| *All remaining defects are either cancelled or documented as Change Requests for a future release* | | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | -| *All expected and actual results are captured and documented with the test script* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | -| *All test metrics collected based on reports from daily and Weekly Status reports* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | -| *All defects logged in Defect Tracker/Spreadsheet* | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | | -| *Test environment cleanup completed and a new back up of the environment* | | ![C:\\Users\\arxp\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.IE5\\7F9Z3IW4\\MC900441310\[1\].png][image1] | | +N/A: Tests are run automatically on every PR. 
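For illustration, the kind of end-to-end check that runs on every PR might look like the sketch below. This is a minimal sketch, not the project's actual test code: it assumes the app has already been started on localhost, and the URL, control names, and page text are illustrative assumptions.

```python
# Minimal sketch of a Playwright end-to-end check of the in-scope flows:
# navigation, conditionally disabled controls, and report export.
# Names and selectors are illustrative assumptions, not the project's fixtures.
from playwright.sync_api import expect, sync_playwright


def test_export_disabled_until_inputs_provided():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Assumes the app was started separately on a known local port.
        page.goto("http://127.0.0.1:8000/")

        # Controls late in the flow should be disabled before inputs exist.
        expect(page.get_by_role("button", name="Download report")).to_be_disabled()

        # Assert on structure, not exact values: the DP output is randomized.
        expect(page.get_by_text("Privacy Budget")).to_be_visible()

        browser.close()
```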
## Validation and Defect Management -* *Specify how test cases/test scenarios should be validated* -* *Specify how defect should be managed* - * *It is expected that the testers execute all the scripts in each of the cycles described above.* - * *The defects will be tracked through Defect Tracker or Spreadsheet.* - * *It is the responsibility of the tester to open the defects, retest and close the defect.* - -Defects found during the Testing should be categorized as below: - -| Severity | Impact | -| ----- | ----- | -| *1 (Critical)* | *Functionality is blocked and no testing can proceed Application/program/feature is unusable in the current state* | -| *2 (High)* | *Functionality is not usable and there is no workaround but testing can proceed* | -| *3 (Medium)* | *Functionality issues but there is workaround for achieving the desired functionality* | -| *4 (Low)* | *Unclear error message or cosmetic error which has minimum impact on product use.* | +N/A: PRs should not be merged with failing tests. # Github Workflow -## Issue Submission and Prioritization** {#issue-submission-and-prioritization} - -Bugs and feature requests are tracked in GitHub, and issues are prioritized based on their impact and urgency. Issues should be categorized into sprint goals for more efficient tracking. +## Issue Submission and Prioritization -\- Process: - \- Review the issue backlog in GitHub. - \- Assign priority based on criticality, risk, and user impact. - \- Assign issues to the current sprint or defer to future sprints if necessary. +[See README](https://github.com/opendp/dp-creator-ii/#conventions). ## Pull Request Review and Testing -QA starts with a smoke test on the PR branch. Additional functional and regression testing is done based on the feature or fix included in the PR. -\- PR Testing: - \- Ensure the feature or fix fully addresses the reported issue. - \- Use the documentation to create test cases and validate functionality. - \- Perform boundary testing, testing with incorrect data, and edge cases. - \- Review server logs for any errors during testing. - \- Test both the default and alternate configurations for features requiring setup. +- Reviewers should read and understand code. +- Reviewers are not expected to checkout or manually test code. ## Continuous Integration and Deployment -Continuous integration ensures that tests run automatically with each code change. Dataverse’s Jenkins server should be used to run automated tests, while GitHub Actions provide additional automation for builds and deployments. +Github CI will run tests on each PR. We require tests to pass before merge. -\- CI Steps: - \- Merge PRs only when all tests pass. - \- Use the Jenkins build process to deploy to a staging environment (\`dataverse-internal.iq.harvard.edu\`). - \- Validate each PR’s test results through Jenkins' "Test Result" page. - \- If tests fail, report the issue to the developer immediately for resolution +We do not require branches to be up to date with main: There is a small chance that one PR may break because of changes in a separate before, but that can be addressed after the fact. # Environment Requirements ## Dataverse-Internal Environment -A staging environment (\`dataverse-internal.iq.harvard.edu\`) is used to deploy and test PRs. This environment replicates production but should be limited to QA purposes. AWS instances may also be utilized for additional testing environments. - -\- \*\*Setup\*\*: - \- Deploy the PR build using Jenkins. 
-\- Validate deployment success by checking the homepage for the correct build version.
-\- Run smoke tests immediately after deployment.
+N/A
 
 ## AWS-Based Environments
 
-For complex testing scenarios (e.g., multiple concurrent testers or heavy load testing), spinning up EC2 instances with sample data can be useful. Persistent AWS instances may be configured for this purpose.
+N/A
 
 # Significantly Impacted Division/College/Department
 
-| Business Area | Business Manager | Tester(s) |
-| ----- | ----- | ----- |
-| | | |
-| | | |
-| | | |
+N/A
 
 # Dependencies
 
-*Identify any dependencies on testing, such as test-item availability, testing-resource availability, and deadlines.*
-
-[image1]:
+N/A

From 9755bf1c13b0690a711997e6a611f64c87546fc0 Mon Sep 17 00:00:00 2001
From: Chuck McCallum
Date: Fri, 1 Nov 2024 15:19:18 -0400
Subject: [PATCH 6/7] fill in overview

---
 TEST-PLAN.md | 54 ++++++++++++++++++++++++++------------------------
 1 file changed, 29 insertions(+), 25 deletions(-)

diff --git a/TEST-PLAN.md b/TEST-PLAN.md
index 75c8aa6..854ab44 100644
--- a/TEST-PLAN.md
+++ b/TEST-PLAN.md
@@ -1,14 +1,18 @@
-# Introduction
+# DP Creator II Test Plan
+
+## Introduction
 
-## Purpose
+### Purpose
 
 Automate the end-to-end testing of the application to ensure it meets user requirements and maintains reliability, especially in the context of privacy-preserving data processing.
 
-## Project Overview
+### Project Overview
+
+The single-user application will be installed with pip, and offers a web application (running on localhost) that guides the user through the application of differential privacy to a data file they provide.
 
-# Scope
+## Scope
 
-## In-Scope
+### In-Scope
 
 - Starting the application with and without `--demo` and other CLI flags.
 - Form filling and navigation between tabs.
 - Conditionally disabled controls (for example, controls late in the flow when inputs have not been provided, and controls early in the flow after the release has been made).
 - Report export.
 - Runnability of any generated code.
 
-## Out-of-Scope
+### Out-of-Scope
 
 - Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but it's not the focus of automated testing.
 - Rendering of preview charts: Might [add results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
 - Testing exact values: The output is randomized, so we cannot test for any particular value in the output.
 - Design and usability.
 - Correctness of DP calculations.
 
-# Testing Strategy
+## Testing Strategy
 
-## Test Objectives
+### Test Objectives
 
 - Ensure reliable execution of user flows across all primary functionalities.
 - Validate critical data inputs and outputs are consistent with user expectations.
 - Verify UI responsiveness and behavior on supported devices and browsers.
 - Conduct regression testing to maintain code stability with new updates.
 
-## Test Assumptions
+### Test Assumptions
 
 - OpenDP will correctly perform the required calculations.
 - Users are able to successfully install the software.
 
-## Data Approach
+### Data Approach
 
 - Any CSVs used for testing will be checked in to the fixtures directory.
 - Outputs will be checked for structure, but we will not expect any particular values.
 
-## Automation Strategy
+### Automation Strategy
 
 - Doc tests: For simple functions, use doctests so the developer can just work with one file.
 - For more complex functions and operations, use pytest unit tests.
@@ -51,55 +55,55 @@ Automate the end-to-end testing of the application to ensure it meets user requi - Use Playwright for end-to-end tests. - Use test matrix on Github CI if we think there will be any senstivity to versions or browsers. -## Test Case Prioritization +### Test Case Prioritization - There's just one test suite, which is run on every PR, so there's nothing that needs to be ordered. -# Execution Strategy +## Execution Strategy -## Entry Criteria +### Entry Criteria N/A: Tests are run automatically on every PR. -## Exit criteria +### Exit criteria N/A: Tests are run automatically on every PR. -## Validation and Defect Management +### Validation and Defect Management N/A: PRs should not be merged with failing tests. -# Github Workflow +## Github Workflow -## Issue Submission and Prioritization +### Issue Submission and Prioritization [See README](https://github.com/opendp/dp-creator-ii/#conventions). -## Pull Request Review and Testing +### Pull Request Review and Testing - Reviewers should read and understand code. - Reviewers are not expected to checkout or manually test code. -## Continuous Integration and Deployment +### Continuous Integration and Deployment Github CI will run tests on each PR. We require tests to pass before merge. We do not require branches to be up to date with main: There is a small chance that one PR may break because of changes in a separate before, but that can be addressed after the fact. -# Environment Requirements +## Environment Requirements -## Dataverse-Internal Environment +### Dataverse-Internal Environment N/A -## AWS-Based Environments +### AWS-Based Environments N/A -# Significantly Impacted Division/College/Department +## Significantly Impacted Division/College/Department N/A -# Dependencies +## Dependencies N/A From 6224bb025080004e945d53f59b0b4dc219ea7026 Mon Sep 17 00:00:00 2001 From: Chuck McCallum Date: Fri, 1 Nov 2024 15:21:33 -0400 Subject: [PATCH 7/7] add to test plan --- README.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/README.md b/README.md index 887ff00..04901ee 100644 --- a/README.md +++ b/README.md @@ -71,6 +71,8 @@ If Playwright fails in CI, we can still see what went wrong: - Inside the zipped artifact will be _another_ zip: `trace.zip`. - Don't unzip it! Instead, open it with [trace.playwright.dev](https://trace.playwright.dev/). +For more details, see the [TEST-PLAN](TEST-PLAN.md). + ### Conventions Branch names should be of the form `NNNN-short-description`, where `NNNN` is the issue number being addressed.