Add a test plan #123
# DP Creator II Test Plan

## Introduction

### Purpose

Automate the end-to-end testing of the application to ensure it meets user requirements and maintains reliability, especially in the context of privacy-preserving data processing.

### Project Overview

The application will be installed with pip, and will offer a web application (running on localhost) that guides the user through applying differential privacy to a data file they provide.

## Scope

### In-Scope

- Starting the application with and without `--demo` and other CLI flags.
- Form filling and navigation between tabs.
- Conditionally disabled controls (for example, controls late in the flow when inputs have not been provided, and controls early in the flow after the release has been made).
- Report export.
- Runnability of any generated code.
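As a sketch of how the CLI-flag cases above might be exercised, the snippet below tests a hypothetical argparse-based entry point. The parser here is an illustration only; the real dp-creator-ii CLI may define its flags differently.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the CLI surface under test; the real
    # dp-creator-ii entry point may expose different flags.
    parser = argparse.ArgumentParser(prog="dp-creator-ii")
    parser.add_argument(
        "--demo",
        action="store_true",
        help="Start with a bundled demo dataset instead of a user-provided file.",
    )
    return parser


def test_demo_flag_off_by_default():
    assert build_parser().parse_args([]).demo is False


def test_demo_flag_enabled():
    assert build_parser().parse_args(["--demo"]).demo is True
```

Unknown flags (for example, a typo like `--dmeo`) make `parse_args` exit with an error, so the same parser also covers the negative case.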
### Out-of-Scope

- Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but performance is not the focus of automated testing.
- Rendering of preview charts: We might [add a results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement the graphs, but we are not going to make assertions against images.
- Testing exact values: The outputs are randomized, so we cannot test for any particular value in the output.
- Design and usability.
- Correctness of DP calculations.
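Although exact values are out of scope, a reviewer suggested that a loose range check could still serve as a sanity check. A minimal sketch of that idea, using a toy Laplace-style noise stand-in (not the project's actual mechanism) and a fixed seed so the test is deterministic:

```python
import random


def noisy_count(true_count: float, scale: float, rng: random.Random) -> float:
    # Toy stand-in for a DP release: Laplace noise built from the
    # difference of two exponentials. NOT the project's real mechanism.
    return true_count + rng.expovariate(1 / scale) - rng.expovariate(1 / scale)


def test_noisy_count_in_plausible_range():
    rng = random.Random(0)  # fixed seed keeps the test deterministic
    release = noisy_count(true_count=100.0, scale=1.0, rng=rng)
    # We cannot assert an exact value, but a wide interval still
    # catches gross errors (e.g. returning 0 or an unscaled value).
    assert 80.0 < release < 120.0
```

With `scale=1.0` the noise is almost never larger than a few units, so the ±20 window fails only on gross bugs, not on ordinary randomness.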
## Testing Strategy

### Test Objectives

- Ensure reliable execution of user flows across all primary functionalities.
- Validate that critical data inputs and outputs are consistent with user expectations.
- Verify UI responsiveness and behavior on supported devices and browsers.
- Conduct regression testing to maintain code stability with new updates.
### Test Assumptions

- OpenDP will correctly perform the required calculations.
- Users are able to successfully install the software.

### Data Approach

- Any CSVs used for testing will be checked in to the fixtures directory.
- Outputs will be checked for structure, but we will not expect any particular values.
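A structure-only check can be sketched as below. The two-column layout (`bin`, `count`) is a hypothetical schema for illustration; the real release format may differ.

```python
import csv
import io

# Hypothetical column layout for a released CSV; the real report
# schema may differ.
EXPECTED_COLUMNS = ["bin", "count"]


def check_release_structure(csv_text: str) -> None:
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    assert rows, "release should contain at least one row"
    for row in rows:
        # Every row has exactly the expected columns...
        assert set(row) == set(EXPECTED_COLUMNS)
        # ...and counts parse as numbers; the exact values are unchecked.
        float(row["count"])


# Example: a fake release passes the structure check regardless of
# the (randomized) count values.
check_release_structure("bin,count\n0-10,4.2\n10-20,7.9\n")
```

Because only the shape is asserted, the same check passes on every run even though the released counts change.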
### Automation Strategy

- Doc tests: For simple functions, use doctests so the developer can work with just one file.
- For more complex functions and operations, use pytest unit tests.
- Run linting and type checking tools as part of tests.
- Use Playwright for end-to-end tests.
- Use a test matrix on GitHub CI if we think there will be any sensitivity to versions or browsers.
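For example, a small helper with inline doctests keeps the code and its tests in one file (`clip` is a hypothetical example, not a function from the codebase):

```python
def clip(value: float, lower: float, upper: float) -> float:
    """Clamp a value into the closed interval [lower, upper].

    The examples below are executable doctests:

    >>> clip(5.0, 0.0, 10.0)
    5.0
    >>> clip(-3.0, 0.0, 10.0)
    0.0
    >>> clip(99.0, 0.0, 10.0)
    10.0
    """
    return max(lower, min(upper, value))


if __name__ == "__main__":
    import doctest

    doctest.testmod()  # reports nothing when all examples pass
```

Running the file directly (or `pytest --doctest-modules`) executes the examples, so simple functions need no separate test file.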
### Test Case Prioritization

- There's just one test suite, which is run on every PR, so there's nothing that needs to be ordered.

## Execution Strategy

### Entry Criteria

N/A: Tests are run automatically on every PR.

### Exit Criteria

N/A: Tests are run automatically on every PR.

### Validation and Defect Management

N/A: PRs should not be merged with failing tests.

## GitHub Workflow

### Issue Submission and Prioritization

[See README](https://github.com/opendp/dp-creator-ii/#conventions).

### Pull Request Review and Testing

- Reviewers should read and understand code.
- Reviewers are not expected to check out or manually test code.

### Continuous Integration and Deployment

GitHub CI will run tests on each PR. We require tests to pass before merge.

We do not require branches to be up to date with main: There is a small chance that one PR may break because of changes merged from a separate PR before it, but that can be addressed after the fact.

## Environment Requirements

### Dataverse-Internal Environment

N/A

### AWS-Based Environments

N/A

## Significantly Impacted Division/College/Department

N/A

## Dependencies

N/A