Add a test plan #123

Closed · wants to merge 7 commits

2 changes: 2 additions & 0 deletions README.md
@@ -71,6 +71,8 @@ If Playwright fails in CI, we can still see what went wrong:
- Inside the zipped artifact will be _another_ zip: `trace.zip`.
- Don't unzip it! Instead, open it with [trace.playwright.dev](https://trace.playwright.dev/).

For more details, see the [TEST-PLAN](TEST-PLAN.md).

### Conventions

Branch names should be of the form `NNNN-short-description`, where `NNNN` is the issue number being addressed.
109 changes: 109 additions & 0 deletions TEST-PLAN.md
@@ -0,0 +1,109 @@
# DP Creator II Test Plan

## Introduction

### Purpose

Automate the end-to-end testing of the application to ensure it meets user requirements and maintains reliability, especially in the context of privacy-preserving data processing.

### Project Overview

The single-user application will be installed with pip and will offer a web application (running on localhost) that guides the user through applying differential privacy to a data file they provide.
> **Collaborator:** "Single user application" is a phrase I haven't heard much. I think I know what you mean, but perhaps this could be simplified to just "application".


## Scope

### In-Scope

- Starting the application with and without `--demo` and other CLI flags.
- Form filling and navigation between tabs.
- Conditionally disabled controls (for example, controls late in the flow when inputs have not been provided, and controls early in the flow after the release has been made).
- Report export.
- Runnability of any generated code.
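
The conditionally disabled controls in the third bullet can be tested against a simple state model before exercising the real UI. A minimal sketch, assuming a two-flag flow; the control names and the `enabled_controls` helper are hypothetical, not from the codebase:

```python
def enabled_controls(csv_uploaded: bool, release_made: bool) -> set[str]:
    """Return which (hypothetical) controls should be enabled.

    - Controls late in the flow stay disabled until inputs are provided.
    - Controls early in the flow are disabled once the release is made.
    """
    controls = set()
    if not release_made:
        controls.add("upload_csv")           # early-flow control
    if csv_uploaded:
        controls.add("configure_analysis")   # mid-flow control
    if csv_uploaded and not release_made:
        controls.add("make_release")         # late-flow control
    if release_made:
        controls.add("download_report")      # post-release control
    return controls
```

An end-to-end test could then compare the enabled/disabled state of each rendered control against this model at every step of the flow.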

### Out-of-Scope

- Performance testing: Test timeouts may incidentally identify components which are slow ([issue](https://github.com/opendp/dp-creator-ii/issues/116)), but performance is not the focus of automated testing.
> **Collaborator (suggested change):** "…but performance is not the focus of automated testing." With "it's" I get slightly confused. I don't think you're referring to "test timeouts", which would be "they're".

- Rendering of preview charts: We might [add a results table](https://github.com/opendp/dp-creator-ii/issues/122) to complement graphs, but we're not going to try to make any assertions against images.
> **Collaborator (suggested change):** "We might add a results table…" Or plural? Results tables? 🤔

- Testing exact values: The outputs are randomized, so we cannot test for any particular value in the output.
> **Collaborator (suggested change):** "The outputs are randomized, so we can not test for any particular value in the output."
>
> **Member:** Is it possible to test that the outputs are within a certain range, just as a sanity check?

- Design and usability.
- Correctness of DP calculations.
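
The range-based sanity check raised in review is a reasonable middle ground: although exact values can never be asserted, gross errors can still be caught. A minimal sketch; `dp_count` is a hypothetical stand-in for the application's output, assuming Laplace noise:

```python
import math
import random

def dp_count(true_count: float, epsilon: float) -> float:
    """Hypothetical stand-in for a DP count: true value plus Laplace(1/epsilon) noise."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    scale = 1.0 / epsilon
    # Inverse-CDF sampling of the Laplace distribution (sketch only).
    return true_count - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def test_dp_count_is_in_plausible_range():
    # With epsilon=1 the noise scale is 1, so deviations beyond +/-50
    # are astronomically unlikely; this catches gross errors only.
    for _ in range(100):
        assert abs(dp_count(100.0, 1.0) - 100.0) < 50.0
```

A real test would call the application rather than this stand-in, but the assertion style would be the same: a wide, probabilistically safe bound rather than an exact value.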

## Testing Strategy

### Test Objectives

- Ensure reliable execution of user flows across all primary functionalities.
- Validate critical data inputs and outputs are consistent with user expectations.
- Verify UI responsiveness and behavior on supported devices and browsers.
- Conduct regression testing to maintain code stability with new updates.

### Test Assumptions

- OpenDP will correctly perform the required calculations.
- Users are able to successfully install the software.

### Data Approach

- Any CSVs used for testing will be checked in to the fixtures directory.
- Outputs will be checked for structure, but we will not expect any particular values.
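
The structure-not-values check in the second bullet might look like this in practice (column names are illustrative; no assertion is made about the randomized cell values):

```python
import csv
import io

def check_release_structure(csv_text: str, expected_columns: set[str]) -> None:
    """Assert a released CSV has the expected columns and at least one row.

    Deliberately makes no assertion about the (randomized) cell values.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    assert rows, "release should contain at least one data row"
    missing = expected_columns - set(rows[0])
    assert not missing, f"missing columns: {missing}"
```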

### Automation Strategy

- Doc tests: For simple functions, use doctests so the developer can just work with one file.
- For more complex functions and operations, use pytest unit tests.
- Run linting and type checking tools as part of tests.
- Use Playwright for end-to-end tests.
- Use a test matrix on Github CI if we think there will be any sensitivity to versions or browsers.
> **Collaborator (suggested change):** "senstivity" → "sensitivity".

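The doctest bullet above might look like this in practice (`clip` is a hypothetical helper, not from the codebase; the examples in the docstring are the tests, so everything lives in one file):

```python
def clip(value: float, lower: float, upper: float) -> float:
    """Clamp a value into [lower, upper], as DP preprocessing often requires.

    >>> clip(5.0, 0.0, 10.0)
    5.0
    >>> clip(-3.0, 0.0, 10.0)
    0.0
    >>> clip(42.0, 0.0, 10.0)
    10.0
    """
    return max(lower, min(upper, value))

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # running the file directly executes the docstring examples
```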

### Test Case Prioritization

- There's just one test suite, which is run on every PR, so there's nothing that needs to be ordered.

## Execution Strategy

### Entry Criteria

N/A: Tests are run automatically on every PR.

### Exit Criteria

N/A: Tests are run automatically on every PR.

### Validation and Defect Management

N/A: PRs should not be merged with failing tests.
> **Collaborator (on lines +62 to +74):** I'm confused what all these N/A's are here. The sentences themselves sound fine and useful. Also, I suppose you should make "criteria" Title Case to match the other one.


## Github Workflow

### Issue Submission and Prioritization

[See README](https://github.com/opendp/dp-creator-ii/#conventions).

### Pull Request Review and Testing

> **Collaborator:** These comments are all out of order, sorry, but can we have a bullet for "Reviewers will indicate that they need the creator of the PR to address a concern by assigning the creator to the issue"? (This would happen while the issue is in the "In Review" column. That's what we do for Dataverse, anyway, and it seems to work fine. The creator unassigns themselves after they've changed the code or at least written a reply to the concern.)

> **Member:** I agree with @pdurbin that it would be good to follow the process we use in Dataverse, using an 'In Review' status, and assigning the reviewer/coder if there are changes to be made during the review.

- Reviewers should read and understand code.
- Reviewers are not expected to check out or manually test code.

> **Collaborator:** I suggested this in Slack, but perhaps we could have a bullet for the following idea: should we have an additional column after "In Review"? For Dataverse, after someone clicks "approve", automation moves the PR to the next column. We call this column "ready for QA".

> **Member:** Personally, I find checking out and running the code helpful to understanding it, so I usually do it. I don't think we need to say that reviewers are not expected to do that. Also, since we don't have a QA step, running the code helps to find bugs in the review step.

> **Collaborator:** Can we have a bullet to say "Reviewers should indicate they are reviewing a PR by dragging it to 'In Review' and assigning it to themselves."?

### Continuous Integration and Deployment

Github CI will run tests on each PR. We require tests to pass before merge.

We do not require branches to be up to date with main: There is a small chance that one PR may break because of changes merged in a separate PR, but that can be addressed after the fact.
> **Collaborator (suggested change):** "…one PR may break (have merge conflicts) because of changes in a separate PR…" By "break" I assume you're talking about merge conflicts.

> **Member:** I think it's a good practice to have PRs up to date with main before they are reviewed, especially if there are merge conflicts. Or, if a PR depends on another branch, it can point to that branch instead of main.


## Environment Requirements

### Dataverse-Internal Environment

N/A

### AWS-Based Environments

N/A

## Significantly Impacted Division/College/Department

N/A

## Dependencies

N/A
> **Collaborator (on lines +97 to +109):** I'm confused by N/A here. Should it be TBD instead? 🤔
