Working Practices
This page describes "how our work works", i.e. the processes and procedures that help us manage our work efficiently.
All work items are logged as GitHub Issues. We distinguish between the following types of work items.
Task
Used for most things, e.g. developing an evolution of the product or documenting something. If review or testing reveals that rework is required to meet the ticket requirements, this work is done under the initial ticket (rather than by raising a new bug ticket).
The User Story format can be used if helpful (as an X, I want Y, so that Z).
The INVEST criteria give further guidance.
Bug
Used for raising issues that have been seen outside of the work being done on a particular task.
The following criteria must be met before an item of work can start.
For both tasks and bugs:
- Ticket has been reviewed by the majority of the team
For tasks:
- Has sufficient verifiable acceptance criteria
- Has been estimated in terms of story points
For bugs:
- Has reproduction steps if possible
- Ideally is reproducible
- Has expected behaviour
- Has actual behaviour
For all software changes:
- Unit tested (100% coverage for new tickets)
- Automation tests added (where practical)
- Any relevant documentation updated
For all changes:
- Peer reviewed
- Independently reviewed as meeting the acceptance criteria
Our focus is on finishing tickets before starting new ones: we pull new tickets from the Ready column only once we've done everything we can against the in-progress tickets. (Minimising the number of in-progress tickets helps to reduce context switching.)
The exception to this is when we have a ticket of sufficient priority to justify stopping current work partway through, for example to resolve an issue that is preventing users from using vAirify. These high-priority tickets are marked in GitHub as P0s (via the Priority field of the ticket). P0s are given priority over anything else and are discussed first during the stand-up regardless of where they are in the workflow.
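The stand-up ordering rule above (P0s discussed first, regardless of workflow column) can be sketched as a sort. The records below are hypothetical; in practice the priority comes from the Priority field on the GitHub project board, not from a structure like this.

```python
# Hypothetical issue records for illustration only.
issues = [
    {"title": "Add unit conversion", "priority": "P2", "column": "In Progress"},
    {"title": "Site down for users", "priority": "P0", "column": "Ready"},
    {"title": "Update docs",         "priority": "P1", "column": "Review"},
]

# P0s sort to the front regardless of their workflow column, mirroring
# "discussed first during the stand-up regardless of where they are in the workflow";
# remaining tickets follow in priority order.
standup_order = sorted(issues, key=lambda i: (i["priority"] != "P0", i["priority"]))

print([i["title"] for i in standup_order])
# → ['Site down for users', 'Update docs', 'Add unit conversion']
```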