# GitHub Action: Web Performance Audits with Lighthouse
Audit deployed websites with my artisanal blend of WPT Network Emulation Profiles, Puppeteer, headless Chrome, Lighthouse, and GitHub Actions.
- Uses Puppeteer to start up Chrome with network emulation settings defined by WebPageTest.
- Supports saving artifacts to the GitHub Actions run.
- Supports custom Lighthouse configuration via a JavaScript file.
- Supports Lighthouse budget.json for failing PRs.
- Supports a scores.js file based on the Lighthouse data model for failing PRs on any audit score or category.
- Posts the results of the audit run as a comment on your PR.
While I love automated web performance testing, you should always keep in mind that:
- This tool is a first line of testing and, given the emulation layer, is at best a guidepost.
- Verify your web performance on the actual hardware and devices your users are using (you can even run Lighthouse reports on those devices too).
- When in doubt, run a trace in DevTools.
Now let's have some fun and automate a web perf workflow!
This is best run against `on: [pull_request]` so you'll get the report comment, but you technically don't have to use it this way:
```yaml
name: Audit Web Performance
on: [pull_request]
jobs:
  perf:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Generate Lighthouse Report
        uses: justinribeiro/lighthouse-action@master
        with:
          secret: ${{ secrets.GITHUB_TOKEN }}
          url: https://${{ github.sha }}-dot-${{ secrets.GAE_PID }}.appspot.com
          wptConnectionSpeed: threegfast
      - name: Saving Lighthouse Audit Artifacts
        uses: actions/upload-artifact@master
        with:
          name: lighthouse-artifacts
          path: './results'
```
If you don't want the PR comment, the secret is optional.
If you don't want to use the WPT profiles, want to customize your Lighthouse configuration with headers or custom runs, or just want to use your existing Lighthouse budget.json, these are available via the `lighthouseConfiguration` and `lighthouseBudget` options:
```yaml
name: Audit Web Performance
on: [pull_request]
jobs:
  perf:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Generate Lighthouse Report
        uses: justinribeiro/lighthouse-action@master
        with:
          secret: ${{ secrets.GITHUB_TOKEN }}
          url: https://justinribeiro.com/
          lighthouseBudget: .github/test/budget.json
          lighthouseScoringBudget: .github/test/scores.js
          lighthouseConfiguration: .github/test/custom-config.js
      - name: Saving Lighthouse Audit Artifacts
        uses: actions/upload-artifact@master
        with:
          name: lighthouse-artifacts
          path: './results'
```
For full details on how to define a Lighthouse configuration file, see Lighthouse Configuration.
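As a rough sketch, a custom configuration that extends the Lighthouse defaults, trims the run to specific categories, and sends an extra request header might look like the following; the header value is a placeholder, not something this action requires:

```javascript
// custom-config.js — a minimal sketch extending the Lighthouse defaults.
module.exports = {
  extends: 'lighthouse:default',
  settings: {
    // Only run the categories you care about.
    onlyCategories: ['performance', 'accessibility'],
    // Extra headers sent with every request, e.g. for auth-gated staging
    // sites. The cookie value below is a placeholder.
    extraHeaders: {
      Cookie: 'session=PLACEHOLDER',
    },
  },
};
```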
For full details on how to define a Lighthouse budget.json file, see Performance Budgets (Keep Request Counts Low And File Sizes Small).
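For example, a minimal budget.json that caps script and total transfer sizes (in KB) and third-party request counts might look like this; the numbers are illustrative, not recommendations:

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 125 },
      { "resourceType": "total", "budget": 300 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ]
  }
]
```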
## Using scores.js to increase your audit power

`scores.js` is based on the Lighthouse JSON output data model and maps one-to-one with the existing keys and definitions within Lighthouse. This gives you an easy-to-use, highly configurable file to test any audit score or raw numeric value you find important. No waiting for me to add special parameters. :-)

Let's look at an example `scores.js` file:
```javascript
module.exports = {
  audits: {
    'service-worker': {
      score: 1,
    },
    'first-contentful-paint': {
      score: 1,
      numericValue: 100,
    },
    'first-meaningful-paint': {
      score: 1,
      numericValue: 100,
    },
  },
  categories: {
    performance: {
      score: 0.95,
    },
    accessibility: {
      score: 0.95,
    },
  },
};
```
In this case, we can test for one or both of two specific keys:

- `score`: a decimal ranging from 0 to 1. If you wanted to verify that your performance score is above 95, you'd enter 0.95 as above.
- `numericValue`: a decimal starting at zero. Commonly holds the raw value from the trace in milliseconds, good for testing if you want those raw performance numbers.
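Because the keys map straight onto the Lighthouse JSON output, the comparison logic is easy to picture. Here's a rough sketch of how such a scoring budget could be evaluated; the file names and pass/fail directions are my assumptions, and this is not the action's actual implementation:

```javascript
// check-scores.js — illustrative sketch only; not the action's real code.
// Assumes `score` must meet or beat its threshold and `numericValue`
// (e.g. milliseconds) must not exceed its budget.
const scores = require('./scores.js');
const report = require('./sample.json'); // Lighthouse JSON output

const failures = [];
for (const section of ['audits', 'categories']) {
  for (const [key, limits] of Object.entries(scores[section] || {})) {
    const actual = report[section][key];
    if ('score' in limits && actual.score < limits.score) {
      failures.push(`${key}: score ${actual.score} < ${limits.score}`);
    }
    if ('numericValue' in limits && actual.numericValue > limits.numericValue) {
      failures.push(`${key}: numericValue ${actual.numericValue} > ${limits.numericValue}`);
    }
  }
}

if (failures.length) {
  console.error(failures.join('\n'));
  process.exit(1); // non-zero exit fails the check
}
```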
For a full view of the available audits and categories, output a JSON file from the Lighthouse CLI to get started:
```shell
$ lighthouse --output json --output-path=./sample.json --chrome-flags="--headless" https://YOUR_URL_HERE.com
```
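If you have `jq` on hand, listing the audit and category keys from that output is a quick way to find the names to use in `scores.js` (assuming the `sample.json` generated above):

```shell
$ jq '.audits | keys' sample.json
$ jq '.categories | keys' sample.json
```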
## Inputs

### url

**Required** The URL you'd like Lighthouse to audit.
### secret

**Optional** The default secret for your repo as generated by GitHub Actions. Set this to `${{ secrets.GITHUB_TOKEN }}` to allow commenting on your PRs.
### wptConnectionSpeed

**Optional** Profiles based on WebPageTest project connectivity samples, defined in chrome.js. Defaults to `threegfast`. Can be any of the following: `twog`, `threegslow`, `threeg`, `threegfast`, `fourg`, `lte`.
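For a sense of what these profiles do under the hood, here's a minimal sketch of applying WPT-style conditions through Puppeteer's CDP session. The numbers approximate WPT's 3G Fast profile (1.6 Mbps down, 768 Kbps up, 150 ms RTT); the values the action actually uses live in chrome.js:

```javascript
// network-emulation-sketch.js — illustrative only; see chrome.js in this
// repo for the profiles the action actually uses.
const puppeteer = require('puppeteer');

// Rough approximation of WPT's "3G Fast" profile:
// 1.6 Mbps down, 768 Kbps up, 150 ms round-trip time.
const threegfast = {
  offline: false,
  latency: 150, // ms
  downloadThroughput: (1.6 * 1024 * 1024) / 8, // bytes/sec
  uploadThroughput: (768 * 1024) / 8, // bytes/sec
};

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  const client = await page.target().createCDPSession();
  await client.send('Network.emulateNetworkConditions', threegfast);
  await page.goto('https://example.com/');
  await browser.close();
})();
```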
### lighthouseConfiguration

**Optional** File path to a custom Lighthouse configuration JavaScript file. See Lighthouse Configuration for more information.
### lighthouseBudget

**Optional** File path to a custom budget.json file; will fail PRs based on the budget result. See Performance Budgets (Keep Request Counts Low And File Sizes Small) for more information.
### lighthouseScoringBudget

**Optional** File path to a custom scores.js file; will fail PRs based on the scoring result. See the section "Using scores.js to increase your audit power" above for more information.
## Outputs

### results

Path to the folder with Lighthouse audit results.
For a live example of this action in practice, see my blog-pwa repo workflow.
## Why build this when other Lighthouse actions exist?

- I wanted a simpler, cleaner example anyone could jump into, fork, and make their own.
- None of the other offerings post the audit result as a comment on the PR.
- I wanted to explore harsher network conditioning, which I already do fairly successfully in things like my lighthouse-jest-example.
## TODO

- Add fail case configuration
- Add server warming flags (which I've found skews scores on a lot of deployments)
- Add multi-url auditing