Pimp your Pipeline with Lighthouse CI
In the world of web development, for as long as I can remember, two topics have continually come up: performance and best practices. Tools like Lighthouse make these areas much easier to understand by surfacing key metrics and suggesting guidelines for improvement.
But running the occasional audit on your site will only get you so far. Of course it’s good if you can get your scores above 90, but how are you maintaining those scores? Are you running a report after every deployment to make sure you’re still in the green? What if you could prevent a detrimental deployment from happening at all…
Enter, Lighthouse CI!
Lighthouse CI is a fantastic suite of tools, maintained by the friendly folks at Google, that can do things like…
- Run Lighthouse audits on your app via the command line
- Fail and report on any scores that don’t meet given thresholds
- Graph and compare your scores over time, tying each report to a commit
In this article, I’ll run through my implementation of Lighthouse CI in Azure DevOps and hopefully inspire a few others to do something similar.
Before we get stuck in, it’s worth mentioning that the documentation is very good! If you’re interested in testing it out, just head on over to the repo — it’s pretty simple to follow along and get something up and running.
However, if — like me — you work in a large department with architectural constraints and don’t have the luxury of spinning up a container that’s an exact replica of production, you might hit some hurdles!
What are we working with?
In my team, we use Azure Pipelines. Specifically, the multi-stage YAML variety — if you’re using Azure DevOps, I definitely recommend it. The deployment process is quite straightforward; once a pull request is approved, it is completed and merged into master, where the pipeline is automatically triggered. The following stages occur:
- Build: Bundle, create artifacts, publish artifacts etc.
- Deploy: Deploy to staging slots.
- Test: Run UI tests, security tests, load tests etc.
- Release: If everything passes, swap staging and production slots.
The goal of the next steps is to integrate Lighthouse CI into the “Test” stage and run a report on the staging site before releasing.
Let’s do this!
Start by installing the CLI. Make sure you’re in the root of your project and install it as a dev dependency.
npm install -D @lhci/cli
Now add some config; create a folder called lighthouse and a file inside called config.json with the following contents:
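The exact snippet will vary per project, but a minimal sketch looks something like this (the staging URL is a placeholder, and the Chromium revision folder under .local-chromium depends on the Puppeteer version you have installed):

{
  "ci": {
    "collect": {
      "url": ["https://staging.example.com"],
      "numberOfRuns": 5,
      "puppeteerScript": "./lighthouse/pre-collect.js",
      "chromePath": "./node_modules/puppeteer/.local-chromium/linux-782078/chrome-linux/chrome"
    }
  }
}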
- url: The URL to run the report against (this can be an array if you want to run it against multiple URLs).
- numberOfRuns: This tells the CLI to run 5 audits. When asserting the results, an average is used. I found 5 to be a good balance between speed and accuracy but you can tweak this number to suit your environment.
- puppeteerScript: The path to a Puppeteer script to run before the audit. In my case, I needed to set a cookie before hitting the staging site. If you need to set cookies, headers etc, this is a good place to do it.
- chromePath: This tells the CLI to use the Chromium build that comes with Puppeteer. I couldn’t get Puppeteer to work without this! If you run into similar issues, just make sure you also install Puppeteer (npm install -D puppeteer).
If you need a Puppeteer script, add pre-collect.js into the lighthouse folder along with anything you need to do before the audit:
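A sketch of what that might look like; the cookie name and value here are placeholders for whatever your staging environment expects. Lighthouse CI calls the exported function with the Puppeteer browser instance and a context object containing the URL about to be audited:

// lighthouse/pre-collect.js
module.exports = async (browser, context) => {
  // Open a throwaway page so the cookie ends up in the browser profile
  // that the subsequent Lighthouse audit will reuse.
  const page = await browser.newPage();
  await page.setCookie({
    name: 'staging-access',  // placeholder cookie name
    value: 'example-token',  // placeholder value
    url: context.url,        // scope the cookie to the audited URL
  });
  await page.close();
};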
Next, add an npm script to your project’s package.json:
"lh:collect": "lhci collect --config ./lighthouse/config.json"
…and give it a whirl:
npm run lh:collect
After the results are collected, they are stored in a folder called .lighthouseci — go into the folder and open up one of the HTML files. Pretty cool!
Asserting the results
Let’s make the audit fail if it doesn’t meet our requirements. Open up the config and update it to include some assertions like this:
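Here’s a sketch, with the assert section sitting alongside the collect options from earlier; note that minScore is on a 0–1 scale, so 0.9 corresponds to a score of 90:

{
  "ci": {
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "categories:accessibility": ["error", { "minScore": 0.9 }],
        "categories:best-practices": ["error", { "minScore": 0.9 }],
        "categories:seo": ["error", { "minScore": 0.9 }]
      }
    }
  }
}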
This tells the CLI to report and fail if any of the categories’ scores are below 90.
It’s important to understand the difference between error and warn. Both will report on failed assertions, but error will actually fail the command with an exit status of 1, which you can rely on to prevent releases.
Now, let’s add the npm script for the assert command:
"lh:assert": "lhci assert --config ./lighthouse/config.json"
…and give it a spin:
npm run lh:assert
Nice work! It’s all coming together — just one more step before we integrate this into the pipeline.
Uploading the results
If you want to see your results over time and tie them to commit messages, then you’ll need to set up a Lighthouse CI Server. Get this container set up somewhere at a URL that can be accessed from your build.
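If you’re able to use Docker, the LHCI server docs reference a prebuilt image; something along these lines should give you a server listening on port 9001, with its data kept in a named volume:

docker volume create lhci-data
docker container run --publish 9001:9001 --mount 'source=lhci-data,target=/data' --detach patrickhulce/lhci-server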
Once it’s up and running, we can create a new project on the server. Make sure Lighthouse CI is installed globally using npm install -g @lhci/cli and run the wizard:
lhci wizard
Follow all the steps and take note of the token. Time to update the config file again:
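An upload section, sketched out: serverBaseUrl is wherever you host your server, and the token is the build token the wizard printed. Both values below are placeholders:

{
  "ci": {
    "upload": {
      "target": "lhci",
      "serverBaseUrl": "https://lhci.example.com",
      "token": "your-build-token-from-the-wizard"
    }
  }
}

If you’d rather not commit the token, lhci upload also accepts a --token flag, so it can live in a secret pipeline variable instead.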
Add the final npm script…
"lh:upload": "lhci upload --config ./lighthouse/config.json"
…and give it a go:
npm run lh:upload
Visit your server, open your project and you should see the results in the dashboard. Great job!
Lastly, the YAML pipeline
This is probably exactly as you’d expect — essentially we run through each of the npm scripts in order. But there are a couple of things to note.
Firstly, Azure DevOps does some funky stuff with checkouts causing Lighthouse CI to attempt to read the current branch name and error. Adding the following environment variable to the job will solve this:
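The variable in question is LHCI_BUILD_CONTEXT__CURRENT_BRANCH, populated here from Azure’s predefined Build.SourceBranchName:

variables:
  # Hand LHCI the branch name directly instead of letting it query git
  LHCI_BUILD_CONTEXT__CURRENT_BRANCH: $(Build.SourceBranchName)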
For the full list of LHCI variables, visit the CLI docs.
The other thing to note is that you probably don’t need the entire git history on the checkout, so I’ve opted for a shallow fetch:
- checkout: 'self'
  fetchDepth: 1
Other than that, it should be good to go. Here’s the full YAML file (with the other stages stubbed out):
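Treat this as a sketch rather than a drop-in file: the stage and job names, the ubuntu-latest pool image and the npm ci step are my assumptions, so adjust to taste.

trigger:
  - master

stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - script: echo "bundle, create and publish artifacts"  # stubbed

  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: Deploy
        steps:
          - script: echo "deploy to staging slots"  # stubbed

  - stage: Test
    dependsOn: Deploy
    jobs:
      - job: Lighthouse
        pool:
          vmImage: 'ubuntu-latest'
        variables:
          # Stop LHCI asking git for the branch name on a detached checkout
          LHCI_BUILD_CONTEXT__CURRENT_BRANCH: $(Build.SourceBranchName)
        steps:
          - checkout: 'self'
            fetchDepth: 1
          - script: npm ci
            displayName: 'Install dependencies'
          - script: npm run lh:collect
            displayName: 'Collect Lighthouse results'
          - script: npm run lh:assert
            displayName: 'Assert Lighthouse results'
          - script: npm run lh:upload
            displayName: 'Upload Lighthouse results'

  - stage: Release
    dependsOn: Test
    jobs:
      - job: Release
        steps:
          - script: echo "swap staging and production slots"  # stubbed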
And here’s the full example that can be used as a reference.
It’s fair to say the team and I spent a significant amount of time procrastinating before integrating Lighthouse into our pipeline, because it wasn’t completely clear how to make it work in our environment.
After the release of Lighthouse CI, we decided to put aside some time, knuckle down and get it done. It really wasn’t that hard, and it’s working like a charm. Despite being a large team that deploys multiple times per day, we’ve been able to maintain scores above 90 in every category.
We’re lucky enough to be in an industry where tools like this are easily accessible and completely free. So even if this article didn’t give you all the answers, do yourself a favour and figure out how to incorporate performance metrics into your process — your users will thank you!