It is far easier to achieve a perfect score across all four Google Lighthouse categories with a static site than with a traditional LAMP stack, which requires some degree of server-side processing for each request, or with a JavaScript-heavy JAMstack architecture, which traditionally has a fast time-to-first-byte (TTFB) but a slower time-to-interactive (TTI).

This is my personal portfolio: somewhere to display my CV, but mostly just something to tack on to the domain I use for my email address. It doesn’t need to be composed of microservices, and it isn’t somewhere to flex architecture decisions. A simple static site which is almost free to host is all I need.

This website is served by Netlify and built with Hugo. This affords a cheap solution, and fewer maintenance headaches than if I were to use Laravel or Symfony.

Under the hood

Hugo is the framework I chose to generate my portfolio. I made that decision because Hugo is a single-binary solution; I don’t need to worry about parity with production, or about dependency woes with NPM, for example. Hugo on its own won’t instantly deliver a perfect all-round Lighthouse score; introducing CSS, JS, and images to the website will wipe out any gains – but on the upside, this is not an SPA or PWA with a large initial payload for new visitors arriving with a fresh cache.

I chose Tailwind as the framework to style the website, mostly as an explorative exercise. My experience is mostly with the BEM and ITCSS methodologies, either with a reset stylesheet or with Bootstrap to, well, bootstrap a project. I had hesitations at first, and it’s still slightly uncomfortable to use at times (Tailwind’s @apply directive feels a bit like an anti-pattern, but it’s essential for consistency and to avoid deviating outside of the framework into raw inline styling). Out of the box, Tailwind has a huge footprint – much larger than Bootstrap or Foundation. Unlike SASS/LESS-based frameworks, Tailwind is configured in JavaScript, and it still requires some processing to make it production-ready.
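As a rough illustration of that configuration (the colour below is a placeholder rather than this site’s actual palette), a minimal tailwind.config.js looks something like this:

```js
// tailwind.config.js – a minimal sketch; the colour is illustrative,
// not part of this site's actual palette.
module.exports = {
  theme: {
    extend: {
      colors: {
        // Hypothetical brand colour layered on top of Tailwind's defaults.
        brand: '#2b6cb0',
      },
    },
  },
  variants: {},
  plugins: [],
};
```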

I’ve used minimal JS to build this site, as it simply doesn’t require much. For ease of use, I’ve added a little bit of JS to handle the hamburger navigation. I could probably handle the logic with CSS alone, but I didn’t find that markup to be semantic enough – JS was the simplest solution. I don’t run any tooling to transpile my JS; it’s only a handful of lines of basic DOM searching and event handling.
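For illustration, the toggle amounts to little more than the following sketch (the data attributes are placeholders, not necessarily the markup this site uses):

```js
// A minimal sketch of the hamburger toggle; the selectors are assumptions,
// not the actual attributes in this site's markup.
const toggle = document.querySelector('[data-nav-toggle]');
const menu = document.querySelector('[data-nav-menu]');

if (toggle && menu) {
  toggle.addEventListener('click', () => {
    // Tailwind's `hidden` utility hides the menu; toggling it opens/closes the nav.
    const isHidden = menu.classList.toggle('hidden');
    // Keep assistive technology informed of the menu state.
    toggle.setAttribute('aria-expanded', String(!isHidden));
  });
}
```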

Next steps

Initially, the main factors hurting the website’s score on Google Lighthouse were large file downloads, render-blocking requests, and accessibility flaws. The biggest win came from introducing PurgeCSS to my build pipeline. Hugo allows some tooling to be added to its build process, and PostCSS fits in quite nicely. Tailwind requires PostCSS to make it suitable for production use, but PostCSS also allows for some niceties such as Autoprefixer (which automatically adds vendor prefixes to your CSS rules where necessary) and PurgeCSS.
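As a sketch of what that pipeline can look like (the content globs and the environment check are assumptions about the project layout, not necessarily this site’s exact setup), the PostCSS configuration sits in a postcss.config.js at the project root:

```js
// postcss.config.js – a sketch of the pipeline described above; the globs and
// environment check are assumptions, not necessarily this site's exact setup.
const purgecss = require('@fullhuman/postcss-purgecss')({
  // Scan the Hugo templates and content for class names that are actually used.
  content: ['./layouts/**/*.html', './content/**/*.md'],
  // Keep Tailwind-style class names, including variants such as `md:flex`.
  defaultExtractor: (content) => content.match(/[\w-/:]+(?<!:)/g) || [],
});

module.exports = {
  plugins: [
    require('tailwindcss'),
    require('autoprefixer'),
    // Only strip unused CSS for production builds.
    ...(process.env.HUGO_ENVIRONMENT === 'production' ? [purgecss] : []),
  ],
};
```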

PurgeCSS is one of the most useful plugins I’m using, as it cuts my CSS down from 5.5MB to just 3.5KB (prior to gzip/Brotli compression). Amazing gains: Google Lighthouse no longer complains about massive files or masses of unused CSS, both of which feed into metrics such as TTI and LCP.

This neatly ties into web fonts. When first building this portfolio, I just copied & pasted a couple of link tags to pull in some files from Google Fonts, and that was another hindrance: connecting to another domain means another DNS lookup and TLS handshake before the fonts can even start downloading, which delays rendering. Even if users have previously encountered the same font on a different website, it is not available in their cache when viewing this website, because modern browsers partition their caches per site. So this led me down the rabbit hole of self-hosting those fonts. This meant I could drop a bunch of superfluous font files, and inline the @font-face declarations into the HTML (rather than making a separate request for them). This has the compound benefit of declaring the fonts ahead of when they are used in the CSS file, and I could mark those font files for “preload” (<link rel="preload" as="font" crossorigin href="...">; preloaded fonts need the crossorigin attribute even when self-hosted, otherwise the browser fetches them twice). A simple trick which shaved more time off the first render and alleviated some of the “render blocking resource” issues which Lighthouse mentions.

Accessibility was the second largest factor in a degraded Lighthouse score, and one where I have some bad habits. I’m usually quite good at keeping the markup semantic, but I find it easy to overlook certain patterns such as ARIA attributes, or labels on form controls. Google Lighthouse helps in this regard: it penalises your score heavily and lists every offending element that you need to address. Most important, and the one I’m not great at identifying, is colour contrast. Google Lighthouse wants you to meet the WCAG AA contrast standard, and again, it will highlight the offending elements to you, with screengrabs of what needs to be addressed. Performance is great, but it’s equally important to ensure that whoever visits your website doesn’t have a hard time doing so.

Parallel requests were the next problem. Initially, each SVG on the homepage was pulled in from an external file, and that quickly filled up the network request queue. The website was simply trying to load too many files at once, which dragged my Lighthouse performance score down. As most of these SVGs are tiny, inlining them into the HTML added only a few bytes to the page, saved a pile of network requests, and the icons still render sooner inlined than they would if served from cache. The slightly heavier HTML payload is a marginal cost, but it improves the CLS and total blocking time metrics. A good trade-off.

With the rise of Responsive Web Design (albeit that was many, many years ago now), it’s easy to overlook adding width/height attributes to images, because I mostly want them to be displayed at width: 100%. Nowadays, it’s important to add the width and height attributes to the image tag to hint at its aspect ratio, even if the image will ultimately be displayed at width: 100%, which is a fluid width. Without these values, Google Lighthouse may mark you down on the CLS metric, and users may see the page “jump” once the images load in. Not great if a user is trying to interact with the site whilst inputs and buttons are moving around.

Maintenance

As mentioned, with this being a simple portfolio which doesn’t directly generate income, I can’t invest too much time in ongoing maintenance. That is alleviated by choosing a single binary like Hugo, and by hosting on GitHub, which now ships with Dependabot – Dependabot opens pull requests to keep my NPM dependencies up to date.

The second piece was Travis CI (and now GitHub Actions), which runs Google Lighthouse against every change I make to the website, so that I don’t regress on any performance or accessibility factors. That said, with each update to Google Lighthouse, the metrics become tougher to satisfy.
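If you want to set up something similar, Lighthouse CI (@lhci/cli) is one way to do it; a minimal lighthouserc.js – with illustrative paths and thresholds rather than a definitive setup – looks something like this:

```js
// lighthouserc.js – an illustrative Lighthouse CI sketch; the paths and
// thresholds are examples, not a definitive configuration.
module.exports = {
  ci: {
    collect: {
      // Audit the locally built site; Hugo writes its output to ./public by default.
      staticDistDir: './public',
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        // Fail the CI run if either category score drops below 90.
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['error', { minScore: 0.9 }],
      },
    },
  },
};
```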

Summary

In summary, Google Lighthouse ensures that I employ best practices for both performance and accessibility. By having these errors pointed out over and over, I’m likely to develop better development habits.

Reduce your network requests, trim down your assets, inline assets where possible, and remember to make your website accessible to everybody.

Credits

Most of my deep understanding of front end development is thanks to Harry Roberts, the man behind CSS Wizardry.