Optimising webfonts with Glyphhanger
So webfonts are cool, but you know what else is cool? Not sending loads of bytes over the wire that aren’t required to display a web page. I remember hearing about Glyphhanger many years ago, but I could never figure out how to get it to work in a reasonable way. Seeing Simon Dann posting about doing it with Eleventy reminded me of it, and I finally managed to get it set up in a reasonable(?) fashion.
There are some interesting tradeoffs around how and when to run this process - I’ve optimised for fast builds, at the expense of having to run the subsetting script manually and commit the output. If I forget and a couple of characters end up in the wrong font, it’s not the end of the world.
Installing in WSL
I’m using WSL2 in Windows 11, which generally works very well but has occasional quirks. The Glyphhanger documentation only really covers macOS, so here’s how I went about getting it working. I wouldn’t try this under native Windows, but if I were setting this up on macOS I’d recommend using Homebrew there too.
Install Puppeteer dependencies with apt
(This step should be Linux/WSL only - I expect this to work out of the box on macOS.) Glyphhanger uses Puppeteer under the hood (there is a JSDOM option, --jsdom, but it still seems to require Puppeteer to be installed - possibly a bug?). Since Puppeteer drives a headless Chrome instance, you need to be able to install and run Chrome. So, based on the troubleshooting guide, install the required dependencies:
sudo apt install libgtk-3-dev libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2
Install fonttools with brew
Glyphhanger also uses fonttools, which is a Python package. I would rather saw off my own hand than try and navigate the minefield of getting Python working by hand, but fortunately some kind soul thought to put it on Homebrew, which works on Linux (including WSL) as well.
brew install fonttools
I cannot reiterate this enough. Getting a Python environment set up is the closest to a nervous breakdown a computer has ever brought me. Just use Homebrew.
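As a quick sanity check that the install worked: fonttools ships a pyftsubset CLI, which (as far as I can tell) is what Glyphhanger shells out to for the actual subsetting, so it should now be on your PATH:

pyftsubset --help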
Getting URLs
Before we can subset the font, we need to get a list of all the characters we need to include in the subset. If you only have a couple of pages you can pass them directly on the CLI:
glyphhanger http://kylemacquarrie.co.uk http://kylemacquarrie.co.uk/blog --subset=./path/to/fonts/*.ttf --output=./path/to/fonts/subset --formats=woff2
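If you leave the subsetting flags off entirely, glyphhanger just prints the combined unicode-range it found to stdout, which is a handy sanity check before it touches any font files (the output below is illustrative, not from my site):

glyphhanger http://kylemacquarrie.co.uk http://kylemacquarrie.co.uk/blog
# prints something like: U+20-7E,U+2019,U+2026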
If you have a larger site, you can pass a single URL (e.g. your home page) along with --spider, which will recursively follow all the relative links it finds. Unfortunately this is slightly fragile - on my site, it follows the link to the RSS feed and then blows up because Puppeteer can’t execute anything on that page; there are other issues reported on GitHub relating to tel: phone number links and the like. The current filthy hack I’m using is to parse the sitemap and pass in a list of every URL, as in the example below. I think I prefer this to having to comment out the RSS link, or adding special code to the application to handle it (e.g. hiding the RSS link based on an environment variable or similar), but it’s not ideal and won’t scale well if you have more or less than one sitemap.
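For reference, the spider version of the command looks something like this - if I remember the docs right, --spider-limit raises the default cap on how many discovered URLs it will visit, with 0 meaning no limit:

glyphhanger http://localhost:3000 --spider --spider-limit=0 --subset=./path/to/fonts/*.ttf --output=./path/to/fonts/subset --formats=woff2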
It’s also worth noting you have to use TTF files as the input - trying to subset an existing WOFF2 won’t work.
Subset
Here’s the script I’m using to build up and run the CLI command.
// scripts/subset/index.mjs
import { readFile } from 'node:fs/promises'
import { resolve } from 'node:path'

// read the sitemap XML
const sitemap = await readFile(resolve('./dist/sitemap-0.xml')).then((data) =>
  data.toString()
)

// split into lines like <url><loc>https://kylemacquarrie.co.uk/blog/2/</loc></url>
const lines = sitemap.split('<url>')

// discard the first line, which is all XML metadata
lines.shift()

// for each line, strip everything except the URL
const urls = lines
  .map((line) =>
    line
      .replace('<loc>', '')
      .replace('</urlset>', '')
      .replace('</loc></url>', '')
      // the sitemap builds with the production url but we're running this locally
      .replace('https://kylemacquarrie.co.uk', 'http://localhost:3000')
  )
  // make it into one big string
  .join(' ')

const fontPath = `./src/assets/fonts`

// build the command
const command = [
  'glyphhanger',
  urls,
  `--subset=${fontPath}/*.ttf`,
  `--output=${fontPath}/subset`,
  '--formats=woff2',
].join(' ')

// write it to stdout so we can pipe it into bash in the next step
process.stdout.write(command)
Then run it with node scripts/subset/index.mjs | bash, or add it as an npm script:

{
  "scripts": {
    "subset": "node scripts/subset/index.mjs | bash"
  }
}

and run it with npm run subset.
Results
The total size of the woff2 files went from 65kb to 28kb. I did have to adjust some line-heights, as the conversion that FontSquirrel uses is obviously a bit different, but nothing too major.
Next Steps
That’s a pretty decent saving. We could go further and use the --family option to only include the characters that are actually rendered in a specific font, but we’d have to run the script once for each font we use, which gets a bit tedious. It might be worth it for the smallest possible size though.
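As a rough sketch of what that could look like (untested - the family names and file names are placeholders, and urls/fontPath come from the script above), you could build one command per family and chain them:

// hypothetical extension of scripts/subset/index.mjs
// each family gets its own subset run so only its own characters are included
const families = [
  { name: 'Example Serif', file: 'example-serif.ttf' },
  { name: 'Example Sans', file: 'example-sans.ttf' },
]

const commands = families.map(({ name, file }) =>
  [
    'glyphhanger',
    urls,
    `--family='${name}'`,
    `--subset=${fontPath}/${file}`,
    `--output=${fontPath}/subset`,
    '--formats=woff2',
  ].join(' ')
)

// chain with && so the pipe into bash runs them one after another
process.stdout.write(commands.join(' && '))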
You could automate this as part of your build process, but as with my previous scripts, I don’t really want to add extra dependencies that will slow down the build (especially if it might involve debugging a Python installation on a machine I don’t own), so for now I’m just running it manually against a local version - Astro’s default scripts make it easy to check the production build locally with npm run build && npm run preview.
It should be possible to refactor the script to cache the unicode range between runs as well, and only regenerate the fonts when a character has been added/removed. Running it against the site’s markdown content might be better too.
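A minimal sketch of what that caching could look like, reusing urls and fontPath from the script above (the cache path is made up, and I’m assuming --whitelist accepts the same unicode-range string that glyphhanger itself prints, which I haven’t verified):

// hypothetical: skip regenerating fonts when the unicode-range hasn't changed
import { execSync } from 'node:child_process'
import { readFile, writeFile } from 'node:fs/promises'

const cachePath = './scripts/subset/unicode-range.txt'

// with no --subset flag, glyphhanger just prints the combined unicode-range
const range = execSync(`glyphhanger ${urls}`).toString().trim()

// fall back to an empty string if the cache file doesn't exist yet
const cached = await readFile(cachePath, 'utf8').catch(() => '')

if (range === cached) {
  // nothing changed - emit a no-op so the `| bash` pipe still succeeds
  process.stdout.write('true')
} else {
  await writeFile(cachePath, range)
  // subset directly from the computed range rather than passing URLs again
  process.stdout.write(
    `glyphhanger --whitelist=${range} --subset=${fontPath}/*.ttf --output=${fontPath}/subset --formats=woff2`
  )
}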