
Average Users and Accessibility

Or, why to design around misuse-cases.

The longer I use consumer computing, the more I’m convinced that it shouldn’t require much knowledge at all. Average users are familiar with some aspects of consumer computers while remaining largely ignorant of their underlying nature, and those who are interested will learn more about the bits behind the pixels. But it’s clear that we shouldn’t expect the user to know too much, as many will not.

It’s quite a challenge, and it deserves an example. One that comes to mind is Unicode and emoji. The average user saw something like Wordle and gladly spammed Twitter with a bunch of emoji boxes that made those tweets ugly to the ears of users with screenreaders. And in general, the misuse of emoji and Unicode characters for purposes other than those they’re intended for is harmful to accessibility.

But the average user that just wants to make a cool looking thing doesn’t know about accessibility. And while learning about it is good, and it can lead to people supporting accessibility-first not just in computing but in general—which has the knock-on effect of making the world better regardless of abilities—we generally recognize people will only learn about what they want. If they find it interesting, they’ll learn, but most people are too busy or uncurious.

So we should design with these ideas in mind:

  1. Lots of people will want to misuse the system.
  2. Fewer people will want to abuse the system.
  3. We need to design to defend against [2], but allow for [1].

(The distinction between misuse and abuse is one of malice. Using a fork to stick a note onto a cork board is misuse. Using a fork as a weapon is abuse.)

Going back to the Unicode and emoji example, what does that mean? Let’s take the Wordle case. Users want to paste their scores, we don’t want to screw over screenreaders, and we also don’t want to design some extra input (e.g., alt text) that lots of people won’t use. What we want is some way to let the text meaning be included alongside the emoji, so the paster doesn’t have to do anything, but the screenreader can make the distinction. This would likely involve the JavaScript Clipboard API, adding an extra MIME type (something like text/srfriendly?), where the consuming application requests paste data as both text/plain and (if available) text/srfriendly.
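As a sketch of how that copy path might look, assuming the hypothetical text/srfriendly type above (browsers today restrict ClipboardItem to a short whitelist of types, so this illustrates the shape of the idea rather than something that currently works; the tile-to-word mapping and function names are my own invention):

```javascript
// Map each Wordle-style tile emoji to a spoken-word equivalent.
const TILE_WORDS = {
  '🟩': 'green',
  '🟨': 'yellow',
  '⬜': 'white',
  '⬛': 'black',
};

// Build the screenreader-friendly payload from an emoji grid.
function describeTiles(grid) {
  return grid
    .split('\n')
    .map((row) =>
      Array.from(row)                       // iterate by code point, not UTF-16 unit
        .map((ch) => TILE_WORDS[ch] ?? ch)  // pass unknown characters through
        .join(', ')
    )
    .join('; ');
}

// Hypothetical copy handler: offer both representations to the clipboard.
// (Browser-only, and text/srfriendly is the made-up type from the text above.)
async function copyWithDescription(grid) {
  const item = new ClipboardItem({
    'text/plain': new Blob([grid], { type: 'text/plain' }),
    'text/srfriendly': new Blob([describeTiles(grid)], { type: 'text/srfriendly' }),
  });
  await navigator.clipboard.write([item]);
}
```

On the paste side, the consuming application would request both types, display the text/plain payload, and hand the text/srfriendly payload to assistive technology.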

The result could be the consuming application taking the collated data and displaying the emoji version, while screenreaders fall back on the version that’s superior for their purposes.

(We should still have alt text as a fallback for images, of course. Indeed, there should be an initiative to let us embed alt text in the image file rather than needing to add it separately. [Actually! See Lexdis 2.0: 27 June 2022: “Exploring the embedding of accessible image descriptions into image metadata”, which links to and quotes from an announcement by the International Press Telecommunications Council (IPTC) that its photo metadata standard now includes fields for textual descriptions. Neat.] It would be more convenient for most people to add the alt text when they create the image than when they embed it. (Though we should still let them provide an alternative alt text at the point of embed, prefilled with the version from the image.))


The addition of a MIME type for applications would help those cases, but contexts where users frequently misuse Unicode should also allow for the alternate input to be added, whether by the original user, by tools meant to offer cleaned versions, or through crowd suggestions. The average misuser of Unicode isn’t going to stop doing that. Only by allowing the misuse while filling in the gap it creates can we cover all cases.

Doing a Typography, Part II

Why is it so hard to make a curve look natural?!

Having been iterating on this font project for a while, I thought I’d mention some of the things I’ve noticed.

One of the big issues I’ve had on the few times I’ve tried to make typefaces is getting the weight right. I don’t know if there are solid rules, especially when some designs are meant to be thicker or thinner than usual. Given I’m trying to make a font to be used for text, rather than for display, a medium weight is best. But what the hell is that?

Looking at glyphs in FontForge can be deceiving. Your design needs to be thicker than you’d think (at least until your eyes adjust). While designing, you need the shapes to be big, to be able to work on them. You’re seeing them as they would be used for display. Display typefaces can get away with being thinner (or thicker) than text faces, because the eyes don’t have to work as hard to make out their features. But when you go to test or use the design as text, it looks quite different.

It’s still a tricky thing, and individual widths in a font will still require manual testing. I don’t know if there’s a good rule or way to decide on a number. The one I’m making uses roughly a tenth of the height as its standard width. Some of the fonts I’ve looked at seem to vary more than others, and I don’t know if the tenth is a good rule or not (for sans fonts; serif fonts tend to have width variations on different strokes, so there are probably two or three widths you’d need to balance against each other).

In order to test the weight (and other features) I went with LaTeX, which has a fontsmpl (font sample) package that helped a lot. I can make changes, generate the output font file, and re-render the PDF all with minimal effort (though I could write a shell script to make it even more minimal). That’s the second pain point: testing the font.
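For reference, a minimal test document along those lines might look like the following (the font file name is a placeholder for whatever FontForge generates; the fontsmpl package provides the \fontsample command, and fontspec requires XeLaTeX or LuaLaTeX):

```latex
\documentclass{article}
\usepackage{fontsmpl}     % provides \fontsample
\usepackage{fontspec}     % lets us load the generated font file directly
\setmainfont{myfont.otf}  % placeholder: the file FontForge generated
\begin{document}
\fontsample               % alphabets, digits, punctuation, accents
\end{document}
```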

The main benefit of using a PDF to test is that I can guarantee the output is the current state of the generated font. Ideally I would do most of my testing in Firefox on the web. I do some of that, using a custom stylesheet I can toggle on and off, but Firefox does not reload the whole font every time it changes on disk. But it does, at some point, reload parts of it. So if you have Firefox open to a page using your font, and you change it, depending on how you change it, it may do some strange things with what it displays after some delay.

Showing improper rendering of a partially-reloaded font in Firefox.
A simple calendar page with the words “Hello world!” selected yet not visible (other than a weird artifact) at the top due to the weird font reload behavior with Firefox. Also, the fours disappeared.

I believe it has to do with Firefox reloading the shape data of glyphs but not refreshing the glyph tables (or the equivalent for however the font files work internally). So if you add or remove glyphs and regenerate, Firefox will (after some unknown delay) load some of the new data, and the result can be a weird garbled mess: wrong characters, inverted rendering (where the bowl’s counter, its white space, is filled), and other things I forgot or suppressed the memory of.

To counter that, I try to test in Firefox only once; if I regenerate, I either stop testing with it or restart the browser first. (You could generate the font under another name and switch which name the browser uses, but I haven’t wanted to do all that.)

Shows the changes in the font between an early version and a recent one.
The third paragraph above, shown twice. On top is an early version, with the bottom showing the improvements made over about a month.

The other risk with testing in the browser is that this here web is full of typos (particularly social media). Errant spaces are the worst. You think you have an overlooked kerning issue, only to find out someone put a space there. You have to be skeptical when testing on social sites like Twitter: it may not be the font, it may well be the tweet. But social media yields solid variety in testing. People put words in all caps, and you get good coverage of numbers and symbols.

One of the biggest mistakes I made was trying to do some things too soon. It helped in learning, but it also caused (and continues to cause) a lot of reworking. Leaving kerning tables alone, not attempting an italic version, small caps, or a bold version, and not worrying too much about composite/accented glyphs would have made the initial font creation a lot less interesting, but after going back to fix things a few times, I’m trying not to mess with them until the basic version is more finalized.

Kerning by classes is great, but it takes some trial and error to figure out good classes. Also, FontForge may overkern certain pairs if you use the automatic kerning. I haven’t disabled it, but that does mean going back and checking everything. (In general, FontForge has a lot of things it can do for you semi-automatically, after which you’ll need to double-check and clean up or improve its results. In my experience this is a bit clumsy, yielding a mix of excellent and baffling results.)

One of the things I eventually learned was that it’s much easier to check the box in Glyph Info that lets you create overlapped shapes. This lets you focus on getting individual strokes looking good rather than trying to figure out how the combination should work. But you have to turn that setting, “Mark for Unlink, Remove Overlap before Generating”, on for each glyph that uses it, plus for any glyph that references an overlapping glyph. Even then, how they overlap may need tweaking to avoid problems with overlapping hints. (The quickest way to enable the setting seems to be the context menu when validation errors pop up during font file generation.)

In some cases, like A, you might only have a bar overlapping an inverted V shape. But for others, like X, it’s easier to build from a pair of slanted lines for the main strokes than to get their crossing right using a single crossed shape. If you want to change the width or angle of strokes you’d draw by overlapping, it’s much less trouble if they’re two separate shapes than if they’re part of a more complex single shape.

And speaking of X, use a lot of guides. One for the x-height (the height of a lowercase x; the dotted line on those grade-school handwriting forms). Another at what I call the curved x-height: lowercase letters with curved tops, like a, b, and c, should sit slightly higher than the x-height (known as overshoot). Another for the curved bottom, and another for the curved top of capitals. (The top of the normal bounding area is already marked with a built-in guide.) There are probably more I should use, like marking the en-width and em-width? Shrug.

Wikipedia: “Typeface anatomy” is a good place to learn some of the terms used to describe various parts of glyphs. Hovering over a glyph in FontForge provides some useful information, including the Unicode indexes for related characters (and in the case of composites, the ones that it comprises).

But there’s too much to know, and a lot of it is down to taste. My current goal is for it to be good enough to (eventually) use on this site. (That will probably require subsetting the accented characters and so on that would otherwise go unused out into a separate file, to save size. We’ll see.)

Doing a Typography

So many letters, someone should write a song to help remember them all.

I’ve been working on a new typeface. The filesystem shows the last time I made one was 2014. The one I made before that, I’m not sure when it was, but no later than 2004. Not a skill I’ve kept up with too much, but it’s always an interesting challenge.

There’s a paradox in recreating the alphabet. You have to stick to the known forms, so that readers won’t be confused, but you also have to find some way to make it different enough to be appealing. You want the variations to be consistent between letters, to give your creation a unique feel, but they should be subtle enough to make the font feel consistent with others (particularly if replacement characters are needed; that is, if you aren’t providing full coverage of the thousands of glyphs you could).

It’s a low-stress activity. You see your progress with every letterform completed. You work on one letter at a time. You can tweak them endlessly.

It’s a low-knowledge activity. I use FontForge, which does a lot of work for me. I don’t understand how manual hinting would work, but it has auto-hinting. I don’t know how to control for all the little problems I cause, but it has a “Find Problems” feature that will fix most of them for me.

The website Design with FontForge has some good information to get started, and it’s written with a you-can-do-it tone. I don’t believe that existed the last time I worked on a font. It really is something more people should give a try.


When starting a font, the first question you have to answer is, “What kind of font?” It could be a serif font, or maybe sans-serif. It could be monospace, or it could be variable width. Or it could be a display font—one meant for short bursts of text (signs, headings, like that), not suitable for paragraphs and long reading.

The second question is, “How much coverage should it have?” If you want a usable typeface, you’re talking about 95 characters: the alphabet twice (for uppercase and lowercase), ten digits, roughly 32 other punctuation and special characters available on a plain US keyboard, plus space. If you want to go all-out, you can create a true italic version, which isn’t simply an oblique rendering of the normal version but features its own glyphs. You can add ligatures and kerning and wade deep into the Unicode Basic Multilingual Plane or even beyond it. (Don’t believe I’ve ventured beyond Latin-1, myself.)

On the other hand, some characters are easier than others. Hyphen and equals share enough similarity that you can get them done quickly, and you can tweak them later if needed. For brackets, braces, and parentheses, making one gets you its partner without much trouble. Uppercase E and uppercase F are usually pretty similar: just chop the lower bar off the E. You can rotate your six to make a nine. If you’re feeling adventurous, you can even rotate your uppercase N to get uppercase Z, which you can tweak into lowercase z as well. Your stop and comma can be reused to make your colon and semicolon.

But outside these quicker ones, it’s a matter of one-at-a-time, building the shape, refining, fixing the flaws.


Things look nicer the fewer points you can use, at least initially. The curves won’t get lumpy, won’t require a lot of fiddling to change. You’ll learn a lot about how little attention you’ve paid over the years to these shapes your eyes have passed over billions of times. The basic scribble of your handwriting is very different from forming the letters as vector shapes.

The good news there is that you can open finished fonts inside of FontForge and look at how others made their letters. I should probably do that more. I could stand to learn a lot from that stored knowledge.

But for now I’m just picking a letter and getting it into a rough shape, then straightening what’s meant to be straight and curving what’s meant to be curved, getting the thicknesses consistent, and then moving on. Once I get my basic coverage done, I’ll come back and work on consistency between letters.

It’s a nice, relaxing artistic experience.