The longer I use consumer computing, the more I’m convinced that it shouldn’t require much knowledge at all. Users are familiar with some aspects of consumer computers while being, on average, ignorant of their underlying nature; those who are interested will learn more about the bits behind the pixels. But it’s clear that we shouldn’t expect the user to know too much, because many won’t.
It’s quite a challenge, and it deserves an example. One that comes to mind is Unicode and emoji. The average user saw something like Wordle and gladly spammed Twitter with a bunch of emoji boxes that made those tweets ugly to the ears of users with screenreaders. And in general, the misuse of emoji and Unicode characters for purposes other than those they’re intended for is harmful to accessibility.
But the average user who just wants to make a cool-looking thing doesn’t know about accessibility. Learning about it is good, and it can lead people to support accessibility-first design not just in computing but in general (which has the knock-on effect of making the world better regardless of abilities). Still, we generally recognize that people will only learn about what they want to. If they find it interesting, they’ll learn, but most people are too busy or too incurious.
So we should design with these ideas in mind:
- Lots of people will want to misuse the system.
- Fewer people will want to abuse the system.
- We need to design to defend against abuse, but allow for misuse.
(The distinction between misuse and abuse is one of malice. Using a fork to stick a note onto a cork board is misuse. Using a fork as a weapon is abuse.)
What about a new MIME type for a screenreader-friendly alternative (text/srfriendly?), where the consuming application requests paste data in both text/plain and (if available) the screenreader-friendly type? The result could be the consuming application getting the collated data to enable the display version, while allowing screenreaders to fall back on the version superior for their purposes.
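As a rough sketch of what that negotiation might look like on the consuming side (the text/srfriendly name and the payload shape here are purely hypothetical, not any real clipboard API):

```python
# Sketch: a paste handler that prefers a hypothetical
# screenreader-friendly clipboard part, falling back to text/plain.

def choose_paste_text(clipboard: dict[str, str], for_screenreader: bool) -> str:
    """clipboard maps MIME types to payloads, like a paste event might expose."""
    if for_screenreader and "text/srfriendly" in clipboard:
        # Assistive tech gets the authored-for-speech version.
        return clipboard["text/srfriendly"]
    # Everyone else gets the ordinary display version.
    return clipboard["text/plain"]

clip = {
    "text/plain": "🟩🟨⬛⬛⬛",
    "text/srfriendly": "Wordle guess: 1 correct, 1 misplaced, 3 wrong.",
}
print(choose_paste_text(clip, for_screenreader=True))
# → Wordle guess: 1 correct, 1 misplaced, 3 wrong.
```

The key property is graceful degradation: if the producing application never supplied the alternative part, everyone still gets text/plain, exactly as today.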
(We should still have alt text as a fallback for images, of course. Indeed, there should be an initiative to let us embed alt text in the image file rather than needing to add it separately. [Actually! See Lexdis 2.0: 27 June 2022: “Exploring the embedding of accessible image descriptions into image metadata”, which links to and quotes from an announcement by the International Press Telecommunications Council (IPTC) that its photo metadata standard now includes fields for textual descriptions. Neat.] It would be more convenient for most people to add the alt text at the point when they create the image than when they embed it. (Though we should still let them provide an alternative alt text at the point of embed. It can be prefilled with the version from the image.))
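That prefill-but-overridable flow is simple to express. A minimal sketch, assuming an image type with an embedded description field (the Image class and field names are invented for illustration; a real implementation would read the IPTC metadata):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Image:
    path: str
    # Stand-in for a description embedded in the file's metadata,
    # as the IPTC photo metadata standard now allows.
    embedded_description: Optional[str] = None

def alt_text_for_embed(image: Image, override: Optional[str] = None) -> str:
    if override is not None:
        return override  # author supplied embed-specific alt text
    # Otherwise prefill from the file's own embedded description.
    return image.embedded_description or ""

photo = Image("cat.jpg", embedded_description="A tabby cat asleep on a windowsill.")
print(alt_text_for_embed(photo))             # → A tabby cat asleep on a windowsill.
print(alt_text_for_embed(photo, "My cat."))  # → My cat.
```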
The addition of a MIME type for applications would help those cases, but contexts where users frequently misuse Unicode should also allow the alternate input to be added, whether by the original user, by tools meant to offer cleaned versions, or through crowd suggestions. The average misuser of Unicode isn’t going to stop. Only by giving the option to misuse, while still filling the gap misuse creates, will we cover all cases.
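To illustrate what one of those cleaning tools might do, here is a toy translator from a Wordle-style emoji row into a spoken-friendly summary. The emoji meanings follow Wordle’s convention; everything else about this tool is invented for the example:

```python
from collections import Counter

# Toy cleaner: summarize a Wordle-style emoji row for screenreaders.
MEANINGS = {"🟩": "correct", "🟨": "misplaced", "⬛": "wrong", "⬜": "wrong"}

def describe_row(row: str) -> str:
    counts = Counter(MEANINGS.get(ch, "other") for ch in row)
    # Emit counts in a fixed, predictable order, skipping zeros.
    parts = [f"{counts[m]} {m}" for m in ("correct", "misplaced", "wrong") if counts[m]]
    return ", ".join(parts)

print(describe_row("🟩🟨⬛⬛⬛"))  # → 1 correct, 1 misplaced, 3 wrong
```

A real tool would also handle the other common offenders (fancy “𝓯𝓸𝓷𝓽” letters, box-drawing art, and so on), but the shape is the same: keep the pretty version for display, generate a sane version for speech.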