In many games, the first thing the game does is teach you how to play it. You walk across a flat surface until you meet an obstacle. At this point, you’ve learned to walk. The obstacle teaches you to jump. Next, maybe there’s an enemy you have to deal with. When the game introduces a new mechanic, say a bounce pad, it simply shows you one: you jump onto it and get launched into the air. By teaching through interaction and constraints, the game lets you learn quickly. No manual needed.
This kind of learn-by-doing practice could be used in a variety of products. Learning to navigate almost any device’s interface, for example, could happen through this design pattern. The user already understands the idea of device navigation, but how to navigate, which buttons to use, may not be obvious. Instead of asking the user to read the manual, teaching through direct use would be simpler, and it would also encode the navigation map in the user’s memory through practice, far more effectively than reading about it ever could.
If a website introduces a new feature, like reporting offensive comments, it could offer readers an example comment. “This is a training comment you could find offensive. Click the flag icon to try reporting it.” You click the flag. It shows a list of reasons. You pick one, click Report. Done. You’ve learned how to report a bad comment.
This may seem trivial, but most complex websites lack documentation or manuals. And who would want to read one anyway? Rather than writing documentation, this learn-by-doing design would stay up to date automatically, as long as the system treats the dummy items as real ones in most respects. If the developers later add a CAPTCHA or some other step to the comment-reporting flow, the tutorial would pick it up automatically, and nobody would have to update the (non-existent) documentation to match the changes.
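To make the “dummy items are real items” idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `Comment` class, `report_comment` function, and `REPORT_REASONS` list are illustrative names, not from any real site. The point is that the training comment flows through the exact same reporting pipeline as a real one, so any later change to that pipeline reaches the tutorial for free.

```python
from dataclasses import dataclass, field

# Hypothetical reasons a reader can pick when reporting a comment.
REPORT_REASONS = ["spam", "harassment", "off-topic"]

@dataclass
class Comment:
    text: str
    is_training: bool = False  # training comments differ only by this flag
    reports: list = field(default_factory=list)

def report_comment(comment: Comment, reason: str) -> str:
    """One pipeline for every comment, real or training.

    If a CAPTCHA or any other step is added here later, the tutorial
    inherits it automatically, because it runs the same code path.
    """
    if reason not in REPORT_REASONS:
        raise ValueError(f"unknown reason: {reason}")
    comment.reports.append(reason)
    if comment.is_training:
        return "Nice! You've learned how to report a comment."
    return "Thanks, a moderator will review this comment."

# The tutorial just hands the user a flagged example comment:
training = Comment(
    "This is a training comment you could find offensive.",
    is_training=True,
)
print(report_comment(training, "harassment"))
```

The only branch on `is_training` is the final message, which is what keeps the tutorial honest: it exercises the real system rather than a mock-up of it.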
As computing matures, we’ll soon get to the point where the system learns how you want to use it as much as you learn how to use the system. This kind of built-in learning will be essential for that as well. Some games already detect whether a player prefers inverted mouse controls by asking them to “look up”: whichever way they move the mouse becomes up.
That’s also a kind of learn-by-doing. But it’s the computer learning about the user by having the user perform the task. It’s not like the computer is going to go read the user’s blog and see if they wrote about how they prefer inverted mouse controls. Learn-by-doing is low-friction in most cases. (If you accidentally move the mouse the wrong way, you’ll have to go find the setting and fix it.)
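The “look up” trick boils down to reading the sign of one mouse movement. A tiny sketch, assuming the common screen convention that vertical mouse deltas grow downward; the function name and threshold are made up for illustration:

```python
def detect_inverted_look(mouse_dy: float) -> bool:
    """The game says "look up" and samples the vertical mouse delta.

    With screen coordinates growing downward, pushing the mouse forward
    gives a negative dy (normal controls), while pulling it back gives a
    positive dy (the inverted preference). Return True for inverted.
    """
    return mouse_dy > 0

# Pushing the mouse forward -> normal controls.
print(detect_inverted_look(-4.0))
# Pulling the mouse back -> inverted controls.
print(detect_inverted_look(3.5))
```

A real game would likely sample several frames and ignore near-zero movement before committing, which also softens the accidental-wrong-way failure mode mentioned above.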
But there’s another question to ask about learn-by-doing for computing in the future: how much is too much? How much should a system try to adapt to the user, and how much should the user need to learn a system? There are benefits in both directions, and some balance can be found, but it will take effort. The question isn’t unlike “how clean is too clean?” in the face of needing to train immune systems and avoid allergies from lack of exposure.