
The Distributed Would-be Web 3.0

The transition from Web 1.0 to Web 2.0 should inform any future upgrade.

The cryptocoin community wants to make Web 3.0 some kind of distributed-systems push where everything self-assembles and auto-contracts and has to depend on proof-of-whatever in order to function. I’m not exactly sure what it has to do with the web itself, but they’re trying to call it Web 3.0.

With the web-version meme resurgent, it’s a good time to consider what Web 1.0 was, what Web 2.0 was, and what Web 3.0 could be.

Web 1.0

Actually unversioned, the original web had design limits imposed by the technology and computing speed of the day. It was full of proprietary extensions and oddities like blink and marquee and embedded MIDIs.

At this stage, many companies were still trying to get online at all. Websites still had “under construction” and “pardon our progress” type stuff. There was the Blue Ribbon Campaign to keep free speech online.

But the applications and systems of Web 1.0 were limited in certain respects. Pages were static, often hand-written rather than generated by a server. We were served simpler pages for a simpler time.

Web 2.0

In one word, Web 2.0 was about AJAX, which was actually six words: asynchronous JavaScript and extensible markup language. Rather than every user interaction triggering a full page load, it would rewrite parts of the document on the fly.
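To make the shift concrete, here is a minimal sketch of the pattern in modern JavaScript/TypeScript terms (the original AJAX used XMLHttpRequest and XML rather than fetch and JSON, but the idea is the same). The "/api/messages" endpoint and the "inbox" element are hypothetical stand-ins for whatever a real application would use.

    // Fetch a small chunk of data asynchronously and rewrite one part of
    // the page in place, with no full page load.
    // The endpoint and element id below are hypothetical placeholders.
    async function refreshInbox(): Promise<void> {
      const response = await fetch("/api/messages");     // asynchronous request
      const messages: string[] = await response.json();  // a small chunk, not a whole page
      const inbox = document.getElementById("inbox");
      if (inbox !== null) {
        // Rewrite just this region of the document on the fly.
        inbox.innerHTML = messages.map(m => `<li>${m}</li>`).join("");
      }
    }

That is the whole trick: the server sends a fragment, and the script splices it into the page the user is already looking at.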

The utility of the term Web 2.0 was that it pointed to user-facing changes. Under the hood there were other changes in how the web works, but they were more gradual, and they didn’t have the same kind of surfer-facing impact.

The rise of web applications is basically forgotten. It has completely blended into our brains. Nobody thinks twice when new content appears in a feed, or when mail gets moved around in a web interface, without intervening page loads.


As we try to formulate what Web 3.0 could be, it’s important to recognize the kind of shift the move from Web 1.0 to Web 2.0 represented. It was a change that helped both sides: users got nicer pages, and developers could send small chunks of data rather than rebuild the entire page with every action.

The Distributed Web 3.0

The cryptocoin community’s version is a distributed system. Different parts of current applications could be handled by independent services, with coordination happening by magic and smart contracts. Smart contracts are a way for computers to buy and sell resources without human intervention.

Let’s say you break the system into a few parts:

  • Content
  • Language
  • UI or design
  • Advertising

The user wants to look at some piece of content. They request it from a distribution system, which sees that the user speaks a different language than the one the content is written in. It invokes a translation service first, before requesting a design appropriate for the user’s preferences (print or mobile or desktop or VR). Finally the design site-agent asks for appropriate advertising given the user’s location, language, and whatever other factors apply. The user gets back their hyper-custom piece of content.
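As a rough sketch (and only a sketch), the hand-off might look something like the following. Every interface and service name here is invented for illustration; in the cryptocoin vision, each call would be an independent agent negotiated and paid via smart contracts rather than a plain function call.

    // Hypothetical services for the four parts listed above; none of these
    // interfaces correspond to any real system.
    interface UserPrefs { language: string; device: "print" | "mobile" | "desktop" | "vr"; location: string; }
    interface Content   { language: string; body: string; }

    interface ContentService     { fetch(id: string): Promise<Content>; }
    interface TranslationService { translate(body: string, to: string): Promise<string>; }
    interface DesignService      { layout(body: string, device: UserPrefs["device"]): Promise<string>; }
    interface AdService          { select(location: string, language: string): Promise<string>; }

    // Assemble a hyper-custom piece of content: content, then language,
    // then UI/design, then advertising.
    async function assembleContent(
      id: string,
      user: UserPrefs,
      svc: { content: ContentService; translate: TranslationService; design: DesignService; ads: AdService },
    ): Promise<string> {
      const raw = await svc.content.fetch(id);                         // content
      const body = raw.language === user.language
        ? raw.body
        : await svc.translate.translate(raw.body, user.language);      // language
      const page = await svc.design.layout(body, user.device);         // UI or design
      const ad = await svc.ads.select(user.location, user.language);   // advertising
      return page + "\n" + ad;                                         // the assembled result
    }

Written as ordinary function calls it looks trivial; the open question is what turning each call into a separately-owned, separately-paid service actually buys the user.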


It’s hard to see what a smart-contract, proof-of-whatever web would really deliver users over our current systems. That’s not to say distributed applications aren’t a nice idea where they make sense. It’s simply that the approach doesn’t seem to have the same kind of win-times-N benefit that the previous revolution did.

Security, reproducibility, caching: all sorts of questions come up with a distributed web. They may be solved, but even if they are, they don’t provide an AJAX-style gain to users.

There are similar questions for other distributed systems. How would a distributed equivalent of Amazon operate, for example? It would need some kind of curation mechanism, something Amazon is already pretty bad at. But where would the curation mechanism for a distributed marketplace come from? Any reasonable approach could be adopted regardless of whether it applied to a distributed site or to a centralized one like Amazon.

In some ways, due to technological maturity and, more importantly, business and social adoption, there may never be a true Web 3.0. Seeing how the tracking and advertising industries have metastasized, the web itself is less inviting all the time. One might look to Gemini, a successor to the old Gopher protocol, rather than try to breathe fresh life into an increasingly dystopian web.

The Evolution of Writing on the Web

Experimental post about how writing should evolve to fit the modern web surfer.

[Image: a non-functional wire sculpture of a toilet, by CCRI Artdepartment.]

Why New Styles are Needed

  • Articles should be swifter for the web.
  • People want to read less as they have more to read.
  • Top ten lists and bullet-point articles make it easier to get through.
  • You can skip around between parts, pick up where you left off.

How Did We Write?

  • Old writing was based on a longer attention span.
  • It had deeper stylistic integrity that the form afforded.
  • Structures of sections of paragraphs of sentences of words.

The New Style, Evolved from the Old

  • Headings of bullets of sentences of words.
  • Pictures to anchor each part (not shown here).
  • Barer sentences, with less complexity.
  • Like a PowerPoint put through a wringer.

The Old Style Lives On

  • Books and periodicals, along with some traditionalist sites.
  • Nice for articles you want to go deeper into.
  • Side-by-side old and new allows for reader choice.

Benefits of the New Style

  • Students learn outline forms more easily.
  • Reading comprehension goes up for the new form.
  • Discussion is simplified through easily-referenced sentences?
  • Improves collaborative editing and creation.

Downsides of the New Style

  • Students dislike the old style even more than they already did.
  • Comprehension of the old style diminishes further.
  • Discussions are based on less nuance (Fox Newsier discussions prevail).

I Dunno.

  • I’m curious whether this sort of writing style should become more dominant.
  • I think it has some benefits for the way people use the modern web.
    • Easier to read casually.
    • Possibly more accessible to AI.
    • Less opportunity for verbosity.
  • So I wrote this in a version of what the new style may be, to see what it’s like.
  • Oy vey. I’m hoping that there can be some balance. I do think language and writing styles need to evolve to fit the needs of readers, and long-winded writing can be a pain to read (especially as the number of things to read grows), but let’s hope it won’t be a bullet-point-riddled future.
  • One promising alternative is that AI will allow for real-time reorganization/editing of long texts to elicit the parts the reader is most interested in.

What is a Website?

Questioning when a website (in this case Wikipedia) is more than a mere website.

The question brings to mind real-world places, like the Grand Canyon, libraries, street corners you know, museums. And institutions, great institutions (in the abstract, anyway) like the U.S. Congress and the U.S. Supreme Court, and grand institutions of learning like M.I.T. and Harvard University.

We have a certain outlook for real-world places that root abstract concepts. But on the web we still refer to the greats as mere websites.

Wikipedia is a website, yes. But is it not one of several behemoths, great beasts of the modern netscape (err, not the company, obviously, though they did loom in their day)? Great institutions with all the signs of the lasting legacy of the Harvards and M.I.T.s and so on.

There is a certain leveling and democracy in Alice’s Blog being on the same footing as a Wikipedia. But at the same time, it seems we should be looking for new names for great Internet-based institutions. We should be able to call Wikipedia a website, but also call it something that evokes its importance and lasting nature.

We have another term, web application. It fits certain sites. But when I think of an application, I think of a shell that provides functionality; I don’t think of the application’s data as the thing it provides. If Wikipedia is an application that provides encyclopedic articles, well, where’s the competing application that relies on the same data set?

And there can be, don’t get me wrong. You can download Wikipedia’s database and write an application (web-based or platform-based) and pull those articles up (you can also download MediaWiki, the software powering Wikipedia). Others have come up with some innovative ways to, e.g., pull articles over DNS. The main application-like part of Wikipedia is its editing functionality.
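As a small illustration of the articles-as-the-product idea, here is a sketch that pulls an article summary without touching the Wikipedia website at all, using Wikipedia’s public REST API. The exact endpoint shape is an assumption based on the documented Wikimedia REST API and may change; check the current documentation before relying on it.

    // Fetch the plain-text summary of a Wikipedia article directly,
    // no browser or wiki frontend involved.
    // The URL below reflects the Wikimedia REST API as documented at the
    // time of writing; treat it as an assumption.
    async function articleSummary(title: string): Promise<string> {
      const url = `https://en.wikipedia.org/api/rest_v1/page/summary/${encodeURIComponent(title)}`;
      const res = await fetch(url);
      if (!res.ok) throw new Error(`Request failed: ${res.status}`);
      const data = await res.json() as { extract?: string };
      return data.extract ?? "(no summary available)";
    }

    // Example: articleSummary("Sudoku").then(console.log);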

So maybe Wikipedia is both a website and a web application. At least in part. But that still doesn’t account for the community behind it. Or that its most essential nature is as a repository of articles.

You could try web portal or web property or web destination, or maybe some other term. But I think an important step, one that will eventually happen, is to drop the web. At some point the articles of Wikipedia will be the headliner, and whatever built-in editing and display they want on the web will be the website. There may be platform-based alternatives (or alternative web applications) to provide the editing and display.

This is already partially true for how Wikipedia and the other sites of the Wikimedia Foundation (the organization behind Wikipedia) handle things like images. Image files and other media embedded in Wikipedia actually live on the Wikimedia Commons site and can be reused across language versions and on other Wikimedia Foundation sites.

But that trend can be extended to other uses, and once enough uses for a system exist, the web frontend is truly a frontend rather than the raison d’être for the backend. It reminds me of the story behind the GNOME Sudoku application: apparently the author wrote a solver for Sudoku puzzles, and an interface grew up around it. Sometimes that process works in the other direction.