
The Distributed Would-be Web 3.0

The transition from Web 1.0 to Web 2.0 should inform any future upgrade.

The cryptocoin community wants to make Web 3.0 some kind of distributed-systems push where everything self-assembles and auto-contracts and has to depend on proof-of-whatever in order to function. Not exactly sure what it has to do with the web itself, but they’re trying to call it Web 3.0.

With the web-version meme resurgent, it’s a good time to consider what Web 1.0 was, what Web 2.0 was, and what Web 3.0 could be.

Web 1.0

Actually unversioned, the original web had design limits imposed by the technology and computing speed of the day. It was full of proprietary extensions and oddities like blink and marquee and embedded MIDIs.

At this stage, many companies were still trying to get online at all. Websites still had “under construction” and “pardon our progress” type stuff. There was the Blue Ribbon Campaign to keep free speech online.

But the applications and systems of Web 1.0 were limited in certain respects. They were static, often not built by the server but hand-written. We were served simpler pages for a simpler time.

Web 2.0

In one word, Web 2.0 was about AJAX, which was actually six words: asynchronous JavaScript and eXtensible Markup Language. Rather than every user interaction triggering a page load, the page could rewrite parts of the document on the fly.
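The pattern can be sketched in a few lines of modern JavaScript (using fetch rather than the XMLHttpRequest of the AJAX era; the endpoint and element id here are hypothetical placeholders):

```javascript
// Render a list of items as an HTML fragment (a pure function,
// kept separate from the network and DOM plumbing).
function renderItems(items) {
  return items.map((item) => `<li>${item}</li>`).join("");
}

// Fetch new feed entries and splice them into the page without a
// full reload. "/api/feed" and "feed-list" are made-up names for
// illustration; any JSON endpoint and container element would do.
async function refreshFeed() {
  const response = await fetch("/api/feed");
  const items = await response.json();
  document.getElementById("feed-list").innerHTML = renderItems(items);
}
```

The key point is the last line: only one element of the document changes, while the rest of the page stays put.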

The utility of the term Web 2.0 was that it pointed to user-facing changes. Under the hood there have been other changes in how the web works, but they were more gradual, and they didn’t have the same kind of surfer-facing impact.

The rise of web applications is basically forgotten. It has completely blended into our brains. Nobody thinks twice when new content appears in a feed or when mail moves around in a web interface without intervening page loads.

As we try to formulate what Web 3.0 could be, it’s important to recognize the kind of shift the move from Web 1.0 to Web 2.0 represented. It was a change that helped both sides: users got nicer pages, and developers could send small chunks rather than rebuild the entire page with every action.

The Distributed Web 3.0

The cryptocoin community’s version is a distributed system. Different parts of current applications could be handled by independent services, with coordination happening by magic and smart contracts. Smart contracts are a way for computers to buy and sell resources without human intervention.

Let’s say you break the system into a few parts:

  • Content
  • Language
  • UI or design
  • Advertising

The user wants to look at some piece of content. They request it from a distribution system, which sees that the user speaks a different language than the content. It invokes a translation service first, then requests a design appropriate for the user’s preferences (print or mobile or desktop or VR). Finally, the design agent asks for appropriate advertising given the user’s location, language, and whatever other factors apply. The user gets back their hyper-custom piece of content.

It’s hard to see what a smart-contract, proof-of-whatever web would really deliver users over our current systems. That’s not to say distributed applications aren’t a nice idea where they make sense. It’s simply that they don’t seem to offer the same kind of win-times-N benefits that the previous revolution did.

Security, reproducibility, caching: all sorts of questions come up with a distributed web. They may be solved, but even if they are, they don’t provide an AJAX-style gain to users.

Similar questions arise for other distributed systems. How would a distributed equivalent of Amazon operate, for example? It would need some kind of curation mechanism, something Amazon itself is already pretty bad at. But where is the curation mechanism for a distributed marketplace? Any reasonable approach could be adopted regardless of whether it applied to a distributed site or a centralized one like Amazon.

In some ways, due to technological maturity and, more importantly, business and social adoption, there may never be a true Web 3.0. Seeing how the tracking and advertising industries have metastasized, the web itself is less inviting all the time. One might look to Gemini, a successor to the old Gopher protocol, rather than try to breathe fresh life into an increasingly dystopian web.

What is a Website?

Questioning when a website (in this case Wikipedia) is more than a mere website.

The question brings to mind real-world places: the Grand Canyon, libraries, street corners you know, museums. And institutions, great institutions (in the abstract, anyway) like the U.S. Congress and the U.S. Supreme Court, and grand institutions of learning like M.I.T. and Harvard University.

We hold a certain regard for real-world places that anchor abstract concepts. But on the web we still refer to the greats as mere websites.

Wikipedia is a website, yes. But is it not one of several behemoths, great beasts of the modern netscape (err, not the company, obviously, though they did loom in their day)? Great institutions bearing all the signs of the lasting legacy of the Harvards and M.I.T.s and so on.

There is a certain leveling and democracy in Alice’s Blog being on the same footing as a Wikipedia. But at the same time, it seems we should be looking for new names for great Internet-based institutions. We should be able to call Wikipedia a website, but also call it something that evokes its importance and lasting nature.

We have another term, web application. It fits certain sites. But when I think of an application, I think of a shell that provides functionality. I don’t associate the data of the application with being what it provides. If Wikipedia is an application that provides encyclopedic articles, well, where’s the competing application that relies on the same data set?

And there can be, don’t get me wrong. You can download Wikipedia’s database and write an application (web-based or platform-based) and pull those articles up (you can also download MediaWiki, the software powering Wikipedia). Others have come up with some innovative ways to, e.g., pull articles over DNS. The main application-like part of Wikipedia is its editing functionality.
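For instance, the MediaWiki Action API that Wikipedia exposes lets any client, web-based or not, pull article text without touching Wikipedia’s own frontend. This sketch only builds the query URL; the article title is an arbitrary example:

```javascript
// Build a MediaWiki Action API URL asking for a plain-text extract
// of one article. The same endpoint serves any client, which is the
// sense in which the articles, not the website, are the product.
function articleExtractUrl(title) {
  const params = new URLSearchParams({
    action: "query",
    prop: "extracts",
    titles: title,
    explaintext: "1",
    format: "json",
  });
  return `https://en.wikipedia.org/w/api.php?${params}`;
}

// Fetching it is then one call in any runtime with fetch, e.g.:
// const data = await fetch(articleExtractUrl("Sudoku")).then((r) => r.json());
```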

So maybe Wikipedia is both a website and a web application. At least in part. But that still doesn’t account for the community behind it. Or that its most essential nature is as a repository of articles.

You could try portal or property or destination after web. Maybe some other term. But I think an important step, one that will eventually happen, is to drop the web. That at some point the articles of Wikipedia will be the headliner, and whatever built-in editing and display they want on the web will be the website. There may be platform-based alternatives (or alternative web applications) to provide the editing and display.

This is already partially true for how Wikipedia and the other Wikimedia Foundation (the organization behind Wikipedia) sites handle things like images. Image files and other media embedded in Wikipedia actually live on Wikimedia Commons and may be reused across language versions and on other Wikimedia Foundation sites.

But that trend can be extended to other uses, and once enough uses for a system exist, the web frontend is truly a frontend rather than the raison d’être for the backend. It reminds me of the story behind the GNOME Sudoku application; apparently the author wrote a solver for Sudoku puzzles, and an interface grew up around it. Sometimes that process works in the other direction.