The cryptocoin community wants to make Web 3.0 some kind of distributed-systems push where everything self-assembles and auto-contracts and has to depend on proof-of-whatever in order to function. It's not exactly clear what this has to do with the web itself, but they're trying to call it Web 3.0.
With the web-versioning meme resurgent, it's a good time to consider what Web 1.0 was, what Web 2.0 was, and what Web 3.0 could be.
The original web, never actually versioned, had design limits imposed by the technology and computing speeds of the day. It was full of proprietary extensions and oddities like marquee tags and embedded MIDIs.
At this stage, many companies were still trying to get online at all. Websites still had “under construction” and “pardon our progress” type stuff. There was the Blue Ribbon Campaign to keep free speech online.
But the applications and systems of Web 1.0 were limited in certain respects. They were static, often hand-written rather than generated by a server. We were served simpler pages for a simpler time.
The utility of the term Web 2.0 was that it pointed to user-facing changes. Under the hood there were other changes in how the web works, but they were more gradual, and they didn't have the same kind of surfer-facing impact.
The rise of web applications is basically forgotten. It has completely blended into our brains. Nobody thinks twice when new content appears in a feed, or when mail moves around in a web interface without intervening page loads.
As we try to formulate what Web 3.0 could be, it's important to recognize the kind of shift the move from Web 1.0 to Web 2.0 represented. It was a change that helped both sides: users got nicer pages, and developers could send small chunks rather than rebuilding the entire page with every action.
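The difference can be illustrated with a toy sketch (all page content and function names here are hypothetical, not any real site's API): a Web 1.0 interaction rebuilds and resends the whole document, while a Web 2.0 interaction ships only the changed fragment and patches it into the page already in the browser.

```python
# Toy illustration of full-page reload vs. partial (AJAX-style) update.
# The page regions and payloads are invented for illustration.

PAGE = {
    "header": "<h1>Inbox</h1>",
    "mail_list": "<ul><li>old mail</li></ul>",
    "footer": "<p>(c) 1999</p>",
}

def web10_reload(new_mail_list):
    """Web 1.0 style: the server rebuilds and resends the entire page."""
    page = dict(PAGE)
    page["mail_list"] = new_mail_list
    return "".join(page.values())  # the whole document travels again

def web20_patch(page, fragment):
    """Web 2.0 style: only the changed fragment travels over the wire."""
    page["mail_list"] = fragment  # patch one region in place, no reload
    return page

full_document = web10_reload("<ul><li>new mail</li></ul>")
patched_page = web20_patch(dict(PAGE), "<ul><li>new mail</li></ul>")
```

In the first case the header and footer are retransmitted even though nothing about them changed; in the second, they never leave the browser.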
The Distributed Web 3.0
The cryptocoin community's version is a distributed system. Different parts of current applications could be handled by independent services, with coordination happening by magic and smart contracts. Smart contracts are a way for computers to buy and sell resources without human intervention.
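As a loose sketch of the idea (the class, names, and rules below are invented for illustration, not taken from any real contract platform), a smart contract can be modeled as an escrow that releases payment automatically once an agreed condition is recorded:

```python
# Minimal escrow-style "smart contract" sketch. Two automated agents
# (buyer and seller) interact with it; no human approves the payout.

class ResourceContract:
    def __init__(self, buyer_funds, price):
        self.buyer_funds = buyer_funds
        self.price = price
        self.escrow = 0
        self.delivered = False
        self.paid_out = 0

    def fund(self):
        """Buyer's agent locks the agreed price into escrow."""
        if self.buyer_funds >= self.price:
            self.buyer_funds -= self.price
            self.escrow = self.price

    def deliver(self):
        """Seller's agent records delivery; settlement fires automatically."""
        self.delivered = True
        self._settle()

    def _settle(self):
        """Release escrow to the seller once delivery is recorded."""
        if self.delivered and self.escrow:
            self.paid_out = self.escrow
            self.escrow = 0

contract = ResourceContract(buyer_funds=100, price=30)
contract.fund()
contract.deliver()
```

The point of the pattern is that neither side can renege after the other performs: the funds are locked before delivery, and the payout follows delivery mechanically.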
Let's say you break the system into a few parts:
- content storage and distribution
- translation
- UI or design
- advertising
The user wants to look at some piece of content. They request it from a distribution system, which sees the user speaks a different language than the content. It invokes a translation service first, before requesting a design appropriate for the user’s preferences (print or mobile or desktop or VR). Finally the design site-agent asks for appropriate advertising given the user’s location, language, whatever other factors. The user gets back their hyper-custom piece of content.
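That hand-off chain could be sketched as independent services composed in sequence. All of the service names, lookup tables, and payloads below are hypothetical stand-ins for what would, in the distributed vision, be separate coordinating agents:

```python
# Hypothetical pipeline: distribution -> translation -> design -> advertising.
# Each "service" is modeled as a plain function for illustration.

def fetch_content(content_id):
    """Distribution service: look up the requested content."""
    store = {"article-1": {"text": "Hallo Welt", "lang": "de"}}
    return store[content_id]

def translate(content, target_lang):
    """Translation service (stand-in lookup table, not real translation)."""
    table = {("Hallo Welt", "en"): "Hello World"}
    if content["lang"] != target_lang:
        return {"text": table[(content["text"], target_lang)], "lang": target_lang}
    return content

def apply_design(content, medium):
    """Design service: wrap the content for the user's medium."""
    templates = {"mobile": "<main>{}</main>", "print": "{}"}
    return templates[medium].format(content["text"])

def add_ads(page, region):
    """Advertising service: append an ad chosen by region."""
    ads = {"EU": "<aside>local ad</aside>", "US": "<aside>other ad</aside>"}
    return page + ads[region]

def serve(content_id, user):
    """Coordinate the independent services into one response."""
    content = fetch_content(content_id)
    content = translate(content, user["lang"])
    page = apply_design(content, user["medium"])
    return add_ads(page, user["region"])

result = serve("article-1", {"lang": "en", "medium": "mobile", "region": "EU"})
```

In a monolithic application these are just function calls; the Web 3.0 pitch is that each step would be a separately operated, separately compensated service, which is where the coordination and contract machinery comes in.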
It's hard to see what a smart-contract, proof-of-whatever web would really deliver to users over our current systems. That's not to say that distributed applications aren't a nice idea where they make sense. It's simply that the idea doesn't seem to have the same kind of win-times-N benefits that the previous revolution did.
Security, reproducibility, caching: all sorts of questions arise with a distributed web. They may be solvable, but even if they are solved, they don't provide an AJAX-style gain to users.
Similar questions apply to distributed systems for other purposes. How might a distributed equivalent of Amazon operate, for example? It would need some kind of curation mechanism, which even Amazon is already pretty bad at. So where would the curation mechanism for a distributed marketplace come from? Any reasonable approach could be adopted regardless of whether it applied to a distributed site or a centralized one like Amazon.
In some ways, due to technological maturity and, more importantly, business and social adoption, there may never be a true Web 3.0. Seeing how the tracking and advertising industries have metastasized, the web itself is less inviting all the time. One might look to Gemini, a successor to the old Gopher protocol, rather than try to breathe fresh life into an increasingly dystopian web.