What happened to Web 3.0 — the semantic web?


...or "Why isn't the future happening"?

Despite over a decade of discussion, "Web 3.0" -- the next model for the Internet -- has not happened and is not about to happen any time soon. Why?

Concepts for the social and semantic web were created early

Broader access to the Internet came around 1993. This first incarnation of the web was largely read-only, with fairly simple mark-up, little dynamism and browser incompatibilities which propelled use of Java Applets and Flash. Notable uses of the technology were directories, portals and search engines.

By 1999 the term “social web” (or Web 2.0) had been coined. Conceptually, it represented a fairly simple enhancement over the first iteration of the web. The idea was that sites would be read-write: users could contribute to pages, and the interface and user experience delivering this would be rich. This technology could be used to deliver personalised content, with users interacting with each other and with content creators.

In parallel with the social web, the groundwork was being laid in 1999 for the “semantic web” (or Web 3.0). Every device would be connected to the Internet, and devices and sites would expose their data in a structured way to be “mashed up” and reused elsewhere. This would form an Internet of Things, where information could be simply, reliably and automatically extracted and applied to different contexts. Mobile devices would be able to retrieve and reuse information such as pricing or review scores from the web, fridges could automatically reorder groceries, and cars could drive themselves.
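
As a rough illustration of what "exposing data in a structured way" might look like, the sketch below parses a hypothetical schema.org-style event record and pulls out its price and availability. The vocabulary, field names and figures are assumptions chosen for illustration, not something the semantic web proposals of the time prescribed.

```python
import json

# Hypothetical structured markup a venue might publish alongside its page
# (schema.org-style JSON-LD; the exact vocabulary is an assumption).
PAGE_MARKUP = """
{
  "@context": "https://schema.org",
  "@type": "MusicEvent",
  "name": "Example gig",
  "startDate": "2014-05-20T19:30",
  "offers": {
    "@type": "Offer",
    "price": "11.25",
    "priceCurrency": "GBP",
    "availability": "https://schema.org/InStock"
  }
}
"""

def extract_offer(markup: str) -> dict:
    """Pull the machine-readable price and availability out of the markup."""
    event = json.loads(markup)
    offer = event["offers"]
    return {
        "event": event["name"],
        "price": float(offer["price"]),
        "currency": offer["priceCurrency"],
        "available": offer["availability"].endswith("InStock"),
    }

if __name__ == "__main__":
    print(extract_offer(PAGE_MARKUP))
    # {'event': 'Example gig', 'price': 11.25, 'currency': 'GBP', 'available': True}
```

Because the data is structured rather than buried in presentation markup, any device or service could reuse it without scraping or guesswork.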

Whereas the first web served information for use by people, this third web would be built on contextualised data and metadata which machines understand, trade and build on. Machines interpreting data in this way underpins the popular Silicon Valley vision of the singularity, and of a future where autonomous software agents can understand and make decisions on behalf of their owners.

The dotcom crash and the birth of the social web

Fewer than five years after being thought up, the social web was alive and healthy, enabled for the most part by the dotcom crash. Throughout the boom, development had been constrained by the performance, flexibility and cost of proprietary technology. Monolithic systems from Oracle, Sun, Vignette and Microsoft (amongst others) restricted functionality, prevented innovation and drained budgets. Whilst few platform companies went bust directly, the crash hammered their market share and drove users to Open Source technology.

Open technology and standards let developers build applications for tens of thousands of dollars, in contrast to the millions required before. This extra attention attracted expert developers and companies such as Google, who made significant commitments to sponsoring development and opening up their own technology.

At the same time, Microsoft's nascent focus on internet-ready desktops and their part in the browser wars were shaking up Internet Explorer, enabling vital techniques such as Ajax. Non-web internet protocols dwindled in popularity, with technologies such as ICQ and FTP being replaced by web-based social networks and content delivery networks. Java Applets rapidly died, and Flash began a slow and insecure death, lingering on until the iPhone finally killed it.

As Web 2.0 matured it was reinforced by fundamental changes to what could be done with simple pages. HTML5 enabled delivery of video through the browser, and CSS3 brought responsive design, enabling desktop sites to render cleanly on mobile devices. Newer internet heroes Igor Sysoev and Roberto De Ioris built on Linux, creating Nginx and uWSGI, which let developers create rich, event-based software-as-a-service ("SaaS") applications for only a few thousand dollars in resource costs. Today's web has come so far that the new trio of social, mobile and local applications (SoMoLo) is sometimes referred to as Web 2.5.

So why no Web 3.0?

So, the inevitable question. If both 2.0 and 3.0 were designed at the same time, why has only one of them been implemented?

In part this is because the transition to 2.0 was necessary and inevitable. The first internet giants such as Sun, Oracle, Cisco and the telcos were able to punitively extract money from pioneers building sites. 2.0 turned this on its head, decimating value-extracting platform providers and moving power to the companies running the sites. Some companies – like Yahoo! – floundered, but Amazon, Google and others were able to embrace Web 2.0.

The new giants -- Google, Facebook, Amazon -- offered valuable, pervasive services to their users for free, or at almost no cost, and were not beholden to closed platform providers. Users took to blogging, writing wikis, podcasting and commenting on sites which were more dynamic than ever, and site owners took advantage of revenues from publishing, transaction and advertising systems.

Web 3.0 levels sites which exploit poor distribution or accessibility of data

Whilst the second iteration of the web offered benefits for user and site owner alike, the benefits of 3.0 are asymmetric. Using the web as an API to create sites and services with easy access to data is great for consumers, new market entrants, and for the few giants that sit on top of it all. For the vast majority of transactional sites, however, these benefits would come at a cost: sharing their data removes a barrier to competition.

This new web would bring with it the ability to uniquely identify content, assess data accuracy, and uniformly compare like-for-like pricing. These three capabilities are enough to level the margins across most gambling, ticketing and commodity ecommerce. Sites which generate revenue from, or contribute to, disinformation or the inefficient distribution of information would be wiped out or bypassed.

As an example, the musician Watsky was playing in London this May. The venue's site sold tickets directly for £11.25 and had plenty available. The top Google results for the gig were for TicketMaster/LiveNation and ViaGoGo. The former requires a four-page journey, which involves typing "sustorsc roonov" into a Captcha box, before one can learn that no tickets are available. The latter offers some at £56.82 each, with a further 15% customer service fee and £10 for postage: roughly £75 in total, nearly seven times the face price.
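
A quick check of those figures, using only the prices quoted above:

```python
# Rough check on the resale figures quoted above.
face_price = 11.25     # venue's direct price, GBP
resale_price = 56.82   # reseller's listing price per ticket
service_fee = 0.15     # 15% customer service fee
postage = 10.00

total = resale_price * (1 + service_fee) + postage
print(f"Total resale cost: £{total:.2f}")                     # £75.34
print(f"Multiple of face price: {total / face_price:.1f}x")   # 6.7x
```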

These businesses, which trade on fear, uncertainty and doubt (FUD) and on misdirection about information, would be wiped out by Web 3.0. FUD as a sales strategy generated much of the value for Web 1.0 platform companies; these days it is Web 2.0 vertical companies which use it to extract outsize margins.

The third iteration of the web could enable one's phone to tell quickly and simply whether tickets were available and buy them, without having to visit a multitude of sites, pay a premium to an aggregator, or wrestle with a Captcha form. A software agent might, on one’s behalf, identify free time in the calendar and automatically find, book and arrange relevant events.
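
To make the agent idea concrete, here is a minimal sketch which assumes ticket sellers published their offers in a common machine-readable form. The Offer record and the numbers are hypothetical, loosely based on the example above; the point is only that choosing the best available ticket becomes a trivial query rather than a trawl through aggregator sites.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    """A hypothetical machine-readable ticket offer, as a Web 3.0 site might publish."""
    seller: str
    price: float     # all-in price in GBP
    available: bool

def best_offer(offers: list[Offer]) -> Optional[Offer]:
    """Pick the cheapest offer that is actually available."""
    in_stock = [o for o in offers if o.available]
    return min(in_stock, key=lambda o: o.price) if in_stock else None

if __name__ == "__main__":
    # Illustrative figures only, loosely based on the gig example above.
    offers = [
        Offer("venue", 11.25, True),
        Offer("reseller", 75.34, True),
        Offer("primary agent", 0.0, False),   # sold out
    ]
    print(best_offer(offers))  # Offer(seller='venue', price=11.25, available=True)
```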

Without 3.0, many companies are striking out alone in narrow verticals, trying to build technology which works as if 3.0 had been realised. Skuuudle, for example, are a small company trying to make price comparison easy; Hipmunk are rather more advanced, doing flight comparison; and Google are famously pulling data together for their fleet of self-driving cars. Technology like this will only become pervasively available beyond narrow verticals with Web 3.0.

Are there other iterations of the web which might be implemented first?

Seth Godin writes off the third web as unrealisable, and suggests a Web 4.0 where the semantic agents of 3.0 are fudged to work on the imperfect data available now. However, at best that's Web 2.5, like Om Malik’s "alive web", the “card or canvas web”, or the “web squared”.

Others have tried to redefine the current web as 3.0, and even the odd over-excited venture capitalist has tried to redefine 3.0 as "mobile" or something equally daft, before getting shot down in the visitor comments – a fittingly 2.0 result.

It is likely that the big three’s platform offerings will become more appealing for smaller players to use, but more likely still that those players will be consumed by them. The semantic web will arrive, though perhaps it is still five years away.