Mostly, I’m compelled to write about things I’d like to understand better. I care about telling broadly compelling stories, but that’s just the creative constraint that helps me focus. I like moving between the personal and the general, using anecdotal evidence to sort through big cultural ideas and experiences. I know it’s not very scientific but it’s enjoyable, and based on my own highly unscientific insight, it’s how most of us make our way through life. Thanks to the Internet, I can write and publish almost anything I like and, in my case, roughly 1 in every 500 million people on earth is likely to come across it. If you are reading this, you are very rare indeed. More people have been to space. I’m that good.
Personal anecdotes may be dangerously incomplete – our lived experiences are relatively few and narrow – but they’re all we have. They are the metaphors that help us understand what we haven’t or can’t experience, making space for abstract concepts and other people. Anyway, this isn’t another self-conscious piece about writing and language; we’re here to learn about Web 3.0, you and me both.
You might have heard about Web 3.0 recently and, before it, Web 2.0, alerting you to the notion that there had been a Web 1.0, or you might not have. These are pretty loose terms that attempt to demarcate generations of the World Wide Web. Saying something is ‘sooo Web 2.0’ is like calling someone ‘sooo Millennial’ in that it’s pretty vague, definitely generalized, but roughly useful. When I was building the website for a business about a decade ago, a few developers bandied around the term ‘Web 2.0’ to sound cutting edge, which was the first I’d heard of it. It’s taken the full decade but I believe I have my head around it.
Web 1.0 was mostly read-only, with static pages like directories offering no interaction. It was skeuomorphic (sounds like skew-) in that it mimicked what we knew – the Yellow Pages, magazines, libraries…
Skeuomorphism was new to me too. If you already knew about it, just play along so I don’t feel dumb. Basically, it’s when something new takes some ornamental design cues from something old, familiar and related. The trash can icon or the bound calendar symbol on your ‘desktop’ are good examples. The ‘desktop’ usually found on your laptop is also a good example. You could say they’re like metaphors – referencing something familiar to understand something new – but that would be using metaphor as a metaphor for skeuomorphism and that shit is mind-bending.
Now Web 1.0 was partly limited by internet speeds and processing power – the technology of its day – but it was also limited by what we can call imagination. Tim Berners-Lee invented the World Wide Web and surely knew it had potential, but 30 years and apparently 3 generations later, we’re in wholly unimagined territory for him or any of us, because how we end up using something open and decentralized is more significant than what it’s capable of.
Skeuomorphism is also sometimes meant as a criticism, say, for using the Web like a digital Yellow Pages, or using your iPhone like a phone, or a watch, or a flashlight. It’s a criticism of failing to see the true potential in some new technology to create value we haven’t yet imagined and solve problems we never knew we had. It’s much easier to imagine how new technology might replace older technology than it is to imagine the iPhone creating the App Store and the possibility of Uber or Waze or Shazam. One main reason it’s hard to predict the future is that it’s collaborative – we’ll all make it – but as individuals, we only have what we know: the past and the present, and our metaphors.
With Web 2.0, apparently, we gained the ability to interact; to be creators and consumers on Wikipedia or Facebook or YouTube. We had to go through Encarta – a traditional encyclopedia in digital form – to get to Wikipedia, which was dynamic and essentially user-generated. Also, it was a bit shit for a while, which is common. I remember editing articles on Wikipedia. My mate Jono Stiebel created a page for himself with the title “…director of vehicular operations at Regal Parking”. That was the company my parents ran at the time, where Jono worked as a casual attendant. In short, it was a novelty, like the iPhone, like Airbnb… because they belonged to a world that didn’t yet exist and we were thinking about them through the lens of a world we would gradually, at least partly, leave behind us. And because they had to get better, to get better content and more users.
Web 2.0 has been community-focused and social, of course. Suddenly we could create and share and participate in groups and communities, and we all got on Facebook because we all got on Facebook. If you were on there early enough, you can go back and see how you really didn’t know what exactly you were on, what it was for or how to use it. Seriously, do it, it’s a laugh. In fairness, we were still deciding; we are still deciding. We weren’t especially concerned about a healthy and competitive social media industry, the attention economy or evil algorithms; we wanted to go to the place where all our friends were, we wanted to go to the place with the most products (Amazon), we wanted to find stuff (Google).
As kids growing up in the ’80s and ’90s, we had a VHS player at home and about 6 movies we owned and watched on repeat until the tape wore out and it was all fuzz. We had a Betamax as well, which dad stubbornly held onto as superior technology for years after they stopped producing tapes for it – technology without creators and consumers is just a footnote. One of the tapes was the original Charlie and the Chocolate Factory and I can still remember Mike Teavee getting shrunk down and sucked into the television, screaming “Look mom, I’m on TV!”. In fairness, I probably watched it 300 times but I remember it because it was fucking cool. I remember walking around technology stores back then when they’d have the video camera on a tripod hooked up to the telly and you’d walk past and do a double-take because holy shit balls you were on TV! Or when dad got a tape recorder with a microphone and me and my brother would sit in a dark room and make our own little radio shows. We didn’t have anything to say but man, what a rush.
It’s very new that we can take a free online course on how to write or paint or make a movie with a million times better production value than Charlie and the Chocolate Factory on an iPhone, and share it with potentially everyone later that day. We’re still learning what it means to live in a world of creators. Look at me, I’m writing about things I barely understand, poisoning the minds of 1 in every 500 million people on earth, and this won’t even be the worst thing you read today.
Most of what gets created online today is created for free by amateurs. On Web 2.0 we’ve not been very concerned about ownership; most of us are just excited to be on TV, to be able to participate. It’s been good to ‘own the rails’ – to be Facebook or Google, Amazon or Apple, just like VHS before them. These are the places most of us go to internet, but people have been struggling increasingly with the distribution of wealth, power and influence, because we know from Betamax that the creators and consumers are an essential part of the ecosystem. We’re starting to want a say and a meaningful share of the value we create. As we might have predicted, our old models are beginning to look increasingly skeuomorphic – derivative and outdated.
If Web 1.0 is read-only and Web 2.0 is read-write, Web 3.0 is read-write-execute, and we’re still working out exactly what that means in practice. We still mostly think of the ability to execute as being concentrated in the hands of relatively few people and businesses, whether in building a website or an app or a house for that matter. We’re accustomed to a world where these require considerable expertise and it can be hard to imagine something different, but we were once accustomed to a world where only a few of us could read. We were once accustomed to a world where most of us could read but there were relatively few places to access content, and relatively few of us were privileged and entrusted to create and control it. These are big changes, and they take time – often many generations – to properly understand and explore.
Web 3.0 is sometimes called The Semantic Web – ‘semantic’ being a word borrowed from the study of logic and language, meaning, well, meaning. The Semantic Web is the Web from the point the machines start understanding each other, because we finally reach a level where meaning can be encoded and decoded in the data – not just analysed, but understood. Meaning. It’s a bit like how people understand each other. We draw on our thoughts and experiences, which are subjective, and encode what it means to us into language. A good listener can hear the language and decode its meaning using their own subjective thoughts and experiences. I tell a story about love or pain or success and you use the metaphor of your own experiences of these to understand it.
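If that still sounds abstract, here’s a toy sketch of the idea in Python. Everything in it – the people, the places, the labels on the links – is made up for illustration; real Semantic Web data uses standards like RDF, but the principle is the same: the relationships carry their own meaning, so a machine that has never seen the facts before can still chain them together.

```python
# A toy sketch of the Semantic Web idea: facts stored as
# subject-predicate-object triples that a machine can reason over.
# All names here (people, places, predicates) are hypothetical,
# invented purely for illustration.

triples = [
    ("Alice", "lives_in", "Melbourne"),
    ("Melbourne", "is_a", "City"),
    ("Melbourne", "located_in", "Australia"),
    ("Alice", "is_a", "Person"),
]

def ask(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [obj for (s, p, obj) in triples if s == subject and p == predicate]

# Because the *meaning* of each link is encoded in the data itself,
# a program can combine facts it was never explicitly told about:
city = ask("Alice", "lives_in")[0]       # 'Melbourne'
country = ask(city, "located_in")[0]     # 'Australia'
print(f"Alice lives in {city}, which is in {country}.")
```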
But the machines – the software and hardware, applications and devices – don’t just understand each other; we’re teaching them to understand us. We can talk to our devices, and not in binary or a programming language like Java, but in our own languages. For years we’ve been able to build websites or apps without the need to code. We can drag and drop discrete applications like an online store or a contact form or a social media feed into place on a page as we might drag and drop ingredients into a bowl, personalising a site for our needs as we might a recipe to our tastes.
We’re not just lay-people using apps or websites, we’re creating them, we’re executing. Through APIs – software intermediaries that allow different applications to talk to each other – these applications can communicate with our accounting software and our inventory and customer management applications, and even other people, in one unbroken Web. We could design a house like that with only basic expertise and handy templates learned and found online, dragging and dropping, and then one day soon we might just press print.
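For the curious, here’s roughly what ‘applications talking to each other through APIs’ looks like in practice – a minimal sketch in Python, where every URL, field name and key is hypothetical and stands in for whatever store and accounting services you actually use.

```python
# A minimal sketch of applications talking through APIs.
# The URLs, field names and API key below are hypothetical placeholders,
# not any real service's API.
import requests

STORE_API = "https://api.example-store.com/v1"    # hypothetical online store
LEDGER_API = "https://api.example-ledger.com/v1"  # hypothetical accounting app
API_KEY = "replace-me"

# 1. Ask the store application for recent orders...
orders = requests.get(
    f"{STORE_API}/orders",
    params={"since": "2022-01-01"},
    headers={"Authorization": f"Bearer {API_KEY}"},
).json()

# 2. ...and hand each one to the accounting application as an invoice.
for order in orders:
    requests.post(
        f"{LEDGER_API}/invoices",
        json={"customer": order["customer"], "amount": order["total"]},
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
```

The point isn’t the particular calls; it’s that each application exposes a small, machine-readable surface, so the plumbing between a shop, a ledger and anything else becomes drag-and-drop rather than bespoke expertise.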
To an extent, this is all skeuomorphic – derivative of our past rather than an accurate account of our future. A world where everyone can read, write and execute is going to look increasingly unrecognizable because it’s less authored. It will be a story made increasingly by its many billion autonomous characters, building on each other’s work in new ways and in different directions. Read-write-execute, sounds pretty fun, right?
Good on you for making it this far. If you’re keen to go deeper, most of what I’ve misunderstood and reorganised in the above has come from my reading, watching and listening to Chris Dixon, Naval Ravikant, Vitalik Buterin, Azeem Azhar, Rachel Botsman, Steven Pinker and Scott Galloway among an eclectic mix of others over the last year or so. Enjoy your weekend folks.