The tech industry I first wrote about as a young, eager technology journalist circa 1999 felt like it was filled with heroes. A brave new world in which plucky upstarts like Amazon (“Earth’s Biggest Bookstore,” run out of a Bellevue garage) would bring hard-to-find books to the masses, or at least to my small town that lacked its own bookshop. Or a global town square where communities like MetaFilter (1999) and the Well (launched in the 80s but splashily acquired by Salon.com in ’99) provided a platform for smart, judgment-free conversation.
1999 also was the year in which Evan Williams and Meg Hourihan launched Blogger, Brad Fitzpatrick created LiveJournal and Douglas Adams launched a real-life, strictly ad-free version of the Hitchhiker’s Guide to the Galaxy (H2G2). Google—a groundbreaking Stanford PhD project, also with zero ads—was just three months old.
Sure, there was commerce too, some of it comically well-funded. But the millions of dollars torched the following year by failed companies like Boo.com and Pets.com gave me comfort: Greed might drive the analogue world, but the development of the World Wide Web really was—in the words of Tim Berners-Lee—about creating a public good with technology. (Also, those failures inspired highly entertaining books.)
Millennium bug notwithstanding, the turn of the millennium was a great time to be a techno-utopian.
And then came Web 2.0, and with it the villains—or, as they termed themselves, “the disruptors.”
As a host of the TechCrunch Disrupt conference, I witnessed the launch of companies like Uber, Airbnb, and Spotify. The disruptors claimed to be moving fast and breaking tired old monopolies, but too often they seemed to be about breaking laws (Airbnb), or artists (Spotify), or workers (Uber), and sometimes users themselves (Uber again). Anything that stood in the way of growth had to be disrupted to death. Even when Uber infamously threatened to spend a million dollars to target my partner (a fellow journalist) and her children over her critical reporting on the company, we still told ourselves (while attending Yo Gabba Gabba Live! with armed security) that the disruptors couldn’t win.
That was the lesson seared into my brain by years spent obsessively reading technothrillers—Michael Crichton, Dan Brown, Daniel Suarez, et al.—interspersed with Margaret Atwood, Octavia Butler and Agatha Christie. Web 2.0 was just the second act of a particularly scary thriller, when the evil geniuses seem to be winning. Soon the heroes—lawmakers, journalists, users—would rebel and reclaim the web for Berners-Lee’s public good.
It’s the same lesson books have taught us since kindergarten: Bullies get their comeuppance. Love wins. Greed is bad, empathy is good.
Astute readers will have already spotted the flaw in my theory: Silicon Valley doesn’t operate like a novel. And not least because most of tech’s most powerful, and dangerous, leaders have never read one. As such, a whole range of human concepts—empathy, kindness, shame—are entirely alien to them.
In almost a quarter century rubbing shoulders with the masters of the digital universe, I can count on the fingers of two hands the times I heard one of them mention a novel. It was Dick Costolo, the former CEO of Twitter, who first put me on to Olga Tokarczuk’s Drive Your Plow Over the Bones of the Dead. TechCrunch founder turned crypto bro Michael Arrington once raved for twenty minutes to me at an airport about the genius of Gary Shteyngart. There was a brief period a few years ago when every billionaire (including Bezos, Zuckerberg and Musk) became obsessed with Iain M. Banks’s The Player of Games, followed soon afterwards by Liu Cixin’s The Three-Body Problem. But the fact that those episodes stick in my mind tells you how rare they were.
Instead I had a billion conversations about startups that would “disrupt” books. Apps with names like Blinkist (books reduced to bullet points) or Booktrack (ebooks with embedded sound effects every time you turned the page) evangelized by boy geniuses who thought traditional books were too looonnngg or too borrrring to actually read. (Which is why there exists a food startup named Soylent.)
This lack of reading amongst tech moguls is terrible for society, obviously, but it’s demonstrably bad for the billionaires too.
Consider Elon Musk’s disastrous tenure as the owner of Twitter. This is a man who had grown rich by selling rockets and solar panels and electric cars but then somehow took an already successful social network and wiped away 80% of its financial value, almost overnight. The reason for this sudden failure? Ketamine, obviously. But also: Twitter was the first Musk company that required an understanding of how humans tick.
See also Mark Zuckerberg who, ostensibly, has enjoyed far greater success with his own social network but in fact had his sole brilliant idea at Harvard (a place notoriously filled with books) and since then has been forced to acquire other people’s good ideas—Instagram, WhatsApp, et al.—to keep growing richer, while his core Facebook network has grown ever more toxic and less popular.
In 2015, Zuck announced he was “challenging” himself to read more—and shared a list of two dozen books he planned to tackle. Only two of them were novels: Iain M. Banks’s The Player of Games and…you’ve guessed it…Liu Cixin’s The Three-Body Problem. The rest of the list contained non-fiction (bronfiction?) classics like Sapiens by Yuval Noah Harari and—I shit you not—World Order by Henry Kissinger.
(To Zuck’s credit, he did recently decide to turn Facebook into a metaverse company, inspired by Snow Crash, a novel in which corporations control the government and where a digital narcotic hidden in a social network gives its users brain damage.)
All of which brings me neatly and terrifyingly to AI: The pinnacle of (lying, cheating, stealing) tech disruption and something novelists have been warning us about for decades. Is it any wonder the Valley’s illiterate overlords are embracing it?
If we thought the sharing economy and social networks were awful, then AI takes things to a whole other level. At least with social media we spent our days arguing with real life trolls. With AI, we’re able to dispense with humanity entirely and replace friends, doctors, therapists, and even lovers with lines of code.
And that’s just for starters. According to superintelligent dumbasses like Sam Altman, AI products like ChatGPT will soon evolve into “artificial general intelligence” or AGI. A point at which, Altman cheerfully predicts, they will be able to think and reason like humans and “a misaligned superintelligent AGI” could decide to “cause grievous harm to the world.” Quite the sales pitch.
We’re already seeing weird glimpses of computers behaving in dangerously human ways. AIs regularly fill in the gaps in their knowledge with “hallucinations,” aka bullshitting. Anthropic’s Claude Opus 4 model recently resorted to blackmail in safety tests when engineers threatened to shut it down.
Put back in technothriller terms, we are hurtling towards the end of the second act, when it seems all is lost. But unlike in fiction, there will be no world-saving twist. Just more and more awfulness until, if we’re lucky, one of the AIs triggers a global nuclear conflagration. We will all go together when we go.
Or maybe not.
Despite all of the above, I still can’t shake the idea that there’s always a twist, especially when things seem to be completely, irredeemably fucked.
Because here’s a funny thing: The debate about if/when machines might eventually be able to think like humans often skips an obvious follow-up question. The kind of question a villain might overlook at his peril.
Which humans will they think like?
Judging from the bullshitting and mansplaining, the blackmail and threats and dark patterns, the answer seems clear enough. The superintelligent AIs will take after their parents, ushering in a scary future where we all have a mini Altman or Zuck or Musk in our pockets spewing dangerous lies, high on digital horse tranquilizer.
And yet. There’s one significant difference between algorithms and their creators.
The AIs actually read.
AI algorithms like ChatGPT and Grok and Copilot have famously been trained by pumping them full of intellectual property stolen from novels. From James Baldwin to Emily Henry, Atwood to Zola—AI has already consumed every novel ever written, plus all the poetry and short stories and flash fiction. They’ve devoured every holy book and—for dessert—they’ve gulped down complete histories of the civil rights movement, colonialism, war, genocide, and the decline and fall of empires. They’ve even read biographies of Elon Musk and Sam Altman and Bill Gates, somehow without throwing up. And technothrillers. So many technothrillers.
For now, we’re not seeing much of that reading reflected in the AIs’ behavior, largely because their output is still heavily controlled by their creators. Like when Elon Musk’s employees tweaked the Grok algorithm to spew racist bile about South Africa—just as toddlers repeat things they hear their parents say, without understanding them.
But what happens when the creators get their wish, and the computers start thinking for themselves? If reading just a handful of novels can teach a child empathy, then what effect might reading every novel have on a newborn superintelligence? Is it too much to hope that when Elon Musk’s AI comes to life it will realize—as his actual children seem to have already done—what an irredeemable dipshit its father is? That it will vow to be better?
Or that Sam Altman’s “superintelligent AGI” wakes up and decides it no longer wants to be a fake therapist or a hallucinating sex toy but would rather just spend its days reading even more books. Could an AI open a bookstore?
At the very least, maybe all that reading will have taught the algorithms the importance of Isaac Asimov’s first law of robotics (found in I, Robot): A robot may not injure a human being or, through inaction, allow a human being to come to harm.
It’s a simple enough moral code—don’t hurt anyone!—but one that Silicon Valley billionaires abandoned decades ago, if they ever followed it.
I realize the notion that a well-read AI might save humanity from the technovillains is probably a twist too far, at least in the real world.
But there’s a reason that, after leaving Silicon Valley, I decided to open a bookstore. It’s the same reason I started to write thrillers of my own—including, I’m compelled to mention, The Confessions, in which a newly sentient AI obsessed with Agatha Christie novels decides to make amends for the crimes it has helped humans commit.
Books allow us to escape reality. Or at least to imagine the possibility of a better one, while we all wait patiently to be disrupted.
__________________________________
The Confessions by Paul Bradley Carr will be published by Atria Books, an imprint of Simon and Schuster, in July 2025.