“Science fiction is not predictive; it is descriptive.”
– Ursula K. Le Guin
I’ve spent the last 30 years of my life being obsessed with sci-fi. It probably started with Space Lego, and imagining the lore behind Blacktron, the Space Police, and the Ice Planet folks.
I loved Star Wars for a few years, but only truly during that wild-west frontier time after Return of the Jedi and before the prequels. The Expanded Universe was unpolished, infinite, and amazing. Midichlorian hand-waving replaced mystique with… nonsense.
As I grew older, I started to take science fiction more seriously.
In 2006 I pursued a Master’s in Arts & Media, focused on the area of “cyberculture”: online communities, and the intersection of our physical lives with digital ones. A lot of my research and papers explored this blurring by looking deeply at Ghost in the Shell, Neuromancer, and The Matrix (and this blog is an artefact of that time of my life). Even earlier, during my undergraduate degree (as early as 2002, going by my old term papers), I was starting to mull over the possibility that machines could think, create, and feel on the same level as humans.
For the past four or five years I’ve run a sci-fi book club out of Vancouver. Even through the pandemic we kept meeting (virtually) on a fairly regular cadence to discuss what we’d just read, what it meant to us, and to explore the themes and stories.
I give all of this not as evidence of my expertise in the world of Artificial Intelligence, but of my interest.
Like many people, I’m grappling with what this means for me. For us. For everyone.
Like many people with blogs, my way of processing that change is by thinking. And then writing.
As a science-fiction enthusiast, I use what I’ve read as the basis for frameworks to ask “What if?”
In the introduction to The Left Hand of Darkness (from which the quote that starts this article is pulled), Le Guin reminds us that the purpose of science fiction is the thought experiment. To ask that “What if?” about the current world, to add a variable, and to use the novel to explore it. As a friend of mine often says at our book club meetings, “Everything we read is about the time it was written.”
In Neuromancer by William Gibson the characters plug their minds directly into a highly digitized matrix and fight blocky ICE (Intrusion Countermeasures Electronics) in a virtual realm, but don’t have mobile devices and rely on pay phones. The descriptions of a dirty, wired world full of neon and chrome feel like a futuristic version of the 80s. It was a product of its time.
At the same time, our time is a product of Neuromancer. It came out in 1984, and shaped the way we think about the concepts of cyberspace and Artificial Intelligence. It feels derivative when you read it in 2023, but only because it was the source code for so many other instances of hackers and cyberpunk in popular culture. And I firmly believe that the creators of today’s crop of Artificial Intelligence tools were familiar with, or influenced by, Neuromancer and its derivatives. It indirectly shaped the Artificial Intelligence we’re seeing now.
Then there’s Blindsight by Peter Watts, which I’ve regularly referred to as the best book about marketing and human behaviour that also has space vampires.
It was published in 2006, just as the world of “web 2.0” was taking off and we were starting to embrace the idea of distributed memory: your photos and thoughts could live in the cloud just as easily as in the journal or photo albums on your desk. And, like now, we were starting to think about how invasive computers had become in our lives, and how they might take jobs away. How digitization meant a boom in one kind of creativity, but a decline in other, more important areas. How our role in the world was becoming a little less clear. To say much more about the book would be to spoil it. It also introduced me to the idea of the “Chinese Room”, John Searle’s thought experiment in which someone who knows no Chinese produces fluent Chinese answers by mechanically following rules, which helped me understand the difference between Strong AI and Weak AI.
Kim Stanley Robinson’s Aurora is about a generation ship from Earth, a few hundred years after its departure and a few hundred years before its planned arrival. Like a lot of his books it deals primarily with our very human response to climate change. But nestled within the pages, partially as narrator and partially as character, is the Artificial Intelligence assistant Pauline. In 2023, it’s hard not to read the first few interactions with her as someone’s first flailing questions to ChatGPT, with both sides figuring out how the other works.
It was published in 2015, a few years after Siri launched in 2011. While KSR had explored the idea of AI assistants in his books as early as 1993, fleshing out Pauline as capable of so much more feels like a response to seeing what Siri might amount to with more time and processing power.
The Culture series by Iain M. Banks is about a far-future version of humanity that lives aboard enormous ships controlled by Minds: Artificial Intelligences with almost god-like powers over matter and energy. The books can be read in any order, and the Minds aren’t really the main characters or focus (with the exception of Excession), but at the same time the books are about the Minds. The main characters - who mostly live at the edge of the Culture - have their stories and adventures. But throughout, you’re left with a lingering feeling that their entire plot, and the plot of all of humanity in the books, might just be cleverly orchestrated by the all-powerful Minds. On the surface, living in the Culture seems perfectly utopian. The books were written over a span of 25 years (1987-2012) and represent a spectrum of how AI might influence our individual lives as well as the entire direction of humanity.
****
My feeling of optimistic terror about our own present is absolutely because of how often I’ve read these books. It’s less a sense of déjà vu (seen before), and more one of déjà lu (read before).
The terror comes from the fact that in all these books the motivations of Artificial General Intelligence are opaque, and possibly even incomprehensible to us. The code might not be truly sentient, but that doesn’t mean we’ll understand it. We don’t know what it wants. We don’t know how it will act. And we’re not even capable of understanding why.
Today’s AI doesn’t have motivation beyond that of its programmers and developers. But it eventually will. And that’s frightening.
And more frightening is that, with AI, we might have reduced art to an algorithm. We’ve taken the act of creating something to evoke emotion, one of the most profoundly human acts, and given it up in favour of efficiency.
The optimism stems from the fact that in all these books humans are still at the forefront. They live. They love. They have agency. We’re still the authors of our own world and the story ahead of us.
And there are probably other books out there that are better at predicting our future. Or, to use Le Guin’s words, better at describing our present.