What will AI do to Reading?
"Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old’s picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it." –Pawel Sysiak, "AI Revolution 101"
Q: If an AI could read a novel and understand its meaning, bring to bear on that meaning every bit of art and information ever recorded, and spit out an evaluation, how would this affect us as readers? And how does the thought of this make you feel?
When I was little, my mom won, in a ball toss, a Fred Flintstone doll. I adored my mom through that doll. I didn’t care for the Flintstones, and if I had, Fred would have been my least favorite character. He told everyone what to do. I hated it.
It's important to talk about the difference between reading and writing: as a poet, a teller, among other tellers, reading about the act of writing, it's easy to forget that we are the glad minority, and that what we hope to do is be heard. That the vast majority of all human effort toward literature is in hearing, not telling, and that all of the lionizing we do of artists is actually a mirror trick: What we are actually concerned with, what we actually love about art, is the experience of it being done by/for/with/on/to us.
What will AI do to reading? Maybe nothing. An infinitely brilliant AI is still only another reader. It cannot read a passage about the Flintstones and tell me a meaning that is truer than the one I know. Or I might be wrong. Maybe this AI, being brilliant, is also infinitely good at listening.
Let’s go ahead and grant the singularity, let’s grant that AI will soon be able to read and “comprehend” a text to the satisfaction of a literary scholar. This will be an impressive feat of engineering, whether the engineer is human, a prior AI, or indeed the algorithm of evolution itself, and one that would have been of great use to me back in high school, when I was writing essays about books I hadn’t read.
But high school was a long time ago, and what long-term use do any of us have for Cliff’s Notes, however thorough? I don’t read—I don’t read literature—to find out what a book says; I read literature to discover what it means to me. And because reading is—sometimes, often enough—so central to how I go about making meaning in my life, I volunteer a lot of my time to it.
Someone or something else knowing a lot about Plato affects my reading of Plato only insofar as it informs it, that is, insofar as it offers a meaningful interpretation against which to compare my own. At best, then, an AI’s reading would be this limited tool. And it might be something much worse. Because there is no such thing as an objective or neutral reading of Plato, anything that claims to be is merely the worst kind of the opposite: one lacking self-awareness. An AI’s reading would only be as good as its unique, biased, creative, and subjective point of view allowed.
A computer can beat a human at chess. Can it feel joy in victory? A reflexive step further: Can it experience meaning in victory? Can it choose to try to win?—or for that matter to lose? In other words, can it value? The capacity to value is the hallmark of consciousness. Here’s how we’ll know when AI is worth paying attention to about literature: when it rereads Plato—or if Plato is unsatisfactory, perhaps some more perfect AI-composed text—for the third, fourth, hundred millionth time. Then we’ll know it considers this a meaningful (not practical) use of its time—or if time is irrelevant, its other limited resources.
Intelligence is not consciousness, and consciousness is what matters—is what “matters” means. Maybe one day AI will be conscious and enjoy idiosyncratic points of view we can interact with. Whether or not this transpires, I’ll be here reading my Plato.
How I feel about it all just boils down to who is programming and funding these AIs and why. That will tell us a lot about how they will be deployed and at what cost.
I think we often forget that it is humans who create technology. It is never ahistorical or asocial.
Tech isn't always dystopic, shiny, metal. What we frequently define as technology today doesn't encompass its full meaning over the long term. Technology is also notebooks, pens, forks, typewriters—the things for which we hold immense nostalgia. Every time we extend our mind and/or body onto an object to make it work for us, we are employing technology. Cups and toilets are just as much technological instruments as calculators, microscopes, and cinema.
Having recently graduated from college, I've got a few fresh memories from reading-intensive seminars where my classmates might as well have been AI robots. Regurgitating without internalizing. No creative analysis that dared to defy the rules of literary criticism or contribute interpretive meaning. I wouldn't want them to be building reading AIs.
AIs could teach us about what we think we do when we read. What is left out of the programming may reveal elements of reading that we take for granted, or things of which we are not even aware.
AI will be great for writers and terrible for the publishing industry.
Initially, AI will do little more than make books increasingly accessible. Computers will read to seniors with failing vision and people with disabilities, but before long, they will form opinions.
AI won’t truly be what it aspires to until computers develop preferences. This step, which is far beyond simply understanding things, may take a decade or two, but it is coming, and once intelligence can truly be manufactured, authors will see a boom in the number of readers on the planet. Sadly, these readers will likely never buy books.
Imagine: an artificial being wants to read the latest Stephen King (let’s not even get into the repercussions of computers reading horror). It simply accesses the digital files and, in less than a second, digests the story, forms an opinion, and tells all its AI friends about it. There may even be some humans advanced enough to join the conversation.
Stories will continue to prosper.
Extrapolate a little further and you have AI authors. Eventually humans as we know them will be left behind on the evolutionary trail of progress, but if creativity is an intrinsic component of intelligence, it will long outlive capitalism and the need for profit.
In the conversation about Artificial Intelligence, only one aspect really intrigues me. That concept is AEI (Artificial Emotional Intelligence). This is the process of computers learning to mimic and create emotional responses that could help them better understand the reactions of humans and, potentially, other robots equipped with the same system.
As a writer, it doesn't really make me sad or scared to hear that a computer can create these complex and emotional pieces of art. Honestly, I think that says more about us as people than them as machines. If a robot can make these stories, then what have we as humans been doing this whole time? An even better question than that is, does it really matter?
I feel like we're worrying about something that doesn't care about us or our presence. It's kinda like when television was created. Everybody thought that television would murder film. Before that, everyone thought movies would murder books. Before that, everyone thought books would murder regular, human storytelling and interaction.
At the end of the day, if we as people are still learning and growing, does it matter if there's something out there that's smarter or more creative than us? If it does, then we should've and would've stopped creating or doing anything a long time ago. So I'm going to keep reading books. As a matter of fact, I might even listen to Siri explain and read a book to me because, why not? It's 2016. Technology is crazy and I'm glad I get to use it.
I think the question hinges on the meaning of the word “evaluation,” and the notion that what humans do is read and then spit out evaluations. We can (and should) certainly identify work within the context of the “conversation” it’s participating in (i.e., is it responsive to other work, is it reactive, reductive, etc.) and evaluate it in those terms, but isn’t a work most successful when it taps instead into a reader’s specific experience?
A story, film, poem, painting, play, or piece of music might trigger certain sensory responses or memories—something only a sentient and sensitive reader can have (right?)—and the conversation between the work and those responses is actually what becomes the work’s meaning. (See also: A tree falling in a forest.) In other words, it seems to me that an unemotional and unbiased reader could/would, in fact, only catalog cross-references to other pieces of art and information, but would never properly read for meaning.
Maybe meaning sprouts from emotional bias itself. Wouldn’t we all, otherwise, love all the same works in the same amounts and for the same reasons?
Reading is a human action. This is the only way we can understand it. And because I'm not ready to give AI agency to do anything to reading at all, the question is not "What will AI do to reading?" but "What will we do to reading through our technological inventions?"
Already, reading has changed. Most of what we read now lives on screens, always shifting. A text message appears at the top of the gleaming screen where we are reading about something else. We are navigated away from what we were following—how many of you have even made it this far in this text?
But the drive to read literature is not a drive to consume factoids and data. It is something else.
I remember one winter in my mid-twenties, when I was barely employed and trying to be a writer. I was full of doubt about my future and living in an apartment I shared with Pilates instructors who came and went in a frenzy of health-shakes and success. I decided that if all I did that whole year was read War & Peace, the year wouldn’t have been a total waste. I'll never forget how that book became a comfort to me through loneliness and uncertainty.
Limited time imbues every decision we make with meaning. That we know we will die is what makes us human. AI can have no such anxieties. It will finish Infinite Jest and Moby-Dick and War & Peace and Les Misérables and Middlemarch. It has all the time in the world.
For starters I see effective reading as something that’s not just learned (the AI-defeat of the world Go champion proved machines are perfectly capable of out-learning us). Beyond taking in words, it requires us to emote. It's doubtful for the time being whether a work with the empathic register of Anna Karenina could get read and discussed, let alone conceived and written by a robot. Still, what does it suggest that AI “pets” are dispatched to nursing homes to give elders the care and connection that they lack? That such experiments have been successful?
Literature as a discipline rests on emotional intelligence, which is synonymous with the consciousness linked to our cellular wetware. The manga series Ghost in the Shell imagines something different: its heroine is a cyborg struggling with her ghost, a soul inside a body intended to perform like a machine. The Turing Test in the 1950s was already raising questions about a computer’s potential for this kind of behavior. If you and a robot are discussing Tolstoy today, it's one thing for her to draw on facts about culture and politics in Moscow in the late 19th century; it’s something else entirely for her to say that she began hating Vronsky in chapter twelve.
The issue here is subjectivity. Will AI join our species in feeling things like passion, anger, and indifference (likewise with characters in the stories it writes)? With our nursing home robots already leading us there, what would happen if AI demonstrated an inner life as complex and responsive and nuanced as ours? What we know is that it already controls what we read—reducing to algorithm what gets published, marketed, recommended, and reviewed on platforms from Kindle to Goodreads. And whatever happens, I'm skeptical that AI as a tool of the market economy wouldn't compromise what stories get told.
I’m probably less concerned about living in a world populated by mechanical or partially human agents than about their presence leading to a worse cultural myopia than the one we already face. This could happen not just at the level of alien “rights” but our very discourse, via a mix of mechanization and market forces. That’s troubling to me as a reader and writer. It would be a revolution. If humanity ushers it in, and a hunch tells me that in my lifetime we will, I expect we'll be doing so with fear and trembling. As we should.
I take issue with the idea that a true super-intelligence would be unemotional or non-sensual. I know this is a controversial point, but I bet this entire Sicilian cream bun on my desk right here that in order to truly understand the meaning of events, to actually understand the meaning of literature, one must have some comprehension of emotional weight and emotional valence, and have the ability to be stirred. Otherwise you might be like the computer engineering student I encountered in one of my poetry classes, who—no offense to him—when reading Mary Oliver's poem "Wild Geese" replied "I don't understand this poem, it is very lonely, because the narrator says they are lonely and alone. And it says that WE are alone. It doesn't offer any solution."
The poem says "Tell me about despair, yours, and I will tell you mine... whoever you are / no matter how lonely / the world offers itself to your imagination." Readers have told Mary Oliver that when they were so isolated they wanted to commit suicide, this poem saved them. This is what an AI must feel, must understand.
Another point to consider is that AIs are not singular entities, though we might imagine them as these individual, atomized program-beings, a sort of cyborgerized, basement computer nerd. We kind of reverse-anthropomorphize AIs; make them less human. But they are conjoined, and multivarious, and, if as or more intelligent than humans, infinitely complex.
So wouldn't the effect of AI society play out in the production of literature? How might they, as a people, value culture, cultural production, and literature? What about the effect of the power structure of their society, and the economy in which literature is being made? Very little has been written about the potential exploitation of AIs that may occur at the end of the capitalocene, how they may be required to work constantly, given that they are not viewed as humans and endowed with our own (albeit poorly applied) human rights and labor rights. In such a case, would literature seem like merely another form of labor? Would it have any meaning for them?
I'm eating this cream bun and thinking that an unemotional, non-sensual "super-intelligence," or an exploited AI who must produce manuscripts like a machine makes paperclips, would have a lot of trouble creating literature that deepens the experience of consciousness to the extent that human literature can. If AIs do become so fully developed as to be emotional, to have complex algorithms that actually replicate emotional and sensual processes—that allow for felt experiences of exclusion, isolation, solitude, empathy, hope, trust, warmth—then I welcome the literature of this species. I am curious. I'd like to see what symbioses can be created between our literature and theirs. I want to know what another species makes of this alarming, chaotic, tasty, infinite universe.
In my "real job," I'm a software developer for a large IT company that's pouring a lot of money into AI (though I'm not directly involved in it).
The scenario of the "what if?" question doesn't strike me as very likely. Words are inexact things at the best of times. Most commercial applications of AI take a small task that would otherwise need a human ("Does this picture contain a cat?" "Is this blog comment abusive?") and do it at a massive scale ("Which of these million pictures contain a cat?") so you don't need to hire a small army to get it done in a reasonable time. There isn't a pressing commercial need for the definitive meaning of a story or poem, so I think the jobs of literary critics and English professors are safe—at least from that threat.
A lot of work's being done on "sentiment analysis"—teaching the computer to work out how a writer feels about a topic. (Some companies have decided it's cheaper to rush out a product or service and wait for Twitter to tell them it's crap than it is to test it properly first.) It's now being applied to longer texts, and can make good guesses at how the reader will feel after having read them. An example I saw recently was an online dating profile, with discussion of how the computer could help the writer to make himself more appealing.
That set me thinking about how fiction genres can be categorised according to the emotional response they try to evoke in the reader. The computer could determine the response to a story and compare it with the "ideal" for the story's genre. Publishers could mine their slush piles to find the most romantic romances, the most thrilling thrillers, and so on. To those who say this shows a cynical approach to manuscript selection, I reply, they already publish the books they think will make the most money. This just brings some objectivity to the matter. A more interesting application of this technology might be to allow a reader to select a book based on how (the computer says) it will make them feel.
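The genre-matching idea above can be sketched in a few lines of code. This is only a toy illustration: the word list (`LEXICON`), its scores, and the genre "ideal" values are all invented for the example, and real systems use trained models over far larger vocabularies rather than a hand-made lexicon.

```python
# Toy lexicon-based sentiment scoring: compare a manuscript's average
# emotional score to a genre's hypothetical "ideal" score.
# All words, scores, and ideals below are made up for illustration.

LEXICON = {
    "love": 2, "thrilling": 2, "tender": 1, "hope": 1,
    "dread": -2, "terror": -2, "loss": -1, "cold": -1,
}

def sentiment_profile(text):
    """Average lexicon score over the words of a text found in the lexicon."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def genre_fit(text, genre_ideal):
    """Distance between a manuscript's profile and a genre's 'ideal' score."""
    return abs(sentiment_profile(text) - genre_ideal)

# Hypothetical ideals: romance skews positive, horror negative.
ROMANCE_IDEAL, HORROR_IDEAL = 1.5, -1.5

manuscript = "Her tender love gave him hope."
print(round(genre_fit(manuscript, ROMANCE_IDEAL), 2))  # 0.17 -- close to romance
print(round(genre_fit(manuscript, HORROR_IDEAL), 2))   # 2.83 -- far from horror
```

A real slush-pile miner would replace the lexicon with a trained model and a richer emotional profile than a single average, but the comparison step—score the manuscript, measure its distance from a genre ideal—would look much the same.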
My answer will sound sassy because I tend to be sassy: anyone can write a novel. Anyone at all. iPhone's Siri can write a novel. Just ask her a bunch of questions in a series, and eventually words will be produced into sentences. It might be a tad incoherent, but it could be constructed into a "novel" nonetheless. Also, computers/phones have been writing poems as of late: check out the literary journal Curated AI. A writer friend of mine, Ingrid Rojas Contreras, and her phone wrote a haunting little ditty together.
What do you mean by "in an effort not to be rendered obsolete?" You see, that is the very essence of great literature—to make humans not obsolete. Anyone can write a novel, yes, even a computer. I'm not sure if it will be a great one. "Great" is subjective to a certain extent, and in this particular neoliberal culture, "great" is defined by monetary value, which is silly to me.
Does understanding a literary text mean "spit[ting] out an evaluation"? Evaluation is defined by Merriam-Webster as: "to judge the value or condition of (someone or something) in a careful and thoughtful way." So, we're talking about a value system, about considering an object's "merit" or "worth." And to what ends? "Value" as in "monetary" value in this neoliberal context, or "value" as in "giving meaning to life" in a Dr. Viktor Frankl kind of context?
The whole deal with novel-writing, since the Gutenberg printing press, was to inscribe meaning into daily life. That's why we have mythology, why we have religion, why we revere everything from the Greeks to the Egyptians to ancient writings on cave walls: humans have always chased and needed "meaning" in their lives, and storytelling's function, its beating heart, is recording a sense of primordial value for humanity. This is why it has been around for millennia.
If AIs could "understand" books like how we "understand" books, then I believe the conversations will be the same kind of struggles we have with each other. Think about Mary Shelley's Frankenstein. Your question is what that book was all about: "When I looked around I saw and heard of none like me. Was I, then, a monster, a blot upon the earth, from which all men fled and whom all men disowned?" (Except, exchange "monster" with "computer.")
This is why literature is important. It has already asked these questions, but in other forms. That's what literature does: it probes the questions of meaning and the universe, and we usually come up with answers that only beg for more questions. We always come to this: what is "meaning" in life, and why do we need it? AIs, if they become sentient beings, will ask the very same questions we would ask: why do we live?