An AI Reading List
It’s time to close some browser tabs.
When I started working with “modern” AI, back in 2018, I did so because I felt it was important to bring non-expert, non-technical, voices into the conversation about what the future of the technology could be. I approached this through art and aesthetics because AI, at the time, was fairly removed from our daily experience. It seemed that art could be a way for people to have an encounter with the technology and to develop their own intuitions for what it was and how it worked. Perhaps I was being idealistic, but I really believed that this would be a way to bring diversity to the technology, and wrest some power from the seeming inevitability of big-tech’s domination of how it would be used.
Over the past several months, particularly with the emergence and almost global domination of the latest AI systems, such as ChatGPT, it feels as though we’ve moved into a new era. The stories around AI have moved from “coming soon” to “you can use it today.” It’s endlessly in the news. And it’s available to everyone.
David Young: Writing is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
As I’ve been trying to keep up to date I feel as though my original motivations for working with the technology are beginning to shift. And as I started to go through the links that I’m including in this post I had thought that I would accomplish some sort of synthesis — a new framework for how I thought about AI. But because there is so much hype around the field, it’s hard to figure out what claims, or opinions, will be lasting.
So, as a kind of capture of the moment, I’m posting here a set of links to the things that I find interesting, and some quotes that stood out to me. (It’s also a chance for me to finally close some of my browser tabs.) Consider this a summer reading list. Or maybe our first book club. Let me know what you think…
This Medium post from Mike Kuniavsky gives a great introduction to how the latest version of AI systems work. It then goes on to introduce the concept of “hallucinations” and the problems that develop when we use them:
What happens when happenstance details become too believable to let go of? Many cognitive biases focus on how we tend to overvalue what we can see, what we have, what we have recently seen, relative to what we haven’t yet seen. We preferentially anchor on the tangible. So what happens when everything is instantly tangible?
AI machines aren’t ‘hallucinating’. But their makers are.
This thoughtful and very in-depth piece by Naomi Klein is very critical of the tech industry and the myths it wraps itself in. It’s worth reading as she goes on to critique several of the so-called promises of AI.
Why call the errors “hallucinations” at all? Why not algorithmic junk? Or glitches? Well, hallucination refers to the mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms. By appropriating a word commonly used in psychology, psychedelics and various forms of mysticism, AI’s boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector’s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species.
[But] it’s not the bots that are having them; it’s the tech CEOs who unleashed them, along with a phalanx of their fans, who are in the grips of wild hallucinations
Sam Altman on What Makes Him ‘Super Nervous’ About AI
This interview with the co-founder of OpenAI is a vivid example of the hyped-up beliefs driving the industry.
This [AI] is going to elevate humanity in ways we still can’t fully envision. And our children, our children’s children, are going to be far better off than the best of anyone from this time. And we’re just going to be in a radically improved world. We will live healthier, more interesting, more fulfilling lives; we’ll have material abundance for people...
AI’s Greatest Lie
I’m a huge fan of the Substack The Algorithmic Bridge by Alberto Romero. This post was an eye opener for me… did you know where the term “Artificial Intelligence” came from?
John McCarthy, the father of artificial intelligence, came up with the catchy term to get funding: "I invented the term artificial intelligence…when we were trying to get money.”
The post talks about how this term led to the anthropomorphism that has corrupted the entire field. “Complex information processing” is much more appropriate and less susceptible to marketing hype.
You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills
This New York Times piece by Yuval Harari, Tristan Harris and Aza Raskin has a fascinating discussion of how AI’s use of language, “the operating system of human culture,” could be fatal for us. I was particularly struck by the notion of how AI, as the core component of social media, has already done irreparable damage to our society:
Social media was the first contact between A.I. and humanity, and humanity lost. ... In social media, primitive A.I. was used not to create content but to curate user-generated content. ... selecting those that will get the most virality, the most reaction and the most engagement.
While very primitive, the A.I. behind social media was sufficient to create a curtain of illusions that increased societal polarization, undermined our mental health and unraveled democracy. Millions of people have confused these illusions with reality.
And then I read this depressing story about how social media and the TikTok algorithm are changing what we eat.
Whoever Controls Language Models Controls Politics
Here Hannes Bajohr talks about how the companies that create large language models (LLMs) embed their ideologies into the systems they create, which are then frozen and under their control.
What we are faced with, then, is a new oligopoly that concentrates language technologies in the hands of a few private companies. … the future of political opinion-forming and deliberation will be decided in LLMs.
Why NFTs and A.I. Image Generators Are Really Just ‘Onboarding Tools’ for Tech Conglomerates
I saw Hito Steyerl give a lecture about this several weeks ago, and really appreciated her artist-first perspective. She calls the images and text generated by modern AI systems “statistical renderings.”
...these renderings do not relate to reality. They relate to the totality of crap online. So that’s basically their field of reference, right? Just scrape everything online and that’s your new reality.
In 2021, we had NFTs. In 2022, we have statistical renderings. [These companies] onboard people into new technological environments; with NFTs, people learned how to use crypto wallets, ledgers, and metamasks, and learn all this jargon. With the renderings, we have basically the same phenomenon.
They are onboarding tools ... Companies try to establish some kind of quasi-monopoly over these services and try to draft people to basically buy into their services or become dependent on them
How Mark Zuckerberg Led the Tech Industry Into a Metaverse Wasteland
This New Yorker article, while primarily focused on the Metaverse, is a good reminder of the tech industry’s obsession with the new and, sadly, people’s willingness to believe the hype. AI is simply the latest technology promising to bring them more money and power.
Consider what crypto looked like from the very top: not just a potentially promising area for investment, a modest but meaningful grassroots phenomenon among users, or an engine for wealth, but also the crude fantasy of total regulatory freedom, a path to a stateless, tech-centric world. AI, too, represents, among other things, a profound tech-exec fantasy: an endless supply of cheap and obedient labor and a chance to take ownership of the means, of, well, everything.
AI Can’t Benefit All of Humanity
Another great post from Alberto Romero on how, even if there are positive benefits from AI, it will serve to further the gap between the rich and the poor.
With time, newer generations reap the rewards of the technological seeds their ancestors sowed, which elevates everyone’s quality of life to some degree. However, it’s not through an equal process: the rich improve their quality of life more than the poor. There’s a tech-driven (and AI-driven) well-being gap that increases, not decreases, with technological progress.
What we have is that first, technology disrupts the poor more, and then, even if it eventually benefits everyone, it benefits the rich more.
And for a real example of this, read this story about how OpenAI uses Kenyan workers to flag toxic content, which causes serious psychological harm to those workers.
The end of creativity
The Culture Creating AI is Weird. Here’s Why That Matters.
This may be the smartest podcast I’ve ever listened to. It’s hard to get a good and simple quote, but Erik Davis talks about how California’s counter-culture has been absorbed by the tech community and how AI is a kind of evangelical or absolutist religion.
So the exuberant embrace of the idea of uploading ourselves into computers … And so what you can imagine happening is rather than having it be some kind of spiritual transcendence is that it gets mutated into a technological possibility on the forward timeline. So rather than having it be something that I can transcend now through various esoteric practices, instead it’s something that’s adhering in the technological development is going to produce a moment in the future of something like transcendence. … We’re at this one inflection point of evolution, and we either jump on it or we don’t.
But I’m including the podcast in the creativity section because of this idea (which connects a lot to Hito Steyerl’s thoughts).
Far from swerving away from a norm, these systems make the future by conservatively iterating the past. Even the apparent creativity of large language models relies on the novel shuffling of a gargantuan deck of cards that already exists. … instead of being an opening to something totally different, completely unpredictable, it’s actually a narrowing to the completely predictable, literally built on prediction engines.
Striking movie and TV writers worry that they will be replaced by AI
I didn’t realize that AI was a serious topic in the ongoing writers strike.
No Writer Is Safe From AI
ChatGPT “is ‘good enough’ for most writers and editors”
All Consuming: AI
A podcast from photographer Noah Kalina, talking about how AI will impact photography.
Art and artificial intelligence
I was interviewed on this NPR 1A radio program.
The reality may be less interesting
Your job is (probably) safe from artificial intelligence
I love pieces like this in The Economist. It looks at previous technology innovations and their impact on global finance and suggests that significant impact, especially short-term, is exceedingly rare.
What does this [AI] mean for the economy? Many have grand expectations. … [one study] suggests that “widespread AI adoption could eventually drive a 7% or almost $7trn increase in annual global GDP over a ten-year period.” … A few economists, only half-jokingly, hold out the possibility of global incomes becoming infinite.
[on the other hand] financial markets, the researchers conclude, “are not expecting a high probability of…AI-induced growth acceleration…on at least a 30-to-50-year time horizon.”
There’s so much more
I realize now that this collection of links barely scrapes the surface of the themes swirling around AI these days. And this post can never be complete. So I need to wrap it up.
I think I’ll take a step back and share one of my favorite quotes. It’s from Walter Benjamin’s 1940 essay “Theses on the Philosophy of History.”
A Klee painting named Angelus Novus shows an angel looking as though he is about to move away from something he is fixedly contemplating. His eyes are staring, his mouth is open, his wings are spread. This is how one pictures the angel of history. His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress.