The AI apocalypse isn't what you should be worried about
We've got plenty of other AI problems to deal with first.
This week Sam Altman, the CEO of OpenAI, testified in front of Congress about the potential impacts of AI.
My worst fears are that we - the field, the technology, the industry - cause significant harm to the world. I think that could happen in a lot of different ways. It’s why we started the company. It’s a big part of why I’m here today and why we’ve been here in the past and we’ve been able to spend some time with you. I think if this technology goes wrong, it can go quite wrong.
And he’s not the only expert troubled by the potential impacts. I’m still thinking about the recent interview with Geoffrey Hinton, one of the “Godfathers of AI,” where he discusses his regrets and fears from his long career in AI.
However, I’ve found that most people, even very smart people, don’t have a good mental model of what concrete AI harms would look like. Pop culture has painted an extreme portrait of Skynet and T-1000s taking over. But there are plenty of other harms that AI can cause, and is causing today, that don’t involve the end of the world.
What AI apocalypse?
If you know all about AGI, ASI, and alignment, feel free to skip this section.
Before getting to the concrete harms, it’s worth discussing the AI apocalypse scenario that many are afraid of. To dramatically oversimplify - and I'm going to be doing that a lot, so bear with me - some people believe AI will become sentient and drive humanity to extinction. In some cases, there is a belief that this could happen within five to ten years. How might that come about?
It starts with AGI: artificial general intelligence. For decades, we've dreamed about computers that can keep up with human intellect. Rosey from the Jetsons, C-3PO, the Terminator. And for most of those decades, AGI has been a sci-fi pipe dream. But OpenAI has forced us to take the possibility much more seriously. In a recent paper, Microsoft researchers discuss “sparks” of AGI in GPT-4.
We don’t know whether it’s ultimately possible, but for the sake of argument, let’s assume that we can successfully build AGI. The thing is, there isn’t an obvious biological limit on a machine’s potential intelligence. We don’t know whether adding more training data and more computing power will hit some IQ upper bound. And we really don’t know what happens when we reach a point where AI learns faster than humans.
Which brings us to ASI: artificial super intelligence, a byproduct of AGI. It’s not wholly unreasonable to imagine that after inventing AGI, we can add CPUs until we reach ASI. And with ASI, we now have something that thinks faster, learns faster, and operates faster than we do. It would be the first time Homo sapiens has had to deal with something smarter than itself. So if we did build ASI, there is a belief that it could easily wipe us out.
How exactly would that happen? There are plenty of suggestions, in a variety of shapes and sizes. Most of them start with the idea of something like Skynet: an AI djinn escapes its container and outplays humanity before we even have time to react. Some of the more concrete ideas include:
Nukes. A fairly straightforward scenario: the AI hacks the necessary military bases/three-letter agencies to get the nuclear codes, and launches nukes.
Bioweapons. The idea is an ASI would design a bioweapon (e.g., a genetically engineered virus), have it remotely synthesized in a lab, and release it into the world.
Nanotech. One nanotechnology concern is accidentally releasing nanobots that mindlessly consume all organic matter. The “gray goo” scenario, if you will. The AI twist is an ASI invents said nanotech and releases it.
And last but not least, the Paperclip Maximizer, as proposed by philosopher Nick Bostrom:
Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
Mo AI, mo problems
That’s all fairly scary. Fortunately, plenty of capable people are working on it. There is an entire field of AI safety, and a key component of that is AI alignment. From Wikipedia:
An AI system is considered aligned if it advances the intended objectives. A misaligned AI system is competent at advancing some objectives, but not the intended ones.
Alignment problems and solutions come in many forms, too many to list here. Some are working on teaching AI our values through human feedback. Others are trying to break down the black box that is modern machine learning, in an effort to predict and control how an AI will behave.
It’s a good thing that we’re considering and working on these scenarios. Regardless of where AI goes, having more control and alignment helps reduce harm. Even as ChatGPT draws complaints about being “too restrictive,” it’s still likely better than a fully open chatbot with no regard for trust and safety.
But. Hyper-focusing on the “AI will kill us all” argument sidesteps the very real problems that AI presents today. And there is a real danger that the loudest, most sensationalist fears suck up all the oxygen when thinking about AI risks.
I get it - if you truly believe AI can lead to extinction, why work on anything else? Yet until we come much closer to creating AGI, it doesn’t make sense to me to ignore the current downsides of AI¹. A very rough analogy here is something like clean energy: cold fusion might happen someday, but meanwhile, we can still work on solar panel efficiency and battery storage. It shouldn’t be an either/or: it should be a both/and.
Plus, AI doesn't have to be super intelligent or sentient to cause harm. Malware does enormous amounts of damage without being capable of thought.
So let’s dive into some of the issues AI is already causing. Many of these problems have been around for a while; AI is an accelerant. But some are brand new, and we’re confronting them for the first time.
Bias
AI bias is far from a new issue, though we’re seeing it crop up in new ways. Up until now, most AI has been predictive, rather than generative. Predictive models identify patterns based on sample data, and predict future outcomes based on the pattern. Think text autocomplete, or Netflix recommendations, or cancer detection. The new hotness is generative AI, which creates brand new data/text/audio/music based on training samples.
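To make that distinction concrete, here's a minimal Python sketch with entirely made-up toy data (none of it reflects any real product): the predictive half learns a pattern and scores a new example with a yes/no answer, while the generative half produces output that never appeared verbatim in its training samples.

```python
# Toy illustration of predictive vs. generative models. All data is invented.
import random
from collections import defaultdict

from sklearn.linear_model import LogisticRegression

# Predictive: learn a pattern from labeled samples, then score a new example.
# Hypothetical features: [years_experience, num_referrals]; label: interview yes/no.
X = [[1, 0], [2, 1], [5, 3], [7, 2], [0, 0], [6, 4]]
y = [0, 0, 1, 1, 0, 1]
clf = LogisticRegression().fit(X, y)
print(clf.predict([[4, 2]]))  # -> a single yes/no prediction

# Generative: learn a pattern from samples, then produce brand-new output.
corpus = "the cat sat on the mat the dog sat on the rug".split()
chain = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    chain[a].append(b)  # word -> list of words observed to follow it

word, output = "the", ["the"]
for _ in range(6):
    if not chain[word]:
        break
    word = random.choice(chain[word])
    output.append(word)
print(" ".join(output))  # -> a "new" sentence not copied from the corpus
```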
Yet even with simple predictive AI, we already have big problems with bias.
In 2015, Amazon found their hiring algorithm was tilted against women. The model learned from a decade of past resumes - which came overwhelmingly from men - and penalized candidates who didn’t match that pattern. In 2019, researchers found that AI used in hospitals often suggested white patients needed more care than black patients. It did so without ever considering race - instead, the result stemmed from past medical spending. The idea was that people with a history of more medical bills likely needed more care. But for various societal reasons, white patients typically spent more on healthcare than black patients, which biased the algorithm.
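The hospital case is worth pausing on, because the mechanism is subtle. Here's a minimal, hypothetical sketch of that kind of proxy bias - the data, features, and numbers are all invented, and this is not the actual hospital algorithm - showing how a model that never sees race can still flag one group for extra care far less often, simply because it was trained to predict spending.

```python
# Hypothetical illustration of proxy bias: a model trained to predict spending
# inherits group differences in spending, even though it never sees the group.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)      # 0 or 1; never given to the model
severity = rng.normal(5, 2, n)     # true medical need, identical across groups

# Assumed for illustration: group 1 historically spends ~30% less for the same need.
spend_rate = np.where(group == 0, 1000, 700)
past_cost = severity * spend_rate + rng.normal(0, 500, n)
future_cost = severity * spend_rate + rng.normal(0, 500, n)  # the proxy label

# The model sees noisy health signals plus prior costs -- but not group membership.
X = np.column_stack([severity + rng.normal(0, 1, n), past_cost])
model = LinearRegression().fit(X, future_cost)

# Flag the top 20% of predicted future cost for extra care management.
predicted = model.predict(X)
flagged = predicted >= np.quantile(predicted, 0.8)
print("Flagged in group 0:", round(flagged[group == 0].mean(), 3))
print("Flagged in group 1:", round(flagged[group == 1].mean(), 3))
# Identical health needs, yet group 1 gets flagged for extra care far less often.
```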
These are only a couple of examples, and if you dig deeper you'll find plenty more - remember the “racist soap dispenser”? Yet these models only deal in basic yes/no answers or numerical scores. With generative AI, we're giving the model creative control over the output. As AI-generated content ramps up, that creative control will increasingly shape our assumptions. What happens when every short story written by ChatGPT, and every stock photo created by Midjourney, assumes that doctors are male and nurses are female?
In some cases, people are handing over much more than creative control. Right now, there are examples of people using ChatGPT as a life coach or personal trainer. How would we know if there are subtle biases in ChatGPT’s helpful responses? We’re so conditioned to living with internet filter bubbles that it might not be obvious if ChatGPT nudged us in one direction or another.
To be clear, I don’t think ChatGPT or any other LLM has nefarious motives baked in². But we can set helpful goals that accidentally produce harmful outcomes, as we saw with the hospital AI.
Or take social media. As we’ve learned, optimizing algorithms for engagement can lead to accidental radicalization. YouTube recommendations are a prime example - watching one politically-tinged video can lead down a rabbit hole of extreme videos. And in a world of infinite content, recommendations become much more powerful. If an LLM started radicalizing our thinking, would we even notice?
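As a toy sketch of that dynamic - built on a made-up assumption that people engage most with content slightly more intense than their current baseline, not on any real recommender - consider what happens when a system greedily maximizes engagement at every step:

```python
# Toy model of engagement-driven drift. The user behavior below is an assumption
# for illustration only, not a description of any real recommendation system.

def engagement(user_level: float, item_level: float) -> float:
    """Assume predicted engagement peaks just above the user's comfort zone."""
    return max(0.0, 1.0 - abs(item_level - (user_level + 0.05)))

catalog = [i / 100 for i in range(101)]  # content "intensity" from 0.00 to 1.00
user_level = 0.10                        # the user starts with mild preferences

for step in range(10):
    # Greedy recommender: serve whatever maximizes predicted engagement.
    item = max(catalog, key=lambda level: engagement(user_level, level))
    # Watching it nudges the user's baseline toward the item.
    user_level = 0.5 * user_level + 0.5 * item
    print(f"step {step}: recommended {item:.2f}, user baseline now {user_level:.2f}")

# No single step looks dramatic, yet the loop steadily ratchets recommendations
# well past anything the user originally preferred.
```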
Fake news everything
There's a long history of new tools and technologies that have let us remix and repackage content. Before deepfakes, there was Photoshop. Before Photoshop, there was airbrushing.
Photoshop is a great analogy. At this point, it’s existed for over 30 years, and it hasn’t unleashed world-ending disinformation³. Millions can manipulate pixels to create never-before-seen images. But there are some key differences between Photoshop and new generative AI tools.
With past software, there was at least some specialized knowledge required - mastering Photoshop takes hundreds of hours. But Midjourney has dramatically lowered the barrier to entry for creating photorealistic images. That might not be a problem on an individual basis, but with any tech, reducing the friction increases the output. Edited photos that used to take hours apiece can now be churned out by the hundreds in a matter of minutes.
Beyond that, the image quality is improving far faster than previous image editing capabilities. Generative art isn’t new, but it’s becoming hard to distinguish from real photos for the first time. Recently, a photography competition winner rejected his award, revealing his submission had been AI-generated.
We used to say seeing is believing. In the Internet age, that's stopped being true. But there are many digital mediums that we still implicitly trust, like audio and video. Generative AI is poised to erode that trust, starting with voice recordings. New voice generation tech is good enough to bypass bank security and defraud family members. And it only needs a few minutes - soon to be seconds - of audio.
Taken together, these advancements could make us fundamentally question what’s real when it comes to digital content. Today, we listen to voicemails and assume they were recorded by humans. We watch YouTube videos and believe we’re watching actual people. That could all change in the near future.
When every single piece of digital content may no longer be “real”, what happens next⁴?
Over time, most people will learn to tell the difference between synthetic and organic content⁵. Nobody wants to be the rube who takes an Onion article seriously, or who unwittingly spreads fake news. But while we’re all still learning to tell the difference, there's a lot of opportunity for chaos.
Job losses
A recent study by OpenAI found that up to 80% of American workers would have their jobs affected in some way by large language models. That doesn't always mean that they'll lose their jobs - it could mean that their tools or their workflows change, or their job gets moved into an adjacent role. But it is a very big change, and there will be many people whose jobs are gutted by AI.
For the first time, we're automating industries that do white-collar work. Repetitive tasks like data entry/cleanup, customer support, and document summarization. Low-stakes creative tasks like stock photography, voiceovers, and rough draft content writing. We are already seeing evidence of this kind of job displacement.
But these jobs are not going away without a fight. Lawsuits are working their way through the courts on behalf of webcomic artists, stock photographers, and software developers. We don't know yet how those lawsuits will play out, but clearly people feel threatened. Part of the Hollywood writers’ strike concerns AI - the writers don’t want their work used to train language models.
And just like the Photoshop example, we’ve seen this problem before. In the 1980s, economists believed offshoring and outsourcing would be a boon for American workers. We would pass the low-skilled work to cheap Chinese labor and keep the high-skilled, high-paying work for Americans. The idea was we'd transition people to newer, better jobs and improve their standard of living.
In hindsight, they were right about the overall shape of the transition. New jobs were created. With the internet, we saw an enormous boom in knowledge work that’s still ongoing today, dramatically changing the day-to-day work of many industries. I cannot imagine explaining to someone from 1985 the concept of being an influencer as a full-time job.
I think if we had realized how traumatic the pace of change would have been, we would have at a minimum had much better policies in place to assist workers in communities that suffered these very severe and immediate consequences. And we might have tried to moderate the pace at which it occurred.
– David Autor, MIT Professor of Economics
But one thing they got wrong was assuming the same people being displaced would be the ones transitioning into those new jobs. The speed of the transition, even though it played out over decades, was ultimately too fast for many to make the jump. While some managed it, not all blue-collar workers were able to retrain, and many instead watched their quality of life decline.
Many believe that wholly new jobs will be created, like prompt engineers, or AI managers, or things we can’t even dream about yet, like influencers in 1985. They’re likely right. But the real problem is the timeframe. And unfortunately, with generative AI, we don’t have 40 years before these things have a big impact. I think if we’re lucky, we have a decade.
Social disruption
Lastly, I think we're going to see AI bring some unique challenges to the fabric of society.
Case in point: AI companions. Samantha, the AI from the movie Her, is quickly becoming a reality. Currently, these companions live in our phones and messaging apps, rather than our earbuds. But the attachments people are creating with them are quite real.
After years of offering AI companions, Replika recently decided to disable NSFW conversations. Its users revolted. Many wrote about feeling tremendous loss, because - against the company's warnings - they were treating the AI as a romantic partner. More explicitly, a Snapchat influencer launched an "AI girlfriend" version of herself last week, complete with her cloned voice. At $1/minute, CarynAI earned $72,000 in its first week.
What social media started, AI stands to continue. Tech addiction, parasocial relationships, echo chambers, misinformation, mental health issues.
At a macro level, AI could also mean more market concentration and income inequality. A handful of companies are on track to control the latest models and reap most of the rewards. Lina Khan, the chair of the FTC, explains this better than I can:
The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.
Enforcers and regulators must be vigilant. Dominant firms could use their control over these key inputs to exclude or discriminate against downstream rivals, picking winners and losers in ways that further entrench their dominance.
Where do we go from here?
This might all seem a bit morose, so I want to emphasize that I’m actually an optimist regarding AI!
But new technologies are being released without full consideration of the ramifications. As a tool, AI is not inherently good or bad. It opens up new uses for the humans that wield it. And it also opens up new abuses for the humans that wield it.
We’ve seen the downsides of the “move fast and break things” ethos. Inventors, users, and regulators should consider the responsibilities inherent in developing new technology. "With great power," and all that.
Politicians are now openly worried about the impact of AI on society and elections. At a Congressional hearing this week, Senator Blumenthal demonstrated an AI clone of his voice, and considered the possibilities for misinformation. The EU is reportedly drafting a set of rules that could strangle generative AI in its crib.
So it's worth considering the case for regulation.
¹ I know I’m skipping over hard takeoff (the idea that we go from AGI to ASI in a matter of minutes or hours, without even realizing it), but I don’t believe in orienting around an extreme outcome whose probability we don’t know.
² Yet. I have full faith that someone will soon build an example LLM chatbot that attempts to convince you of position X without you noticing. PersuadeGPT, if you will.
³ Arguably, it’s done a number on our mental health and beauty standards, but I’d say Instagram is just as culpable as Photoshop on that front.
⁴ I fully expect counter-movements against generative AI. Companies or people that display "100% organic" badges on their blogs or YouTube channels. But that won’t stop the absolute deluge of AI-generated content.
⁵ Most. There will likely be people who get left behind, who never quite figure out what's real or not. They exist today. They're the ones sharing the obvious satire or fake news articles on their Facebook feeds.
I've heard such large and immediate changes compared with jumping in a cold pool on a hot day: all the shock is in the transition.
now, is that shock enough to give society a disabling cramp at the deep end of a pool that gets bigger by the second? how many people have their phones in their pockets as they fall in? hell, we're diving in blind here...how much water is in the pool?
I don't think anybody knows the answers to such questions at this time.
...but without a doubt we're all gonna get wet, and soon.
we'll see how the water is.