Dismiss the AI hype at your own risk
Four common arguments against the AI hype.
Lately, there are two types of people talking about AI on social media. The first is makers. People building products, discussing research papers, or testing prompts.
The second is, for lack of a better term, influencers. They're breathlessly shouting about how AI will disrupt EVERYTHING, or how you're missing out on these 7 LIFE-CHANGING AI apps, or how this 16-year-old is making $20K PER WEEK with AI tools and you're falling behind[1].
[1]: If you’re not subscribed to Artificial Ignorance, you’re falling behind in 2023.
Looking at the news might somehow be even less helpful. Most mainstream news articles only focus on viral AI moments, for better or worse. Usually it's for worse - Bing's freakouts, fake Trump arrest photos, or lawsuit upon copyright lawsuit. There's little nuance about how the technology works or its tangible benefits.
The problem with this type of coverage is that it generates an enormous amount of noise. Most people aren't paying attention to this stuff. That might be hard to grasp if you're in tech or terminally online, but I've got plenty of family members who have never heard of ChatGPT. I don't blame them - there's more content to consume than ever, and we're all busy people.
But ignoring or dismissing AI is doing yourself a disservice. Because there's an incredible amount of valuable, impactful technology in the works right now. So let's dig into some common dismissals I see regarding generative AI.
How is this different from the crypto/metaverse/<tech trend> nonsense?
I consider myself a veteran tech trend surfer. Over the last decade and a half, I’ve watched plenty of waves go by. Drones, blockchain, 3D printing, VR/AR, web3 (aka blockchain 2). I've worked with many of them. I bought a 3D printer and tried printing a few different designs. I finished exactly one Beat Saber song on the max difficulty. I built Ethereum mining rigs, going so far as to tweak the BIOS and firmware settings to improve performance.
After spending hundreds of hours with crypto protocols and platforms, my biggest takeaway is that generative AI has far more practical use cases today than Bitcoins and blockchains have had in their long history.
I get the skepticism, though. With crypto, in particular, the shape of the hype looks very similar. I'm old enough to remember when blockchains were going to reshape the world's finances, and web3 was going to reinvent the internet. It's easy to look at last year's crypto collapse and assume the grifters moved onto AI. Frankly, I'm sure some of them did; I've seen too many people selling "500+ amazing ChatGPT productivity prompts" to believe otherwise[2].
But focusing on the loudest voices is somewhat missing the point. Amara's Law says, "we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run." The fact that Stable Diffusion has existed for less than a year, and GPT-3 for barely three years, speaks volumes about where the tech is headed in the long run. Consider this: Bitcoin has existed for over 14 years.
Maybe it’s different from <tech trend>, but it’s still a bubble.
Most technology goes through a hype cycle, and AI is no different. So it's worth considering the question: are we in an AI bubble?
There are good arguments as to why we are. VC funding for generative AI grew by 10x between 2018 and 2021. The most recent YC batch had over 40% of companies related to AI or ML. BuzzFeed saw a 350% stock jump after announcing plans for AI-written quizzes.
I believe we are in a bubble for a certain type of AI product. There are tons and tons of apps and tools that are very thin wrappers around someone else's model. "ChatGPT for X" projects that boil down to a custom ChatGPT prompt. Image tools that act as fronts for Stable Diffusion and ControlNet. Cherry-picked demos are being hailed as the next big thing in AI.
Expect most of these projects to die off in the next six to twelve months. They're not defensible, and most don't make sense as standalone products. As people become more effective with prompts, or as OpenAI adds plugin functionality, most of these apps will no longer have a moat. After all, the winners in the AI arms race are most likely going to be incumbent platforms.
But with most tech bubbles, there are seeds of real innovation amidst the frothiness. Right now, we're still discovering the limits of products like GPT-4 and Midjourney. Even their creators don’t fully know what they’re capable of, and these models are, if anything, underhyped. We'll discuss why in a moment.
Sure, but I’ve tried ChatGPT - it can’t do that much.
One of the strangest critiques of generative AI is that it isn't actually that impressive. I see this sentiment a lot with programmers, especially on Hacker News. Maybe they've tried ChatGPT a few times or seen some AI-generated artwork. Yet they're dismissive of whether these products are truly innovative, or whether they'll have real utility.
To me, this represents a lack of imagination. When using ChatGPT for the first time, a lot of folks will attempt to quiz it on some obscure fact. They'll ask, "How tall is the Eiffel Tower?" or "What are the lyrics to Hey Jude?" And sometimes ChatGPT will get it right, sometimes it won't. But asking a basic search-engine-type question misses much of the tool's value.
"This is cool, but I've used chatbots before," they say. After a few examples, I can get them to see how powerful ChatGPT is. But why do they have that initial reaction? My gut says it's because they're expecting ChatGPT to repeat basic facts and figures. They aren't expecting ChatGPT to think, to reason, to strategize. And so they don't look beneath the surface.
To be fair, new tools often need to teach users how to be successful, even if they're very advanced. Google Wave was years ahead of its time but died because nobody knew what to use it for. AI companies can do a better job of providing real-world examples of how to get the most out of their models.
And we need to be realistic about AI’s current capabilities. By one estimate, ChatGPT hallucinates around 15-20% of the time, which is…not great. If you had a personal assistant that lied to you 1 in 5 times, you’d fire that person within a week[3].
Still, it's bonkers to me when I see programmers saying that GitHub Copilot isn't worth $10/month. The average software developer in California earns north of $70/hour. At that rate, if Copilot saves you 9 minutes a month, it has paid for itself. How that is not useful is beyond me.
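The back-of-the-envelope math here fits in a few lines. The $10/month and $70/hour figures are taken from the paragraph above; the break-even point is just their ratio, converted to minutes:

```python
# Back-of-the-envelope check: how many minutes of saved time per month
# does Copilot need to provide before it pays for itself?
# Figures are the ones quoted above: $10/month subscription, $70/hour rate.
COPILOT_MONTHLY_COST = 10.0  # dollars per month
HOURLY_RATE = 70.0           # dollars per hour

# Hours of developer time worth $10, expressed in minutes.
break_even_minutes = COPILOT_MONTHLY_COST / HOURLY_RATE * 60

print(f"Break-even: {break_even_minutes:.1f} minutes/month")  # ~8.6 minutes
```

So the "9 minutes a month" figure is actually slightly conservative: anything past roughly eight and a half minutes of saved time covers the subscription.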
ChatGPT is sometimes valuable, but only in narrow situations.
This is the final major pushback I see. And much like the last one, it tends to betray a lack of curiosity. It's easy, if you're in tech, to forget about all the non-technical companies and employees that power the world. And so it's easy to miss the myriad use cases that are helping people today.
E-commerce business owners are using AI to pull insights from their customer reviews. Government contractors are using ChatGPT to draft RFPs, saving hundreds of hours. Marketers are translating their blog content into multiple languages in an instant. Programmers are building prototypes that used to take weeks in a single weekend. I personally know people doing all of these things.
Then there are the platforms. Shopify now lets millions of merchants auto-write product descriptions. Intercom helps support agents do their work faster by summarizing tickets. Microsoft is bringing Copilot to Teams, Office, and Outlook (and if it's anything like Bing Chat, it's going to be impressive). ChatGPT's app store/plugins, while very experimental, have a lot of obvious potential.
Plus, we've barely scratched the surface of what's possible with general-purpose models, let alone fine-tuned versions. GitHub Copilot is an excellent example of what fine-tuning can achieve. We're starting to see demos like BloombergGPT, a language model designed for financial analysis. Or Harvey, an AI to help automate legal work.
And look, AI is not going to get rid of lawyers any time soon. Or software developers. Or copywriters. For better or worse, the world is still made up of people. And those people are the ones who make decisions and take action.
But as I said before, I believe our current capabilities are underappreciated. Even if we stopped developing more advanced models now, it would take years to take full advantage of the tech we already have.
So are the influencers right?
A few weeks ago, my mom came to visit. She’s in her sixties but has a pretty good grasp of technology - she used to be a DBA. I asked her, out of curiosity, if she had seen ChatGPT. "Oh sure," she said, "but you know, I've used chatbots before, on websites. They're not very good." I asked if she would let me spend twenty minutes showing her what the latest technology was capable of.
When I finished, she had one question: "Are people afraid of this?"
Some are. Many more will be. And that’s totally understandable - people, by and large, don’t like change. A recent study from OpenAI estimated 80% of workers could have their work affected by LLMs[4]. That doesn't mean 80% of people will lose their jobs; it could mean that their tools will change or the nature of their job will evolve. But I can't say that 80% of people I know expect their job to change in any significant way due to AI. I don't know whether 80% of my friends and family have even tried ChatGPT.
Yet I must say, as calmly as I can, that the world isn't ready for the changes that are coming[5]. Remember Amara's Law: headlines may die down in the short term, but generative AI will still be working its way into thousands of products and millions of businesses. The Pandora’s box of generative AI has been opened - whether you were paying attention or not.
“Your newsletter’s not bad” – my mom
Just typing that gives me anxiety.
[2]: As an aside, ChatGPT is barely 4 months old. Be skeptical of anyone selling a masterclass on it right now.
[3]: Though as I like to say, this is the worst the technology will ever be. We may never get rid of hallucinations completely, but we can absolutely improve on them with time and effort.
[4]: The exact wording is “80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted.”
[5]: The operative word here is calmly. As bad as the influencer tweets are, the AI-doomer tweets are even worse. Shouting “AI will kill us all in 5 years,” even if you truly believe it, is not particularly effective. Imagine telling everyone to wear masks and work remotely because of an impending virus - in September of 2019. You would have been completely correct, and would not have convinced a single person.