I like to say that we're now living in a “post-ChatGPT era.” One year and three months ago, the “low-key research preview” from OpenAI sparked a massive AI boom. It wasn't solely responsible; we've talked about the larger trends leading up to that moment. But in the aftermath of that launch - even just a couple of months in - the AI space was already a firehose of news.
New products and demos were launched daily1. New foundation models were released almost weekly. At the time, researchers were elated and exhausted by how fast the space moved. As one researcher put it:
Working in this environment is extremely straining, for a plethora of reasons — burnout, ambition, noise, influencers, financial upside, ethical worries, and more.
And to be honest, it doesn't feel like it's slowed down that much since.
Some things are different: fewer demos and papers go viral, and people are a bit more skeptical of new GPT-wrapper startups. But between model improvements, open-source releases, and the ensuing backlash/regulation, I've been hard-pressed to find a week without major news stories. And I've been keeping track for over a year now.
So I want to acknowledge the reality of AI fatigue4. There’s still so much AI news that many people are tuning things out entirely. And if you aren’t, it's challenging to feel on top of everything. Even I find it hard - and I'm passionate enough about AI to publish two newsletters a week!
This isn't a new concept. I first created the About page of Artificial Ignorance because of that feeling of being overwhelmed.
But as I've delved deeper, I've found a few strategies to help manage the deluge of headlines and hype.
Curate relevant, insightful sources
The first step is to find relevant and insightful sources. Both parts are important.
Relevant means the content covers AI from an angle you know or care about. It could mean cutting-edge research papers or model deployment discussions. It might be analysis of how AI impacts organizations in education or finance. Or it could cover tools and prompts for specific tasks like logo design or copywriting. There's so much to know about AI, and most of it is going to be too technical, too basic, or too irrelevant for you to consume.
Insightful means the content teaches you something new, and ideally does so with nuance. Nuance in this field is underrated (and rare). That's true across the camps of AI enthusiasts - for every AI safety petition portending doom, there are thousands of e/acc memes embracing a digital utopia. Yet the best and smartest things I read about AI don't start with sweeping proclamations - they're more thoughtful than that. I get why nuance is scarce: between news headlines and viral hooks, everyone is pushed to be louder and more dramatic to draw attention. That makes curating sources that strike some balance all the more valuable.
It’s also worth considering a source’s point of view, and accounting for it2. I'm not suggesting you only read "fair and balanced" coverage or that you should agree with everything you read. It's incredibly difficult to write without implicit bias (in general, but especially about AI), but that's not a problem that needs to be solved. Instead, I'm proposing that as a reader, it's worth probing into the author's worldview (does he believe open-source AI is good? does she have a high p(doom)?) - not to prematurely judge or dismiss, but to help find nuance.
For what it's worth, here's my bias: I'm cautiously optimistic about AI. I believe it will be a transformative technology (at least as big as the smartphone), but I'm still figuring out exactly how big I think it will be. I believe technology isn't inherently good or bad - it's capable of both, and we need to be honest about both sides. I care about understanding the underlying technology, exploring its impacts on society, and thoughtfully explaining both to my readers.
Experiment when you can
Of course, consuming content is actually the easy part. It's much harder to find the time to try the endless stream of new AI tools and toys released every week. But I believe it's essential to do at least a little experimentation.
If you can, get comfortable with at least one LLM, like ChatGPT, Claude, or Bard. Ideally, use the most advanced version - even if it means spending $20/month. And if you're feeling ambitious, add a second medium, like Midjourney or ElevenLabs. Or use more workflow-specific tools that you come across. You don't have to pay: Microsoft offers GPT-4/DALL-E via Bing Chat, and the baseline Bard and ChatGPT are available for free.
But you want to play with one of the main LLMs regularly because of how unpredictable their capabilities are. Professor Ethan Mollick has called this “the jagged frontier”:
AI is weird. No one actually knows the full range of capabilities of the most advanced Large Language Models, like GPT-4. No one really knows the best ways to use them, or the conditions under which they fail. There is no instruction manual. On some tasks AI is immensely powerful, and on others it fails completely or subtly. And, unless you use AI a lot, you won’t know which is which.
For example: GPT-4 can write working code and solve text ciphers, but can't reliably count the number of words in a paragraph. And with every update, every new capability, the jagged frontier shifts. Which means we need more folks who understand how these things work, and can discern what products are actually useful versus AI snake oil.
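If you want to see the jagged frontier firsthand, it's easy to probe. Here's a minimal sketch - assuming OpenAI's Python SDK (v1+) and an API key in your environment; the paragraph and model name are just placeholders - that asks GPT-4 to count the words in a passage, then checks the answer with ordinary code:

```python
# A rough probe of the "jagged frontier": ask a model to count the words
# in a paragraph, then verify with a plain Python count.
# Assumes the official openai SDK (v1+) and an OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()

paragraph = (
    "No one actually knows the full range of capabilities of the most "
    "advanced Large Language Models, and no one really knows the best "
    "ways to use them."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; swap in whichever chat model you use
    messages=[{
        "role": "user",
        "content": (
            "How many words are in the following paragraph? "
            f"Reply with just a number.\n\n{paragraph}"
        ),
    }],
)

model_count = response.choices[0].message.content.strip()
actual_count = len(paragraph.split())  # naive whitespace word count

print(f"Model says: {model_count} / Python says: {actual_count}")
```

Run it a handful of times: the Python count never changes, while the model's answer often does. That gap, multiplied across thousands of tasks, is the frontier.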
Besides, if you can put the reps in now, you will be way ahead of most people (and if you're not in tech, most of your coworkers). It's easy to forget how early we still are. If you've played with ChatGPT at all in the past year, you're in the top 10% of AI early adopters. If you know there's a difference between GPT-3.5 and GPT-4, you're in the top 1%.
And as AI inevitably becomes more ubiquitous - as it makes its way into our phones, our apps, our smart devices, our workplaces, and our lives - having that experience now is (probably) going to pay big dividends in the near future.
Let go of the rest
Once you've done those first two things... stop. Stop doomscrolling Twitter. Stop looking for more AI news.
This is easier said than done. At least, it is for me. But I've had to confront the reality that it's impossible to know everything that's going on. If anything, I probably read more than 99.9% of people - and I still feel like I'm missing out.
But it's okay to not read every viral Tweet. To not sign up for every productivity tool. To not know the details of who just closed a new round of billions (or trillions) of funding. Twitter, and the news media more broadly, rely on an endless cycle of hype, snark, and outrage. The vast majority of the news is not going to affect you in an immediate way - so don't beat yourself up for not reading it.
I also take comfort in knowing that change will take time. Once again, I'm reminded of Amara's Law:
We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
I'm bullish on the prospect of our AI future, and I'm sure that it will (eventually) impact much of our lives. But I'm also pretty confident that five years from now, I'll still be paying for groceries. I'll still have to take my dog for a walk in the morning. And I'll still be spending time with my family3.
I like to remind myself that even while the digital world experiences profound change, the physical world will still exist. So if it ever feels like you can’t keep up - that’s okay. You’re busy. We’re all busy. Take a break, go outside, and come back to it when you’re ready.
1. The interesting thing, which is maybe worth writing more about, is that a lot of those products were built using GPT-3. ChatGPT itself didn't have API access for over three months after launch.
2. A lot of this stuff is media literacy 101, but it’s worth repeating.
3. I reserve about a 1% chance that I'm wrong about one or all of these things. However, if that turns out to be the case, our lives will have changed so much, so fast, that I'm not sure what I would have done instead.
I suppose I could move to the wilderness to escape AGI, or torch my 401K since money will mean nothing. But I don’t want to upend my life over a (perceived) black swan event - if I did, I probably wouldn't live on the San Andreas fault.
4. People are starting to study it as a legitimate phenomenon now: https://www.amazon.com/dp/B0D2BQV1DC