The letter
The loudest news this week was the Future of Life Institute's letter calling for a 6-month pause on the training of models more advanced than GPT-4. Signed by several prominent tech leaders (and over 1,000 people total), the letter argues that modern AI systems pose significant risks to society.
Why it matters:
The debate around AI safety and alignment has mostly been theoretical until now. OpenAI is forcing us to take the possibility of AGI (artificial general intelligence) seriously.
Most people haven't grasped the full impact of GPT-4 yet - expect louder reactions as that changes.
Some are asking the FTC to investigate OpenAI. Others are calling for us to shut the whole thing down.
Yes, but:
The letter itself has some problems - some signatories have walked back their positions, and others have turned out to be fake. The most prominent signatory is Elon Musk, who one month ago reportedly proposed a less "woke" ChatGPT competitor, and who had a falling-out with OpenAI in 2018.
It’s unclear which signatories are motivated by AI safety, and which see an opportunity to catch up to OpenAI.
No matter your stance, it's unclear how effective Congress would be at understanding, let alone regulating, AI. Meanwhile, the UK government put out a white paper on its vision and approach to AI regulation.
The pope
In possibly the first viral Midjourney hoax, the world was fooled by an AI-generated image of the Pope in a white puffy jacket.
Why it matters:
We're all going to be climbing the learning curve of AI-generated images and text. Spotting small glitches, like the Pope's coffee cup, reveals the illusion, but that's a big ask for many people. Seeing is no longer believing.
Interestingly, not all AI-generated images trick people, even when they go viral. Trump's fake arrest photos didn't seem to fool anyone - it's worth considering why.
Midjourney has gotten a ton of attention, and not in a good way. The company is ending its free trial, citing the influx of new users.
FAANG free-for-all
Microsoft's strategy with Bing Chat is getting clearer. First, it reportedly threatened to cut off access to its search data for companies building AI chatbot competitors. Second, it confirmed that ads are coming to Bing Chat.
Why it matters:
Microsoft is serious about protecting its AI advantage. It's in the lead for now, but it may make some hard decisions as the competition heats up.
Nobody knows how to successfully monetize AI chatbots yet. Ads are an obvious choice, but it's unclear how effective (or lucrative) they'll be vs search engine ads.
GPTs will need to make money at some point. They're incredibly expensive to train and run at scale, and the ZIRP (zero-interest-rate) era is over.
Meanwhile, Google’s going all-out to catch up to Microsoft:
AI features are coming to Gmail and Google Workspace (vs Microsoft's Office 365 Copilot).
A partnership with Replit will use Google's models and infrastructure for AI-generated code (vs GitHub's Copilot).
Allegedly, Google trained a version of Bard on ChatGPT conversation data. Google, for its part, has denied the allegation.
Things happen
Lex Fridman's interview with Sam Altman. Goldman Sachs report: AI could replace the equivalent of 300M jobs. $335,000 Pay for ‘AI Whisperer’ Jobs. Replika AI restores erotic roleplay for some users. “FreedomGPT” praises Hitler, uses the N-word, and more. OpenAI invests in 1X, a human-like robot startup. AI Will Smith eating spaghetti will haunt you for the rest of your life.
Do AI-generated data and images leave a trail?