People are worried about AI
The New York Times has an in-depth profile of Geoffrey Hinton, one of the "Godfathers of AI." In it, Hinton explains his decision to leave Google after more than a decade, his fears about where AI is headed, and his regrets after a long career in the field.
Why it matters:
Many people approach the AI safety debate with their stance already decided. Hinton is a smart, respected expert who has changed his mind. It's worth understanding why.
It may take more experts and leaders publicly changing their minds to shape the future of AI safety. So far, the open letter calling for a pause on AI experiments has achieved little.
One of Hinton’s worries is AI upending the job market, and the tech industry is already seeing the first signs of that. Dropbox laid off 500 employees as part of a pivot to AI, and IBM estimates up to 7,800 jobs could be replaced by AI in the coming years.
Elsewhere in AI anxiety:
The Hollywood writers' strike, while primarily about pay, is also partly about AI. WGA (Writers Guild of America) members don't want their material used to train AI, and they don't want to be handed AI first drafts to clean up.
A WSJ journalist used AI to break into her own bank account. Now lawmakers have questions.
Last month, Samsung engineers accidentally leaked source code by uploading it to ChatGPT. This week, the company is banning all employee use of generative AI tools.
Politicians are worried about AI
It's not just us regular folk: US politicians took more steps toward regulating AI this week. The White House announced a plan to begin addressing AI's risks, and Vice President Harris met Thursday with the CEOs of Alphabet, Microsoft, OpenAI, and Anthropic.
Elsewhere in regulation:
FTC Chair Lina Khan says "we must regulate AI," as it could solidify companies' dominance¹ and turbocharge fraud.
Senators propose banning AI from singlehandedly launching nuclear weapons. Seems good.
Microsoft’s Chief Economist is "confident AI will be used by bad actors," but wants lawmakers to hold off on regulation until the technology causes "real harm."
Open-source strikes back
Google, Microsoft, and OpenAI are facing fresh competition from a wave of high-profile open-source releases: Stability AI's Stable Vicuna, Replit's Code LLM, and Hugging Face's StarCoder.
Between the lines:
Google employees seem to recognize the threat from open-source AI: "We have no moat, and neither does OpenAI," reads a leaked internal message. The message goes on to detail why Google is vulnerable and how it could go on the offensive.
Google has also changed its approach to sharing research. After the launch of ChatGPT, leaders decided to publish research only after it had been productized. The rise of open-source will likely cement this approach.
Microsoft, for its part, seems to be trying to get as much user adoption as possible before open-source catches up. It's upgrading Bing Chat and making it available to everyone.
Things happen
Chegg's stock falls nearly 50% because of ChatGPT. ChatGPT was basically my attorney. Steve Wozniak: "Get a Tesla if you want to learn about AI trying to kill you." AI-generated fever dream of a beer commercial. Mojo, a new programming language for AI. Scientists "passively decode thoughts" (aka read minds) with GPT models. AI-generated propaganda.
¹ I tend to agree here - I’ve previously written about who stands to win in the AI arms race.