Preparedness
OpenAI formed Preparedness, a new team to assess, evaluate, and probe AI models to protect against “catastrophic risks.”
Between the lines:
The company also launched a Preparedness Challenge, where people can submit novel ideas for catastrophic misuse and win up to $25,000 in API credits.
CEO Sam Altman has repeatedly talked about his fears of rogue AI, and now appears to be putting real resources towards avoiding these "extinction risk" scenarios.
These moves align OpenAI more closely with AI's doomsayers, including a new paper/framework suggesting that one-third of companies' R&D budgets should go to risk management.
Elsewhere in AI anxiety:
The Internet Watch Foundation warns that generative AI is being used to create new types of CSAM, finding thousands of AI-made images that break UK law.
An analysis of 1,800 datasets used to train AI finds that 70% either omitted a license or had been mislabeled with more permissive terms than their creators originally intended.
And researchers at UChicago developed Nightshade and Glaze, two tools that help artists "poison" their work in case it's scraped to train AI models.
The AI Safety Institute
As expected, UK Prime Minister Rishi Sunak announced plans for an AI research network modeled on the IPCC, as well as a new AI Safety Institute.
What to watch:
The institute will "evaluate and test new types of AI" and selectively publish its results - though it plans to withhold the sensitive findings for national security reasons.
These announcements come ahead of next month's UK AI Summit, which hopes to bring together leaders from the US, EU, and China.
But the summit is losing some of its luster as the leaders of Germany and Canada join President Biden in skipping the event.
Elsewhere in institutional AI:
President Biden is expected to sign an executive order next week that will require assessments of AI models before federal workers can use them.
The Frontier Model Forum, which plans to commit $10 million to an AI safety fund, will be run by Chris Meserole of the Brookings Institution.
And while the EU has been trying to pass its AI Act by the end of 2023, continued disagreements are making that deadline unlikely.
People are worried about Apple's AI
When you think of tech companies leading the AI frenzy, Apple isn't one that comes to mind. And a new report from Bloomberg details how the trillion-dollar company is trying to catch up.
Why it matters:
Of course, it might not matter. Apple has always been very intentional about its technology and doesn't chase trends.
However, other sources report anxiety that Apple's internal AI teams can't deliver a great product - and that the company will be forced to ship their work anyway.
Ultimately, Apple is exceptionally well-positioned to capitalize on advancements in generative AI, given its ability to secure user data, build custom chips, and integrate AI assistants directly into iOS.
Elsewhere in the FAANG free-for-all:
Amazon launched new beta tools to let sellers create "lifestyle images" featuring their products, as well as conversational AI for kids via "Explore with Alexa."
Microsoft is opening up early access to Security Copilot, its AI assistant for infosec.
And Google Maps is getting an AI makeover, with immersive navigation and easier driving directions.
Things happen
General-purpose LLMs don't do well with medical questions. Workers training AI demand protections from Congress. Jina AI launches open-source 8k text embedding model. "I expect AI to be capable of superhuman persuasion." AI is re-energizing parts of SF. Shutterstock will let you change real photos using AI. "AI will never threaten humans." OpenAI in talks for an $80B valuation. Is my coworker an AI? Microsoft has over a million paying Github Copilot users. Riley Reid on AI: "I don't want porn to get left behind." Boston Dynamics turns its robot dog into a talking tour guide. Grammarly's new AI features will learn your writing style. Wall Street embraces AI despite risks of catastrophe. The music industry reckons with YouTube's AI.