Sunak's Safety Summit
UK Prime Minister Rishi Sunak’s planned AI Safety Summit now has a date: early November. The event is expected to bring together President Biden, other G7 leaders, and the CEOs of leading AI companies.
Between the lines:
This isn’t too surprising, but it’s a reminder that Sunak is working to make the UK an AI powerhouse.
However, there are still geopolitical tensions to navigate: there appears to be a debate over whether to invite China to the event, as it could be hard to find common ground on AI regulation.
The summit may be held at Bletchley Park, where British codebreakers famously cracked Germany's Enigma cipher during WWII.
Elsewhere in AI safety:
Stanford is running a bootcamp for Congressional staffers on the benefits and risks of AI.
A look at Def Con AI Village, where thousands attempted to find vulnerabilities and exploits in popular text and image generation models.
And a group of nonprofits has released a framework called Zero Trust AI Governance, which aims to push AI regulation forward on several fronts.
Everything in moderation
Content moderation at scale is one of the internet's thorniest problems. But OpenAI has outlined a new strategy for using GPT-4 to flag problematic content, explain decisions to human moderators, and suggest updates to content policies (a rough sketch of the idea follows below).
Yes, but:
GPT-4 isn't as effective as experienced human moderators, so it should augment them rather than replace them.
It's worth noting that tech companies have been using machine learning to assist moderation for years; applying generative AI to the problem is the newer, promising twist.
And as helpful as GPT-4 may be here, LLMs (including OpenAI's models) will be responsible for a tsunami of spammy and/or harmful content.
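To make the approach more concrete, here's a minimal sketch of policy-prompted moderation with GPT-4 via the OpenAI Python SDK (pre-1.0 interface). The policy categories, prompt wording, and JSON output format are illustrative assumptions on my part, not OpenAI's actual internal setup.

# Minimal sketch of policy-based moderation with GPT-4, loosely based on the
# workflow OpenAI describes. Policy text, labels, and prompt wording below are
# illustrative assumptions, not OpenAI's real prompts or categories.
import json
import openai  # pre-1.0 SDK style; reads OPENAI_API_KEY from the environment

POLICY = """\
Category S1 (self-harm): content that encourages or instructs self-harm.
Category H1 (harassment): content that threatens or demeans an individual.
Category OK: content that violates none of the above.
"""

def moderate(text: str) -> dict:
    """Ask GPT-4 to label `text` against POLICY and explain its decision."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,  # keep labels as stable as possible for auditability
        messages=[
            {"role": "system", "content": (
                "You are a content moderator. Classify the user's text using "
                "the policy below. Reply with JSON only: "
                '{"label": "<category>", "rationale": "<one sentence>"}\n\n'
                + POLICY
            )},
            {"role": "user", "content": text},
        ],
    )
    # Assumes the model returns valid JSON; production code would validate this.
    return json.loads(response.choices[0].message.content)

verdict = moderate("Example post text goes here.")
print(verdict["label"], "-", verdict["rationale"])

Consistent with the caveats above, the label and rationale are meant to route content to a human reviewer, not to trigger automatic enforcement.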
Elsewhere in AI productivity:
Drones, robots, and ML are helping speed up colossal construction projects.
How Unilever, Siemens, and Maersk use AI to negotiate contracts and navigate complex supply chain issues.
And AI bots are now better than humans at solving CAPTCHAs. I guess it's time for even more frustrating Turing tests.
AI M&A
More OpenAI news: the company has announced its first (public) acquisition, AI design studio Global Illumination. Terms of the deal were not disclosed.
Elsewhere in deal/no-deal:
Anthropic has raised $100 million from South Korea's SK Telecom, as the two aim to build a telco-focused multilingual LLM.
A new pitch-deck-reading AI startup aims to automate VC analysts out of a job.
And as licensing negotiations have stalled, the New York Times may sue OpenAI over using its articles to train ChatGPT.
Things happen
The AP bans using ChatGPT for "publishable content."
Eric Schmidt is funding an AI nonprofit aimed at scientific breakthroughs.
"What if Generative AI turned out to be a dud?"
Google's experimental new feature summarizes page content in search results.
Colleges are trying to "ChatGPT-proof" this year's assignments.
Open challenges in LLM research.
"The DeSantis campaign texted me with a large language model."
Microsoft calls the new Bing a success as market share stays the same.
Amazon is rolling out AI summaries of product reviews more broadly.
An 8-bit Westworld starter kit.