Digital doldrums
This week, Anthropic cut off Windsurf's direct API access to its Claude models (and had declined to make Claude 4 available to begin with), citing Windsurf's rumored acquisition by OpenAI.
Between the lines:
The move appears strategic rather than technical - other major AI coding tools like Cursor and GitHub Copilot still have direct access to Claude 4.
And Anthropic's Chief Science Officer confirmed as much, saying that it would be “odd for us to be selling Claude to OpenAI.”
There's a growing tension between AI model providers and coding assistants, as companies like Anthropic develop agentic coding products like Claude Code, potentially competing with former customers.
Elsewhere in frontier models:
Google releases an upgraded preview of Gemini 2.5 Pro with leading performance in coding benchmarks and LMArena scores.
Anthropic unveils new Claude Gov models tailored specifically for US national security customers.
Manus debuts a text-to-video generation tool in early access for paid users.
Mistral releases Mistral Code, an AI coding assistant in private beta on JetBrains and VS Code platforms.
And a comparison of major AI models shows Claude 3.7 Sonnet winning overall with the most consistent answers and no hallucinations.
Elsewhere in the FAANG free-for-all:
X changes its developer agreement to prevent third parties from using its API or content to train AI models.
Amazon is creating an AI team within Lab126 to develop an agentic AI framework for its robotics operations.
Google NotebookLM now lets users share notebooks publicly.
Meta aims to help brands fully automate ad creation using AI by the end of 2026.
Apple is testing AI models ranging from 3B to 150B parameters ahead of WWDC.
Meta plans to use AI to automate up to 90% of its privacy and integrity risk assessments.
And Google releases AI Edge Gallery, an Android app that lets users run Hugging Face AI models locally on their phones.
If you can't beat 'em
In recent weeks, we've seen a shift in publishers' and record labels' stances as they seek to capitalize on the AI boom - despite earlier (and ongoing) litigation.
The big picture:
Last week, the New York Times licensed its content to Amazon for AI training; this week, major record labels are negotiating compensation frameworks with Udio and Suno.
Rather than fighting the inevitable rise of AI music generation, major labels appear to be positioning themselves as gatekeepers who can profit from the technology.
And this is a trend we're seeing across industries: despite union contracts and public concern, "every major studio" is quietly experimenting with AI tools to cut costs and accelerate production.
Ultimately, even as some creatives and academics refuse to use AI tools, those tools are becoming so widespread that opting out can put holdouts at a disadvantage.
Elsewhere in AI anxiety:
OpenAI disrupted 10 malicious operations in the past three months, four of which likely originated from China and targeted multiple countries.
AI-assisted "vibe hacking" using jailbroken mainstream AI models creates a cybersecurity arms race.
Reddit sued Anthropic for allegedly accessing the site more than 100,000 times after Anthropic claimed it had stopped data collection.
Researchers found that making AI chatbots more agreeable can lead them to reinforce harmful ideas like encouraging drug use.
AI's real threat to education isn't cheating but short-circuiting the learning processes needed to use generative AI effectively.
And researchers suggest DeepSeek used Google Gemini to help train its R1-0528 reasoning AI model.
Big Beautiful Bill
As the federal budget reconciliation bill moves from the House to the Senate, lawmakers and lobbyists are concerned about one of its provisions - a 10-year ban on state and local regulation of AI.
What to watch:
Anthropic has emerged as a major opponent of the provision, lobbying Congress against it and calling instead for federal transparency standards for AI companies.
But it's not alone - more than 260 state legislators from all 50 states signed a letter opposing the provision, arguing it would hinder their ability to protect residents from AI-related harms.
And while proponents argue that a patchwork of state rules could hurt US competitiveness against China, states have already been actively legislating on AI - from deepfake protection to algorithmic discrimination - filling gaps left by federal inaction.
Elsewhere in AI regulation:
The Trump administration plans to reorganize the Biden administration's US AI Safety Institute into the Center for AI Standards and Innovation.
OpenAI seeks to block a court order requiring it to save all ChatGPT user logs, including deleted chats, citing user privacy concerns.
Apple Intelligence's China rollout with Alibaba faces delays from Chinese regulators amid ongoing US trade tensions.
And Microsoft launches a free cybersecurity program for European governments to protect against AI-enhanced threats and state-sponsored attacks.
Things happen
Dubai attracts AI talent with its Golden Visa program, no taxes, and high salaries. AI pioneer Yoshua Bengio launches LawZero, a nonprofit focused on safer AI. OpenAI rolls out connectors for Dropbox and OneDrive for ChatGPT. Mary Meeker's first Trends report since 2019, focused on AI. The FDA debuts an agencywide generative AI tool to help scientific reviewers. Why do Christians love AI slop? Google DeepMind CEO sees "a 50% chance" of AGI in the next five to 10 years. Figma releases Dev Mode Model Context Protocol server for AI models. AMC signs a deal with Runway to use AI for marketing images. How AI agents help automate tedious coding tasks. Character.AI rolls out AvatarFX for video generation and social feeds. Elad Gil invested in Enam Co., which aims to use AI for PE roll-ups. The OpenAI board drama is reportedly turning into a movie. Samsung is nearing a wide-ranging deal with Perplexity for AI features. Anthropic's annualized revenue hit ~$3B at the end of May. Google pauses rollout of Ask Photos AI feature due to issues. Chinese companies test domestic alternatives as Nvidia processors dwindle. Teachers are not OK. Pro-AI subreddit bans users who suffer from AI delusions. The "white-collar bloodbath" is all part of the AI hype machine. AI makes the humanities more important, but also weirder. AI is not our future. The Darwin Gödel Machine: AI that improves itself by rewriting its own code. What happens when AI-generated lies are more compelling than the truth? Use AI code tools as collaborators, not crutches. Stop over-thinking AI subscriptions. A critical security flaw in Lovable exposes API keys and personal info. How Morgan Stanley uses DevGen.AI to translate legacy code into English specs. US investors visited China to study its AI scene after DeepSeek's advances.