Feature parity
ChatGPT and Claude swapped major feature launches this week: ChatGPT gained full MCP support, letting it act as a client for external MCP servers, while Claude got access to a code sandbox with file reading and creation.
Why it matters:
Both features, but especially MCP, come with warnings about prompt injections, data destruction risks, and malicious connectors - highlighting how AI systems become far more dangerous when given write access to external systems.
Despite longstanding reservations about connecting LLMs to the Internet, Anthropic seems to have come around in recent months with its latest features.
But the launches also reinforce just how cutthroat the AI environment still is, and how any groundbreaking feature is likely to be re-implemented by competitors within months, if not weeks.
Elsewhere in frontier models:
Alibaba released Qwen3-Next, a new model architecture optimized for long-context understanding, large parameter scale, and better computational efficiency.
ByteDance launched Seedream 4.0, an AI image model it claims can beat Google DeepMind's viral Nano Banana model in prompt adherence, alignment, and aesthetics.
Stability AI launched Stable Audio 2.5, which the company calls "the first audio generation model designed specifically for enterprise-grade sound production."
Google announced Veo updates, including support for 9:16 vertical video and 1080p, and the general availability of Veo 3 and Veo 3 Fast in the Gemini API.
Alibaba debuted Qwen3-Max-Preview, its largest AI model with over 1T parameters, showcasing strong benchmark performance despite being non-open source.
Nonbinding memorandum of understanding
OpenAI announced it will grant over $100 billion in equity to its controlling nonprofit and reached an agreement with Microsoft, clearing major hurdles in its transformation from a nonprofit-controlled organization to a public benefit corporation.
The big picture:
Perhaps the biggest challenge was Microsoft's $13+ billion investment and 49% profit stake, but the two companies announced a "nonbinding memorandum of understanding" to restructure their relationship (though with few formal details).
The timing is crucial, as California regulators and advocacy groups intensify their push to block the company's conversion. OpenAI is reportedly considering relocating out of California as a last-ditch option.
And the stakes are pretty high - nearly half of OpenAI's recent funding ($19 billion) is contingent on receiving traditional equity in a for-profit entity, and failure to restructure could derail future fundraising and IPO plans.
Elsewhere in OpenAI:
OpenAI researchers argue that language models hallucinate because standard training and evaluation procedures reward guessing over admitting uncertainty.
OpenAI projects $200B in revenue in 2030, with R&D spending hitting ~45% of that, or ~$90B.
Sources say OpenAI signed a contract with Oracle to purchase an industry-record $300B in computing power over roughly five years.
California and Delaware attorneys general raised concerns about the deaths of two ChatGPT users and threatened to block OpenAI's planned restructuring in a letter to the company.
And OpenAI acqui-hired the team behind Alex Codes, a Y-Combinator-backed startup whose tool lets developers use AI models within Apple's Xcode development suite.
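The hallucination argument above comes down to incentives, which can be sketched with a toy scoring model: under plain accuracy grading, a wrong guess costs nothing more than abstaining, so guessing always maximizes expected score. The penalty scheme below is an illustrative assumption, not OpenAI's actual proposal:

```python
# Toy illustration (not OpenAI's analysis): why accuracy-only grading
# rewards guessing over admitting uncertainty.

def expected_score(p_correct: float, guess: bool, wrong_penalty: float = 0.0) -> float:
    """Expected score on one question.

    p_correct: the model's chance of being right if it answers.
    guess: True to answer, False to say "I don't know".
    wrong_penalty: points deducted for a wrong answer (0 under plain accuracy).
    """
    if not guess:
        return 0.0  # abstaining earns nothing under either scheme
    return p_correct * 1.0 - (1 - p_correct) * wrong_penalty

p = 0.3  # the model is unsure: only a 30% chance its answer is right

# Plain accuracy (no penalty): guessing strictly dominates abstaining.
assert expected_score(p, guess=True) > expected_score(p, guess=False)

# With a penalty for confident wrong answers, abstaining wins when unsure.
assert expected_score(p, guess=True, wrong_penalty=1.0) < expected_score(p, guess=False, wrong_penalty=1.0)
```

The point of the paper, on this reading, is that benchmarks with `wrong_penalty = 0` train models to always guess, which surfaces as hallucination.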
Elsewhere in the FAANG free-for-all:
Microsoft will use Anthropic's models for some AI features in its Office 365 apps, after finding Claude Sonnet 4 beats OpenAI's GPT-5 in some tasks.
Meta signed a contract worth $140M+ to use technology from AI image startup Black Forest Labs.
Google announces an AI Plus subscription tier for emerging markets, offering "more access to Gemini 2.5 Pro" and tools like Flow, starting with Indonesia.
Meta's TBD Lab team apparently sits in an area near Mark Zuckerberg's desk that needs special badge access, and their names are not on the company's org chart.
And Google DeepMind and LIGO researchers detail Deep Loop Shaping, an AI tool that they say improves the observatory's ability to track gravitational waves.
Copywrongs
Anthropic has had a rollercoaster of a week in court, after federal judge William Alsup declined to approve its proposed $1.5B copyright settlement.
Between the lines:
The case revolves around Anthropic's downloading of pirated content from "shadow libraries" like LibGen - you can read more about it here.
The gargantuan settlement was positioned as a potential template for resolving similar AI copyright disputes against other frontier labs, with attorneys touting the $3,000-per-book payout as a benchmark.
But the judge criticized the lack of critical details - like which works are covered and how class members will be notified - and postponed approval, setting a September 15 deadline to address his concerns.
Ultimately, the deal could represent a calculated "drawbridge" strategy - using Anthropic's recent massive fundraise to establish a cost barrier that narrows the competition, as only well-funded companies like OpenAI and major tech giants could afford similar settlements.
Elsewhere in AI licensing:
Encyclopedia Britannica and Merriam-Webster sue Perplexity, alleging that it unlawfully scraped their websites and redirected their traffic to its AI summaries.
Analysis reveals that 13+ datasets used by tech companies without permission to train AI models contain 15.8M+ YouTube videos from 2M+ channels, including 1M how-to videos.
Reddit, Yahoo, Medium, Quora, People, O'Reilly, wikiHow, Ziff Davis, and others adopt the Really Simple Licensing (RSL) standard that sets terms for AI scraping.
And two authors file a proposed class action lawsuit against Apple, alleging Apple knowingly used a dataset of pirated books to train its AI models.
Elsewhere in AI anxiety:
Latin American artists report that AI-generated music flooding Spotify, Deezer, and YouTube Music is shrinking their royalties.
Young jobseekers are using ChatGPT to write applications while HR uses AI to read them, creating a hellish job market where few people get hired.
The FTC ordered Google, OpenAI, Meta, Snap, xAI, and Character.AI to provide information on how their AI chatbots impact children and teens.
Anthropic became the first major AI company to endorse SB 53, a California bill requiring large AI companies to release safety and security reports.
AI companies' bet on token-prediction-based LLMs may leave them vulnerable to disruption as new models face diminishing returns.
And a look at the software engineers being paid to fix vibe coded messes.
Things happen
Albania appoints AI bot minister to tackle corruption. Using Claude Code to modernize 25-year-old kernel driver. Trump's Big Tech courtship alarms populists in his base. Humanoid robot shows signs of generalized learning. AI-skilled workers make 19% to 56% more. Center for Alignment of AI Alignment Centers. Ted Cruz proposes AI sandbox for tech companies. Man vs. machine hackathon won by AI-supported team. AI surveillance should be banned while there's time. Travel companies prep for AI agents that could upend $1.6T market. "I hate my AI friend." Huawei's Ascend production ramp faces HBM bottlenecks. OpenAI backs AI-made animated film for 2026. 95% of AI pilots fail. Ant Group showcases its first robot. AI bubble argument misunderstands bubbles and AI. C3 AI's police tool was supposed to turbocharge investigations but struggled. Google's AI mode is good, actually. HHS tells employees to start using ChatGPT. AI Darwin Awards show AI's biggest problem is human. Company with world's most enviable ticker isn't cashing in on AI. The Claude Code Framework Wars. Nvidia opposes GAIN AI Act. What if AI is just normal technology? Mercor valued at $2B for recruiting people to train AI. The rise of async AI programming. Hinton: AI will make few people richer, most poorer. Claude's memory architecture is opposite of ChatGPT's.