If anyone builds it, everyone dies
For twenty years, Eliezer Yudkowsky has preached the dangers of superintelligent AI to anyone who would listen (and inspired the founders of OpenAI, Anthropic, and DeepMind). Now, he's releasing a new book aimed at the mainstream, trying once again to persuade humanity to stop working on AI advancement.
Between the lines:
The fundamental challenge is timing - Yudkowsky and colleague Nate Soares have called for a complete shutdown of AI development through international treaties, but their influence is waning as AI becomes mainstream and politically unpopular to oppose.
The "doomer" perspective has become politically toxic, with the Trump administration focused on accelerating AI progress and the term itself now used pejoratively in Washington policy circles.
And the gulf between abstract existential risks and tangible daily benefits (hundreds of millions using ChatGPT) gets wider every day - even as safety experts believe the stakes couldn't be higher.
Elsewhere in AI anxiety:
CrowdStrike found that DeepSeek refuses to write code, or produces less-secure code, when English-language prompts state that the code will be used by groups or regions disfavored by China.
The UK has ramped up the use of facial recognition, AI, and internet regulation to address crime and other issues, stoking concerns of surveillance overreach.
Disney, Universal, and WBD are suing China-based MiniMax, alleging that its image- and video-generating service Hailuo AI was built from stolen intellectual property.
A study found that Grok, ChatGPT, Meta AI, Claude, Gemini, and DeepSeek can be easily used to create phishing emails targeting the elderly, despite being trained to refuse.
And online services marketplace Fiverr laid off 30% of its workforce, or about 250 people, as part of a restructuring to become "an AI-first company."
Who's really using ChatGPT
OpenAI's first comprehensive study of ChatGPT users shows the chatbot has evolved from a male-dominated tech tool into a primarily personal assistant used mostly by women and young people.
The big picture:
The demographic shift is striking: ChatGPT's user base went from 80% "typically masculine" first names in early 2023 to 52% "typically feminine" names by June 2025, suggesting AI tools are becoming mainstream beyond tech circles.
Usage patterns show people increasingly treat ChatGPT as a personal assistant rather than a work tool - 73% of conversations are now non-work related, with "practical guidance" such as homework help and workout tips the largest category at 28% of all chats.
Despite concerns about AI companionship, only 1.9% of conversations involve relationship advice, and just 0.4% involve "AI girlfriend" scenarios, suggesting the parasocial relationship fears may be overblown compared to practical everyday uses.
Elsewhere in OpenAI:
OpenAI is developing a different ChatGPT experience for teens and plans to use age-prediction tech to bar kids under 18 from the standard version.
The company debuted GPT‑5-Codex, a version of GPT‑5 optimized for agentic coding that spends its "thinking" time more dynamically than previous models.
OpenAI is reportedly recruiting AI researchers to work on humanoid robots and is training AI algorithms to better understand the physical world.
And OpenAI unveiled Grove, a five-week mentorship program for nascent tech entrepreneurs hosted in its San Francisco HQ with ~15 participants in its first cohort.
Elsewhere in frontier models:
OpenAI and DeepMind achieved a gold medal performance at the 2025 ICPC World Finals programming competition, with Gemini 2.5 Deep Think solving 10 out of 12 problems, and GPT-5 (plus an experimental model) solving all 12.
OpenAI and Apollo Research trained versions of o3 and o4-mini to avoid "scheming," reducing covert actions roughly 30-fold.
Scientists detail Delphi-2M, a generative AI model trained on large-scale health records that can predict susceptibility to over 1,000 diseases decades in advance.
Google releases VaultGemma, a 1B-parameter model it says is the largest open LLM trained with differential privacy.
And Alibaba releases Qwen3-Next, a new model architecture optimized for long-context understanding, large parameter scale, and better computational efficiency.
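For readers unfamiliar with the differential privacy behind VaultGemma: the core of DP training (DP-SGD) is to clip each example's gradient to a fixed norm, then add Gaussian noise calibrated to that clipping bound before averaging, so no single example can move the model much. A minimal NumPy sketch of that aggregation step - the function name and toy gradients are invented for illustration, not taken from any Google code:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One noisy gradient aggregation in the style of DP-SGD:
    clip each example's gradient, sum, add Gaussian noise, average."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise scale is tied to clip_norm, the per-example sensitivity bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

grads = [np.array([0.5, -2.0]), np.array([3.0, 4.0])]  # toy per-example gradients
print(dp_sgd_step(grads))
```

The clipping bound is what makes the noise meaningful: because every example contributes at most `clip_norm` to the sum, Gaussian noise of a known scale yields a quantifiable privacy guarantee.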
The chips are down
China's internet regulator has banned major tech companies from buying Nvidia's AI chips, escalating the tech war as Beijing pushes for complete semiconductor independence.
The big picture:
This goes beyond previous restrictions and now covers Nvidia's RTX Pro 6000D chips, which were specifically designed for the Chinese market after US export controls blocked more powerful processors.
Chinese regulators concluded that domestic AI chips from companies like Huawei and Cambricon now match or exceed the performance of Nvidia's China-approved products, giving Beijing confidence to cut ties completely (some larger Chinese firms, like Alibaba and Baidu, are using in-house chips).
While it's a big blow to Nvidia, Huawei's success will ultimately depend on execution and adoption - Nvidia's ecosystem advantage extends far beyond raw hardware specifications to software, developer tools, and market relationships built over years.
Elsewhere in GPUs:
Chinese state media reports that Alibaba secured high-profile client China Unicom for its AI chips, with prior reports indicating Alibaba supplies tens of thousands of chips to the telecom giant.
SMIC is testing China's first domestically produced deep-ultraviolet lithography machine as part of efforts to reduce reliance on Western chipmaking technology.
The UAE's $2B investment in the Trump family's World Liberty Financial (WLF) is tied to a deal allowing the UAE's G42 to access US AI chips, with David Sacks playing a key role in the arrangement.
And Nvidia plans to invest £2B to support the UK's AI industry in partnership with Accel, Air Street Capital, Balderton Capital, Hoxton Ventures, and Phoenix Court.
Elsewhere in AI geopolitics:
California passes SB 53, requiring AI companies to disclose their safety testing regimes after Newsom vetoed a similar, more expansive measure last year.
The UK government's i.AI unit struggles to attract top talent on a £67,300 median salary to develop AI tools for improving civil service efficiency.
US tech giants announce £31B+ in UK investments coinciding with Trump's visit, with OpenAI planning to bring Stargate to the UK with partners like Nscale.
Anthropic's refusal to let law enforcement use its tools to surveil US citizens has deepened White House hostility toward the company, according to sources.
And Meta's California super PAC allows Mark Zuckerberg to spend the company's money on his own political choices, including AI regulation.
Google (and YouTube) has a lot going on
At Tuesday's Made on YouTube event, the video platform previewed several new AI features for creators, while its parent company continued its parade of AI announcements.
What to watch:
AI tools for podcasters, including one that turns video podcasts into clips and another that creates video for audio-only podcasts.
YouTube plans to expand its likeness detection tech to all Partner Program creators in the coming months, with creators able to opt in by uploading an image of their face.
New livestreaming features, including letting creators transition between public and members-only streams, and ads that run next to the stream.
Generative AI tools for Shorts, including a custom version of Veo 3 called Veo 3 Fast, which includes sound, and an Edit with AI feature.
Google's Gemini button is coming to Chrome for all US desktop users browsing in English, along with other AI features, including AI Mode in the address bar.
Google's new Agent Payments Protocol, or AP2, is designed to facilitate secure, agent-led payments across platforms.
And Google's Gemini app is the #1 free app in the US App Store, driven by its Nano Banana model, which has been used to edit 500M+ images since its August 26 launch.
Elsewhere in the FAANG free-for-all:
Meta's Ray-Ban Display glasses represent Zuckerberg's latest attempt to reframe the company around "personal superintelligence," though they face significant technical and societal challenges.
AirPods Pro's Live Translation feature offers one of the strongest examples of AI seamlessly improving people's daily lives.
Amazon launched a chatbot-style assistant to help advertisers use AI to create ads that can run across Amazon's advertising inventory and partner platforms.
Meta has discussed licensing articles from major media companies, including Axel Springer, Fox Corp., and News Corp., for use in its AI tools.
And Microsoft added automatic AI model selection to Visual Studio Code, primarily favoring Claude Sonnet 4 over GPT models for paid GitHub Copilot users.
Things happen
Anthropic postmortem on three infrastructure bugs. xAI's Colossus 2 aims to be the first gigawatt AI datacenter. Claude usage data: 36% code, 77% of businesses automate. SF AI founders are saying no to booze, sleep, and fun in their quest to appear maximally obsessed. Scale AI's $100M Pentagon deal. Malawi farmers get AI farming advice. Tencent hired an OpenAI researcher for a ~$66M package. Russian state TV launched an AI-generated satire show. Faith apps are using chatbot gods for digital chaplaincy. AI will not make you rich; it's more like shipping containers than microprocessors. Librarians are being asked to find AI-hallucinated books. OpenAI hired xAI's former CFO. DeepSeek says its reasoning model cost just $294K to train. Mira Murati's lab wants to make AI models more consistent. Jack Ma returns to Alibaba. Reddit seeks another AI content pact with Google. Google is reimagining textbooks with generative AI. Seniority-biased technological change. You had no taste before AI. Vibe coding has turned senior devs into AI babysitters. AI hype masks recession signals.