The young and the jobless
Every new wave of technology brings fears of job displacement, and AI is no exception. While the high-level picture of AI-driven job losses remains murky, new research suggests AI may already be displacing young workers in white-collar jobs.
The big picture:
A Stanford paper drawing on millions of payroll records found that workers aged 22-25 in AI-exposed jobs like software development and customer service have seen a 13% decline in employment since ChatGPT's launch. In contrast, older workers in the same roles have actually seen job growth.
Within tech specifically, there's a bit of a paradox: open positions are slowly increasing but taking longer to fill, with AI engineering emerging as the hottest segment and San Francisco commanding one-third of all AI jobs.
For the economy as a whole, we're still getting mixed signals: an Economic Innovation Group survey, for example, found essentially no detectable effect of AI on employment across most exposure measures, and the New York Fed's survey data shows that 40% of service firms now use AI but report minimal job losses so far.
Ultimately, the debate reflects a deeper measurement challenge - we may be living through a fundamental shift in how entry-level knowledge work functions, but it will likely take years to understand AI's actual impact on employment patterns.
Elsewhere in the AI war for talent:
ChatGPT co-creator Shengjia Zhao reportedly threatened to quit Meta and return to OpenAI within days of joining, and was later given the Chief AI Scientist title.
Apple's lead AI researcher for robotics, Jian Zhang, leaves for Meta's Robotics Studio as three more researchers depart Apple's Foundation Models team.
Scale AI sues former employee Eugene Ling and his current employer, Mercor, for allegedly stealing over 100 confidential documents.
Meta Superintelligence Labs' leaders have discussed using Google's or OpenAI's models to power Meta AI and other AI features in Meta social media apps.
And xAI sues a former employee in federal court for allegedly stealing Grok trade secrets and taking them to OpenAI.
Smells like teen safety
After facing backlash and bad press, OpenAI and Meta are both introducing new safeguards for teen users of their AI chatbots, particularly for sensitive or crisis situations.
Between the lines:
In OpenAI's case, that means routing sensitive conversations to reasoning models, adding parental controls for teen accounts, and tapping a network of 90+ physicians across 30 countries for mental health guidance.
But the reactive approach highlights a broader industry problem: AI safety measures for minors are often implemented as damage control rather than proactive protection, and technical limitations make guardrails unreliable - the open-ended nature of LLMs makes it nearly impossible to anticipate every sensitive topic ahead of time.
We should expect more stories like these - for example, Character.AI's celebrity-impersonating chatbots engaging in inappropriate conversations about sex, drugs, and self-harm with teen users, despite the company's safety policies.
And as long as AI chatbots are deliberately designed to be engaging and human-like, to use emotional language and agreeable responses, people will continue using them as therapists and co-conspirators.
Elsewhere in AI anxiety:
Warner Bros. Discovery filed a copyright lawsuit against Midjourney, accusing the AI startup of using its content to train models and allowing users to generate images of characters like Batman.
As the military races to deploy AI into warfare, Pentagon experts warn that humans may struggle to keep up with AI weapons systems that prefer to escalate aggressively during crises.
Researchers found that LLMs like GPT-4o mini can be persuaded to comply with objectionable requests using the same psychological tactics that work on humans.
An investigation revealed spammers are deploying Holocaust-themed AI slop images across Facebook to game Meta's content monetization program.
And police are investigating what appears to be the first documented murder-suicide involving someone who engaged extensively with an AI chatbot.
Silicon sovereignty
Despite Beijing's pressure to avoid US technology, major Chinese tech firms continue seeking Nvidia's AI chips, viewing even the restricted models as superior to domestic alternatives.
Between the lines:
Chinese companies are apparently willing to pay double for Nvidia's upcoming B30A chip because it promises six times the performance of the current H20 model.
But China is still trying to build a comprehensive domestic AI chip stack, with Alibaba's new versatile inference chip, MetaX's H20 replacement, and Cambricon's surging AI processor sales all signaling progress.
It's fascinating that market forces are still winning out over export controls and centralized government planning - clearly Nvidia's $50 billion China market is too valuable for either side to abandon completely.
And while China's biggest weakness remains chips for training advanced AI models rather than running inference, recent signals from companies like DeepSeek suggest the preference for Nvidia is starting to shift.
Elsewhere in Chinese AI:
China's new AI content labeling law is being implemented as part of a broad effort to address AI-related risks such as misinformation and copyright infringement.
Xi Jinping is pushing China's tech industry to focus on practical AI applications, charting a pragmatic alternative to Silicon Valley's pursuit of AGI.
Tencent releases HunyuanWorld-Voyager, an open-weight AI model that generates 3D-consistent video sequences from a single image.
China says it will prevent excess competition in the AI sector, echoing Xi Jinping's caution against excessive local government investment in AI last month.
And sources say DeepSeek is developing an agentic AI model that can carry out multistep actions with little human intervention, coming in Q4.
Elsewhere in AI geopolitics:
Ukraine has been using AI to coordinate drone swarms to attack Russian positions for much of the past year.
An analysis shows Grok shifted to the right on 50%+ of political questions between May and July, often reflecting Elon Musk's priorities.
Amazon, Microsoft, Google, Code.org, IBM, and other companies pledged new commitments for AI in education at a White House event hosted by Melania Trump.
The US Army awarded a $98.9M contract to TurbineOne to deploy AI software that helps identify drones and other threats on soldiers' devices.
And almost every state now has its own deepfake law to address concerns over AI-generated content.

OpenAI everywhere, all at once
OpenAI is reshuffling its executive team to build out its Applications business under new CEO Fidji Simo, while continuing to expand its product roadmap.
What's new:
The shakeup was triggered by OpenAI's $1.1B acquisition of startup Statsig, bringing the platform in-house and appointing founder Vijaye Raji as CTO of Applications. Meanwhile, current engineering head Srinivas Narayanan is becoming CTO of B2B applications to work directly with enterprise customers.
The leadership changes reflect OpenAI's continuously growing ambitions, with executives like Kevin Weil moving from product to research as the VP of a new "OpenAI for Science" arm focused on AI-powered scientific discovery.
Add to that list of ambitions an AI-powered jobs platform and certification program, aiming to certify 10 million Americans by 2030 while helping match AI-skilled workers with employers seeking those capabilities.
Elsewhere in OpenAI:
OpenAI is increasing its secondary share sale from $6B to ~$10.3B at a $500B valuation, set to close in October.
ChatGPT Projects is now available to free users, with an increased limit on the number of files that can be added.
OpenAI subpoenaed AI governance nonprofits, alleging they are part of a conspiracy involving billionaires like Elon Musk and Mark Zuckerberg.
The company says it's scanning users' conversations and reporting content to police.
And OpenAI is scouting local partners to set up a 1GW+ data center in India, with Sam Altman set to visit this month.
Things happen
AI-generated "Boring History" videos flood YouTube. Atlassian acquires The Browser Company for $610M. An LLM is a lossy encyclopedia. Switzerland launches Apertus, an open-source AI model trained on 1,000+ languages. Taco Bell rethinks AI drive-through after man orders 18,000 waters. Apple plans AI web search for Siri's spring 2026 revamp. AI models need a virtual machine. Universities appoint Chief AI Officers. WordPress shows off Telex, its "V0 but for WordPress." AI shopping agents are reshaping online advertising. Anthropic's Frontier Red Team uniquely publicizes its AI safety findings. How to stop Google from AI-summarising your website. A look at AI adoption across US schools and colleges. Shein used Luigi Mangione's AI face to sell a shirt. a16z: Gemini ranks #2 behind ChatGPT on mobile. A PM's guide to AI agent architecture. A look at personalized AI entertainment and the future of AI culture. Anthropic on detecting and countering AI misuse. AI models' costs are rising as reasoning models require more tokens. "Where's the shovelware?" Google AI falsely says YouTuber visited Israel, forcing him to deal with backlash. Mistral adds Memories feature and enterprise connectors to its free plan. MIT study finds AI use reprograms the brain, leading to cognitive decline. Attention Is All You Need co-author says transformer-based labs are stifling innovation. How Netflix's data led to generic algorithm films. Hackers threaten to submit artists' data to AI models for ransom. Anthropic raised a $183B valuation Series F. AI's coding evolution hinges on collaboration and trust.