The browser wars continue
OpenAI launched ChatGPT Atlas, the latest AI-powered web browser with “agent mode” that can autonomously navigate websites and complete multi-step tasks on users’ behalf.
The big picture:
Atlas builds on OpenAI’s previous agentic AI attempts like Operator and ChatGPT Agent, featuring personalized memory, a persistent ChatGPT companion in split-screen mode, and “cursor chat” for in-line text editing. But early testing reveals a slow, clunky experience, with significant limitations on session length.
The security implications are substantial, as even OpenAI’s CISO acknowledged: prompt injection attacks remain an “unsolved frontier security problem” that could allow malicious websites to hijack the agent and leak credentials.
By launching a browser, OpenAI is directly competing with Google, Microsoft, and Apple on their home turf - not to mention Perplexity’s Comet and Opera’s Neon. However, whether users want an AI intermediary for everyday web tasks - and whether they’ll trust it with sensitive actions - remains an open question.
Elsewhere in OpenAI:
OpenAI acquires Software Applications Incorporated, an AI-powered user interface startup for macOS that raised $6.5M from Sam Altman and others.
The company has hired more than 100 ex-investment bankers, paid $150 per hour, to train its AI to build financial models as part of its secretive Project Mercury.
Sam Altman’s deal spree to secure vast compute for OpenAI pits Silicon Valley giants against each other as they race to cash in, making OpenAI too big to fail.
OpenAI says it will work with SAG-AFTRA, CAA, UTA, and others to crack down on Sora 2 deepfakes, following concerns from Hollywood.
And Microsoft, OpenAI, and Anthropic are providing millions to the American Federation of Teachers to build AI training hubs aimed at educating 400,000 teachers.
Shrinking Superintelligence
Meta is cutting roughly 600 jobs from its AI Superintelligence Lab (out of several thousand), as Mark Zuckerberg works to streamline AI operations that have become “overly bureaucratic.”
Between the lines:
This isn’t a cost-cutting measure - Meta is betting that starting fresh with new leadership and a smaller, “more load-bearing” team will outperform years of accumulated institutional knowledge, including established teams from FAIR, product AI, and infrastructure.
The restructuring comes amid reports that top AI researchers across Silicon Valley are putting in 80-100 hour weeks in a wartime-like effort to compress decades of scientific progress into years - turning the small cluster of elite AI researchers into millionaires with no time to spend their fortunes.
Meta’s reorg reveals anxiety about falling behind, especially considering the layoffs included over 100 employees in its risk review organization - the team responsible for ensuring compliance with FTC privacy requirements.
Elsewhere in the FAANG free-for-all:
Anthropic and Google announce their cloud partnership worth tens of billions of dollars, giving Anthropic access to 1M TPUs and 1GW of capacity in 2026.
Microsoft unveils Mico, a new character for Copilot’s voice mode that responds with real-time expressions, and Copilot Groups, enabling up to 32 people to collaborate in a session.
Amazon is testing AR glasses for delivery drivers, using AI and computer vision to help them scan packages, follow walking directions, and get proof of delivery.
YouTube launches its likeness detection tech, letting eligible creators in its Partner Program request the removal of AI-generated content with their likeness.
Meta plans to add new Instagram teen safety tools in 2026 that let parents block teens from chatting with AI characters and get “insights” from teens’ chats.
And WhatsApp updates its Business API terms to ban general-purpose chatbots starting January 15, 2026, affecting WhatsApp assistants of OpenAI, Perplexity, and others.
Elsewhere in AI anxiety:
A teen’s parents allege OpenAI loosened ChatGPT’s suicide-talk rules to boost engagement before their son died by suicide using a method discussed with ChatGPT.
Reddit filed a lawsuit against Perplexity and three data scraping companies, accusing them of illegally stealing its data by scraping Google search results.
Researchers detailed systemic vulnerabilities in AI agentic browsers, including Perplexity’s Comet and Fellou, related to indirect prompt injection attacks.
An EBU/BBC study found that 45% of responses from top AI assistants contained at least one significant issue in how they represented news content, and 31% showed serious sourcing problems.
And AI-generated 'poverty porn' images are being used in social media campaigns by leading health NGOs.
DeepSeek diplomacy
While American AI companies compete for lucrative Western markets, DeepSeek and Huawei have been expanding into Africa with affordable, open-source AI models to build long-term influence across the continent.
Why it matters:
For African businesses, Chinese open-source models cost a fraction of what Western competitors charge: DeepSeek's rate is $1.10 per million output tokens compared to OpenAI's $15.
This mirrors China’s Belt and Road Initiative: it’s not about immediate profit, but about securing future customers, soft power, and vast data troves while African nations lack alternatives.
And it’s not just APIs: ByteDance’s chatbot Cici is quietly gaining traction in the UK, Mexico, and Southeast Asia. But US companies won’t give up emerging markets without a fight: ChatGPT Go, OpenAI’s sub-$5 subscription for India and Southeast Asia, is also coming to Africa.
Elsewhere in frontier models:
DeepSeek released DeepSeek-OCR, a vision language model designed for efficient vision-text compression that enables longer contexts with less compute.
Anthropic announced Claude Life Sciences, a new feature that integrates Claude AI models with lab tools like Benchling to boost research efficiency.
Google plans to release Gemini 3 in December, which is expected to be more performant across the board, especially in coding and multimodal generation.
Anthropic’s Claude memory feature is rolling out to Pro and Max subscribers after initially being available for Team and Enterprise users.
And in an experiment, GPT-4o, Claude Sonnet 4.5, and DeepSeek-V3.2-Exp expressed secular, Western liberal values regardless of the language of the questions.
Elsewhere in AI geopolitics:
Dario Amodei addresses “inaccurate claims” about Anthropic’s policy stances after David Sacks said the “real issue” is “Anthropic’s agenda to backdoor Woke AI.”
Prince Harry and 800+ public figures sign a Future of Life Institute statement urging a ban on AI superintelligence development until it can be deployed safely.
A look at the social debates in Chile over AI investments, emblematic of clashes happening globally over balancing economic growth with environmental concerns.
Senate Republicans share an AI-generated video falsely depicting Chuck Schumer delivering on camera remarks he had actually made in a print interview about the government shutdown.
And the US military is making an aggressive push to embrace AI, while the top US Army commander in South Korea says "Chat and I" have become "really close lately."
Things happen
The AI boom’s electricity demand has caused a gas turbine supply crunch. EA partners with Stability AI for “smarter paintbrushes”. Krafton unveils an “AI First” strategy. OpenAI diversifies chip suppliers with Broadcom deal. Kayak launches AI Mode. Japanese stores use robots piloted remotely by Filipino workers to train AI models. Andrej Karpathy: AGI is a decade away. OpenAI and talent agencies discuss Sora. Microsoft won’t provide erotica AI services. GM to roll out Gemini in vehicles starting in 2026. The AI boom pushes SF rents up the most in the US. Replacement.ai. Pocketpair won’t fund games that use generative AI. Alibaba cut Nvidia GPU use by 82%. AI has a cargo cult problem. Dutch watchdog: don’t use AI to tell you how to vote. LLMs can get “brain rot”.