Workshops
Reminder: Join me and Sairam Sundaresan in two weeks for a new workshop, "Enough GPT To Be Dangerous"!
Movie magic
Late last week, Meta announced Movie Gen, a suite of AI models capable of generating realistic video and audio, as well as making targeted edits.
Between the lines:
The models (a 30B-parameter video model and a 13B-parameter audio model) can generate up to 16-second videos complete with sound effects, background music, and even personalized characters.
The outputs look similar to OpenAI's Sora, which made massive waves just seven months ago. Like Sora, it's unclear if or when Meta plans to release the model publicly.
Yet as text-to-video models become increasingly prevalent (and increasingly productized), the original sin of training data remains: most (if not all) of these models were presumably trained on scraped YouTube videos.
Elsewhere in frontier models:
Writer unveiled Palmyra X 004, an enterprise LLM trained for just $700K using synthetic data, as the company reportedly seeks to raise up to $200M at a $1.9B valuation.
Apple's AI research team released Depth Pro, an AI model that generates a 2.25-megapixel 3D depth map using a single image in 0.3 seconds on a standard GPU.
And researchers developed an AI model for pareidolia, the phenomenon of seeing faces in inanimate objects.
Going nuclear
Amazon, Microsoft, and Google are increasingly turning to nuclear reactors to meet the growing energy demands of AI (and to try to offset their carbon emissions).
The big picture:
Over the past year, it's become increasingly difficult to ignore AI's energy cost: by some estimates, AI demand will more than double data center power usage by 2030.
And unlike renewables such as wind and solar, which generate power intermittently, nuclear power provides a steady, carbon-free baseload of electricity.
It's striking that AI could catalyze a new wave of investment in nuclear power - though it would mean even higher sunk costs if (when?) investor sentiment turns on AI.
Elsewhere in the FAANG free-for-all:
Amazon announced AI Shopping Guides, showing AI-generated descriptions and recommendations for over 100 product types on its US mobile website and app.
Meta began rolling out new AI tools that let advertisers expand video aspect ratios and generate Instagram Reel video ads from static images.
And Google released Android theft protection features, including Theft Detection Lock, which uses AI to detect motion indicating theft.
Elsewhere in AI infrastructure:
Malaysia's Johor is transforming from a backwater into one of the world's biggest AI data center hubs, with an estimated $3.8B investment in 2024.
Foxconn plans to build a plant in Mexico's Guadalajara to manufacture Nvidia GB200 Blackwell AI servers, as tech companies seek to decouple from China.
AMD will launch its Instinct MI325X GPUs to compete with Nvidia's upcoming Blackwell chips, with production set to begin before the end of 2024.
And a deep dive explores the rise of "AI Neoclouds," a new breed of cloud compute providers like Crusoe, Lambda Labs, and CoreWeave built to offer GPU rentals, and the unique economics behind them.
Disruptive innovation
In a new report, OpenAI disclosed that it had disrupted over 20 operations worldwide that were using its models for nefarious purposes.
Why it matters:
The report illustrates various harms that AI is capable of today: from creating malware to spreading misinformation to impersonating legions of social media users.
OpenAI also revealed that a suspected China-based group called SweetSpecter attempted a phishing attack on its employees, highlighting cybersecurity risks for leading AI companies amidst geopolitical tensions.
And while the company addressed election-related abuse quickly, it's hard to say whether other platforms will be as vigilant in the face of upcoming elections.
Elsewhere in OpenAI:
Internal documents suggest OpenAI won't turn a profit until 2029, with projected losses reaching $14 billion in 2026.
Sources say OpenAI plans to restructure as a public benefit corporation to protect against hostile takeovers and outside interference.
And OpenAI signed a content deal with Hearst to use material from over 40 local newspapers and 20 magazines in its products.
Elsewhere in AI anxiety:
X reportedly moves slowly to remove AI-generated nudes reported as nonconsensual intimate media, but acts quickly on copyright violation claims.
Thousands of internal AI training datasets and tools were exposed to anyone on the internet.
As political AI slop becomes more prevalent, it’s becoming clear that some people understand the content is AI-generated and simply don’t care.
And hacked data from an AI girlfriend app revealed user prompts describing child sexual abuse.
Things happen
Crypto heist of $243M used phone-based social engineering on a Gemini user. AI will make "out" and "fault" calls at Wimbledon starting in 2025. Anthropic launches Message Batches API in beta. The rise of AI-powered job application bots. Nobel in Chemistry goes to protein researchers. A look at OpenAI's ongoing talent exodus. Amkor and TSMC team up for advanced chip packaging in Arizona. AI-generated pro-North Korean TikToks double as supplement ads. Google's US search ad market share expected to drop below 50% in 2025. Q&A with Terence Tao, the world's greatest living mathematician. UK launches Regulatory Innovation Office to speed up tech approvals. Grindr is testing an AI "wingman". Google NotebookLM's Audio Overview feature goes viral. HyperWrite blames "a bug" for Reflection 70B model issues. Nobel Prize in Physics awarded for machine learning discoveries. The editors protecting Wikipedia from AI hoaxes. TSMC's global foundry revenue share to hit ~64% in 2024. Zoom plans to launch AI-animated avatars in 2025. Adobe launches Content Authenticity web app. UK defense ministry uses Palantir AI for review. Chinese AI startups launch products overseas to boost growth. Interview with Shield AI co-founder on building "the world's best AI pilot". Era of politically motivated AI slop is here.