Wow. I can't believe we hit 100 roundups. When I first started with "Roundup 001," I felt a little silly. Getting to three digits worth of posts felt inconceivable! And yet.
This isn't quite the 2-year mark, but I'll have some thoughts to share when we get there. Here's to the next hundred weeks!
CES
Today wraps up the Consumer Electronics Show, where over 100,000 attendees descended on Las Vegas for gadgets, gearboxes and, lately, GPTs. Throughout the week, Nvidia was the dominant throughline.
Everything Nvidia announced this week:
Project Digits, a personal AI supercomputer capable of running 200B parameter models.
The new RTX 5090 GPU, with 92B transistors and AI-powered DLSS 4 technology.
Early access to Omniverse Cloud Sensor RTX for developers to create smarter autonomous machines with generative AI.
Cosmos world models for robots and autonomous vehicles, trained on 20 million hours of human movement footage.
Llama Nemotron and Cosmos Nemotron model families for use with AI agents.
Updated Autonomous Game Characters that now use small language models to make NPCs behave more like human players in games.
Mega, an Omniverse Blueprint for testing robot fleets in digital twins before deployment.
And CEO Jensen Huang made plenty of proclamations, including that Nvidia's AI chips are improving faster than Moore's law and that autonomous vehicles will become the first multitrillion-dollar robotics industry (with Nvidia leading the revolution).
Go fake yourself
Meta was the center of multiple AI stories this week (aside from its new moderation policies), as users reacted negatively to new features that the company is testing.
What to watch:
The first was about Meta envisioning "AI-generated users" who would generate and share content alongside humans. Unsurprisingly, people resisted the idea of even more AI slop.
However, Instagram has already started testing a different form of AI impersonation: a new feature that shows users AI-generated images of themselves in different contexts. One user reportedly saw images of his likeness after using Meta AI to edit a selfie.
Both features are a reminder that even as Meta pours billions into pushing the open-source frontier forward, there's no such thing as a free lunch. The company still has plans to monetize its AI R&D.
Elsewhere in the FAANG free-for-all:
Google launched Daily Listen, an AI-powered feature that creates personalized five-minute audio summaries of followed stories and topics.
Google plans to enhance its TV platform with Gemini integration, improved voice commands, and deeper YouTube features in 2025.
And Microsoft made its Phi-4 AI model fully open source on Hugging Face under an MIT license, including its 14B-parameter weights.
Elsewhere in AI anxiety:
After users complained about degraded quality, Microsoft is rolling back its Bing Image Creator to an older DALL-E 3 version.
Users are exploiting Runway AI to make violent content appear like Minions movies to bypass social media moderation.
AI-generated images of the Hollywood sign burning are being mistaken for real photos of LA wildfires.
Court documents reveal Mark Zuckerberg approved Meta's use of pirated materials from LibGen to train Llama AI models.
And Elon Musk is urging state attorneys general to force OpenAI to auction off a significant portion of its business, despite the company having no such plans.
Who debugs the debuggers
As someone who thinks a lot about how AI will impact programming (spoiler alert: it's gonna be big), I'm constantly trying to update my priors on how quickly things are changing.
Three perspectives:
While it might not feel that way, AI adoption is still slow going, even among programmers. Many remain skeptical of how much benefit AI brings, assuming that much of the code it writes is terrible.
Maybe I'm a "true believer," but I see many of the complaints as short-sighted. There's so much low-hanging fruit in improving these systems - simply repeating "write better code" can yield up to 100x speed increases!
But perhaps, despite the pomp and circumstance, the challenge is the same as ever - the software industry is constantly reinventing itself, and its practitioners must find ways to keep up.
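The "write better code" trick mentioned above is just iterative prompting: feed the model its own output back and ask for an improvement, repeatedly. Here's a minimal sketch of that loop in Python. Everything here is illustrative, not from any particular library: `refine` and its prompts are assumptions, and `make_fake_llm` is a stub standing in for a real model API call.

```python
def refine(llm, task: str, rounds: int = 3) -> str:
    """Repeatedly ask a model to improve its own output.

    `llm` is any callable mapping a prompt string to a response string;
    in practice this would wrap a real LLM API call.
    """
    # Initial draft.
    code = llm(f"Write Python code to: {task}")
    # Each round feeds the previous draft back with the same terse ask.
    for _ in range(rounds):
        code = llm(f"Here is the current code:\n\n{code}\n\nwrite better code")
    return code


def make_fake_llm():
    """Stub model for demonstration: tags each response with a revision number."""
    count = {"calls": 0}

    def fake_llm(prompt: str) -> str:
        count["calls"] += 1
        return f"# revision {count['calls']}\nprint('hello')"

    return fake_llm


llm = make_fake_llm()
result = refine(llm, "print a greeting", rounds=3)
print(result.splitlines()[0])  # prints "# revision 4": one draft plus three refinements
```

The interesting design question is the stopping rule: a fixed round count is the simplest choice, but in practice you'd want to benchmark each revision and stop when performance plateaus, since quality can regress as well as improve.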
Things happen
xAI launches Grok app for iOS in beta. Razer unveils Project Ava, an AI gaming copilot. Sam Altman says OpenAI is confident about building AGI. AMD's new Ryzen AI Max+ chips claim 84% faster rendering than M4 Pro MacBook. Samsung's TVs get AI-powered features, including food recognition. Anthropic in talks to raise funding at $60B valuation. Extracting AI models from mobile apps. US startups raised $209B in 2024, with AI startups taking $97B. A look at AI and startup moats. OpenAI is losing money on ChatGPT Pro plan. AI's next leap needs intimate access to your digital life. New Polish film uses AI to superimpose Putin's face onto an actor. AWS plans $11B investment in Georgia data centers. AI-enabled wearables offer always-on conversation recording. Religious leaders experiment with AI for sermons and research. Microsoft to invest $3B in India for Azure and AI expansion. UK competition watchdog tests AI tool to catch bid-rigging. Alibaba Cloud partners with 01.AI for industrial large model lab. Q&A with Sam Altman on OpenAI, AGI, and more. Texas introduces broad AI liability law for developers. Eric Schmidt's secret project is an AI video platform. Microsoft on track to spend $80B on AI infrastructure in FY 2025. OpenAI could launch agents as early as January.