Workshops
Due to a scheduling conflict, the “Enough GPT To Be Dangerous” workshop will now take place next Friday! We’ll see you there!
Watermarking the AI wave
Google DeepMind introduced SynthID-Text, a new watermarking system for AI-generated text.
Between the lines:
The system subtly alters word choices during generation, creating a statistical signature invisible to humans but detectable by specialized tools.
That said, it’s far from foolproof - simple edits or AI rewrites can break the watermark, and it's currently limited to Google's ecosystem and select developers.
Ultimately, detecting AI-generated text remains unreliable - and we shouldn’t depend on so-called “AI detectors.”
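To make the "statistical signature" idea concrete, here's a minimal sketch of a generic token-level watermark of the kind researchers have proposed - not Google's actual SynthID-Text algorithm, which uses a more sophisticated sampling scheme. The idea: each token's predecessor seeds a pseudo-random "green list" of preferred words; generation biases toward green tokens, and detection just counts how often tokens land in the green list. All function names here are illustrative.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a PRNG from the previous token so the same partition can be
    # recomputed at detection time without storing any secret state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Detection: count how often each token falls in the green list keyed
    # by its predecessor. Unwatermarked text hovers near `fraction` (0.5);
    # watermarked text scores well above it.
    hits = sum(
        tokens[i] in green_list(tokens[i - 1], vocab)
        for i in range(1, len(tokens))
    )
    return hits / max(1, len(tokens) - 1)
```

This also shows why the watermark is fragile: paraphrasing or rewriting swaps out enough tokens that the green-list hit rate drops back toward chance.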
Elsewhere in AI anxiety:
Workers at AI companies are seeking specific whistleblower protections, arguing that exposing threats posed by AI advancements isn’t covered under current US laws.
Google's Project Big Sleep used an AI agent to discover a previously unknown and exploitable bug in SQLite.
And Netflix appears bullish on generative AI for games after laying off human game developers.
AI goes to war
Major AI companies are rapidly making their AI models available to US defense agencies, as China's military researchers appear to be using Meta's open-source Llama model.
The big picture:
As Meta partners with Anduril, Booz Allen, and Lockheed Martin, Anthropic is teaming up with Palantir. OpenAI, for its part, already has a retired army general on its Board of Directors.
However, the situation also highlights the double-edged nature of open-source AI models: they can be accessed by potential adversaries as well, as demonstrated by reports of Chinese military researchers using Llama 2.
Deals like these contrast with Silicon Valley's historical reluctance to work with the US military (due to employee protests and ethical concerns) and signal a shift in AI companies' stance on military applications.
Elsewhere in AI geopolitics:
Ars Technica details what a Trump victory could mean for US AI regulation, while Anthropic and Microsoft weigh in with their own AI policy proposals.
The UK launched a platform with guidance and resources to help businesses evaluate the safety of new AI systems.
And Saudi Arabia plans a new AI project to rival the UAE's tech hub, backed by up to $100B for data centers, startups, and talent.
Elsewhere in frontier models:
Mistral AI launched a content moderation API powered by its Ministral 8B model to detect potentially harmful content in 11 languages.
Anthropic increased the price of Claude 3.5 Haiku to $1 per million input tokens, up from Claude 3.0 Haiku's $0.25 per million tokens, and removed its image analysis features.
Standard Intelligence released hertz-dev, an open-source base model for conversational audio.
And OpenAI introduced Predicted Outputs as a new feature for latency optimization.
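On that last item: Predicted Outputs targets the common case where most of a response is known in advance - say, asking a model to make a small edit to a file. Per OpenAI's announcement, you pass the anticipated text as a prediction and matching tokens are accepted rather than regenerated, cutting latency. A hedged sketch of the request shape (the exact payload is built as a plain dict here rather than a live API call; the task text is invented for illustration):

```python
# The original file content - most of it should survive the edit verbatim,
# which is exactly when Predicted Outputs helps.
original_code = 'def greet():\n    print("hello")\n'

request = {
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": "Rename the function greet to welcome:\n" + original_code,
        },
    ],
    # The anticipated output: tokens that match the model's actual output
    # are skipped ahead instead of being generated one by one.
    "prediction": {"type": "content", "content": original_code},
}
```

The trade-off is that mismatched prediction tokens are still billed at completion rates, so it pays off only when the prediction is mostly right.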
Happy birthday, Alexa
Ten years after its launch, Amazon's Alexa skills platform hasn't lived up to its ambient computing vision - but LLM breakthroughs might finally change that.
Between the lines:
Despite Amazon's efforts to create a seamless voice-first ecosystem, most users still primarily use Alexa for basic tasks like playing music and checking the weather.
The platform's fundamental challenges stem from clunky user interfaces, skill discovery, and syntax requirements - issues that are potentially fixable with LLMs.
But it will still be an uphill battle: the Alexa marketplace has over 160,000 “skills,” each with its own set of inputs and outputs.
Elsewhere in the FAANG free-for-all:
Microsoft is rolling out AI-powered text editing features in Notepad and generative features in Paint for Windows Insiders.
Apple has reportedly asked Foxconn to produce AI servers in Taiwan using Apple silicon, though capacity is limited by demand for Nvidia's servers.
Amazon unveils X-Ray Recaps on Prime Video, an AI feature generating spoiler-free summaries of TV shows for Fire TV users in the US.
And Meta plans to use an AI-based "adult classifier" tool on Instagram to identify young users lying about their age and adjust privacy settings automatically.
Things happen
Doctors are pioneering AI use to improve patient outcomes, from faster diagnostics to better communication.
Perplexity's US election hub offered real-time insights and historical context on November 5.
Denmark's AI supercomputer Gefion, built with Novo Nordisk's $100M, boasts 1,528 H100 GPUs.
OpenAI bought chat.com, which now redirects to ChatGPT.
Perplexity is finalizing a $500M round that would triple its valuation to $9B.
ChatGPT Search is far from being a "Google killer" due to unreliability with short queries.
Meta's former AR glasses head joins OpenAI to lead robotics and consumer hardware.
Disney forms the Office of Technology Enablement to coordinate its use of emerging technologies.
Tech giants' capital expenditures are set to top $200B in 2024, chasing AI developments.
T-Mobile plans to use OpenAI's tech for a $100M customer service bot slated for 2025.
TSMC's SVP says Taiwan must improve its chip tech to maintain global leadership.
OpenAI is in early talks with California's AG to become a for-profit company.
YC's Head of Public Policy aims to fight for "little tech" in Washington.
The Vatican's anime mascot becomes an AI porn sensation.
Tesla's autonomous driving approach needs AI breakthroughs that may be years away.
A Mumbai drugmaker is helping Putin get Nvidia AI chips.
One in 20 new Wikipedia pages seems to be written with AI assistance.