System prompts
Anthropic published the system prompts of its flagship models, including Claude 3 Opus and Claude 3.5 Sonnet.
Why it matters:
AI companies have fought to hide their system prompts, ostensibly to prevent unintended behaviors, but have had mixed success in the face of LLM jailbreaks.
The prompts hold a few surprises: they grow longer with each new model, and Anthropic refers to Claude in the third person rather than as "you."
Yet they still seem incomplete; the prompts make no mention of Artifacts or tool use, for example.
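For reference, the published defaults govern the claude.ai apps; API callers pass their own system prompt instead. Here is a minimal, hypothetical sketch using the official anthropic Python SDK; the system string below is illustrative, not Anthropic's actual prompt.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # one of the models whose prompt was published
    max_tokens=256,
    # Illustrative stand-in that mimics the published third-person style;
    # the real prompt is far longer and is not reproduced here.
    system="The assistant is Claude, created by Anthropic. Claude answers concisely.",
    messages=[{"role": "user", "content": "Who are you, and who made you?"}],
)
print(response.content[0].text)
```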
Elsewhere in foundation models:
Alibaba released Qwen2-VL, a new AI model that can analyze videos longer than 20 minutes to summarize and answer questions about their content.
Meta reported its Llama models were downloaded almost 350M times, with usage via cloud providers more than doubling from May to July 2024.
OpenAI reportedly demoed its "Strawberry" model to US officials; the model is also reportedly being used to generate training data for a new LLM, codenamed "Orion."
And OpenAI and Anthropic agreed to give the US AI Safety Institute early access to major new AI models for testing and evaluation.
15 minutes of doom
As AI becomes more integrated into our lives, it seems less like Skynet and more like Siri. And it's worth asking: did the AI doomers waste their big moment?
Between the lines:
Prominent AI safety proponents wish the movement had better handled its moment in the spotlight. After years on the fringe, they were unprepared when regulators wanted to take them seriously.
After the failed coup at OpenAI, some safety researchers took away the lesson that "novel corporate-governance structures cannot constrain executives who are hell-bent on acceleration."
In hindsight, much of the regulatory momentum seems to have come from ChatGPT's overnight success and GPT-4's arrival soon afterward.
Now people's fears have ebbed, and the appetite for safety regulation is waning with them.
Elsewhere in AI anxiety:
In a new survey, 1 in 10 minors say a friend or classmate has used AI to generate nudes of other kids.
Call center staff in the Philippines are navigating job volatility in the face of AI automation.
A RAND Corporation report suggests that over 80% of AI projects fail - more than twice the rate of other software projects.
And major websites and media outlets have blocked Apple's AI crawler from accessing their content.
SB 1047
After months of headlines, multiple rounds of revision, and prominent pushback, California's State Assembly passed SB 1047.
What's the latest:
The bill now heads to the Governor for a signature or veto. Despite political pressure, it passed with overwhelming support in the State Senate (31-2) and Assembly (41-9).
While many tech giants still oppose the bill, Anthropic and xAI have come around to supporting it.
And right behind it is AB 3211, which passed the Assembly unanimously and would require companies to watermark AI-generated content.
Elsewhere in AI geopolitics:
Ukraine is working with private firms to test AI and other tech in drones to find land mines, save lives, and allow military forces to advance more quickly.
Documents reveal state-linked Chinese entities are using cloud services from AWS or its rivals to access advanced US chips and AI models they cannot acquire otherwise.
And amid President Maduro's media crackdown, some 100 Venezuelan journalists are using two AI avatars to host daily newscasts and avoid putting themselves at risk.
Things happen
Plaud unveils NotePin, yet another always-on AI wearable. Inflection will cap free access to its AI chatbot Pi in pivot to enterprise. Google's custom AI chatbots have arrived. Cheap AI voice bots are taking off with businesses in India. An interview with Pieter Levels on his latest AI startups. Midjourney's website is now officially open to everyone. TollBit aims to be the iTunes of AI content licensing. Ex-Googlers discover that startups are hard. ChatGPT active users doubled to over 200M since last year. This is Doom running on a diffusion model. Midjourney teases a new hardware product. OpenAI in talks for funding round valuing it above $100B. Chinese AI pushes ahead, chips or not. Using AI to fight insurance claim denials. Gmail users can now chat with Gemini about their emails. AI predicts earthquakes with unprecedented accuracy. Grok sends users to Vote.gov after warnings from officials. AI training shouldn't erase authorship. Man arrested for creating child porn using AI. Applying to 1k jobs with AI. The art of programming and why I won't use LLMs. Amazon's delayed overhaul of Alexa expected for October.