The AI innovator's dilemma
Google is reportedly considering charging for new "premium" AI-powered search features. It would be the first time the company has ever put its core product behind a paywall.
Why it matters:
While it is clear to many that AI is a transformational technology, it's less clear how companies should try to make money with it. The old monetization playbooks are not entirely applicable.
It's not just Google - plenty of other companies are struggling to find AI-compatible business models. Perplexity is now planning to sell ads, reversing its past statements, and Stability AI has reportedly struggled to pay its GPU bills.
Part of what makes this difficult is the high cost of training and running LLMs, which makes freemium tiers - especially for the most advanced models - quite expensive to sustain.
Elsewhere in the FAANG free-for-all:
Apple researchers detail a new system that can "see" and understand screenshots.
Microsoft is testing a new AI-powered, Xbox-based chatbot to automate support tasks.
Amazon's AGI team aims to beat Anthropic's latest Claude models this summer via an upcoming LLM codenamed Olympus.
And CNET has an interview with Andrew Bosworth on the past and future of Meta's AR, VR, and AI efforts.
OpenAI has a lot going on
Once again, OpenAI shipped a raft of features, announcements, and updates this week.
What's new:
ChatGPT is now available without an OpenAI account - though it's missing some advanced features such as saved chats and custom instructions.
The fine-tuning API now offers more configuration options and observability, and the company is also expanding its Custom Models program.
OpenAI detailed its Voice Engine, which can clone voices from as little as 15 seconds of audio. The company has intentionally avoided a public release due to the potential for abuse.
ChatGPT Plus users can now natively edit images generated with DALL-E by highlighting and inpainting parts of each image.
In a new interview, COO Brad Lightcap calls 2024 "the year of adoption for AI in the enterprise."
Microsoft and OpenAI are reportedly planning a $100B data center project to supply the GPUs necessary for OpenAI's models.
And OpenAI has changed the governance structure of its venture capital arm, according to a recent filing.
Elsewhere in model mayhem:
Stability AI released Stable Audio 2.0, which can generate audio clips of up to three minutes. The model, which is available for free via Stability's website, is the company's first major release since its CEO resigned.
Cohere unveiled Command R+, an "enterprise-friendly" LLM that's cheaper than GPT-4 and optimized for tasks like retrieval-augmented generation (RAG).
And Replit launched Code Repair, an AI agent to fix coding errors automatically.
AI & Section 230
As copyright-focused AI lawsuits work their way through the courts, a different legal challenge is emerging: whether AI companies can be held liable for harm caused by generated content.
Between the lines:
The heart of the issue is Section 230, which provides a safe harbor for user-generated content (posts, images, videos, and comments) hosted by internet companies. Many legal scholars believe AI-generated content doesn't qualify for these protections.
We'll likely see it put to the test sooner rather than later - there are plenty of examples, like NYC's AI chatbot that hallucinates bad legal advice. Previously, Air Canada was forced to honor a hallucinated refund in a civil case.
There's also the question of how AI content should be used as evidence; a Washington state judge recently blocked the use of "AI-enhanced video" evidence in a landmark ruling.
Elsewhere in AI anxiety:
Hundreds of musicians, including some big names, have signed a letter urging AI developers to stop using artists' voices without permission and harming the music industry.
Anthropic researchers detailed a "many-shot jailbreaking" technique, which loads a context window with dozens of harmful examples to get LLMs to bypass their safety guardrails.
And the US House has banned staffers from using Microsoft Copilot "due to the threat of leaking House data."
Last week's roundup
Things happen
US and UK sign landmark agreement on testing safety of AI. This camera turns every photo into a nude. Some MBA programs are going all-in on AI. Lavender: the AI machine directing Israel's bombing in Gaza. Google Books is indexing AI-generated garbage. The MIT economist who believes AI could actually benefit the middle class. Elon Musk is boosting Tesla pay to stop OpenAI from poaching AI talent. Models all the way down. A law firm of AI-generated lawyers is sending fake threats. Welcome to the AI gadget era. An AI researcher takes on election deepfakes. AI-generated Asians were briefly unavailable on Instagram. NYC’s AI gun detectors hardly work. Amazon Kindle lock-screens are showing ads for AI-generated books. George Carlin's estate settles with Dudesy, makers of "AI George Carlin" podcast. An AI-generated performance of the MIT License. AI-generated garbage is polluting our culture. Big Tech companies form new consortium to allay fears of AI job takeovers. The US and EU look to find alternatives to "forever chemicals" used in chip manufacturing. Jon Stewart on AI's false promises.
Fundraising ($302 million raised by 14 companies)
If you haven’t already, also check out this week’s YC AI batch roundup: