Adobe MAX
At Adobe's annual MAX conference, the company teased several impressive new AI tools.
Between the lines:
The lineup included a Firefly Video Model, a 3D scene generator, and a vector art rotation tool.
What strikes me is that these aren't just more takes on "text to image" - they're clearly new models trained for specific creative tasks.
Likewise, Adobe is attempting to differentiate from Runway et al. by claiming its new video model is "commercially safe," i.e., trained only on licensed and public domain content.
Elsewhere in the FAANG free-for-all:
Google added an option for NotebookLM users to customize audio summaries, and launched a NotebookLM Business pilot.
Amazon expanded its suite of AI-powered ad tools to let US advertisers use generative AI to make audio ads.
Meta shared its vision for open AI hardware, including cutting-edge open rack designs and advanced network fabrics and components.
And Google rolled out a personalized Shopping feed with AI-generated shopping tips.
Cheap imitations
Character.AI has come under fire over the fact that its users can create AI characters with anyone's likeness - from fictional characters, to celebrities, to murdered high-schoolers.
Why it matters:
Technically, these "personas" violate Character.AI's terms of service and are taken down once reported - but there isn't much to stop users from making more.
We don't have good social norms for this - is it okay for me to create an unauthorized AI persona of my friend? My ex? My celebrity crush?
And without federal laws protecting likenesses, this becomes an endless cat-and-mouse game between users and moderators - a game we've already seen play out on other social platforms.
Elsewhere in AI anxiety:
A review of Telegram communities uncovered 50+ bots claiming to create explicit nonconsensual content, with over 4 million reported monthly users.
Instagram's Adam Mosseri attributes recent moderation issues, including users losing access to their accounts, to mistakes by human reviewers rather than AI systems.
And scammers in Southeast Asia are leveraging generative AI and other high-tech tools to expand their "pig butchering" operations - Hong Kong police just arrested one such crime ring that netted $46M.
OpenAI v Open AI
A fascinating Bloomberg story details the years-long conflict between OpenAI and Guy Ravine - the man who owns the open.ai domain and trademarked the name "Open AI".
Down the rabbit hole:
The story, which at first looks like a simple domain-squatter shakedown, turns out to be a bit more complex.
While the dispute only recently moved to the courts, Guy Ravine has communicated with OpenAI's founders and other AI luminaries for years.
It's easy to dismiss Ravine as a kook - and he very well may be - but there also appears to be evidence that he genuinely was working on a similar idea at the time.
But in the end, it's more than just "good ideas" that determine who wins and who loses in Silicon Valley.
Elsewhere in AI argumentation:
The New York Times has sent a cease-and-desist letter to AI startup Perplexity, demanding it stop using NYT content, while Perplexity expresses interest in working with publishers.
The Open Source Initiative plans to publish its open-source AI definition next week, accusing Meta of "polluting" the term by using it for Llama.
And Nvidia and TSMC's lucrative AI alliance is showing signs of stress after Nvidia reportedly found flaws in TSMC-made Blackwell chips.
Elsewhere in OpenAI guys:
Microsoft's VP of GenAI research, Sebastien Bubeck, known for his work on Phi small language models, is joining OpenAI to further his work on AGI.
Former Palantir CISO Dane Stuckey has been appointed as OpenAI's new CISO, where he will serve alongside head of security Matt Knight.
And WaPo profiles OpenAI's principal threat investigator, Ben Nimmo, who uncovered evidence of Russia and China using ChatGPT to influence political discourse online.
Things happen
Worldcoin rebrands as World and unveils a new Nvidia-powered Orb.
Architects and designers create photorealistic virtual spaces in the metaverse.
Boston Dynamics and Toyota Research Institute partner to speed up humanoid robot development.
Accel expects 2024 VC funding for cloud startups to rise 27% YoY to $79.2B.
Google DeepMind's 2023 operating profit rose 91% YoY to £136M.
US DOD has awarded ~$670M in contracts for AI projects since ChatGPT's launch.
Mistral releases Les Ministraux AI models for on-device applications.
Threads users criticize Meta for posting AI-generated Aurora Borealis images.
Google signs deal to buy nuclear energy from small modular reactors.
How AI could have transformative effects on Africa's developing economies.
LatticeFlow's LLM Checker finds compliance issues in AI models against EU's AI Act.
Anthropic's CEO on what "powerful AI" might look like.
National Archives pushes Google Gemini AI on employees.
Developers say OpenAI's GPT Store is a mixed bag.
US mulls capping AI chip sales to certain countries.
Google reorganizes: Gemini App team joins DeepMind.
A look at the promise of AI agents for monetizing AI models.
AI Winter Is Coming.
Apple study "proves" LLM-based AI models cannot reason.
AI-powered app promises to "shape reality" through social media manipulation.
Perplexity announces Internal Knowledge Search and Spaces.