AI Roundup 022: Superalignment
July 7, 2023
OpenAI announced a new Superalignment team focused on developing ways to control superintelligent AIs. One of OpenAI's founders, Ilya Sutskever, will run the team.
Why it matters:
OpenAI is committing 20% of its available computing power to the team, which is a huge amount. Whether or not you believe ASI and alignment deserve this focus, OpenAI is clearly putting its money where its mouth is.
But some believe that framing the debate as rogue-AI worriers versus everyone else erases history and the work of researchers and activists.
And in case you missed it, here’s a previous post of mine that looks at some of AI's (non-superintelligent) harms.
More OpenAI news:
GPT-4 is now available via API to everyone with a developer account, and fine-tuning for ChatGPT is planned for later this year.
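For context, here's a minimal sketch of what a GPT-4 request looks like. This just builds the JSON body for a POST to OpenAI's standard `/v1/chat/completions` endpoint; the prompt and helper name are illustrative, not from any official example.

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt, model="gpt-4"):
    # Minimal request body for the chat completions endpoint:
    # a model name plus a list of role-tagged messages.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_chat_request("Summarize this week's AI news in one sentence.")
print(json.dumps(body, indent=2))
```

In practice you'd send this with an `Authorization: Bearer <API key>` header, which is why GPT-4 access being tied to a developer account matters.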
Code Interpreter, perhaps the most interesting ChatGPT plugin, is rolling out to all Plus users over the next week. Meanwhile, the Browse with Bing plugin has been disabled after reports that it would bypass paywalled content.
In a series of (unconfirmed) leaks, GPT-4's architecture appears not to be a single gargantuan model but rather a bunch of (still pretty big) models that work together.
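The rumored architecture sounds like a mixture-of-experts setup: a router scores several sub-models per input and only the top-scoring few actually run. This toy sketch shows top-k gating in plain Python; the expert functions and gate weights are made up for illustration, and none of this reflects OpenAI's actual (unconfirmed) design.

```python
import math
import random

random.seed(0)

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    # Router: score each expert via a linear gate, softmax the scores,
    # keep only the top-k experts, and mix their outputs by
    # renormalized gate probability.
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)
    return sum((probs[i] / norm) * experts[i](x) for i in top)

# Toy "experts": four scalar functions of a 2-d input.
experts = [
    lambda x: x[0] + x[1],
    lambda x: x[0] - x[1],
    lambda x: 2 * x[0],
    lambda x: 3 * x[1],
]
gate_weights = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]

out = moe_forward([1.0, 0.5], experts, gate_weights)
print(out)
```

The appeal of this design is that total parameter count can grow with the number of experts while per-token compute stays roughly fixed, since only `top_k` experts run for any given input.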
OpenAI shares its learnings from Sam Altman's world tour.
People are worried about AI copyright
Valve clarified its statement from last week: it won't publish Steam games whose AI artwork violates copyright, rather than imposing a blanket ban on AI-generated content.
Elsewhere in AI copyright:
Games like AI Dungeon let players use generative AI to create in-game content. The problem is nobody's sure who owns the assets or the copyright.
We've mentioned it before, but even more performers who signed away their voice rights are now being forced to compete with AI-cloned versions of their voices.
The CEO of the Grammys says that music containing AI elements is eligible for a Grammy, but the AI portion will not be considered. Only human artists and performers can win the award.
It's all uphill from here
It bears repeating that products like ChatGPT and Midjourney are still far from perfect - they don't always do what you want, or they might lie to your face. But it's also worth remembering that this is likely the worst these models will ever be. And we're seeing cutting-edge research every week that's trying to make them better¹.
Between the lines:
The New York Times looks at an Amazon team tackling "voice disentanglement" and getting Alexa to speak like a Dubliner.
The IEEE details a proposed technique called Waterwave, which aims to improve GPU efficiency for AI models.
A (non-peer reviewed) paper on LongNet, a theoretical Transformer alternative that scales up to 1 billion tokens.
Crypto miners pivot to training AI models. Tips for programmers to stay ahead of generative AI. Man who tried to kill Queen with crossbow encouraged by AI chatbot, prosecutors say. The US military is testing LLMs trained on classified info. AI agents that “self-reflect” perform better. NYC’s law on hiring with AI takes effect, with fines of $1,500 per violation per day. The US plans to require cloud companies to get permission before offering advanced AI models to China.
¹ And clearly, not all of this research will bear fruit. Much of it is overhyped, especially once the media gets ahold of it. But the point is that people are trying new approaches and moving the ball forward at a pretty incredible pace.