OpenAPI
AI is having a bit of a moment right now. And while there is a lot of hot air, there has been genuine progress in the underlying tech. It can sometimes seem like magic, but at the core of this progress are improvements to machine learning models, and the way we're building and training them.
To radically oversimplify, cutting-edge ML models need mountains and mountains of data. Often, there's also a massive amount of manual work required to get the data in the right format. For example, image-generating AI learns from billions of labeled images - think pictures of birds with the associated caption "bird", across a huge range of categories.
Once you're done with the data, you then have to run computers to crunch all that data into the final model. For state-of-the-art models like ChatGPT, this means running a gargantuan number of servers and GPUs. We don't know exactly how much OpenAI spent training ChatGPT, but it's a safe bet that it's in the millions of dollars.
These combined technical and financial hurdles mean that many businesses that want to incorporate ChatGPT are stuck - they don't have the resources to train their own models, and ChatGPT is only accessible through a simple chat UI. You'd imagine that if OpenAI made ChatGPT more easily accessible, at a low cost, it would lead to yet another wave of AI features and companies.
So it's notable that OpenAI announced APIs for ChatGPT and Whisper (a transcription model that converts audio to text). And, in fact, these APIs are already powering new features at Snap (née Snapchat), Quizlet, Instacart, and Shopify.
Quizlet has worked with OpenAI for the last three years, leveraging GPT-3 across multiple use cases, including vocabulary learning and practice tests. With the launch of ChatGPT API, Quizlet is introducing Q-Chat, a fully-adaptive AI tutor that engages students with adaptive questions based on relevant study materials delivered through a fun chat experience.
Instacart is augmenting the Instacart app to enable customers to ask about food and get inspirational, shoppable answers. This uses ChatGPT alongside Instacart’s own AI and product data from their 75,000+ retail partner store locations to help customers discover ideas for open-ended shopping goals, such as “How do I make great fish tacos?” or “What’s a healthy lunch for my kids?” Instacart plans to launch “Ask Instacart” later this year.
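If you're wondering what building on these actually looks like, here's a minimal sketch using the openai Python package as it shipped at launch. The API key, prompt, and audio file name are placeholders; the model names are the ones OpenAI announced for the new endpoints.

```python
# A rough sketch of the two newly announced endpoints, using the openai
# Python package (the 0.27-era interface available at launch). The API key,
# prompt, and audio file below are placeholders.
import openai

openai.api_key = "sk-..."  # your API key here

# ChatGPT API: send a list of chat messages, get a completion back.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful shopping assistant."},
        {"role": "user", "content": "How do I make great fish tacos?"},
    ],
)
print(chat.choices[0].message.content)

# Whisper API: upload an audio file, get a transcript back.
with open("recipe-walkthrough.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript.text)
```

The point is how thin the integration surface is: a hosted API call, with no training runs or GPU clusters required, which is why companies can bolt these features onto existing products so quickly.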
Beyond API access, OpenAI is reportedly launching Foundry, a platform for big businesses to run OpenAI's models at scale with dedicated capacity[1]. The leaked screenshots suggest that 1) the long-awaited GPT-4, successor to ChatGPT, is coming and 2) this won't be cheap: the lowest tier starts at $22,000 per month.
Currently, research is being done to see whether we can bring these costs down. But it's far from guaranteed. If we can't lower costs, there's a risk of creating AI landlords and renters: the companies that can fund in-house AI, and everyone else. Organizations that can afford their own AI would become the gatekeepers to any new tech progress, for both good and bad.
OpenAI seems to take its role as a gatekeeper pretty seriously. This week they also explained their plans and principles when it comes to working on AGI - AI that's as smart (or smarter) than humans.
If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.
…
On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.
Not-so-open AI
On the one hand, it's good that OpenAI is grappling with some of the hard questions now and laying down ethical principles for its approach. On the other hand, I'm old enough to remember that the "Open" part of "OpenAI" refers to the fact that they used to be a non-profit research group. Apparently, some other people remembered too:
This blog post and OpenAI’s recent actions—all happening at the peak of the ChatGPT hype cycle—is a reminder of how much OpenAI’s tone and mission have changed from its founding, when it was exclusively a nonprofit. While the firm has always looked toward a future where AGI exists, it was founded on commitments including not seeking profits and even freely sharing code it develops, which today are nowhere to be seen.
At some point, they decided that actually, profits are kind of important[2].
This resulted in a shift in direction in 2018 when the company looked to capital resources for some direction. “Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission,” the company wrote in an updated charter in 2018.
By March 2019, OpenAI shed its non-profit status and set up a “capped profit” sector, in which the company could now receive investments and would provide investors with profit capped at 100 times their investment.
Right now, two of the most advanced AI organizations are DeepMind (owned by Alphabet) and OpenAI (partly owned by Microsoft). This, understandably, makes some people nervous.
Building a meaningful competitor isn't as simple as doing a startup in your garage, though. Some are trying! EleutherAI began several years ago as an informal group of developers working on open-source AI. They curated and released training data sets and several open-source GPT models. They've found decent success – they now have over 20 full-time researchers and have co-authored 28 papers in the last 18 months.
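Those releases are genuinely usable today. Here's a hedged sketch of loading one of EleutherAI's public checkpoints with Hugging Face's transformers library; the model ID is one of their released GPT-Neo models, and the prompt and generation settings are just illustrative.

```python
# A minimal example of running one of EleutherAI's open-source GPT models
# locally via Hugging Face transformers. EleutherAI/gpt-neo-1.3B is one of
# their publicly released checkpoints; the prompt and length are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
result = generator("Open-source AI matters because", max_new_tokens=40)
print(result[0]["generated_text"])
```

Downloading and running a 1.3-billion-parameter model on your own machine is one thing; training a GPT-3-scale model from scratch is another.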
But you can't get away from the computing costs. From TechCrunch:
To train its models, EleutherAI relied mostly on the TPU Research Cloud, a Google Cloud program that supports projects with the expectation that the results will be shared publicly.
…
But the fickle nature of its cloud providers sometimes forced EleutherAI to scuttle its plans. Originally, the group had intended to release a model roughly the size of GPT-3 in terms of the number of parameters, but ended up shelving that roadmap for technical and funding reasons.
So this week, EleutherAI announced new funding from major industry players. They include Canva, Stability AI (creator of Stable Diffusion), and Hugging Face (a leading cloud platform for AI models). It seems like they’re trying to mount a serious challenge to OpenAI.
EleutherAI also announced the creation of a new nonprofit: the EleutherAI Institute. The plan is to remain independent and not become beholden to corporate interests.
[Foundation co-runner Stella] Biderman asserts that the EleutherAI Foundation will remain independent and says she doesn’t see a problem with the donor pool so far.
“We don’t develop models at the behest of commercial entities,” Biderman said. “If anything, I think that having a diverse sponsorship improves our independence. If we were fully funded by one tech company, that seems like a much bigger potential issue from our end.”
Given recent history though, that seems easier said than done.
FTC to AI companies: check yourself
Look, I think we can all agree that there's an inordinate amount of hype in this space. Everyone and their mother is pivoting to AI or incorporating AI.
And as a business, it's hard to keep up! If your competition is launching AI-powered gizmos and gadgets, you might feel like you're being left in the dust. How do you even get started? It’s all a black box! A really expensive black box!
And in the race to not get left behind, you might be tempted to take some shortcuts[3]. Maybe you dust off an old GPT model and claim it's state-of-the-art. Heck, maybe you don't even have software at all, and launch something that's powered by humans behind the scenes. AI is such a black box these days, who's to say you're lying?
Luckily, there's a savvy, agile organization to ensure you stay on the straight and narrow. The... uh... FTC?
AI hype is playing out today across many products, from toys to cars to chatbots and a lot of things in between. Breathless media accounts don’t help, but it starts with the companies that do the developing and selling. We’ve already warned businesses to avoid using automated tools that have biased or discriminatory impacts. But the fact is that some products with AI claims might not even work as advertised in the first place. In some cases, this lack of efficacy may exist regardless of what other harm the products might cause. Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product’s efficacy are our bread and butter.
They're... not wrong?![4] It's actually very commendable that the FTC is trying to keep up with developments in AI and protect consumers, especially since Congress seems so out of touch with the realities of the tech industry. The snarky part of me, though, thinks the FTC has been feeling a bit sheepish after FTX's spectacular implosion (and the ongoing fallout) last year. And right now, AI hype looks suspiciously like crypto hype at a distance. Perhaps the FTC is looking to get out ahead of the curve this time.
Things happen
Meta is forming a team to get generative AI into its products. Scribble Diffusion: turn your sketch into a refined image. Would you let ChatGPT control your smart home? Elon Musk is reportedly building “Based AI” because ChatGPT is too woke. “I worked on Google’s AI. My fears are coming true.” A profile of OpenAI CTO Mira Murati.
[1] The parallels between this and the history of cloud computing are striking. It used to be you had to pay tens of thousands upfront and hire someone to manage your servers in order to run a software business. Now, AWS/GCP/Azure make it nearly free to get started. A big difference is that by the time AWS came around, there was a lot of value being driven by internet businesses. I’m not sure we’re there yet with AI, but I’m pretty confident we’ll get there.
[2] Because profits are important! Servers don’t grow on trees. And we can argue the ethics of capitalism all day, but our present reality dictates that almost nobody is going to pour hundreds of millions into AI research and development without an expectation of profits. However, that doesn’t excuse the bait-and-switch nature of OpenAI’s ethos.
[3] Not legal advice. Never legal advice.
[4] Also, for what it’s worth, I loved their lede. So much more poetic than I would have expected! “A creature is formed of clay. A puppet becomes a boy. A monster rises in a lab. A computer takes over a spaceship. And all manner of robots serve or control us.”