What President Biden's AI executive order actually means
I read all 111 pages so you don't have to.
On Monday, the White House unveiled AI.gov, a new website that showcases the federal government’s agendas, actions, and aspirations when it comes to AI.
There are links to join the "AI Talent Surge" and to find educational AI resources, but the main event is President Biden's executive order. It's far more comprehensive than many were expecting and tries to move the needle on AI safety in several ways. Of course, it can only go so far as an EO - long-lasting changes will have to come through acts of Congress.
But it's setting the stage for a lot of future AI regulation, and will reshape how the government (and large companies) think about AI.
TL;DR:
The EO has many areas of interest, but there are some key themes: compute-based reporting thresholds, biotech risks, bringing more AI talent into the government, and directing government agencies to think about AI.
Most AI companies will not be affected by this EO (yet). Foundation model developers (think OpenAI, Anthropic, and Meta) will be impacted, along with infrastructure-as-a-service platforms and federal contractors.
Other immediate impacts cover federal immigration/hiring, Cabinet departments, and miscellaneous government programs.
There is a tremendous amount of longer-term research, planning, and reporting that is going to happen across the entire federal government.
We are almost undoubtedly going to see much more regulation on the back of these changes. But it's too early to say whether the government is stifling innovation and/or adequately accounting for AI risks.
Key themes
The Biden Administration has eight main areas of concern regarding AI - and many of these have been previously covered in the Administration's Blueprint for an AI Bill of Rights. From the EO:
Artificial Intelligence must be safe and secure.
Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges.
The responsible development and use of AI require a commitment to supporting American workers.
AI policies must be consistent with the Administration’s dedication to advancing equity and civil rights.
The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
Americans’ privacy and civil liberties must be protected as AI continues advancing.
It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans.
The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change.
But this sprawling list is hard to understand in its entirety. It touches on civil rights, education, labor markets, social justice, biotech, AI safety, and immigration. What's more useful are the key themes:
Regulation via computing thresholds: One piece of the EO that's getting a lot of attention is the way that foundation models and GPU farms are being classified based on the amount of computing that they use. Any model trained with more than 10^26 FLOPs (total floating-point operations), or any computing cluster with 10^20 FLOPS (operations per second) of capacity, must regularly report to the government - though these thresholds are subject to change. It's also worth noting this is happening via the Defense Production Act, which seems like a somewhat unusual way to put these into effect.
Emphasis on biotech risks: While AI safety was a leading concern, AI safety as it pertains to biotech was called out specifically. The compute limit for "biological sequence data" models is 10^23 FLOPs, three orders of magnitude lower than the general-purpose AI limit. And there are plans for industry guidance regarding future biosecurity regulation, including synthetic bio, pathogen databases, and nucleic acid (DNA) synthesis.
Bringing in more AI talent: There are significant pushes to get more AI talent into the US and into the US government. The State Department is being asked to streamline AI-related visas, and there's a new "AI and Technology Talent Task Force" aimed at getting more AI experts into federal agencies. I suspect the Administration knows they need more expertise as they embrace AI at a broad level, but it will be an uphill battle to compete with tech salaries here.
Widely applying and researching AI: I've covered this in much more detail below, but the Biden Administration is really pushing AI into every corner of the federal government. Not all departments and agencies will have to take specific actions (most won't), but they're being tasked with at least thinking about and planning for an AI future. Every Cabinet department is also getting a Chief AI Officer.
Beyond these themes, the devil is really in the details. So it's helpful to think of the EO in terms of two categories: things the White House can do (or direct others to do) right now, and things the White House can ask others to assess and plan. Put another way: immediate actions and future planning.
Immediate actions
Computing thresholds
Perhaps the biggest immediate impact comes from the new computing thresholds, as they'll dictate which companies end up in the regulators' crosshairs. As mentioned above, those thresholds are any model trained with more than 10^26 FLOPs, or any computing cluster with 10^20 FLOPS of capacity. In addition to regularly reporting to the government, organizations going above these limits must run red-team testing on their models and share the results.
I'm very curious where those numbers came from - by my incredibly rough napkin math (sketched below), they sit only an order of magnitude or two above the latest models like Llama 2 and GPT-4. Current models are most likely fine, though OpenAI, Anthropic, DeepMind, and Meta will probably need to do some math before releasing the next generation of LLMs.
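To put rough numbers on that: a common rule of thumb estimates training compute as about 6 × parameters × training tokens. Here's a minimal Python sketch using that approximation - the Llama 2 figures come from Meta's paper, while the GPT-4 figures are unconfirmed community estimates (Epoch AI's visualization at https://epochai.org/mlinputs/visualization puts GPT-4 at roughly 2.1×10^25 FLOPs, within a factor of five of the threshold):

```python
# Napkin math: training compute ≈ 6 * parameters * training tokens.
# This is a rough approximation, not how regulators will actually count FLOPs.

MODEL_THRESHOLD = 1e26  # EO reporting threshold for general models (total FLOPs)
BIO_THRESHOLD = 1e23    # lower threshold for biological-sequence models

models = {
    # name: (parameters, training tokens)
    "Llama 2 70B": (70e9, 2e12),            # per Meta's Llama 2 paper
    "GPT-4 (unconfirmed)": (280e9, 13e12),  # rumored ~280B active params (MoE)
}

for name, (params, tokens) in models.items():
    flops = 6 * params * tokens
    print(f"{name}: ~{flops:.1e} FLOPs, "
          f"{MODEL_THRESHOLD / flops:,.0f}x below the 1e26 threshold")

print(f"(the bio-model threshold is {MODEL_THRESHOLD / BIO_THRESHOLD:,.0f}x "
      "lower still, at 1e23 FLOPs)")
```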
But I agree with critics here that regulating on raw FLOPs is a bad approach. Setting computation limits seems like a fool's errand, because 1) we keep figuring out how to train models more efficiently, and 2) we'll figure out ways around the limit. For example, does taking GPT-4 and doing heavy fine-tuning count as exceeding the threshold? I feel pretty confident in saying that those numbers aren't going to age well, especially as computing costs come down over the next few years.
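For a sense of scale on that fine-tuning question: under the same rule of thumb, even a large fine-tuning run is a rounding error next to pretraining. A quick sketch, reusing the unconfirmed GPT-4 estimates from above:

```python
# Same rough 6 * params * tokens approximation; all figures are estimates.
active_params = 280e9    # unconfirmed GPT-4 estimate (active params per token)
pretrain_tokens = 13e12  # unconfirmed estimate
finetune_tokens = 1e9    # a generously large fine-tuning dataset

pretrain_flops = 6 * active_params * pretrain_tokens  # ~2.2e25
finetune_flops = 6 * active_params * finetune_tokens  # ~1.7e21

# Fine-tuning adds on the order of 0.01% to total compute, so whether the
# threshold counts cumulative or per-run compute matters a lot.
print(f"fine-tune / pretrain = {finetune_flops / pretrain_flops:.1e}")
```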
There's also language around infrastructure-as-a-service platforms, requiring them to report foreign activity to the government. Specifically, IaaS providers have to report when foreign nationals train large AI models with potentially malicious capabilities. These read as KYC-style (know-your-customer) checks for foreigners training large models.
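For context on what a reportable cluster looks like, and what a KYC-style check might involve, here's a purely illustrative sketch - the EO doesn't prescribe any implementation, and the customer fields and per-GPU figure below are my own assumptions (NVIDIA quotes ~989 dense BF16 TFLOPS for an H100 SXM):

```python
from dataclasses import dataclass

CLUSTER_THRESHOLD = 1e20    # FLOPS (operations per second), per the EO
H100_BF16_FLOPS = 0.989e15  # ~989 TFLOPS dense BF16 per H100 SXM (vendor spec)

# Rough cluster math: it takes on the order of 100,000 H100-class GPUs
# to cross the reporting threshold.
print(f"~{CLUSTER_THRESHOLD / H100_BF16_FLOPS:,.0f} H100s to reach 1e20 FLOPS")

@dataclass
class TrainingJob:
    customer_is_foreign_person: bool  # hypothetical KYC field
    estimated_training_flops: float

def must_report(job: TrainingJob, threshold: float = 1e26) -> bool:
    """Illustrative only: flag jobs where a foreign person trains a model
    above the EO's compute threshold (the real criteria are broader and
    still being defined by Commerce)."""
    return job.customer_is_foreign_person and (
        job.estimated_training_flops > threshold
    )
```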
Overall, though, there aren't many immediate impacts on the industry. Your average AI startup probably isn't going to be affected, though cutting-edge foundation model development is almost certainly going to come under more scrutiny. That will likely change as individual government agencies get their AI acts together.
AI talent and immigration
The second area of immediate impact aims to boost the amount of AI talent in the US, specifically within the US government. On the immigration side, there are directives to streamline visas for those working on AI R&D, and to continue making visas available for those with AI expertise. There are also programs to identify top AI talent overseas and entice them to move to the US.
There's a new "AI and Technology Talent Task Force," which is meant to guide federal agencies in attracting and retaining top AI talent. Paired with new committees and working groups, the goal is 1) to engage more with industry experts and 2) to make hiring rules flexible enough to expedite the hiring process. The AI.gov website puts this initiative front and center, with a landing page to "Join the national AI talent surge." And where AI talent isn't available, there are other initiatives to boost the availability of AI training programs for government workers.
While it's clear that the government is going to need a lot more AI expertise, it's less clear whether it can be competitive enough to actually hire the right people. The government can't match the going rate for AI researchers, so can it somehow convince them to leave high-paying jobs? The US Digital Service (USDS) has been hiring Silicon Valley programmers for nearly a decade, but it works on a "tour of duty" model - very different from a long-term civil service career.
Chief AI Officers
The last area of immediate change is agency-specific interventions. Each Cabinet agency will need a new Chief AI Officer, who will be responsible for any new AI-related guidelines and frameworks that are created. And there are a lot - see the next section.
Besides new research and reporting, there are some concrete actions, which include:
The National Science Foundation (NSF) will fund an NSF Regional Innovation Engine that prioritizes AI-related work.
The Department of Health and Human Services will prioritize grants related to responsible AI development and use.
The Department of Veterans Affairs will host two AI Tech Sprint competitions.
The Small Business Administration will allocate millions in funding to AI-related initiatives.
The NSF will establish at least four new National AI Research Institutes (on top of the 25 existing ones).
The Department of Energy will create a pilot program aimed at training 500 new AI researchers by 2025.
Future planning
Beyond the immediate impacts, what's clear from the EO is that many, many agencies are now being forced to think about AI. Every single Cabinet member is involved in the order, as are many other agencies like the USPTO, NSF, and SBA.
These agencies now have to evaluate, assess, guide, plan, and report on AI. However, there isn't much here in the way of concrete action, so the lasting impact remains unclear. Again, more impactful AI regulation would need to come from Congress, but given the state of things, that doesn't seem likely to happen anytime soon.
Here are some of the guidelines and standards we can expect to see in the coming months - it's a long list:
National Institute of Standards and Technology (NIST) guidelines for safe development and deployment of AI models.
Department of Energy plans for model evaluation tools to assess harmful or hazardous AI outputs.
Department of Homeland Security assessments on AI risk to critical infrastructure.
Department of the Treasury report on best practices for financial institutions to manage AI-specific cybersecurity risks.
Department of Defense plans to add AI-related capabilities to government software and systems.
Department of Homeland Security report on AI's potential to enable CBRN (chemical, biological, radiological, or nuclear) threats.
Department of Defense assessment on how AI can increase biosecurity risks.
Office of Science and Technology Policy (OSTP) framework for synthetic biology risk management, procurement screening, and security best practices.
Department of Commerce report on tools and methods for authenticating, labeling, and detecting synthetic content.
Office of Management and Budget (OMB) guidance for federal agencies on how to label and authenticate any digital content they publish.
Department of Commerce/State request for inputs on the benefits and drawbacks of publicly releasing foundation model weights.
A National Security Memorandum to provide guidance to the Department of Defense, Department of State, other relevant agencies, and the Intelligence Community to address the security risks of AI.
Department of State guide for overseas AI experts to understand their options for working in the US, plus a report on how AI experts are navigating the US immigration system.
United States Patent and Trademark Office (USPTO) guidance on using AI (including generative AI) as part of the invention process.
United States Copyright Office recommendations on AI-related copyright issues.
Department of Homeland Security program to mitigate AI-related IP risks.
Department of Energy report on the potential for AI to improve electric grid infrastructure and clean energy availability.
Department of Labor report on federal support for workers displaced by the adoption of AI.
Department of Labor best practices for employers to mitigate AI's potential harms to employees and maximize its potential benefits.
Attorney General guidance on preventing algorithmic discrimination and a report on the use of AI in the criminal justice system.
Department of Health and Human Services/Department of Agriculture plan to address the use of algorithmic systems in the administration of public benefits.
Department of Labor guidance for federal contractors on avoiding discrimination in AI-based hiring.
Where we go from here
There have been a lot of strong reactions to the executive order in the last few days. Some are applauding the government's decisions, while others are decrying ham-fisted government overreach or successful regulatory capture by AI doomers. The most extreme example I've seen is an announcement to put GPUs in international waters so companies can train AI models without government oversight.
For what it's worth, I'm not so sure that the executive order is going to be all that oppressive - yet.
Yes, it's clunky - regulation via computing limits is an extremely blunt approach. And to repeat myself, I'm pretty confident that those computing limits will not age well.
Yes, the new rules will likely benefit incumbents - OpenAI will have way more resources available to red-team new models vs a brand-new startup.
However, your average AI startup doesn't need to worry about these rules. And realistically, we have an enormous amount of AI capability today that we are still figuring out how to leverage and adapt to. As much as I want access to GPT-5 right now, I also know that we could spend the next few years wrapping our heads around what GPT-4 is actually capable of, and integrating it into society.
What is clear is that there will be much, much more regulation coming off the back of this. You can't install Chief AI Officers at every Cabinet department and expect them to sit on their hands - especially when so many are clamoring for the government to do something about AI. And with every department looking hard at what they can do with/against AI (and given more power to do so), we can expect to see many new rules from various agencies.
With any luck, said agencies will be thoughtful about applying AI to their purview. But I'm pretty skeptical here. If the Department of Health and Human Services is given free rein (and 180 days) to put together comprehensive guidance on the US healthcare system's approach to AI, my guess is they're going to be painting with a pretty broad brush.