2 Comments
Conrad Gray:

The 10^26 FLOPS threshold seems to be set just above the amount of computing power used to train currently available models. According to this visualisation, GPT-4 used 2.1*10^25 FLOPS https://epochai.org/mlinputs/visualization - less than an order of magnitude below the threshold.

It seems that number was set so that current models won't fall under extra scrutiny. New models, like GPT-5 or Google Gemini, might exceed the threshold.
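A quick back-of-the-envelope sketch of that gap, using only the figures quoted above (the 10^26 FLOPS threshold and Epoch AI's 2.1*10^25 FLOPS estimate for GPT-4):

```python
import math

# Figures quoted in the comment above (assumed accurate here):
# the reporting threshold and Epoch AI's estimate of GPT-4's training compute.
threshold_flops = 1e26
gpt4_flops = 2.1e25

ratio = threshold_flops / gpt4_flops      # ~4.8x of headroom below the threshold
orders_of_magnitude = math.log10(ratio)   # ~0.68 of an order of magnitude

print(f"GPT-4 sits a factor of {ratio:.1f}x below the threshold")
print(f"That is about {orders_of_magnitude:.2f} orders of magnitude")
```

So a next-generation model trained with roughly 5x GPT-4's compute would already cross the line, which is why the GPT-5/Gemini question matters.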

Charlie Guo:

That's the intuition I have as well. I'm guessing there were some closed-door meetings between OpenAI/Anthropic/DeepMind and the Biden Administration to negotiate these. It'll be very interesting to see whether Gemini falls below the threshold or not.
