The 10^26 FLOPS threshold seems to be set just above the amount of computing power used to train currently available models. According to this visualisation (https://epochai.org/mlinputs/visualization), GPT-4 used 2.1*10^25 FLOPS, a factor of roughly five below the threshold.
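
For concreteness, a quick back-of-the-envelope check of that gap, using the Epoch AI estimate cited above (the figures are estimates, not official numbers):

```python
import math

# Reported regulatory threshold and Epoch AI's estimate for GPT-4, in FLOPS
threshold = 1e26
gpt4_compute = 2.1e25

# How far below the threshold GPT-4's training run falls
ratio = threshold / gpt4_compute
print(f"GPT-4 is a factor of ~{ratio:.1f}x below the threshold")
print(f"i.e. about {math.log10(ratio):.2f} orders of magnitude")
# ~4.8x below, or ~0.68 orders of magnitude
```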

It seems that number was chosen so that current models won't fall under extra scrutiny. Newer models, such as GPT-5 or Google Gemini, might exceed the threshold.
