Discussion about this post

Daniel Nest:

I remember the GPT-4 "getting lazy" discussion at the end of last year very well. And I remember Ethan Mollick making a joke post about prompting getting weird that included a line providing a "non-lazy" month to the LLM. But I must say, I completely missed the new pushback against Claude. I've personally found Claude to be consistently great recently, and my main complaint is that Anthropic now frequently defaults to Claude 3 Haiku for free accounts when demand is high.

But many of the theories sound reasonable, including the training and us slowly discovering edge cases after the shine wears off. It'll be interesting to see if we ever get some clarity here.

Andrew Smith:

Very good job of laying out the various theories here, Charlie. Of these, I gravitate toward some combination of the extra pre-training and (possibly) collective delusion fueled by misinformation. I also don't want to dismiss folks who are noticing things getting worse, but my own experience has been different: the LLMs are not getting dumber, at least for the things I use them for every day.

With all that said, I've been thinking nonstop about emergence, and I wonder if this little surprise might be a part of that larger concept. I'll be thinking about that one for a while.
