As far as I (ex-ML researcher) know, the main technological case that LLM performance will hit a limit is that the amount of text data available to train on is limited. The way these scaling laws work, models need roughly 10x or 100x the quantity of data to see major improvements.
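To make the diminishing-returns point concrete, here's a rough sketch using the Chinchilla-style scaling law L(N, D) = E + A/N^alpha + B/D^beta (Hoffmann et al. 2022). The coefficients are approximate published fits and the exact numbers are only illustrative, not a prediction:

    # Rough illustration of diminishing returns from more training data,
    # using the Chinchilla-style loss fit L(N, D) = E + A/N^a + B/D^b.
    # Coefficients are approximate values from Hoffmann et al. (2022);
    # treat this as a sketch, not a precise forecast.
    def chinchilla_loss(params, tokens,
                        E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
        return E + A / params**alpha + B / tokens**beta

    N = 70e9  # fixed model size (70B parameters), chosen arbitrarily
    for D in [1e12, 10e12, 100e12]:  # 1T, 10T, 100T training tokens
        print(f"{D:.0e} tokens -> predicted loss {chinchilla_loss(N, D):.3f}")
    # Each 10x of data shaves off a progressively smaller slice of loss,
    # which is why 10x-100x more data is needed to see a noticeable jump.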
This isn't necessarily going to limit it, though. It's possible there are clever approaches to leveraging much more data, whether through AI-generated data, other modalities (e.g. video), or another approach altogether.
This is quite a good, accessible post on both sides of this discussion: https://www.dwarkeshpatel.com/p/will-scaling-work