If superhuman intelligence requires persistently perfect training data, then maybe we should just admit to ourselves that LLMs are physically incapable of attaining "AGI".