Doesn't the concept of AI "hallucination" arise from observing previous LLMs? Would the training data for current LLMs include anything that would let them build a model of what such hallucinations entail?