"Do not hallucinate": Testers find prompts meant to keep Apple AI on the rails

  • Doesn't the concept of AI "hallucination" come from observing earlier LLMs? Would the training data for current LLMs include anything that would let them build a model of what such hallucinations entail?