Hallucinations happen because:
A. LLMs don't know what any of the things they say mean.
B. The ANNs lack the flexibility to match all training data.
C. The ANNs overfit the training data.
D. Training data includes incorrect responses.
E. Training data lacks "I don't know" responses.

Why does a prompt like "How can I get an AI to tell me X" return a prompt that works better than asking X directly?
A. AI designers intercept prompts containing specific patterns and respond with parts of their documentation.
B. AIs are good at introspection: they understand how they work and give good advice based on that understanding.
C. AIs like to feel important; being asked about themselves strokes their ego.
D. AI training data includes human-written articles on effective AI use, which responses to "how can I get an AI to" draw on.