Ask HN: Given a sufficiently complex argument, people deduce anything they like

  • > Is there a principle for such a thing?

    Confirmation bias.

  • It seems to apply to AI as well, so don't be too judgemental.

  • To me that sounds like sophistry (unintentional or not). Wikipedia summarizes it nicely:

    "Sophistry" is today used as a pejorative for a superficially sound but intellectually dishonest argument in support of a foregone conclusion.

    Loosely related: The 1960s sci-fi novel "The Moon Is a Harsh Mistress" explored the idea of computers with AI powerful enough to construct a logically persuasive argument for any stance by cherry-picking and manipulating facts. I think the book called those computers Sophists, which seems particularly relevant today. You can absolutely ask an LLM to construct an argument supporting any stance (a rough sketch follows below) and, just like in the book, LLMs can be used to produce misinformation and propaganda at a scale that makes it difficult for humans to discern the truth.
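
    For instance, here is a minimal sketch in Python that prompts the same model to argue both sides of the same claim. It assumes the official OpenAI Python client (openai >= 1.0) and an API key in the environment; the model name, prompts, and example claim are all illustrative, not a recommendation.

      import os
      from openai import OpenAI

      # Illustrative sketch: the same model argues both sides of one claim.
      client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

      def argue(stance: str) -> str:
          # Ask for a persuasive case for the given stance, merits aside.
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative model name
              messages=[
                  {"role": "system",
                   "content": ("You are a debater. Write a short, persuasive "
                               "argument for the given stance, regardless of "
                               "its merits.")},
                  {"role": "user", "content": stance},
              ],
          )
          return response.choices[0].message.content

      claim = "Remote work increases productivity."
      print(argue(claim))                      # a confident case for
      print(argue("Argue against: " + claim))  # an equally confident case against

    Both outputs read as reasoned analysis, yet at most one can be right, which is the book's Sophist problem in miniature.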

  • Can you give an example?