"But LLMs are not deterministic "

  • I really like this framing. It reminds me of how early steam engines needed governors—a simple mechanical feedback loop—to prevent them from spinning out of control. The engine itself wasn’t "safe" or "stable" by design; stability was something imposed externally through a control mechanism.

    In a way, LLMs feel similar. Their internal workings may be probabilistic and unpredictable, but that doesn't mean we can't build external feedback loops—tests, validation layers, human oversight—to steer them toward reliable, useful outcomes. The unpredictability isn't a flaw; it's just a raw, unmanaged state that invites control systems around it. (A rough sketch of such a loop is below.)

    Maybe what unsettles people is that the "chaos" is now at the language layer, where it feels more personal and less abstract than when it's buried in hardware or OS internals. But we've always tamed unpredictable systems with good design—LLMs are just the next place to apply that thinking.
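
    To make the "governor" idea concrete, here is a minimal sketch in Python. It is hypothetical: `call_llm` stands in for whatever client you use, and `validate` for whatever deterministic checks (schema validation, tests, business rules) apply in your setting.

    ```python
    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for whatever LLM client you actually use."""
        raise NotImplementedError

    def validate(output: str) -> bool:
        """A deterministic check imposed from outside the model,
        e.g. 'is this valid JSON with the field we expect?'"""
        try:
            data = json.loads(output)
            return isinstance(data, dict) and "answer" in data
        except json.JSONDecodeError:
            return False

    def governed_llm(prompt: str, max_attempts: int = 3) -> str:
        """The 'governor': retry with feedback until the output passes
        validation, then escalate to a human if it never does."""
        feedback = ""
        for _ in range(max_attempts):
            output = call_llm(prompt + feedback)
            if validate(output):
                return output
            feedback = ("\n\nYour previous reply was not valid JSON "
                        "with an 'answer' field. Try again.")
        raise RuntimeError("Output never passed validation; escalate to human review")
    ```

    The model stays probabilistic inside the loop; the reliability comes from the deterministic check and the retry/escalate policy wrapped around it.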

  • I find this both interesting and very wrong. On the one hand, there seem to be some edge cases where this framing could be useful. On the other hand, I think “But LLMs are not deterministic” is really code for “I find limited utility in a tool that regularly acts aggressively contrary to my goals.”

  • I could be completely wrong, but I think determinism isn't the issue.

    The issue is that LLMs cannot explain their reasoning.

    LLMs are not expert systems; an expert system provides an answer and explains the reasoning that produced it (see the sketch below).
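
    For contrast, here is a toy forward-chaining expert system in Python. The rules and facts are invented for illustration; the point is that the answer comes with a trace of exactly which rules fired, which is the kind of explanation an LLM does not natively produce.

    ```python
    # Toy forward-chaining expert system (rules invented for illustration).
    RULES = [
        # (rule name, premises, conclusion)
        ("R1", {"has_fever", "has_cough"}, "flu_suspected"),
        ("R2", {"flu_suspected", "short_of_breath"}, "see_doctor"),
    ]

    def infer(facts: set[str]) -> tuple[set[str], list[str]]:
        """Apply rules until no new facts appear, recording each firing."""
        facts = set(facts)
        trace = []
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(f"{name}: {sorted(premises)} -> {conclusion}")
                    changed = True
        return facts, trace

    facts, trace = infer({"has_fever", "has_cough", "short_of_breath"})
    print(facts)              # includes 'see_doctor'
    print("\n".join(trace))   # the system's explanation of its reasoning
    ```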