A gentle introduction to automated reasoning

  • I enjoyed this bit:

    > When we show this program to industry professionals, they usually figure out the right answer quickly. A few, especially those who are aware of results in theoretical computer science, sometimes mistakenly think that we can't answer this question, with the rationale "This is an example of the halting problem, which has been proved insoluble". In fact, we can reason about the halting behavior for specific programs, including this one.

    In other words, where a mid-wit would step through the code (and be right), a high-brow might deem the problem impossible because it fits into a generic class of which this particular instance is trivial.

    It's a kind of "word thinking", where you substitute actual understanding with the ability to slap a familiar label on something and then use the label to decide how to feel about it. (Kind of like feeling differently about a topic depending on whether you call it "social justice", which sounds good, or "letting criminals roam your city", which sounds bad; in reality it's probably more nuanced and different from what either label suggests.)
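
    To make the quoted point concrete, here's a minimal sketch (the toy program and names are my own, not from the article): the undecidability of the *general* halting problem doesn't prevent us from proving termination of a *specific* program, for example by exhibiting a variant that strictly decreases and is bounded below.

    ```python
    def halves(n: int) -> int:
        """Count how many halvings it takes to reach 1.

        Termination argument for any n >= 1: the variant n strictly
        decreases on every iteration and is bounded below by 1, so the
        loop cannot run forever. No appeal to the general halting
        problem is needed for this specific program.
        """
        steps = 0
        while n > 1:
            n //= 2   # n strictly decreases each iteration
            steps += 1
        return steps
    ```

    This is exactly the style of per-program argument the article alludes to: a ranking function (here, n itself) turns "does it halt?" into an ordinary mathematical proof for this one instance.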

  • This should have [2021] in the title.

  • Interesting that Amazon still thinks there is a place for symbolic, logic-based reasoning in this era of LLMs.
