Why Can't 100B-Parameter AI Models Create a Simple Puzzle?

  • Because these models are built around statistics and prediction --- not logic and reasoning. In other words, an LLM is primarily a talking database.

    Asking a database for logic is a misapplication. Acceptable logic can be produced in some cases --- but that is due more to chance (the model happened to find a similar problem encoded in memory) than to any "thinking" or reasoning. Do you feel lucky?

    Ask for a logical result it has not seen before (like creating a novel puzzle) and it is likely to fail miserably. Memory is a poor substitute for logic.
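    The "talking database" point can be illustrated with a deliberately tiny sketch: a bigram model that predicts the next word purely from co-occurrence counts. This is not how production LLMs are built (they use neural networks over enormous corpora), but the core operation is the same kind of thing --- statistical prediction from seen text, with no reasoning step anywhere. The corpus and function names here are made up for illustration.

    ```python
    from collections import Counter, defaultdict

    # Toy next-word predictor: count word pairs in a corpus, then "generate"
    # the statistically most frequent follower. No logic is ever consulted.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict(word):
        # Return the most likely next word, or None if the word was never seen.
        counts = follows.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(predict("the"))    # "cat" --- simply the most common pair in memory
    print(predict("logic"))  # None --- no statistics means no answer at all
    ```

    Note what happens on unseen input: the model has nothing to say. Scaled-up models paper over this by interpolating between similar patterns, which looks like reasoning until the input is genuinely novel.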

    How many applications can tolerate faulty logic when mistakes are costly? There are some, but in my opinion they are few.

    The ultimate test --- would you do business with a bank run by AI?

    Would an AI-run bank consistently offer better interest rates on savings and loans than a conventional one? Or would it be easily victimized by con artists? Would it then simply refuse to make any further loans as a result?

    If AI can't run a bank successfully, it certainly can't run the world.