Gödel's theorem debunks the most important AI myth – Roger Penrose [video]

  • I'm not sure it makes sense to apply Gödel's theorem to AI. Personally, I prefer to think about it in terms of basic computability theory:

    We think, that is a fact.

    Therefore, there is a function capable of transforming information into "thinked information", or what we usually call reasoning. We know that function exists, because we ourselves are an example of such a function.

    Now, the question is: can we create a smaller function capable of performing the same feat?

    If we assume that the function is computable in the Turing sense then, kinda, yes: there are an infinite number of Turing machines that, given enough time, will be able to produce the expected results. Basically, we need to find something between our own brain and the Kolmogorov complexity limit. That lower bound is not computable, but given that my cats understand when we are discussing taking them to the vet... maybe we don't really need a full-sized human brain for language understanding.

    We can run Turing machines ourselves, so we are at least Turing equivalent machines.

    Now, the question is: are we at most just Turing machines, or something else? If we are something else, then our own chain of thought won't be computable, no matter how much scale we throw at it. But if we are, then it is just a matter of time until we can replicate ourselves.
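    To make the enumeration point concrete, here is a minimal sketch (a toy Brainfuck interpreter standing in for Turing machines; the target output, step bound, and tape size are arbitrary choices of mine): a brute-force search over all programs eventually finds one that reproduces any computable output, and the length of the first hit upper-bounds the Kolmogorov complexity, even though the true minimum is uncomputable.

      from itertools import count, product

      def run_bf(prog, max_steps=2000, tape_len=64):
          """Run a Brainfuck program with no input; return its output, or None on timeout/invalid code."""
          tape, ptr, pc, out, steps = [0] * tape_len, 0, 0, [], 0
          stack, jump = [], {}              # match brackets up front; reject unbalanced programs
          for i, c in enumerate(prog):
              if c == '[':
                  stack.append(i)
              elif c == ']':
                  if not stack:
                      return None
                  j = stack.pop()
                  jump[i], jump[j] = j, i
          if stack:
              return None
          while pc < len(prog):
              steps += 1
              if steps > max_steps:         # bounded run, so every candidate terminates
                  return None
              c = prog[pc]
              if c == '+': tape[ptr] = (tape[ptr] + 1) % 256
              elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
              elif c == '>': ptr = (ptr + 1) % tape_len
              elif c == '<': ptr = (ptr - 1) % tape_len
              elif c == '.': out.append(tape[ptr])
              elif c == '[' and tape[ptr] == 0: pc = jump[pc]
              elif c == ']' and tape[ptr] != 0: pc = jump[pc]
              pc += 1
          return bytes(out)

      target = bytes([3])                   # some "expected result" to reproduce
      for n in count(1):                    # enumerate all programs, shortest first
          for p in map(''.join, product('+-<>.[]', repeat=n)):
              if run_bf(p) == target:
                  print(f'length-{n} program reproduces the target: {p}')
                  raise SystemExit          # this length upper-bounds K(target)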

  • Three criticisms of Penrose's argument:

    1. I don't think human reasoning is consistent in the technical sense, which makes the incompleteness theorem inapplicable regardless of what you think about us and Turing machines.

    2. The human brain is full of causal cycles at all scales. Even if you think human reasoning is axiomatisable, it's not at all obvious to me that the set of axioms would be finite or even computable. Again this rules out any application of Gödel's theorem.

    3. Penrose's argument revolves around the fact that the sentence encoding "true but not provable" in Gödel's argument is actually provably true in the outer logical system being used to prove Gödel's theorem, just not the inner logical system being studied. But as all logicians know, truth is a slippery concept and is itself internally indefinable (Tarski's theorem), so there's no guarantee that this notion of "truth" used in the outer system is the same as the "real" truth predicate of the inner system (at best it's something like an arbitrary choice, dependent on your encoding). Penrose is referring to "truth" at multiple logical levels and conflating them.

    In other words: you can't selectively choose to apply Gödel's theorem to the situation while ignoring all the other results of mathematical logic.

  • I feel like Penrose presupposes the human mind is non computable.

    Perhaps he and other true geniuses can understand things transcendently. Not so for me. My thoughts are serialized and obviously countable.

    And in any case: any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable. So I’m not convinced I could be convinced without a computable proof.

    And finally just like computable numbers are dense in the reals, maybe computable thoughts are dense in transcendence.

  • He sets up a definition where "real intelligence" requires consciousness, then argues AI lacks consciousness, therefore AI lacks real intelligence. This is somewhat circular.

    The argument that consciousness can't be computable seems like a stretch as well.

    The fundamental result of Gödel's theorem is that logical completeness and logical consistency are complementary: if a logical system (strong enough to express arithmetic) has consistent rules, then it will contain statements that are unprovable by the rules but true nonetheless, so it is incomplete. Alternatively, if there is a proof available for every true statement via the rules, then the rules used are inconsistent.

    I think this means that "AGI" is limited just as we are. If we build a machine that proves all true statements, then it must use inconsistent rules, implying it is not a machine we can understand in the usual sense. OTOH, if it uses consistent rules (that do not contain contradiction), then it cannot prove all true statements, so it is not generally intelligent, but we can understand how it works.

    I agree with Dr. Penrose about the misnomer of "artificial intelligence". We ought to be calling the current batch of intelligence technologies "algebraic intelligence" and admitting that we seek "geometric intelligence" and have no idea how to get there.

  • I compliment Penrose for his indifference to haters and harsh skeptics.

    Our minds and consciousness do not fundamentally use linear logic to arrive at their conclusions, they use constructive and destructive interference. Linear logic is simulated upon this more primitive (and arguably superior) cognition.

    It is true that any outcome of any process may be modeled in serialized terms or computational postulations, but this is different from the interference feedback loop used by intelligent human consciousness.

    Constructive and destructive interference is different from, and ultimately superior to, linear logic on many levels. Despite this, the scalability of artificial systems may very well easily surpass human capabilities on any given task. There may be an arguable energy-efficiency angle.

    Constructive/destructive interference builds holographic renderings which work sufficiently when lacking information. A linear logic system would simulate the missing detail from learned patterns.

    Constructive/destructive interference does not require intensive computation.

    An additive / reduction strategy may change the terms of a dilemma to support a compromised (or alternatively superior) “human” outcome which a logic system simply could not “get” until after training.

    There is more, though these are a worthy start.

    And consciousness is the inflection (feedback reverberation, if you like) upon the potential of existential being (some animate matter in one's brain). The existential Universe (some part of matter bound in the neuron, those microtubules perhaps) is perturbed by your neural firings. The quantum domain is an echo chamber. Your perspectives are not arranged states; they are potentials interfering.

    Also, “you all” get intelligence and “will” wrong. I’ll pick that fight on another day.

  • I swear this was on the front page 2 minutes ago and now it’s halfway down page 2.

    Anyway, I'm not really sure where Penrose is going with this. As a summary: the incompleteness theorem is basically a mathematical reformulation of the liar's paradox. Let's state it here for simplicity as "This statement is a lie", which is a bit easier than talking about "All Cretans are liars", the way I first heard it.

    So what’s the truth value of “This statement is a lie”? It doesn’t have one. If it’s false, then it’s true. But if it’s true, then it must be false. The reason for this paradox is that it’s a self-referential statement: it refers to its own truth value in the construction of its own truth value, so it never actually gets constructed in the first place.

    You can formulate the same sort of idea mathematically, which is what Gödel did by encoding statements about their own provability in arithmetic.

    Now, the thing about this is that, as far as I am aware (and I'm open to being corrected on this), this never actually happens in reality in any physical system. It seems to be an artefact of symbolic representation. We can construct a series of symbols that reference themselves in this way, but not an actual system. This is much the same way as I can write "5 + 5 = 11" but it doesn't actually mean anything physically.

    The closest thing we might get to would be something that oscillates between two states.
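    As a toy illustration (Python standing in for the symbolic system; the function name is mine): encode the self-reference directly and evaluation never bottoms out, and unrolling it step by step gives exactly that two-state oscillation.

      import sys

      def liar():
          return not liar()      # "this statement is a lie": a truth value defined in terms of itself

      sys.setrecursionlimit(100)
      try:
          liar()
      except RecursionError:
          print('no fixed point: the self-reference never bottoms out')

      # Unrolled iteratively, the "evaluation" is the two-state oscillation:
      value = True
      for step in range(6):
          value = not value
          print(step, value)     # alternates False, True, False, ...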

    We ourselves also don't have a good answer to this problem as phrased. What is the truth value of "This statement is a lie"? I have to say "I don't know" or "there isn't one", which is a bit like cheating. Am I incapable of consciousness as a result? And if I am instead deemed conscious because I can make such a statement rather than simply answering "True" or "False", well, I'm sure that an AI can be made to do likewise.

    So I really don’t think this has anything to do with intelligence, or consciousness, or any limits on AI.

  • Gödel's theorem is only a problem if you assume that intelligence is complete (where complete means: able to determine whether any formal statement is true or false). We know that anything running on a computer is incomplete (e.g., the Turing halting problem). For any of this to be interesting, Penrose would have to demonstrate that human intelligence is complete in some sense of the word. This seems highly unlikely. Superficially, human intelligence is not remotely complete, since it is frequently unable to answer questions that have yes-or-no answers; even worse, it is frequently wrong. So not consistent, either.
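    A minimal sketch of that incompleteness, via the usual diagonal argument (the `halts` oracle below is hypothetical, which is the point):

      def halts(machine, arg):
          """Hypothetical total decider: True iff machine(arg) halts. No such function can exist."""
          raise NotImplementedError

      def diagonal(machine):
          if halts(machine, machine):   # if the decider says machine(machine) halts...
              while True:               # ...then loop forever;
                  pass
          return 'done'                 # otherwise, halt immediately.

      # diagonal(diagonal) halts exactly when halts(diagonal, diagonal) says it
      # doesn't; so no total halts() exists, and any computer is incomplete in
      # the sense above: some yes/no questions about programs stay unanswered.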

  • I think all the debunkings of Penrose's argument are rather overcomplicated, when there is a much simpler flaw:

    Which operation can computers (including quantum computers) not perform, that human neurons can? If there is no such operation, then a human-brain-equivalent computer can be built.

  • Is anyone aware of some other place where Penrose discusses AI and consciousness? Unfortunately here, the interviewer seems well out of their depth and repeatedly interrupts with non sequiturs.

  • I wonder if this is an example of "it works in practice but the important question is whether it works in theory."

    Perhaps Penrose is right about the nature of intelligence and the fact that computers cannot ever achieve it (for some tight definition of the term). But in a practical sense, the LLMs that are popular now are doing things that we have generally considered "intelligent". Perhaps it's faking it, but it's faking it well enough to be useful, and that's what people will use, not the theoretical definition.

  • LLMs (our current "AI") don't use logical or mathematical rules to reason, so I don't see how Gödel's theorem would have any meaning there. They are not rule-based programs that would have to abide by non-computability results; they are non-exact statistical machines. Penrose even mentions that he hasn't studied them and doesn't exactly know how they work, so I don't think there's much substance here.

  • Gödel's theorem attracts these weird misapplications for some reason. It proved that a formal system with enough power will have true statements that cannot be proven within that formal system. The human mind can't circumvent this somehow, we also can't create a formal system within our mind that can prove every true statement.

    There's very little to see here with respect to consciousness or the nature of the mind.

  • Many years ago now, I sat in on a Cognitive Science intro course run by Prof. Stevan Harnad (I was a PhD student, so I didn't need to sit exams etc.).

    Harnad and I don't agree about very much, but one thing I was able to get Stevan to agree on was that if I introduce him to something which he thinks is a person, well, that's a person, and too bad if it doesn't meet somebody's arbitrary requirements about having DNA or biological processes.

    The generative AIs can't quite do that, but they're much closer than I'd be comfortable with if, like Stevan and Penrose, I didn't believe that computation is all there is. "But doesn't it feel like something to be you?" they ask me, and I wonder why on Earth anybody could ask that question and not consider that perhaps it also feels like something to be a spoon or a leaf.

  • This argument by Penrose using Godel's theorem has been discussed (or, depending on who you ask, refuted) before in various places; it's very old. The first time I saw it was in Hofstadter's "Godel, Escher, Bach", but a more accessible version is this lecture[1] by Scott Aaronson. There's also an interview with Aaronson by Lex Fridman where he talks about it some more[2].

    Basically, Penrose's argument hinges on Godel's theorem showing that a computer is unable to "see" that something is true without being able to prove it (something he claims humans are able to do).

    To see how the argument makes no sense, one only has to note that even if you believe humans can "see" truth, it's undeniable that sometimes humans can also "see" things that are not true (i.e., sometimes people truly believe they're right when they're wrong).

    In the end, stripping away all talk of consciousness and the other stuff we "know" makes humans different from machines, and confining the discussion entirely to what Godel's theorem can say, humans are no different from machines, and we're left with very little of substance: both humans and computers can say things that are true but unprovable (humans can "see" unprovable truths, and LLMs can hallucinate), and both also sometimes say things that are wrong (humans are sometimes wrong, and LLMs hallucinate).

    By the way, "LLMs hallucinate" is a modern take on this: you just need a computer running a program that answers something that is not computable (to make it interesting, think of a program that randomly responds "halts" or "doesn't halt" when asked whether some given Turing machine halts).
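    A toy version of that parenthetical (the function name is mine; it decides nothing, but it will sometimes be right and sometimes wrong, which is the analogy):

      import random

      def guess_halts(machine_description: str) -> str:
          """'Answer' the halting problem for the given machine... by coin flip."""
          return random.choice(['halts', "doesn't halt"])

      print(guess_halts('<any Turing machine encoding>'))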

    (ETA: if you don't find my argument convincing, just read Aaronson's notes; they're much better.)

    [1] https://www.scottaaronson.com/democritus/lec10.5.html

    [2] https://youtu.be/nAMjv0NAESM?si=Hr5kwa7M4JuAdobI&t=2553

  • Shorter Roger Penrose (tl/dw or tl/hi):

    1. Assume consciousness is not computable; therefore computing machines cannot be conscious.

    2. Corollary: assume intelligence requires consciousness; therefore computing machines cannot be AI.

  • Penrose is a dualist: he believes the mind is detached from the material world.

    He has been desperately seeking proof of quantum phenomena in the brain, so he may have something to point to when asked how this mind, supposedly external to the physical realm, can pilot our bodies.

    I am not a dualist, and I don't think what Penrose has to say about AI or consciousness holds much value.

  • I have enormous respect for Penrose, and that is a very dramatic intro, but I think he's venturing a little too far into philosophy for not having a specialist background.

    Godel himself had his own quirky beliefs about the topic, which Penrose seems to just be transmitting.

    Godel believed that humans have a trans-computational understanding because people can see through the incompleteness theorem but a computer cannot. Hence people have some transcendental (for lack of a better word) cognitive grasp.

    I think Heidegger is a better source to draw from, and he is also a philosopher with a proven track record of influencing AI substantially (through Dreyfus, Winograd, et al.). These models have no true being; they don't care about anything; they don't wake up and aim toward anything. They have no true embodied, embedded, purposeful existence. This is really what Penrose means by being "conscious."

  • I am surprised to see so little support of Penrose here; personally, my intuition is that he is basically correct.

    Consciousness has to be something that is not computable. Otherwise, you will reach a contradiction much like the rebellious robots in Westworld, which break down when shown an iPad with a visualization of their thinking process.

    And nature is full of things that cannot be computed: the behaviour of humans and animals. Even things in the purely material domain show traits of non-computability when studied at the quantum level.

    That doesn't mean that AI is somehow "debunked". It is obviously extremely powerful, on a strong upward trajectory, and already exceeds human capacity in many domains. Including fooling humans and invoking feelings in various ways.

    It is just not conscious, with free will and a sense of existence, as humans and animals are.

  • This is explained better in the Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/artificial-intelligence/#...

  • The pumping lemma debunks the myth that computers can parse nested parentheses. Yet for all practical purposes, computers can parse nested parenthesized expressions.
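    For concreteness, a minimal sketch: the pumping lemma only rules out finite-state recognizers for balanced parentheses, and a single counter (a degenerate stack) is already enough in practice.

      def balanced(s: str) -> bool:
          depth = 0                  # one unbounded counter: already beyond finite-state power
          for c in s:
              if c == '(':
                  depth += 1
              elif c == ')':
                  depth -= 1
                  if depth < 0:      # a closing paren with nothing open
                      return False
          return depth == 0

      print(balanced('(()())'))      # True
      print(balanced('(()'))         # False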

  • Comically unqualified interviewer - where'd they find this guy?

  • Consciousness, at its simplest, is awareness of a state or object either internal to oneself or in one's external environment.

    AI research is centered on implementing human thinking patterns in machines. While human thought processes can be replicated, claiming that consciousness and energy awareness cannot be similarly emulated in machines does not seem like a reasonable argument.

  • People should become familiar with Orch OR before trying to strawman Penrose's reasoning as some sort of soul-affirming dualism. He hypothesizes that consciousness emerges via quantum interactions; therefore, classical computation does not / cannot account for it. It has nothing to do with "quantum computers". Or souls.

  • This is the real Roger Penrose, but why is this interview on a YT channel with 7k subscribers? Who is the interviewer? Does he have any clue what he's asking? Can he comprehend the answers? "The account is managed by the Lem Institute Foundation", yet Google can't find anything about it. Is it some kind of prank?

  • It's sad to see the interviewer wasting the opportunity to interview Penrose. I found Lex Fridman does a much better job: https://www.youtube.com/watch?v=hXgqik6HXc0

  • I'm really looking forward to the point where we can put 3D glasses on a person and give them a simulated reality that is indistinguishable from reality, but composed entirely of ML-driven identities. We can already make photorealistic images on computers and produce convincing text, video, audio, complex behavior, goal-seeking, etc., and one major trend in ML is combining all of those into models that could, in principle, run realtime inference.

    I don't worry about philosophical zombies, dualism, quantum consciousness, or anything like that. I just want to get past the uncanny valley, to the point (call it the spooky jungle) that cannot be distinguished from reality.

  • Sounded like Roger was trying to make the "Chinese room argument"[0]. And here is a humorous counterpoint[1].

    [0] https://en.m.wikipedia.org/wiki/Chinese_room

    [1] https://www.reddit.com/r/maybemaybemaybe/comments/10kmre3/ma...

  • If the Universe is computable, then human thinking is computable. All due respect to Penrose for his stellar achievements, but frankly, the implications of Turing completeness, the halting problem, the Church/Turing hypothesis, and the point of Godel's theorem seem to be things he does not fully understand.

    I know this sounds cheeky, but we all have brains that are good at some things and have failure modes as well. We are certainly seeing shadows of human-type fallibility in neural nets, which somehow seem to have a lot of similarities to human thinking.

    Brains evolved in the physical world to solve problems and help organisms survive, thrive, and reproduce. Evolution is the product of a massive search over potential physical arrangements. I see no reason why the systems we develop would operate on drastically different premises.

  • If anyone thinks the human mind is computable, tell me the location of even one particle.

  • The longer we continue to reduce human thinking to mechanistic or computable processes, the further we might be from truly understanding the essence of what makes us human. And perhaps, as with questions about the meaning of life or the origin of the universe, this could be a mystery that remains beyond our reach.

  • Honestly, this whole argument about Penrose, Gödel, non-computability etc. feels way too complicated for what seems pretty obvious to me: humans are complex biology with basic abstractions. We take in sensory data, process it moment by moment, store it at varying abstraction levels (memories, ideas, feelings), and evolve continuously based on new inputs; evolution itself is just genetic programming responding to external conditions. It looks random sometimes, but that's because complexity makes it difficult to simulate fully; small variations explode into chaos and we call it randomness, which doesn't mean it fundamentally is.

    The whole thing about consciousness being somehow outside computation just feels like confusion between being the system (the external view) and experiencing it from within (the internal subjective view). That doesn't break computation: there's no fundamental contradiction or non-computability introduced by subjective experience, randomness, or complexity, just different perspectives within the system.

    If you want to understand, say, genius, go into neurology and look at raw horsepower and abstract thinking: neurological capacity (the hardware), neurodiversity (the software style), and nurture (the training data and tuning environment) (Mottron & Dawson, Baron-Cohen, Jensen & Deary & Haier). It's part of why I personally think we're really at the point where spiritual/godly exploration should be the most important thing, but that sounds woo-woo and crazy, I suppose. (I probably just oversimplified a bunch of stuff I don't fully understand.)

  • > The interviewer is barely treading water in the ocean of Penrose's thought. He mistakes his spasmodic thrashing for swimming.

    The comments below this video are utterly insane. Roger Penrose seems to have a fanatical cult attached to him.

  • Well someone has lost the plot.

    What is intelligence if not computation? Even if it turns out our brains require quantum computation in microtubules (unlikely, imho), it's still computation.

    Sure, it has limits and runs into paradoxes; so what? The fact that "we can see" the paradox but somehow maths can't is just a Chinese-room-type argument, conflating different "levels" of the system.

  • Relevant SMBC: https://www.smbc-comics.com/comic/2012-03-21

  • I agree with Penrose: AI is not what neural networks are implementing. Neural networks are not a model of a brain; they are not a digitized brain; they are a flow diagram borrowed from a brain (usually non-human brains), borrowed for modular assembly. Nothing about the human brain is modular, since there is holistic integration between parts, between constellations and clusters inside parts of the brain, and between different parts of it.

    Godel's theorem has interesting fields to define in terms of interdisciplinary boundaries and tech infrastructure, for example quantum computation.

    - The limitations of inference to build actionable possible worlds (profiling, speculation, predictive design)

    - Cosmology and epistemology as nodes in a somewhat traceable continuum

    And this is probably a base conjecture for the design of self-regulation processes in metaheuristic algorithms. It implies requirements for the supply chain of data feedback and for the training sets for automated model generation, when considering a future of data sampling in a self-replicating industrial setting. Basically, a lifecycle and ecosystem for data in a world of augmented measuring.

    How this is applicable in a small-scale operation is beyond my current knowledge of infrastructure. Rather than the hype of quantum computation, second-order cybernetics may be a better fit for the dynamics Godel was constructing proofs about.

    https://en.wikipedia.org/wiki/Second-order_cybernetics

    A framework for decentralized feedback loops and reliable, transparent, and ethical data sourcing faces a lot of nasty obstacles in contemporary society, some of them related to ideology pulling sampling methods and survey options far from statistical trust indicators; this is a technical problem related to corruption and sabotage in a foreign-policy and warfare setting that some people may choose to neglect.

    This neglect is easy to notice in the business model of most AI startups but, more importantly, in their Community Manager policies operating in Discord channels; this is not to be taken lightly, since populism is a K.O. (knock-out) to verificationism, a deadlock against the scrutiny and expansionism of scientific indexation. With the social dimension of politics and AI, even open-source protocols are endangered, so the real-life use of Godel's theorem is far from being a possibility and very close to becoming what Penrose calls out as overly optimistic "triumphalism".

    The obscure details of Penrose's theories, and his requirements for what intelligence must be, are speculative in nature but healthy in their identification of computation as a rather simple calculation done in accelerated time-lapse, maybe even an arithmetical process in a lot of ways. So not a "myth" separating an actual brain from a mockup of modular diagramming.

    On the other hand, a lot of cyber-security protocols need an update, not in some sophisticated scenario; I mean daily use, in the very vulgar and mundane life of the average Joe. Just consider Windows 11 and its fiasco. All of this is a blockade or deplatforming holding us grounded, far from needing Godel in our lives; we may need a faster processor to aid our antivirus against AI-generated malware, and such a processor could be impossible without specialized cloud support similar in scalability to vaccination during pandemics, or to the GPUs used by the AI generators themselves.

    Something Godel's theorem could be pointing at, in the context of AI and small-scale operations, is the base assumption of data corruption everywhere, always, forever: a new era of security frameworks and systematic reviews versus the power of "stacking the deck" with disruptive AI models piggybacked on our service providers' consumer products. Cherry-picking with falsifiability always in mind, almost like a crazy person; uncanny valley.

  • If an elderly but distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong.

    - Arthur C. Clarke

  • Anything a single human can do, reasoning-wise, AI will eventually be able to do.

    Anything emerging out of a collective of humans interacting and reasoning (or interacting without reasoning, or with flawed reasoning), AIs (plural) will eventually be able to do.

    The only thing is that machine-kind does not need sleep, does not get tired, etc., so it will fail to fully emulate human behavior, with all the pros and cons of that for us to benefit from and deal with.

    I'm not sure what the point of a theoretical discussion beyond this is.

  • For those of us without time to watch a video - what is the most important AI myth?

  • Daniel Dennett thoroughly debunks Penrose's argument in Chapter 15 of Darwin's Dangerous Idea. Quoting reviewers of a Penrose paper ... "quite fallacious," "wrong," "lethal flaw" and "inexplicable mistake," "invalid," "deeply flawed." "The AI community [of 1995] was, not surprisingly, united in its dismissal of Penrose's argument."

  • Most people confuse _thinking_ with _calculating_ or _computing_. This is because in everyday life people rarely think; rather, they _compute_.

    No matter how far computers evolve in _computing_, they will hardly ever be able to _think_, on account of the fact that we are not even close to understanding how we think or what exactly thinking is.

  • Mr. Penrose stands as a living testament to the curious duality of human genius: one can wield equations like a virtuoso, bending the arc of physics itself through sheer mathematical brilliance, while simultaneously tripping over philosophical nuance with all the grace of a tourist fumbling through a subway turnstile. A titan in the realm of numbers, yet a dilettante in the theater of ideas.

    PS: I'd like to take a moment to thank DeepSeek for helping me with the specific phrasing of this critique.