1. I don't think human reasoning is consistent in the technical sense, which makes the incompleteness theorem inapplicable regardless of what you think about us and Turing machines.
2. The human brain is full of causal cycles at all scales. Even if you think human reasoning is axiomatisable, it's not at all obvious to me that the set of axioms would be finite or even computable. Again this rules out any application of Gödel's theorem.
3. Penrose's argument revolves around the fact that the sentence encoding "true but not provable" in Gödel's argument is actually provably true in the outer logical system being used to prove Gödel's theorem, just not the inner logical system being studied. But as all logicians know, truth is a slippery concept and is itself internally indefinable (Tarski's theorem), so there's no guarantee that this notion of "truth" used in the outer system is the same as the "real" truth predicate of the inner system (at best it's something like an arbitrary choice, dependent on your encoding). Penrose is referring to "truth" at multiple logical levels and conflating them.
In other words: you can't selectively choose to apply Gödel's theorem to the situation but not any of the other results of mathematical logic.
> it's not at all obvious to me that the set of axioms would be finite or even computable
The reasoning is represented by a finite number of elementary physical particles, and so must itself be finite. Because it is finite, it is computable.
Said another way, you would need an infinitely large brain (or an infinitely deep one) to create infinite reasoning.
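One way to make the finite-implies-computable step explicit: a function over a finite set of states can, at least in principle, be written down as a lookup table, and evaluating a lookup table is trivially computable. A minimal sketch (the state names are made up purely for illustration):

    # Illustrative only: any function on a finite state space can be tabulated
    # exhaustively, and evaluating the table is a trivially computable operation.
    finite_transition = {
        "state_0": "state_1",
        "state_1": "state_2",
        "state_2": "state_0",
    }

    def step(state):
        return finite_transition[state]

    print(step("state_1"))  # -> state_2

The open question upthread is whether a brain really is such a finite-state object, not whether finite-state objects are computable.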
I think that doesn't work, because we don't know how to represent and predict the state of a cloud of elementary particles to that level of detail. You could argue that the mathematics proves that this is possible in principle, but I counter that you have no idea whether the theory extrapolates to such situations in real life because it is way out of humanity's compute budget to test. Like the rest of physics, I expect new regimes would come with new phenomena that we don't understand.
True but not relevant. In this case "it" is the number of states of a finite volume that we believe to be fundamentally quantised.
Borealid isn't saying that any finite output is computable, but that the output of this specific thing is computable, because as far as we know it has a finite number of states.
This implies that brains can't compute BB(n) for general n, which is also true as far as we know.
The Busy Beaver numbers may be finite, but the machine (specifically its tape) that produces them is not. If the Busy Beaver is running on a Turing machine with a finite tape length, the number becomes computable.
Turning it around, the answer to "can a machine of infinite size do things a finite computer can't" is "yes". That answer ends up being the reason many things aren't computable, including the halting problem.
The halting problem is a trick in disguise. The trick is: no one said the program you are checking halts had to have finite code, or finite storage. Once you see the trick the halting problem loses a lot of its mystique.
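To make the finite-tape point above concrete, here is a hedged sketch of a "bounded busy beaver". Everything in it (the tiny machine encoding, the wrap-around tape) is invented for illustration; the point is only that with a finite tape the configuration space is finite, so non-halting can be detected by a step counter and the maximum can be found by brute force:

    from itertools import product

    def run_bounded(machine, n_states, tape_len):
        # Simulate a 2-symbol machine on a finite, wrap-around tape.
        # There are at most n_states * tape_len * 2**tape_len configurations,
        # so any longer run must be looping and we can stop (pigeonhole).
        max_steps = n_states * tape_len * (2 ** tape_len)
        tape, state, pos = [0] * tape_len, 0, 0
        for _ in range(max_steps):
            action = machine.get((state, tape[pos]))
            if action is None:               # no rule for this situation: halt
                return sum(tape)
            write, move, state = action
            tape[pos] = write
            pos = (pos + move) % tape_len
            if state == -1:                  # explicit halt state
                return sum(tape)
        return None                          # provably loops forever

    def bounded_bb(n_states, tape_len):
        # Brute force over every machine; exponential, but finite and terminating.
        actions = [(w, m, s) for w in (0, 1) for m in (-1, 1)
                   for s in list(range(n_states)) + [-1]]
        keys = [(s, b) for s in range(n_states) for b in (0, 1)]
        best = 0
        for choice in product(actions, repeat=len(keys)):
            score = run_bounded(dict(zip(keys, choice)), n_states, tape_len)
            if score is not None:
                best = max(best, score)
        return best

    print(bounded_bb(2, 4))  # small enough to finish quickly

The unbounded BB function stays uncomputable precisely because the step bound above disappears once the tape is infinite.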
I'm not sure what you mean here - the Turing machine that represents a particular BB number halts by definition, which means that it can only visit a finite segment of the tape. Nevertheless BB numbers are incomputable in general.
On your second point - allowing infinitely many steps of computation lets you solve the halting problem for regular Turing machines, but you still get an infinitary version of the halting problem that's incomputable (same proof more or less). So I don't think that's really the issue at stake.
I'm not sure it makes sense to apply Gödel's theorem to AI. Personally, I prefer to think about it in terms of basic computability theory:
We think, that is a fact.
Therefore, there is a function capable of transforming information into "thinked information", or what we usually call reasoning. We know that function exists, because we ourselves are an example of such a function.
Now, the question is: can we create a smaller function capable of performing the same feat?
If we assume that that function is computable in the Turing sense then, kinda yes, there are an infinite number of Turing machines that given enough time will be able to produce the expected results. Basically we need to find something between our own brain and the Kolmogorov complexity limit. That lower bound is not computable, but given that my cats understand when we are discussing taking them to the vet then... maybe we don't really need a full sized human brain for language understanding.
We can run Turing machines ourselves, so we are at least Turing equivalent machines.
Now, the question is: are we at most just Turing machines or something else? If we are something else, then our own CoT won't be computable, no matter how much scale we throw at it. But if we are, then it is just a matter of time until we can replicate ourselves.
Many philosophical traditions which incorporate a meditation practice emphasize that your consciousness is distinct from the contents of your thoughts. Meditation (even practiced casually) can provide a direct experience of this.
When it comes to the various kinds of thought-processes that humans engage in (linguistic thinking, logic, math, etc) I agree that you can describe things in terms of functions that have definite inputs and outputs. So human thinking is probably computable, and I think that LLMs can be said to “think” in ways that are analogous to what we do.
But human consciousness produces an experience (the experience of being conscious) as opposed to some definite output. I do not think it is computable in the same way.
I don’t necessarily think that you need to subscribe to dualism or religious beliefs to explain consciousness - it seems entirely possible (maybe even likely) that what we experience as consciousness is some kind of illusory side-effect of biological processes as opposed to something autonomous and “real”.
But I do think it’s still important to maintain a distinction between “thinking” (computable, we do it, AIs do it as well) and “consciousness” (we experience it, probably many animals experience it also, but it’s orthogonal to the linguistic or logical reasoning processes that AIs are currently capable of).
At some point this vague experience of awareness may be all that differentiates us from the machines, so we shouldn’t dismiss it.
> It's very difficult to find some way of defining rather precisely something we can do that we can say a computer will never be able to do. There are some things that people make up that say that, "While it's doing it, will it feel good?" or, "While it's doing it, will it understand what it's doing?" or some other abstraction. I rather feel that these are things like, "While it's doing it, will it be able to scratch the lice out of its hair?" No, it hasn't got any hair nor lice to scratch from it, okay?
> You've got to be careful when you say what the human does, if you add to the actual result of his effort some other things that you like, the appreciation of the aesthetic... then it gets harder and harder for the computer to do it because the human beings have a tendency to try to make sure that they can do something that no machine can do. Somehow it doesn't bother them anymore, it must have bothered them in earlier times, that machines are stronger physically than they are...
> When it comes to the various kinds of thought-processes that humans engage in (linguistic thinking, logic, math, etc) I agree that you can describe things in terms of functions that have definite inputs and outputs.
Function can mean inputs-outputs. But it can also mean system behaviors.
For instance, recurrence is a functional behavior, not a functional mapping.
Similarly, self-awareness is some kind of internal loop of information, not an input-output mapping. Specifically, an information loop regarding our own internal state.
Today's LLMs are mostly not very recurrent. So they might be said to be becoming more intelligent (better responses to complex demands), but not necessarily more conscious. An input-output process has no ability to monitor itself, no matter how capable of generating outputs. Not even when its outputs involve symbols and reasoning about concepts like consciousness.
So I think it is fair to say intelligence and consciousness are different things. But I expect that both can enhance the other.
Meditation reveals a lot about consciousness. We choose to eliminate most thought, focusing instead on some simple experience like breathing, or a concept of "nothing".
Yet even with this radical reduction in general awareness, and our higher level thinking, we remain aware of our awareness of experience. We are not unconscious.
To me that basic self-awareness is what consciousness is. We have it, even when we are not being analytical about it. In meditation our mind is still looping information about its current state, from the state to our sensory experience of our state, even when the state has been reduced so much.
There is not nothing. We are not actually doing nothing. Our mental resting state is still a dynamic state we continue to actively process, that our neurons continue to give us feedback on, even when that processing has been simplified to simply letting that feedback of our state go by with no need to act on it in any way.
So consciousness is inherently at least self-awareness in terms of internal access to our own internal activity. And that we retain a memory of doing this minimal active or passive self-monitoring, even after we resume more complex activity.
My own view is that that is all it is, with the addition of enough memory of the minimal loop, and a rich enough model of ourselves, to be able to consider that strange self-awareness looping state afterwards. Ask questions about its nature, etc.
This is what I wrote while I was thinking about the same topic before I came across your excellent comment; as if it's a summary of what you just said:
Consciousness is nothing but the ability to have internal and external senses, being able to enumerate them, recursively sense them, and remember the previous steps. If any of those ingredients are missing, you cannot create or maintain consciousness.
When I was a kid, I used to imagine that if society ever developed AI, there would be widespread pushback against the idea that computers could ever develop consciousness.
I imagined the Catholic Church, for example, would be publishing missives reminding everyone that only humans can have souls, and biologists would be fighting a quixotic battle to claim that consciousness can arise from physical structures and forces.
I'm still surprised at how credulous and accepting societies have been of AI developments over the last few years.
> I think that LLMs can be said to “think” in ways that are analogous to what we do. ... But human consciousness produces an experience (the experience of being conscious) as opposed to some definite output. I do not think it is computable in the same way.
I for one (along with many thinkers) define intelligence as the extent to which an agent can solve a particular task. I choose the definition to separate it from issues involving consciousness.
>it seems entirely possible (maybe even likely) that what we experience as consciousness is some kind of illusory side-effect of biological processes as opposed to something autonomous and “real”.
I've heard this idea before but I have never been able to make head or tail of it. Consciousness can't be an illusion, because to have an illusion you must already be conscious. Can a rock have illusions?
Well, it entirely depends on how you even define free will.
Btw, Turing machines provide some inspiration for an interesting definition:
Turing (and Gödel) essentially say that you can't predict what a computer program does: you have to run it to even figure out whether it'll halt. (I think in general, even if you fix some large fixed step size n, you can't even predict whether an arbitrary program will halt after n steps or not, without essentially running it anyway.)
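The standard diagonalization behind that claim, as a hedged sketch (`halts` is the hypothetical predictor that turns out not to be able to exist):

    def halts(program, arg):
        # Hypothetical oracle: returns True iff program(arg) eventually halts.
        # The construction below shows no total, always-correct version can exist.
        ...

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts for program run on itself.
        if halts(program, program):
            while True:
                pass          # loop forever
        else:
            return            # halt immediately

    # Does diagonal(diagonal) halt? If halts says yes, it loops; if it says no,
    # it halts. Either way the oracle is wrong about at least one input.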
Humans could have free will in the same sense, that you can't predict what they are doing, without actually simulating them. And by an argument implied by Turing in his paper on the Turing test, that simulation would have the same experience as the human would have had.
(To go even further: if quantum fluctuations have an impact on human behaviour, you can't even do that simulation 100% accurately, because of the no cloning theorem.
To be more precise: I'm not saying, like Penrose, that human brains use quantum computing. My much weaker claim is that human brains are likely a chaotic system, so even a very small deviation in starting conditions can quickly lead to differences in outcome.
If you are only interested in approximate predictions, identical twins show that just getting the same DNA and approximation of the environment gets you pretty far in making good predictions. So cell level scans could be even better. But: not perfect.)
To state it's a Turing machine might be a bit much, but there might be a map between substrates to some degree. Computers can have a form of consciousness, an inner experience: basically the hidden layers, and clearly the input of senses. But it wouldn't be the same qualia as a mind. I suspect it has more to do with chemputation and is dependent on the substrate doing the computing, as opposed to a facility thereof, up to some accuracy limit; we can only detect light we have receptors for, after all. To have qualia distinct from another being you need to compute on a substrate that can accurately fool the computation, fake sugar instead of sugar for example.
What we have and AI doesn't are emotions. After all, that is what animates us to survive and reproduce. Without emotions we can't classify, and therefore store, our experiences, because there is no reason to remember something we are indifferent about. This includes everything not accessible by our senses. Our abilities are limited to what is needed for survival and reproduction because all the rest would consume our precious resources.
The larger picture is that our brains are very much influenced by all the chemistry that happens around our units of computation (neurones); especially hormones. But (maybe) unlike consciousness, this is all "reproducible", meaning it can be part of the algorithm.
We don't know that LLMs generating tokens for scenarios involving simulations of consciousness don't already involve such experience. Certainly such threads of consciousness would currently be much less coherent and fleeting than the human experience, but I see no reason to simply ignore the possibility. To whatever degree it is even coherent to talk about the conscious experience of others than yourself (p-zombies and such), I expect that as AIs' long term coherency improves and AI minds become more tangible to us, people will settle into the same implicit assumption afforded to fellow humans that there is consciousness behind the cognition.
The very tricky part then is to ask if the consciousness/phenomenological experience that you postulate still happens if, say, we were to compute the outputs of an LLM by hand… while difficult, if every single person on earth did one operation per second, plus some very complicated coordination and results gathering, we could probably predict a couple of tokens for an LLM at some moderate frequency… say, a couple of tokens a month? a week? A year? A decade? Regardless… would that consciousness still have an experience? Or is there some threshold of speed and coherence, or coloration that would be missing and result in failure for it to emerge?
Impossible to answer.
Btw I mostly think it's reasonable to think that consciousness, phenomenology, etc. might be possible in silicon, but it's tricky and unverifiable ofc.
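On the rate guessed at in the hand-computation scenario, a rough back-of-envelope (assuming the usual ~2 arithmetic operations per parameter per generated token for a forward pass, and a hypothetical 100-billion-parameter model): that is on the order of 2x10^11 operations per token, and ~8x10^9 people doing one operation per second supply 8x10^9 operations per second, so the raw arithmetic is only ~25 seconds per token. The months-to-decades figure would come almost entirely from the "very complicated coordination and results gathering", not from the operation count itself.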
> would that consciousness still have an experience?
If the original one did, then yes, of course. You're performing the exact same processing.
Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain. The individual humans performing the simulation are now comparable to the individual neurons in a real brain. Similarly, in your scenario, the humans are just the computer hardware running the LLM. Apart from that it's the same LLM. Anything that the original LLM experiences, the simulated one does too, otherwise they're not simulating it fully.
Yes that’s my main point - if you accept the first one, then you should accept the second one (though some people might find the second so absurd as to reject the first).
> Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain.
However, I don't really buy “of course it would,” or in other words the materialist premise - maybe yes, maybe no, but I don't think there's anything definitive on the matter of materialism in philosophy of mind. As much as I wish I was fully a materialist, I can never fully internalize how sentience can uh emerge from matter… in other words, to some extent I feel that my own sentience is fundamentally incompatible with everything I know about science, which uh sucks, because I definitely don't believe in dualism!
It would certainly, with sufficient accuracy, honestly say to you that it's conscious and believe it wholeheartedly, but in practice it would need to a priori be able to describe external sense data, as it's not necessarily separate from the experiences, which intrinsically requires you to compute in the world itself rather than only compute on it; in a way it's like having edge compute at the skin's edge. The range of qualia available at each moment will be distinct to each experiencer with the senses available, and there likely will be some overlap in interpretation based on your computing substrate.
We in a way can articulate the underlying chemputation of the universe mediated through our senses, reflection and language, turn a piece off (as it is often non continuous) and the quality of the experience changes.
But do you believe in something constructive? Do you agree with Searle that computers calculate? But then numbers and calculation are immaterial things that emerge from matter?
It likely is a fact, but we don't really know what we mean by "think".
LLMs have illuminated this point from a relatively new direction: we do not know if their mechanism(s) for language generation are similar to our own, or not.
We don't really understand the relationship between "reasoning" and "thinking". We don't really understand the difference between Kahneman's "fast" and "slow" thinking.
Something happens, probably in our brains, that we experience and that seems causally prior to some of our behavior. We call it thinking, but we don't know much about what it actually is.
I think we have a pretty good idea that we are not stochastic parrots - sophisticated or not. Anyone suggesting that we’re running billion parameter models in order to bang out a snarky comment is probably trying to sell you something (and crypto’s likely involved.)
I think you’re right, LLMs have demonstrated that relatively sophisticated mathematics involving billions of params and an internet full of training data is capable of some truly, truly, remarkable things. But as Penrose is saying, there are provable limits to computation. If we’re going to assume that intelligence as we experience it is computable, then Gödel’s theorem (and, frankly, the field of mathematics) seems to present a problem.
I've never had any time for Penrose. Gödel’s theorem "merely" asserts that in any system capable of a specific form of expression there are statements which are true but not provable. What this has to do with (a) limits to computation or (b) human intelligence has never been clear to me, despite four decades or more of interest in the topic. There's no reason I can see why we should think that humans are somehow without computational limits. Whether our limits correspond to Gödel’s theorem or not is mildly interesting, but not really foundational from my perspective.
At the end of the day Penrose's argument is just Dualism.
Humans have a special thingy that makes the consciousness
Computers do not have the special thingy
Therefore Computers cannot be conscious.
But Dualism gets you laughed at these days so Dualists have to code their arguments and pretend they aren't into that there Dualism.
Penrose's arguments against AI have always felt to me like special pleading that humans (or to stretch a bit further, carbon based lifeforms) are unique.
> I think we have a pretty good idea that we are not stochastic parrots - sophisticated or not. Anyone suggesting that we’re running billion parameter models
On the contrary, we have 86B neurons in the brain, the weighting of the connections is the important thing, but we are definitely 'running' a model with many billions of parameters to produce our output.
The theory by which the brain mainly works by predicting the next state is called predictive coding theory, and I would say that I find it pretty plausible. At the very least, we are a long way from knowing for certain that we don't work in this way.
I don't think it's useful or even interesting to talk about AI in relation to how humans think, or whether or not they will be "conscious", whatever that might mean.
AIs are not going to be like humans because they will have perfect recall of a massive database of facts, and be able to do math well beyond any human brain.
The interesting question to me is, when will we be able to give AI very large tasks, and when will it be able to break the tasks down into smaller and smaller tasks and complete them?
When will it be able to set its own goals, and know when it has achieved them?
When will it be able to recognize that it doesn't know something and do the work to fill in the blanks?
I get the impression that LLMs don't really know what they are saying at the moment, so don't have any way to test what they are saying is true or not.
Worth pointing out that we aren't Turing equivalent machines - infinite storage is not a computability class that is realizable in the universe, so far as we know (and such a claim would require extraordinary evidence).
As well, perhaps, worth noting that because a subset of the observable universe is performing some function, then it is an assumption that there is some finite or digital mathematical function equivalent to that function; a reasonable assumption but still an assumption. Most models of the quantum universe involve continuously variable values, not digital values. Is there a Turing machine that can output all the free parameters of the standard model?
> Is there a Turing machine that can output all the free parameters of the standard model?
Sure, just hard code them.
> As well, perhaps, worth noting that because a subset of the observable universe is performing some function, then it is an assumption that there is some finite or digital mathematical function equivalent to that function; a reasonable assumption but still an assumption. Most models of the quantum universe involve continuously variable values, not digital values.
Things seem to be quantised at a low enough level.
Also: interestingly enough quantum mechanics is both completely deterministic and linear. That means even if it was continuous, you could simulate it to an arbitrary precision without errors building up chaotically.
(Figuring out how chaos, as famously observed in the weather, arises in the real world is left as an exercise to the reader. Also a note: the Copenhagen interpretation introduces non-determinism to _interpret_ quantum mechanics but that's not part of the underlying theory, and there are interpretations that have no need for this crutch.)
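One way to state the no-chaotic-build-up point: quantum time evolution is unitary, and unitary maps preserve distances, so ||U(psi + eps) - U(psi)|| = ||eps||. An initial error of size eps stays size eps in the state vector, whereas in classical chaotic dynamics nearby trajectories diverge exponentially.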
> there is a function capable of transforming information into "thinked information", or what we usually call reasoning. We know that function exists, because we ourselves are an example of such function.
We mistakenly assume they are true, perhaps because we want them to be true. But we have no proof that either of these is true.
Mind-body dualism has nothing to do with this. The point is that, as Descartes observed, the fact that I myself am thinking proves that I exist. This goes directly against what northern-lights said, when he said that we have no proof that reasoning exists or that we do it.
Kant addressed this Cartesian duality in the "The paralogisms of pure reason" section of the Transcendental Dialectic within his Critique of Pure Reason. He points out that the "I" in "I think, therefore I am" is a different "I" in the subject part vs the object part of that phrase.
Quick context: His view of what constitutes a subject, which is to say a thinking person in this case, is one which over time (and time is very important here) observes manifold partial aspects about objects through perception, then through apprehension (the building of understanding through successive sensibilities over time) the subject schematizes information about the object. Through logical judgments, from which Kant derives his categories, we can understand the object and use synthetic a priori reasoning about the object.
So for him, the statement "I am" means simply that you are a subject who performs this perception and reasoning process, as one's "existence" is mediated and predicated on doing such a process over time. So then "I think, therefore I am" becomes a tautology. Assuming that the "I" in "I am" exists as an object, which is to say a thing of substance, one which other thinking subjects could reason about, becomes what he calls "transcendental illusion", which is the application of transcendental reasoning not rooted in sensibility. He calls this metaphysics, and he focuses on the soul (the topic at hand here), the cosmos, and God as the three topics of metaphysics in his Transcendental Dialectic.
I think that in general, discussion about epistemology with regard to AI would be better if people started at least from Kant (either building on his ideas or critical of them), as his CPR really shaped a lot of the post-Enlightenment views on epistemology that a lot of us carry with us without knowing. In my opinion, AI is vulnerable to a criticism that empiricists like Hume applied to people (viewing people as "bundles of experience" and critiquing the idea that we can create new ideas independent of our experience). I do think that AI suffers from this problem, as estimating a generative probability distribution over data means that no new information can be created that is not simply a logically ungrounded combination of previous information. I have not read any discussion of how Kant's view of our ability to make new information (application of categories grounded by our perception) might influence a way to make an actual thinking machine. It would be fascinating to see an approach that combines new AI approaches as the way the machine perceives information and then combines it with old AI approaches that build on logic systems to "reason" in a way that's grounded in truth. The problem with old AI is that it's impossible to model everything with logic (the failure of logical positivism should have warned them), however it IS possible to combine logic with perception like Kant proposed.
I hope this makes sense. I've noticed a lack of philosophical rigor around the discussion of AI epistemology, and it feels like a lot of American philosophy research, being rooted in modern analytical tradition that IMO can't adapt easily to an ontological shift from human to machine as the subject, hasn't really risen to the challenge yet.
> Personally, I prefer to think about it in terms of basic computability theory:
Gödel's incompleteness theorem applies to computing. I'm sure you're familiar with the Halting Problem. Gödel's theorem applies to any axiomatic system. The trouble is, it's very hard to make a system without axioms. They are sneaky, and this is different from any logic you're probably familiar with.
And don't forget Church-Turing, Gödel Numbers, and all the other stuff. Programming is math and Gödel did essential work on the theory of computation. It would be weird NOT to include his work in this conversation.
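Since Gödel numbering came up, the basic construction is simple enough to sketch. A toy version (the symbol table is arbitrary), encoding a formula as a single integer via prime exponents:

    # Toy Gödel numbering: the n-th symbol's code becomes the exponent of the
    # n-th prime, so every formula maps to a unique natural number and back.
    SYMBOLS = {"0": 1, "S": 2, "+": 3, "=": 4, "(": 5, ")": 6}

    def first_primes(n):
        found, candidate = [], 2
        while len(found) < n:
            if all(candidate % p for p in found):
                found.append(candidate)
            candidate += 1
        return found

    def godel_number(formula):
        g = 1
        for p, sym in zip(first_primes(len(formula)), formula):
            g *= p ** SYMBOLS[sym]
        return g

    # "S0 + S0 = SS0" (i.e. 1 + 1 = 2) as a symbol list:
    print(godel_number(["S", "0", "+", "S", "0", "=", "S", "S", "0"]))

Once statements are numbers, statements about statements become statements about numbers, which is the lever Gödel's proof pulls.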
> are we at most just Turing machines or something else?
But this is a great question. Many believe no. Personally I'm unsure, but lean no. Penrose is a clear no but he has some wacky ideas. Problem is, it's hard to tell a bad wacky idea from a good wacky idea. Rephrasing Clarke's Second Law: Genius is nearly indistinguishable from insanity. The only way to tell is with time.
But look into things like NARS and Super Turing machines (Hypercomputation). There's a whole world of important things that are not often discussed when it comes to the discussion of AGI. But for those that don't want to dig deep into the math, pick up some Sci-Fi and suspend your disbelief. Star Trek, The Orville and the like have holographic simulations and I doubt anyone would think they're conscious, despite being very realistic. But The Doctor in Voyager or Isaac in The Orville are good examples of the contrary. The Doctor is an entity you see become conscious. It's fiction, but that doesn't mean there aren't deep philosophical questions. Even if they're marked by easy to digest entertainment. Good stories are like good horror, they get under your skin, infect you, and creep in
Edit:
I'll leave you with another question. Regardless of our Turing or Super-Turing status; is a Turing machine sufficient for consciousness to arise?
> Regardless of our Turing or Super-Turing status; is a Turing machine sufficient for consciousness to arise?
In addition to the detectability problem I wrote about in the adjacent comment, this question can be further refined.
A Turing machine is an abstract concept. Do we need to take into account material/organizational properties of its physical realization? Do we need to take into account computational complexity properties of its physical realization?
Quantum mechanics without Penrose's Orch OR is Turing computable, but its runtime on classical hardware is exponential in, roughly, the number of interacting particles. So, theoretically, we can simulate all there is to simulate about a given person.
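To put a number on "exponential": a system of n two-level degrees of freedom takes 2^n complex amplitudes to describe exactly, so n = 300 already needs about 2x10^90 numbers, far more than the ~10^80 atoms in the observable universe. "Turing computable in principle" and "simulable in practice" come apart very sharply here.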
But to get the initial state of the simulation we need to either measure the person's quantum state (thus losing some information) or teleport his/her quantum state into a quantum computer (the no-cloning theorem doesn't allow us to copy it). The quantum computer in this case is a physical realization of an abstract Turing machine, but we can't know its initial state.
The quantum computer will simulate everything there is to simulate, except the interaction of a physical human with the initial state of the Universe via photons of the cosmic microwave background. Which may deprive the simulated one of "free will" (see "The Ghost in the Quantum Turing Machine" by Scott Aaronson). Or maybe we can simulate those photons too, I'm not sure about it.
Does all of it have anything to do with consciousness? Yeah, those are interesting questions.
There's no evidence that hypercomputation is anything that happens in our world, is there? I'm fairly confident of the weaker claim that there's no evidence of hypercomputation in any biological system. (Who knows what spinning, charged black holes etc. are doing?)
> Regardless of our Turing or Super-Turing status; is a Turing machine sufficient for consciousness to arise?
A Turing machine can in principle beat the Turing test. But so can a giant lookup table, if there's any finite time limit (however generous) placed on the test.
The 'magic' would be in the implementation of the table (or the Turing machine) into something that can answer in a reasonable amount of time and be physically realised in a reasonable amount of space.
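A sketch of the lookup-table point (the table contents are obviously hypothetical): with any finite test length and alphabet there are only finitely many possible conversation prefixes, so in principle each one maps straight to a reply.

    # Illustrative only: a 'chatbot' that is nothing but a finite table from
    # conversation-so-far to next reply. Such a table exists in principle for any
    # finite-length test; the catch is its astronomically large physical size.
    LOOKUP = {
        (): "Hello, how can I help?",
        ("Hello, how can I help?", "Are you conscious?"): "What makes you ask?",
        # ... one entry for every possible conversation prefix ...
    }

    def reply(conversation_so_far):
        return LOOKUP.get(tuple(conversation_so_far), "Could you rephrase that?")

    print(reply(["Hello, how can I help?", "Are you conscious?"]))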
All of this is a fine thought experiment, but in practice there are physical limitations to digital processors that don’t seem to manifest in our brains (energy use in the ability to think vs running discrete commands)
It’s possible that we haven’t found a way to express your thinking function digitally, which I think is true, but I have a feeling that the complexity of thought requires the analog-ness of our brains.
If human-like cognition isn't possible on digital computers, it certainly is on quantum ones. The Deutsch-Church-Turing principle shows that a quantum Turing machine can efficiently simulate any physically realizable computational process.
It is a big mistake to think that most computability theory applies to AI, including Gödel's Theorem. People start off wrong by talking about AI "algorithms." The term applies more correctly to concepts like gradient descent. But the inference performed by the resulting neural nets is not an algorithm. It is not a defined sequence of operations that produces a defined result. It is better described as a heuristic, a procedure that approximates a correct result but provides no mathematical guarantees.
Another aspect of ANNs that shows Gödel doesn't apply is that they are not formal systems. A formal system is a collection of defined operations. The building blocks of ANNs could perhaps be built into a formal system. Petri nets have been demonstrated to be computationally equivalent to Turing machines. But this is really an indictment of the implementation. It's the same as using your PC, implementing a formal system like its instruction set, to run a heuristic computation. Formal systems can implement informal systems.
I don’t think you have to look at humans very hard to see that humans don’t implement any kind of formal system and are not equivalent to Turing machines.
AI is most definitely an algorithm. It runs on a computer, what else could it be? Humans didn't create the algorithm directly, but it certainly exists within the machine. The computer takes an input, does a series of computing operations on it, and spits out a result. That is an algorithm.
As for humans, there is no way you can look at the behavior of a human and know for certain it is not a Turing machine. With a large enough machine, you could simulate any behavior you want, even behavior that would look, on first observation, to not be coming from a Turing machine; this is a form of the halting problem. Any observation you make that makes you believe it is NOT coming from a Turing machine could be programmed to be the output of the Turing machine.
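To make the "it is an algorithm" point concrete, a minimal sketch of inference (the sizes and weights are placeholders): a fixed, finite sequence of arithmetic operations from input to output, which is all "algorithm" needs to mean here.

    import numpy as np

    # Minimal two-layer forward pass. Real weights come from training, but
    # inference itself is just this fixed sequence of multiplies, adds, and
    # nonlinearities applied to the input.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
    W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

    def forward(x):
        h = np.maximum(W1 @ x + b1, 0.0)   # affine map + ReLU
        return W2 @ h + b2                 # affine map

    print(forward(rng.normal(size=8)))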
> But the inferences of the resulting neural nets is not an algorithm.
Incorrect.
The comment above confuses some concepts.
Perhaps this will help: consider a PRNG implemented in software. It is an algorithm. The question of the utility of a PRNG (or any algorithm) is a separate thing.
Heuristic or not, AI is still ultimately an algorithm (as another comment pointed out, heuristics are a subset of algorithms). AI cannot, to expand on your PRNG example, generate true random numbers; an example that, in my view, betrays the fundamental inability of an AI to "transcend" its underlying structure of pure algorithm.
On one level, yes you’re right. Computing weights and propagating values through an ANN is well defined and very algorithmic.
On the level where the learning is done and knowledge is represented in these networks there is no evidence anyone really understands how it works.
I suspect maybe at that level you can think of it as an algorithm with unreliable outputs. I don’t know what that idea gains over thinking it’s not algorithmic and just a heuristic approximation.
"Heuristic" and "algorithmic" are not antipodes. A heuristic is a category of algorithm, specifically one that returns an approximate or probabilistic result. An example of a widely recognized algorithm that is also a heuristic is the Miller-Rabin primality test.
“Algorithm” just means something which follows a series of steps (like a recipe). It absolutely does not require understanding and doesn’t require determinism or reliable outputs. I am sympathetic to the distinction that (I think) you’re trying to make but ANNs and inference are most certainly algorithms.
> On the level where the learning is done and knowledge is represented in these networks there is no evidence anyone really understands how it works.
It is hard to assess the comment above. Depending on what you mean, it is incorrect, inaccurate, and/or poorly framed.
The word “really” is a weasel word. It suggests there is some sort of threshold of understanding, but the threshold is not explained and is probably arbitrary. The problem with these kinds of statements is that they are very hard to pin down. They use a rhetorical technique that allows a person to move the goal posts repeatedly.
This line of discussion is well covered by critics of the word “emergence”.
> But the inferences of the resulting neural nets is not an algorithm
It is a self-delimiting program. It is an algorithm in the most basic sense of the definition of “partial recursive function” (total in this case) and thus all known results of computability theory and algorithmic information theory apply.
> Formal system is a collection of defined operations
Not at all.
> I don’t think you have to look at humans very hard to see that humans don’t implement any kind of formal system and are not equivalent to Turing machines.
We have zero evidence of this one way or another.
—
I’m looking for loopholes around Gödel’s theorems just as much as everyone else is, but this isn’t it.
Heuristics implemented within a formal system are still bound by the limitations of the system.
Physicists like to use mathematics for modeling the reality. If our current understanding of physics is fundamentally correct, everything that can possibly exist is functionally equivalent to a formal system. To escape that, you would need some really weird new physics. Which would also have to be really inconvenient new physics, because it could not be modeled with our current mathematics or simulated with our current computers.
To be fair, I muddled concepts of formal/informal systems versus completeness and consistency. I think if you start from an assumption that an ANN is a formal system (not a given), you must conclude that they are necessarily inconsistent. The AI we have now hallucinates way too much to conclude any truth derived from its "reasoning."
But surely any limits on formal systems apply to informal systems? By this, I am more or less suggesting that formal systems are the best we can do, the best possible representations of knowledge, computability, etc., and that informal systems cannot be "better" (a loaded term herein, for sure) than formal systems.
So if Gödel tells us that either formal systems will be consistent and make statements they cannot prove XOR be inconsistent and therefore unreliable, at least to some degree, then surely informal systems will, at best, be the same, and, at worst, be much worse?
I suspect that if formal systems were unequivocally “better” than informal systems, our brains would be formal systems.
The desirable property of formal systems is that the results they produce are proven in a way that can be independently verified. Many informal systems can produce correct results to problems without a known, efficient algorithmic solution. Lots of scheduling and packing problems are NP-complete but that doesn't stop us from delivering heuristic-based solutions that work well enough.
Edit: I should probably add that I'm pretty rusty on this. Gödel's theorem tells us that if a formal system (expressive enough to encode arithmetic) is consistent, it will be incomplete. That is, there will be true statements that cannot be proven in the system. If the system is complete, that is, all true/false statements can be proven, then the system will be inconsistent. That is, you can prove contradictory things in the system.
AI we have now isn’t really either of these. It’s not working to derive truth and falsehood from axioms and a rule system. It’s just approximating the most likely answers that match its training data.
All of this has almost no relation to the questions we’re interested in like how intelligent can AI be or can it attain consciousness. I don’t even know that we have definitions for these concepts suitable for beginning a scientific inquiry.
Yeah I don’t know why GP would think computability theory doesn’t apply to AI. Is there a single example of a problem that isn’t computable by a Turing machine that can be computed by AI?
It does apply to AI in the sense that the computers we run neural networks on may be equivalent to Turing machines, but the ANNs themselves are not. If you did reduce the ANN down to a formal system, you would likely find, in terms of Gödel's theorem, that it would be sufficiently powerful to prove a falsehood, thus not meeting the consistency property we would like in a system used to prove things.
Excuse me, what are you talking about? You think there is any part of computability theory that doesn't apply to AI? With all respect, and I do not intend this in a mean way, but I do intend to rightly call all of this exactly nonsense. I think there is a fundamental misunderstanding of computability theory, Turing machines, the Church-Turing thesis, etc.; any standard text on the subject should clear this up.
Gödel’s incompleteness theorem, and, say, the halting problem seem to fall squarely into the bucket of “basic computability theory” in precisely the way that “we think, that is a fact”, does not (D.A. hat tip)
You’re arguing that we know artificial reasoning exists because we are capable of reasoning. This presupposes that reasoning is computable and that we ourselves reason by computation. But that’s exactly what Penrose is saying isn’t the case - you’re saying we’re walking Turing machines, we’re intelligent, so we must be able to effectively create copies of that intelligence. Penrose is saying that intelligence is poorly defined, that it requires consciousness which is poorly understood, and that we are not meat-based computers.
Your last question misses the point completely. “If we are something else, then our CoT won't be computable…” It's like you're almost there but you can't let go of “we are meat-machines, everything boils down to computation, we can cook up clones”. Except, “basic computability theory” says that's not even wrong.
Penrose is a dualist: he does not believe that function can be computed in our physical universe. He believes the mind comes from another realm and "pilots" us through quantum phenomena in the brain.
Interesting. Does that fit with the simulation hypothesis? That the world's physics are simulated on one computer, but us characters are simulated on different machines, with some latency involved?
It's all pop pseudoscience.
Things exist.
Anything that exists has an identity.
Physics exists and other things (simulations, computing, etc.) that exist are subject to those physics.
To say that it happens the other way around is poor logic and/or lacks falsifiability.
dude, the simulation hypothesis does not mean things don't exist, it means they don't necessarily exist in the way you have, rather unimaginatively, imagined, and you have no way to tell.
> … and you have no way to tell.
This is exactly my point. If we have no way to tell, what experiment could you possibly use to test whether we’re in a simulation or not? The simulation hypothesis lacks falsifiability and is pseudoscience.
Which is—to use the latest philosophy lingo—dumb. To be fair to Penrose, the “Gödel’s theory about formal systems proves that souls exist” is an extremely common take; anyone following LLM discussions has likely seen it rediscovered at least once or twice.
To pull from the relevant part of Hofstadter's incredible I am a Strange Loop (a book that also happens to more rigorously invoke Gödel for cognitive science):
And this is our central quandary. Either we believe in a nonmaterial soul that lives outside the laws of physics, which amounts to a nonscientific belief in magic, or we reject that idea, in which case the eternally beckoning question "What could ever make a mere physical pattern be me?”
After all, a phrase like "physical system" or "physical substrate" brings to mind for most people… an intricate structure consisting of vast numbers of interlocked wheels, gears, rods, tubes, balls, pendula, and so forth, even if they are tiny, invisible, perfectly silent, and possibly even probabilistic. Such an array of interacting inanimate stuff seems to most people as unconscious and devoid of inner light as a flush toilet, an automobile transmission, a fancy Swiss watch (mechanical or electronic), a cog railway, an ocean liner, or an oil refinery. Such a system is not just probably unconscious, it is *necessarily* so, as they see it. This is the kind of single-level intuition so skillfully exploited by John Searle in his attempts to convince people that computers could never be conscious, no matter what abstract patterns might reside in them, and could never mean anything at all by whatever long chains of lexical items they might string together.
Highly recommend it for anyone who liked Gödel, Escher, Bach, but wants more explicit scientific theses! He basically wrote it to clarify the more artsy/rhetorical points made in the former book.
It feels really weird to say that Roger Penrose is being dumb.
It's accurate. But it feels really weird.
It's not uncommon for great scientists to be totally out of their depth even in nearby fields, and not realize it. But this isn't the hard part of either computability or philosophy of mind.
No, Penrose is not dumb. He gives a very good argument in his books on limitations of AI, which is almost always misrepresented including in most of this thread. It is worth reading "Shadows of the Mind".
He's a damn good mathematician. It is indeed weird to experience him not breaking down the exact points of assumption he makes in arriving at his conclusion. He is old though, so...
He starts with "consciousness is not computable". You cannot just ignore that central argument without explaining why your preference to think of it in terms of basic computability theory makes more sense than his.
What's more, whatever you like to call the transforming of information into thinked information by definition can not be a (mathematical) function, because it would require all people to process the same information in the same way and this is plainly false
>> What's more, whatever you like to call the transforming of information into thinked information by definition can not be a (mathematical) function, because it would require all people to process the same information in the same way and this is plainly false
No this isn't the checkmate you think it is. It could still be a mathematical function. But every person transforming information into "thinked information" could have a different implementation of this function. Which would be expected as no person is made of the same code (DNA).
I think the complication here is that brains are probabilistic, which admits the possibility that they can’t be directly related to non probabilistic computability classes. I think there’s a paper I forget the name of that says quantum computers can decide the halting problem with some probability (which makes sense because you could always just flip a coin and decide it with some probability) - maybe brains are similar
>Therefore, there is a function capable of transforming information into "thinked information", or what we usually call reasoning. We know that function exists, because we ourselves are an example of such function.
"Thinked information" is a colour not an inherent property of information. The fact that information has been thought is like the fact it is copyrighted. It is not something inherent to the information, but a property of its history.
No, I mean, it's nice but I don't think any of that works. You say "Therefore, there is a function capable ...", and that is a non sequitur. But, let's set that aside; I think the key point here is about Turing machines and computability. Do you really think your mind and thought-process is a Turing machine? How many watts of power did it take to write your comment? I think it is an absolute certainty that human intelligence is not like a Turing machine at all. Do you find it much more troublesome to think about continuous problems, or is it ironically more troublesome to discretize continuous problems in order to work with them?
We don't know every fact, either, so I don't know how you can use that idea to say that we're not Turing machines. Apart, of course, from the trivial fact that we are far more limited than a Turing machine...
With sufficient compute capacity, a complete physical simulation of a human should be possible. This means that, even though we are fallible, there is nothing that we do which can't be simulated on a Turing machine.
If I have a 9-DOF sensor in meatspace and am feeding that to a "simulation" that helps a PID coalesce faster then my simulation can move something. When I tell my computer to simulate blackbody radiation...
What you said sounds good, but I don't think it's philosophically robust.
I think you misunderstood my point. A simulation is never the actual simulated phenomenon. When you understand consciousness as a "physical" phenomenon (e.g. as in most forms of panprotopsychism), believing in being able to create it by computation is like believing in being able to generate gravity by computation.
I don't see how computation itself can be a plausible cause here. The physical representation of the computation might be the cause, but the computation itself is not substrate-independent in its possible effect. That is again my point.
I'm arguing with an AI about this too, because my firm belief is that the act of changing a 1 to a 0 in a computer must radiate heat - a 1 is a voltage, it's not an abstract "idea", so that "power" has to go somewhere. It radiates out.
I'm not really arguing with you, i just think if i simulate entropy (entropic processes, "CSRNG", whatever) on my computer ...
I agree and the radiation/physical effect is in my opinion the only possibility a normal computer may somehow be able to cause some kind of consciousness.
The entire p-zombie concept presumes dualism (or something close enough to it that I'm happy to lump it all into the generic category of "requires woo"). Gravity having an effect on something is measurable and provable, whereas qualia are not.
Why should a complete simulation be possible?
In fact there are plenty of things we can do that can't be simulated on a Turing machine. Just one example: the Busy Beaver problem is uncomputable for large N, so by definition it is not computable, and yet humans can prove properties like "BB(n) grows faster than any computable function".
Proving properties and computing values are quite different things, and proofs can absolutely be done on Turing machines, e.g. with proof assistants like Lean.
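To make that concrete, a tiny machine-checked proof; this is a Lean 4 sketch, and the theorems are deliberately trivial, since the point is only that checking the proof is ordinary computation:

    -- Addition on naturals commutes; the proof term is data the kernel checks.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- Even an induction proof is checked purely mechanically.
    theorem my_zero_add (n : Nat) : 0 + n = n := by
      induction n with
      | zero => rfl
      | succ n ih => rw [Nat.add_succ, ih]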
No, see, the problem is that the machine needs a well-defined problem. The statement "BB(n) grows faster than any computable function" is well defined, but you would not come up with an insight like that by executing the BB(n) function. That insight requires a leap out of the problem into a new area; then, sure, after it is defined as a new problem you enter the computability realm again in a different dimension. But if the machine tries to come up with an insight like that by executing the BB(n) function it will get stuck in infinite loops.
As long as you take the assumption that the universe is finite, follows a fixed set of laws, and is completely deterministic, then I think it follows (if not perfectly, then at least to a first order) that anything within the universe could be simulated using a theoretical computer, and you could also simulate a smaller universe on a real computer, although a real computer that simulated something of this complexity would be extremely hard to engineer.
It's not entirely clear, though, that the universe is deterministic- our best experiments suggest there is some remaining and relevant nondeterminism.
Turing machines, Goedel incompleteness, Busy Beaver Functions, and (probably) NP problems don't have any relevance to simulating complex phenomena or hard problems in biology.
I feel like Penrose presupposes the human mind is non computable.
Perhaps he and other true geniuses can understand things transcendently. Not so for me. My thoughts are serialized and obviously countable.
And in any case: any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable. So I’m not convinced I could be convinced without a computable proof.
And finally just like computable numbers are dense in the reals, maybe computable thoughts are dense in transcendence.
This is accurate from his Emperor's New Mind. Penrose essentially takes for granted that human brains can reason about or produce results that are otherwise uncomputable. Of course, you can reduce all (historical) human reasoning to a computable heuristic as it is finite, but for some reason he just doesn't see this.
His intent at the time was to open a physical explanation for free will by taking recourse to quantum effects in microtubules magnifying true randomness to the level of human cognition. As much as I'm also skeptical that this actually moves the needle on whether or not we have free will (...vs occasionally having access to statistically-certain nondeterminism? Ok...) the computable stuff was just in service of this end.
I strongly suspect he just hasn't grasped how powerful heuristics are at overcoming general restrictions on computation. Either that or this is an ideological commitment.
Kind of sad—Penrose tilings hold a special place in my heart.
> His intent at the time was to open a physical explanation for free will by taking recourse to quantum effects in microtubules magnifying true randomness to the level of human cognition. As much as I'm also skeptical that this actually moves the needle on whether or not we have free will (...vs occasionally having access to statistically-certain nondeterminism? Ok...) the computable stuff was just in service of this end.
Free will is a useful abstraction. Just like life and continuity of self are.
> I strongly suspect he just hasn't grasped how powerful heuristics are at overcoming general restrictions on computation.
Allowing approximations or "I don't know" is what's helpful. The bpf verifier can work despite the halting problem being unsolvable, not because it makes guesses (uses heuristics) but because it's allowed to lump in "I don't know" with "no".
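A hedged sketch of that shape (this has nothing to do with the real bpf verifier's internals; the program representation is invented): a checker that says "terminates" only when it can bound every loop and rejects everything else, accepting false negatives in exchange for never being wrong when it says yes.

    # Conservative termination checking, sketched: answer "terminates" only when
    # every loop has a provable constant bound; treat "don't know" like "no".
    # This sidesteps the halting problem by giving up completeness, not soundness.
    def check_terminates(program):
        for loop in program["loops"]:
            if loop.get("constant_bound") is None:
                return "unknown"
        return "terminates"

    def admit(program):
        return check_terminates(program) == "terminates"

    print(admit({"loops": [{"constant_bound": 64}]}))    # True
    print(admit({"loops": [{"constant_bound": None}]}))  # False, possibly wrongly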
You could reasonably consider free will to be an abstraction or a language game, but it is closely linked to moral responsibility, prisons, punishment, etc, which are very much not language games.
I don't think free will exists because I don't think supernatural phenomena exist, and there's certainly no natural explanation for free will (Penrose was correct about that). But I have a very non-nihilistic view on things [1].
I suppose if you really wanted to you could view condensing useful abstractions out of a highly detailed system as a kind of language game, but I'm not convinced that that's useful in the context of investigating particular abstractions rather than investigating the nature of the process of making abstractions.
I think the very concept of an abstraction is a language game, one that speaks to the old Platonic ideals of Greek philosophy - with all the good and bad that implies. This specific language game takes on a very concrete meaning to programmers that can be quantitatively analyzed by a compiler but epistemologically it’s just another abstract concept (I hate philosophical inception).
> Penrose essentially takes for granted that human brains can reason about or produce results that are otherwise uncomputable.
That's Penrose's old criticism. We're past that. It's the wrong point now.
Generative AI systems are quite creative. Better than the average human at art.
LLMs don't have trouble blithering about advanced abstract concepts.
It's concrete areas where these systems have trouble, such as arithmetic.
Common sense is still tough. Hallucinations are a problem. Lying is a problem.
None of those areas are limited by computability. It's grounding in the real world that's not working well.
(A legit question to ask today is this: We now know how much compute it takes to get to the Turing test level of faking intelligence. How do biological brains, with such a slow clock rate, do it? That was part of the concept behind "microtubules". Something in there must be running fast, right?)
Nah. It just needs to be really wide. This is a very fuzzy comparison, but a human brain has ~100 trillion synaptic connections, which are the closest match we have to "parameters" in AI models. The largest such models currently have on the order of ~2 trillion parameters. (edit to add: and this is a low end estimate of the differences between them. There might be more stuff in neurons that effectively acts as parameters, and should be counted as such in a comparison.)
So AI models are still a factor of roughly fifty off from humans in pure width, getting on for two orders of magnitude, and likely more given the caveat above. In contrast, they run much, much faster.
IMO creativity is beside the point, I mean, it is just one of those things that the human brain happens to be good at, so we identify it with consciousness (in the sense that it falls out of the same organ). But really all sorts of stuff can look creative. I mean, just to use an example that is clearly identified as a creative product: a Jackson Pollock painting is clearly a creative work, but a lot of the beauty comes from purely physical processes, the human influences the toss but after that it’s just a jumble of fluid dynamics.
We wouldn’t call the air creative, right? Or if we do, we must conclude that creativity doesn’t require consciousness.
Given that we struggle with even a basic consensus about which humans are better at art than others, I don't think this sentence carries any meaning whatsoever.
They’re better than your average human at producing jpegs. I think if you put any of your average humans in a closed room with nothing but canvases, paint, and a mirror, within a month or two they’d be producing pretty interesting paintings without having been fed an image of every single artwork humanity has ever made.
> It's concrete areas where these systems have trouble, such as arithmetic. Common sense is still tough. Hallucinations are a problem. Lying is a problem
> any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable.
This is a fallacy. Just because you need to serialize a concept to communicate it doesn't mean the concept itself is computable. This is established and well proven.
The fact that we can come up with this kind of uncomputable problem is a big plus in support of Penrose's idea that consciousness is not computable and goes way beyond computability.
That's how I understood Penrose's reasoning too. He differentiated between the computer and whatever is going on in our brain. Computers are just "powerful" enough to encode something that mimics intelligence on the surface (the interviewer tried to pin him on that "something new"), but is still the result of traditional computation, without the involvement of consciousness (his requirement for intelligence).
Well, that is the "beyond computable". We are somehow able to say that this function will not halt, and we wouldn't be able to do that if we only had computable power to simulate it, because that would prove that the problem was decidable in the first place.
How you communicate it does not alter the nature of the problem.
> My thoughts are serialized and obviously countable.
You might want to consider doing a bit of meditation... anyone who describes their thoughts as 'serialized' and 'obviously countable' has not spent much time actually looking at their thoughts.
> any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable
Are you aware of how little of modern mathematics has been formalised? As in, properly formalised on a computer. Not just written up into a paper that other mathematicians can read and nod along to.
Mathematics might seem very formal and serialised (and it is, compared to most other human endeavours) but that’s actually quite far from the truth. Really, it all exists in the mind of the mathematician and a lot of it is hard, if not currently impossible, to pin down precisely enough to enter into a formal system.
I think you probably do understand some things ‘transcendently’! Almost by definition they’re the things you’re least aware of understanding.
Experience is what's hard to square with computability. David Chalmers calls this the hard problem. As long as you're talking about producing speech or other behaviors, it's easy to see how that might be a computation (and nothing more).
It's harder (for me) to see how it's possible to say that pain is just a way of describing things, i.e. that there's in principle no difference between feeling pain and computing a certain function.
Remember - there is no such thing as an objective consciousness meter.
Emulating the behaviours we associate with consciousness - something that still hasn't been achieved - solves the problem of emulation, not the problem of identity.
The idea that an emulation is literally identical to the thing it emulates in this instance only is a very strange belief.
Nowhere else in science is a mathematical model of something considered physically identical and interchangeable with the entity being modelled.
> Nowhere else in science is a mathematical model of something considered physically identical and interchangeable with the entity being modelled
you can make the argument that everything in science is a mathematical model... if you measure a basketball arcing through the sky, you are not actually privy to any existential sensing of the ball, you are proxying the essence of the basketball using photons, and even collecting those photons is not really "collecting those photons", etc.
> Perhaps he and other true geniuses can understand things transcendently. Not so for me. My thoughts are serialized and obviously countable.
You needn't be a genius. Go on a few vipassana meditation retreats and your perception of all this may shift a bit.
> any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable
Hence the suggestion by all mystical traditions that truth can only be experienced, not explained.
It may be possible for an AI to have access to the same experiences of consciousness that humans have (around thought, that make human expressions of thought what they are) - but we will first need to understand the parts of the mind / body that facilitate this and replicate them (or a sufficient subset of them) such that AI can use them as part of its computational substrate.
This is not a presupposition for Penrose, but a conclusion. The argument for the conclusion is the subject of several of his books.
Secondly, the issue is not being a genius, but an ability to reflect. What can be shown, uncontroversially, is that given a formal computer system which is knowably correct, a human (or indeed another machine which is not the original system) can know something (like a mathematical theorem) which is not accessible to the system. This is due to a standard diagonalization argument used in logic and computability.
The important qualifier is 'knowably correct', which doesn't apply to LLMs, which are famous for their hallucinations. But this is not a solid argument for LLMs being able to do everything that humans can do, because correctness need not refer to immediate outputs, but to outputs which are processed through several verification systems.
would you mind summarizing what the main argument is? I've watched several of his interviews (but not read the books), and I don't really understand why he concludes that consciousness is not computable.
I watched half of the video. He keeps appealing to the idea that Goedel applies to AI because AI doesn't understand what it's doing. But I seriously doubt that we humans really know what we're doing, either.
IIRC, his Goedel argument against AI is that someone could construct a Goedel proposition for an intelligent machine which that machine could reason its way through to hit a contradiction. But, at least by default, humans don't base their epistemology on such reasoning, and I don't see why a conscious machine would either. It's not ideal, but frankly, when most humans hit a contradiction, they usually just ignore whichever side of the contradiction is most inconvenient for them.
The argument does not need to involve typical human behaviour with its faults. Because if a computer can simulate humans, it can also simulate humans with error correction and verification mechanisms. So, the computer should also be able to simulate a process where a group of humans write down initial deductions and then verify it extensively for logical errors using both computers and other humans.
Most of the objections have been covered in his book "Shadows of the Mind".
Also, the fact that most human behaviour is not about deducing theorems isn't relevant as that is used as a counterexample which attacks the 'computers can simulate humans' hypothesis. This particular behaviour is chosen, as it is easy to make reflective arguments precise.
Penrose does not require transcendent insights, but merely the ability to examine a finite, knowably correct system and arrive at a correct statement which is not provable by the system. In fact, the construction of a Godel statement is mechanical, but this does not mean that the original system can see it. It is a bit like being given a supposed finite list of all primes: multiplying them together and adding 1 gives a number whose prime factors cannot be on the list, so there is a new prime. This construction is a simple computation.
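A toy Python version of that construction (just Euclid's argument, nothing more):

    # Euclid's construction: the product of the listed primes plus one is not
    # divisible by any prime on the list, so any prime factor of it is new.
    def new_prime(primes):
        n = 1
        for p in primes:
            n *= p
        n += 1
        d = 2
        while d * d <= n:        # find some prime factor of n by trial division
            if n % d == 0:
                return d
            d += 1
        return n                 # n itself is prime

    print(new_prime([2, 3, 5, 7]))   # 2*3*5*7 + 1 = 211, which is itself prime
    print(new_prime([2, 7]))         # 2*7 + 1 = 15 = 3 * 5, so 3 is the new prime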
You can always arrive at a correct statement. A random expression generator can occasionally do that. You just can't tell if it is true.
A simple integer counter can generate the Gödel statement, it just doesn't have the ability to identify it.
You could take a guess, which is what people are doing when they say they understand: they have applied heuristics to convince themselves. Either the problem is decidable, or they are simply wrong.
But in the Penrose argument, we can start from a true system and use reflection to arrive at another true statement which is not deducible from the original system.
This is important to the argument as one starts with a proposed program which can perform mathematical reasoning correctly and is not just a random generator. Then, the inability to see the new statement is a genuine limitation.
What is "understanding transcendently"? Just because Penrose is an authority on some subjects in theoretical physics doesn't mean he is a universal genius and that his ideas on consciousness or AI hold any value.
We gotta stop making infallible superheroes/geniuses of people.
In this particular case, Penrose is a convinced dualist and his theories are unscientific. There are very good reasons to not be a dualist, a minority view in philosophy, which I would encourage anyone to seek if they want to better understand Penrose's position and where it came from.
He’s written and cowritten 5 books on the subject, going back nearly 40 years. I think he can as much as anyone can be considered an “authority” on something as inherently hard to observe or develop falsifiable theories as subjective conscious experience.
This isn’t an example of physicist stumbling into a new field for the first time and saying “oh that’s an easy problem. you just need to…”
The ideas of a very smart person who has spent decades thinking about a problem tend to be very valuable even if you don’t agree with them.
Goedel's theorem is only a problem if you assume that intelligence is complete (where complete means: able to determine whether any formal statement is true or false). We know that anything running on a computer is incomplete (e.g. the Turing halting problem). For any of this to be interesting, Penrose would have to demonstrate that human intelligence is complete in some sense of the word. This seems highly unlikely. Superficially, human intelligence is not remotely complete, since it is frequently unable to answer questions that have yes or no answers, and even worse, is frequently wrong. So not complete, either.
Anything a single human can do, reasoning wise, AI will eventually be able to do.
Anything emerging out of a collective of humans interacting and reasoning (or interacting without reasoning or flawed reasoning) the AIs (plural) will eventually be able to do.
The only thing is that machine kind does not need sleep, does not get tired, etc., so it will fail to fully emulate human behavior, with all the pros and cons of that for us to benefit from and deal with.
I'm not sure what is the point of a theoretical discussion beyond this.
Whether or not there is some magic that makes humans super special really has no bearing on whether or not we can make super duper powerful computers that can be given really hard problems.
In my view it's inevitable that we'll build an AI that is more capable than a human. And that the AI will be able to build better computers, and write better software. That's the singularity.
I'm not sure you have a proof of that to claim there is no point in theoretical discussion. He has a point, that current "AI" isn't conscious and so far there is no indication that it will be. It doesn't mean it can't happen either.
Basically we aren't up to "Do Androids Dream of Electric Sheep?" so far.
He sets up a definition where "real intelligence" requires consciousness, then argues AI lacks consciousness, therefore AI lacks real intelligence. This is somewhat circular.
The argument that consciousness can't be computable seems like a stretch as well.
Consciousness is not a result, it cannot be computed. It is a process, and we don't know how it interacts with computation. There are only two things I can really say about consciousness, and both are speculation: I think it isn't observable, and I think it is not a computation. For the first point, I can see no mechanism by which consciousness could affect the world, so there is no way to observe it. For the second, imagine a man in a vast desert filled only with a grid of rocks that have two sides, a dark side and a light side, and he has a small book which gives him instructions on how to flip these rocks. It seems unlikely that the rocks are sentient, yet certain configurations of rocks and book could reproduce the computations of the human mind. When does the sentience happen? If the man flips only a single rock according to those rules, would the computer be conscious? I doubt it. Does the consciousness exist between the flips of rock when he walks to the next stone? The idea that computation creates consciousness seems plainly untenable to me.
Indeed, I also think consciousness cannot be reduced to computation.
Here is one more thing to consider. All consciousness we can currently observe is embodied; all humans have a body and identity. We can interact with separate people corresponding to separate consciousnesses.
But if computation is producing consciousness, how is its identity determined? Is the identity of the consciousness based on the set of chips doing the computation? Or is it based on the algorithms used (i.e., does running the same algorithm anywhere animate the same consciousness)?
In your example, if we say that consciousness somehow arises from the computation the man performs, then a question arises: what exactly is conscious in this situation? And what are the boundaries of that consciousness? Is it the set of rocks as a whole? Is it the computation they are performing itself? Does the consciousness have a demarcation in space and time?
There are no satisfying answers to these questions if we assume mere computation can produce consciousness.
Just wanted to point out that I absolutely share your view here. I would like to add that the concept of virtualization and the required representation of computation makes substrate-independent consciousness rather absurd.
To me the only explanation for consciousness I find appealing is panprotopsychism.
I think to argue usefully about consciousness you've got to be able to define what you mean by it. If you use it in the sense that a boxer knocked unconscious is not aware of anything much, versus conscious where he knows what's going on and can react and punch back, then AI systems can also be aware or not and react or not.
If you say it's all about the feelings and machines can't feel that way then it gets rather vague and hard to reason about. I mean they don't have much in the way of feelings now but I don't see why they shouldn't in the future.
I personally feel both those aspects of consciousness are not woo but the results of mechanisms built by evolution for functional purposes. I'm not sure how they could have got there otherwise, unless you are going to reject evolution and go for divine intervention or some such.
Consider a universe purely dictated by the mathematical laws of physics. It would be indistinguishable from our own to an observer, but such a universe would effectively be a fixed 4D structure, a statue incapable of experience. You have experience, yes? You think therefore you are. There exists something beyond maths and physics, experiencing our universe, and you are that thing. How could such an entity develop from physical processes?
Penrose believes that consciousness originates from quantum mechanics and the collapse of the wavefunction. Obviously you couldn't (effectively) simulate that with a classical computer. It's a very unconventional position, but it's not circular.
The fundamental result of Gödel's theorem is that logical completeness and logical consistency are complementary: if a logical system rich enough to express arithmetic has consistent rules, then it will contain statements that are unprovable by the rules but true nonetheless, so it is incomplete. Alternately, if there is a proof available for all true statements via the rules, then the rules used are inconsistent.
I think this means that "AGI" is limited as we are. If we build a machine that proves all true statements then it must use inconsistent rules, implying it is not a machine we can understand in the usual sense. OTOH, if it is using consistent rules (that do not contain contradiction) then it cannot prove all true statements, so it is not generally intelligent, but we can understand how it works.
I agree with Dr. Penrose about the misnomer of "artificial intelligence". We ought to be calling the current batch of intelligence technologies "algebraic intelligence" and admitting that we seek "geometric intelligence" and have no idea how to get there.
The issue isn't the mere existence of two thinking modes (algebraic vs. geometric), but that we’ve culturally prioritized and trained mostly algebraic modes (linear language, math, symbolic logic). This has obscured our natural geometric capacity, especially the neurological pathways specialized in visually processing and intuitively understanding phenomena, particularly light itself (photons, vision, direct visual intuition of physics). Historically, algebraic thinking was elevated culturally around the Gnostic period (200 BCE forward), pushing aside the brain's default "geometric mode". Heck, during that period of history, people actively and forcefully campaigned against overly developing the analytical use of the mind. We should be actively mapping neurological pathways specialized for direct intuitive visual-physical cognition (understanding light intuitively at a neurological level, not symbolically or algebraically) for that to happen. Also: Understanding or explainability is not directly linked to consistency in the logical sense. A system can be consistent yet difficult to fully understand, or even inconsistent yet still partially understandable. We are talking right now here because we were put here through a series of historical events. Go back to 200 BCE and play out the Gnostic or Valentinus path to 2025.
When I think about understanding, in principle I require consistency not completeness. In fact, understandability is predicated on consistency in my view.
If I liken the quest for AGI to the quest for human flight, wherein we learned that the shape of the wing provides nearly effortless lift, while wing flapping only provides a small portion of the lift for comparatively massive energy input, then I suspect we are only doing the AGI equivalent of wing flapping at this point.
the human mind itself isn't fully consistent...or at least, consistency isn't necessarily how we operate internally (lots of contradictions, ambiguities, simultaneous beliefs that don't neatly align). Yet we still manage to "understand" things deeply. Complete logical consistency isn't strictly required for understanding in a practical, real world sense. We are totally "flapping" right now with AI, brute forcing algebraic intelligence and missing that elegant "geometric" insight. My point is simply that our brains already have that built in "wing shape" neurologically, we just haven't mapped it out or leveraged it fully yet. The real leap isn't discovering a new wing design, it's understanding we already have one, we just have to leverage it. :) :)
Good question, perhaps it's best to start with what I mean by algebraic intelligence, then the contrast will be more clear. Algebraic intelligence uses the simple idea of equality to produce numerical unknowns from the known via standard mechanistic operations. So algebraic intelligence is mechanistic, operational, deductive, and quantitative. In contrast, geometric intelligence is concerned with the higher-level abstract concepts of congruity and scale.
To return to my previous analogy, algebraic intelligence is wing flapping while geometric intelligence is the shape of the wing. The former is arduous, time consuming, and energy inefficient, while the latter is effortless and unreasonably effective.
I compliment Penrose for his indifference to haters and harsh skeptics.
Our minds and consciousness do not fundamentally use linear logic to arrive at their conclusions, they use constructive and destructive interference. Linear logic is simulated upon this more primitive (and arguably superior) cognition.
It is true that any outcome of any process may be modeled in serialized terms or computational postulations, this is different than the interference feedback loop used by intelligent human consciousness.
Constructive and destructive interference is different and ultimately superior to linear logic on many levels. Despite this, the scalability of artificial systems may very well easily surpass human capabilities on any given task. There may be an arguable energy efficiency angle.
Constructive/destructive interference builds holographic renderings which work sufficiently when lacking information. A linear logic system would simulate the missing detail from learned patterns.
Constructive/destructive interference does not require intensive computation.
An additive / reduction strategy may change the terms of a dilemma to support a compromised (or alternatively superior) “human” outcome which a logic system simply could not “get” until after training.
There is more, though these are a worthy start.
And consciousness is the inflection (feedback reverberation if you like) upon the potential of existential being (some animate matter in one’s brain). The existential Universe (some part of matter bound in the neuron, those micro-tubes perhaps) is perturbed by your neural firings. The quantum domain is an echo chamber. Your perspectives are not arranged states, they are potentials interfering.
Also, “you all” get intelligence and “will” wrong. I’ll pick that fight on another day.
I swear this was on the front page 2 minutes ago and now it’s halfway down page 2.
Anyway, I’m not really sure where Penrose is going with this. As a summary, incompleteness theorem is basically a mathematical reformulation of the paradox of the liar - let’s state this here for simplicity as “This statement is a lie” which is a bit easier than talking about “ All Cretans are liars”, which is the way I first heard it.
So what’s the truth value of “This statement is a lie”? It doesn’t have one. If it’s false, then it’s true. But if it’s true, then it must be false. The reason for this paradox is that it’s a self-referential statement: it refers to its own truth value in the construction of its own truth value, so it never actually gets constructed in the first place.
You can formulate the same sort of idea mathematically using sets, which is what Gödel did.
Now, the thing about this is that as far as I am aware (and I’m open to be corrected on this) this never actually happens in reality in any physical system. It seems to be an artefact of symbolic representation. We can construct a series of symbols that reference themselves in this way, but not an actual system. This is much the same way as I can write “5 + 5 = 11” but it doesn’t actually mean anything physically.
The closest thing we might get to would be something that oscillates between two states.
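In code, that "oscillation" reading looks like this (a toy Python sketch): naively iterating the liar sentence's definition never settles on a value.

    # Naive "evaluation" of "this statement is false": the value is defined as
    # the negation of itself, so iterating the definition flips forever.
    value = True
    for step in range(6):
        print(step, value)
        value = not value   # each pass applies the self-referential definition again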
We ourselves also don't have a good answer to this problem as phrased. What is the truth value of "This statement is a lie"? I have to say "I don't know" or "there isn't one", which is a bit like cheating. Am I incapable of consciousness as a result? And if I am indeed conscious because I can make such a statement instead of simply saying "true" or "false", well, I'm sure that an AI can be made to do likewise.
So I really don’t think this has anything to do with intelligence, or consciousness, or any limits on AI.
(for the record, I think the Penrose take on Gödel and consciousness is mostly silly and or confused)
I think your understanding of the incompleteness theorem is a little, well, incomplete. The proof of the theorem does involve, essentially, figuring out how to write down "this statement is not provable" and using liar-paradox-type-reasoning to show that it is neither provable nor disprovable.
But the incompleteness theorem itself is not the liar paradox. Rather, it shows that any (consistent) system rich enough to express arithmetic cannot prove or disprove all statements. There are things in the gaps. Gödel's proof gives one example ("this statement is not provable") but there are others of very different flavors. The standard one is consistency (e.g. Peano arithmetic alone cannot prove the consistency of Peano arithmetic, you need more, like much stronger induction; ZFC cannot prove the consistency of ZFC, you need more, like a large cardinal).
And this very much does come up for real systems, in the following way. If we could prove or disprove each statement in PA, then we could also solve the halting problem! For the same reason there's no general way to tell whether each statement of PA has a proof, there's no general way to tell whether each program will halt on a given input.
Nice reply. I don’t know anything about Peano arithmetic, or how it applies to the halting problem, so I can’t really evaluate this. All I know is the description of the proof that I read some time ago. Maybe there’s more to dig into on it, but as you say at the start of your post, likely none of it has anything to do with what Penrose is arguing for.
I think all the debunkings of Penrose's argument are rather overcomplicated, when there is a much simpler flaw:
Which operation can computers (including quantum computers) not perform, that human neurons can? If there is no such operation, then a human-brain-equivalent computer can be built.
I agree the arguments tend to be overcomplicated, but I think Penrose's argument is basically
>it is argued that the human mind cannot be computed on a Turing Machine... because the latter can't see the truth value of its Gödel sentence, while human minds can
And the debunk is that both Penrose and an LLM can say they see the truth value, and we have no strong reason to think one is correct and the other is wrong. Either or both could be confused. Hence the argument doesn't prove anything.
Or even simpler: if cells are just machines, then there is no reason why a computer couldn't perform the same operations. I'm not a philosopher, but I believe this comes down to materialism vs a belief in the supernatural.
Having read about Penrose's positions before, this is indeed what is he proposing in a roundabout way: that there is an origin to "consciousness" that is for all intents and purposes metaphysical. In the past he pushed the belief that micro-tubules in the brain (which are a structural component of cells) act like antennas that receive cosmic consciousness from the surrounding field.
In my opinion this is also Penrose's greatest sin: using his status as a scientist to promote spiritual opinions that are indistinguishable from quantum woo disguised as scientific fact.
You raise the best objection - indeed, we have no idea how consciousness/qualia could arise from physical processes, nor can we even non-circularly define what consciousness is [1]. But assuming it arises purely through physical processes of the human brain, there is no reason to think it could not be reproduced on a different substrate.
In other words, computing that feeling is equally mysterious whether it is done by neurons, or by transistors.
[1] There are attempts, like vague implications it has something to do with information processing - but that is not actually defining what it is, just what it is associated with and how it might arise. There are other problems with these attempts, such as the fact that the weather can be thought of as an "information processing" system, reacting to changes in pressure and humidity and temperature... so is it conscious? But that is tangential.
One can argue that the described feeling is a product of suppressed instinctive behavior. Or, in other words, a detail of a particular implementation of intelligence in a certain species of mammals.
Is anyone aware of some other place where Penrose discusses AI and consciousness? Unfortunately here, the interviewer seems well out of their depth and repeatedly interrupts with non sequiturs.
It's painful, but listening to Penrose is worth it and (in the bits I watched) he somehow manages to politely stick to his thread despite the interruptions.
The longer we continue to reduce human thinking to mechanistic or computable processes, the further we might be from truly understanding the essence of what makes us human. And perhaps, as with questions about the meaning of life or the origin of the universe, this could be a mystery that remains beyond our reach.
Many years ago now I sat in on (I was a PhD student, so I didn't need to sit exams etc) a Cognitive Science intro course run by Prof. Stevan Harnad.
Harnad and I don't agree about very much, but one thing I was able to get Stevan to agree on was that if I introduce him to something which he thinks is a person, well, that's a person, and too bad if it doesn't meet somebody's arbitrary requirements about having DNA or biological processes.
The generative AIs can't quite do that, but they're much closer than I'd be comfortable with if, like Stevan and Penrose, I didn't believe that Computation is all there is. "But doesn't it feel like something to be you?" they ask me, and I wonder why on Earth anybody could ask that question and not consider that perhaps it also feels like something to be a spoon or a leaf.
I wonder if this is an example of "it works in practice but the important question is whether it works in theory."
Perhaps Penrose is right about the nature of intelligence and the fact that computers cannot ever achieve that (for some tight definition of the term). But in a practical sense, these LLMs that are popular are doing things that we generally considered "intelligent". Perhaps it's faking it well but it's faking it well enough to be useful and that's what people will use. Not the theoretical definition.
LLMs (our current "AI") don't use logical or mathematical rules to reason, so I don't see how Gödel's theorem would have any meaning there. They are not rule-based programs that would have to abide by non-computability - they are non-exact statistical machines. Penrose even mentions that he hasn't studied them, and doesn't exactly know how they work, so I don't think there's much substance here.
Nothing to do with it? You certainly don’t mean that. The software running an LLM is causally involved.
Perhaps you can explain your point in a different way?
Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.
You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.
Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.
Well, if you break everything down to the lowest level of how the brain works, then so do humans. But I think there's a relevant higher level of abstraction in which it isn't -- it's probabilistic and as much intuition as anything else.
A lot of people look towards non-determinism to be a source for free will. It's often what underlies people's thinking when they discount the ability of AI to be conscious. They want to believe they have free will and consider determinism to be incompatible with free will.
Events are either caused, or uncaused. Either can be causes. Caused events happen because of the cause. Uncaused events are by definition random. If you can detect any real pattern in an event you can infer that it was caused by something.
Relying on decision making by randomness over reasons does not seem to be a good basis of free will.
If we have free will it will be in spite of non-determinism, not because of it.
That's true with any neural network or ML model. Pick a few points, use the same algorithm with the same hyperparameters and random seed, and you'll end up with the same result. Determinism doesn't mean that the "logic" or "reason" is an effect of the algorithm doing the computations.
Not really possible. The models work fine once you fix them; it's just making sure you account for batching and concurrency's effect on how floating point gives very (very) slightly different answers based on ordering and grouping, etc.
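A tiny illustration of that ordering effect (plain Python floats here, but the same thing happens at much larger scale in batched GPU kernels):

    import random

    # The same set of floats summed in two different orders need not give
    # bit-identical results; this is the non-determinism that batching and
    # concurrency introduce even with fixed weights and a fixed seed.
    random.seed(0)
    xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
    a = sum(xs)
    b = sum(sorted(xs))
    print(a, b, a == b)   # the sums typically differ in the last bits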
Good point, I meant the reasoning is not encoded as logical or mathematical rules. All the neural networks and related parts rely on e.g. matrix multiplication, which works by mathematical rules, but the models won't answer your questions based on pre-recorded logical statements, like "apple is red".
If it is running on a computer/Turing machine, then it is effectively a rule-based program. There might be multiple steps and layers of abstraction until you get to the rules/axioms, but they exist. The fact that they are statistical machines intuitively shows this: "statistical" means they apply the rules of statistics, and "machine" means they apply the rules of a computing machine.
The pumping lemma "debunks the myth" that finite-state machines can parse nested parentheses. Yet for all practical purposes computers parse nested-parenthesis expressions just fine.
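For instance, a plain depth counter does the job in practice (a toy Python sketch), even though a strictly finite-state recognizer provably cannot handle unbounded nesting:

    # A depth counter parses nested parentheses fine in practice, even though a
    # finite-state machine (no counter, strictly bounded memory) cannot handle
    # arbitrary nesting depth.
    def balanced(s):
        depth = 0
        for ch in s:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    print(balanced("((()())())"))   # True
    print(balanced("(()"))          # False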
Gödel's theorem attracts these weird misapplications for some reason. It proved that a formal system with enough power will have true statements that cannot be proven within that formal system. The human mind can't circumvent this somehow, we also can't create a formal system within our mind that can prove every true statement.
There's very little to see here with respect to consciousness or the nature of the mind.
Penrose's argument does not require humans to prove every true statement. It is of the form: "Take a program P which can do whatever humans do, and let us generate a single statement which P cannot prove, but humans can."
The core issue is that P has to be seen to be correct. So, the unassailable part of the conclusion is that knowably correct programs can't simulate humans.
This argument by Penrose using Godel's theorem has been discussed (or, depending on who you ask, refuted) before in various places, it's very old. The first time I've seen it was in Hofstadter's "Godel, Escher, Bach", but a more accessible version is this lecture[1] by Scott Aaronson. There's also an interview with Aaronson by Lex Fridman where he talks about it some more[2].
Basically, Penrose's argument hinges on Godel's theorem showing that a computer is unable to "see" that something is true without being able to prove it (something he claims humans are able to do).
To see how the argument makes no sense, one only has to note that even if you believe humans can "see" truth, it's undeniable that sometimes humans can also "see" things that are not true (i.e., sometimes people truly believe they're right when they're wrong).
In the end, stripping away all talk about consciousness and other stuff we "know" makes humans different from machines, and confining the discussion entirely to what Godel's theorem can say about this stuff, humans are no different from machines, and we're left with very little of substance: both humans and computers can say things that are true but unprovable (humans can "see" unprovable truths, and LLMs can hallucinate), and both also sometimes say things that are wrong (humans are sometimes wrong, and LLMs hallucinate).
By the way "LLMs hallucinate" is a modern take on this: you just need a computer running a program that answers something that is not computable (to make it interesting, think of a program that randomly responds "halts" or "doesn't halt" when asked whether some given Turing machine halts).
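A toy Python version of that program, just to make the point concrete (the function name and input encoding are made up for illustration):

    import random

    # A program that "answers" an uncomputable question: for any Turing machine
    # description it confidently replies "halts" or "doesn't halt". Sometimes it
    # is right, in the same trivial sense that a human or an LLM can sometimes
    # assert an unprovable truth.
    def does_it_halt(turing_machine_description):
        return random.choice(["halts", "doesn't halt"])

    print(does_it_halt("<encoding of some arbitrary Turing machine>"))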
(ETA: if you don't find my argument convincing, just read Aaronson's notes, they're much better).
I think you're being overly dismissive of the argument. Admittedly my recollection is hazy but here goes:
Computers are symbol manipulating machines and moreover are restricted to a finite set of symbols (states) and a finite set of rules for their transformation (programs).
When we attempt to formalize even a relatively basic branch of human thinking, simple whole-number arithmetic, as a system of finite symbols and rules, then Goedel's theorem kicks in. Such a system can never be complete - i.e. there will always be holes or gaps where true statements about whole-number arithmetic cannot be reached using our symbols and rules, no matter how we design the system.
We can of course plug any holes we find by adding more rules but full coverage will always evade us.
The argument is that computers are subject to this same limitation. I.e. no matter how we attempt to formalize human thinking using a computer - i.e. as a system of symbols and rules, there will be truths that the computer can simply never reach.
> Computers are symbol manipulating machines and moreover are restricted to a finite set of symbols (states) and a finite set of rules for their transformation (programs).
> [...] there will be truths that the computer can simply never reach.
It's true that if you give a computer a list of consistent axioms and restrict it to only output what their logic rules can produce, then there will be truths it will never write -- that's what Godel's Incompleteness Theorem proves.
But those are not the only kinds of programs you can run on a computer. Computers can (and routinely do!) output falsehoods. And they can be inconsistent -- and so Godel's Theorem doesn't apply to them.
Note that nobody is saying that it's definitely the case that computers and humans have the same capabilities -- it MIGHT STILL be the case that humans can "see" truths that computers will never be able to. But this argument involving Godel's theorem simply doesn't work to show that.
I don’t see the logic of your argument. The fact that you can formulate inconsistent theories - where all falsehoods will be true - does not invalidate Gödel’s theorem. How does the fact that I can take the laws of basic arithmetic and add the axiom “1 = 0” to my system mean that Gödel doesn’t apply to basic arithmetic?
Godel's theorem only applies to consistent systems. From Wikipedia[1]:
First Incompleteness Theorem: Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e. there are statements of the language of F which can neither be proved nor disproved in F.
If a system is inconsistent, the theorem simply doesn't have anything to say about it.
All this means is that an "inconsistent" program is free to output unprovable truths (and obviously also falsehoods). There's no great insight here, other than trivially refuting Penrose's claim that "there are truths that no computer can ever output".
You’re equating computer programs producing “wrong results” and the notion of inconsistency - a technical property of formal logic systems. This is not what inconsistency means. An inconsistent formalization of human knowledge in the form of a computer program is trivial and uninteresting - it just answers “yes that’s true” to every single question you ask it. Such formalizations are not interesting or even relevant to the discussion or argument.
I think much of the confusion arises from mixing up the object language (computer systems) and the meta language. Fairly natural since the central “trick” of the Gödel proof itself is to allow the expression of statements at the meta level to be expressed using the formal system itself.
> An inconsistent formalization of human knowledge in the form of a computer program is trivial and uninteresting - it just answers “yes that’s true” to every single question you ask it.
That's only true if you make the program answer by following the rules of some logic that contains the principle of explosion. Not all systems of logic are like that. A computer could use fuzzy logic. It could use a system we haven't thought of yet.
You're imposing constraints on how a computer should operate, and at the same time allowing humans to "think" without similar constraints. If you do that, you don't need Godel's theorem to show that a human is more capable than a computer -- you just built computers that way.
I’m not imposing any constraints - the point is that inconsistent formulations are not interesting or relevant to the argument no matter what system of rules you look at. This has nothing to do with any particular formalism. I think the difficulty here is that words like completeness and inconsistency have very specific meanings in the context of formal logic - which do not match their use in everyday discussion.
I think we're talking past each other at this point. You seem to have brushed past without acknowledging my point about systems without the principle of explosion, and I'm afraid I must have missed one or more points you tried to make along the way, because what you're saying doesn't make much sense to me anymore.
This is probably a good point to close the discussion -- I'm thankful for the cordial talk, even if we ultimately couldn't reach common ground.
Yes! I think this medium isn’t helpful for understanding here but it’s always pleasant to disagree while remaining civil. It doesn’t help that I’m trying to reply on my phone (I’m traveling at moment) - in an environment which isn’t conducive to subtle understanding. All the best to you!
> We can of course plug any holes we find by adding more rules but full coverage will always evade us.
So suppose clever software can automate the process of plugging these holes. Is it then like the human mind? Are there still holes that cannot be plugged, not due to lack of cleverness in the software but due to limitations of the hardware, sometimes called the substrate?
> The argument is that computers are subject to this same limitation. I.e. no matter how we attempt to formalize human thinking using a computer - i.e. as a system of symbols and rules, there will be truths that the computer can simply never reach.
If computers are limited by their substrate though it seems like humans might be limited by their substrate too, though the limits might be different.
Yes I think this is one way to attack the argument but you have to break the circularity somehow. Many of the dismissals of the Hofstadter/Penrose argument I’ve read here, I think, do not appreciate the actual argument.
Without Penrose giving solid evidence, people making counterarguments tend to get dismissive, then sloppy. Why put in the time to make well-tuned arguments filled with evidence when the other side does not bother, after all?
He misrepresents Penrose's argument. I remember Scott Aaronson met Penrose later on, and there was a clarification, though they still don't agree.
In any case, here's a response to the questions (some responses are links to other comments in this page).
> Why does the computer have to work within a fixed formal system F?
The hypothesis is that we are starting with some fixed program which is assumed to be able to simulate human reasoning(just like starting with the largest prime assuming that there are finitely many primes in order to show that there are infinitely many primes). Of course, one can augment it to make it more powerful and this augmentation is in fact, how we show that the original system is limited.
Note that even a self-improving AI is itself a fixed process. We apply the reasoning to this program, including its self-improvement capability.
The first question is not the question I'd like answered. What I want to know is this:
> Why does the computer have to work within a CONSISTENT formal system F?
Humans are allowed to make mistakes (i.e., be inconsistent). If we don't give the computer the same benefit, you don't need Godel's theorem to show that the human is more capable than the computer: it is so by construction.
Take a group of humans who each make observations and deductions, possibly faulty. Then, they do extensive checking of their conclusions by interacting with humans and computer proof assistants etc. Let us name this process as HC.
A program which can simulate individual humans should also be able to simulate HC - ie. generate proofs which are accepted by HC.
---
Penrose's conclusion in the book is weaker: that a knowably correct process cannot simulate humans.
We now have LLMs which hallucinate etc that are not knowably correct. But, after reasoning based methods, they can try to check their output and arrive at better conclusions, as is happening currently in popular models. This is fine, and is allowed by Penrose's argument. The argument is applied to the 'generate, check, correct' process as a whole.
(I don't see how that relates to Godel's theorem, though. If that's the current position held by Penrose, I don't disagree with him. But seeing the post's video makes me believe Penrose still stands behind the original argument involving Godel's theorem, so I don't know what to say...)
That a knowably correct program can't simulate human reasoning is basically Godel's theorem. One can use a diagonalization argument similar to Godel's proof for programs which try to decide which Turing machines halt. Given a program P which is partially applicable but always correct, we can use diagonalization to construct P', a more widely applicable and correct program, i.e. P' can say that some Turing machine will not halt while P is undecided. So this doesn't involve any logic or formal systems, but it is more general: Godel's result is a special case, as the fact that a Turing machine halts can be encoded as a theorem, and provability in a formal system can be encoded as a Turing machine.
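The diagonalization move itself is short enough to sketch (a toy Python sketch of the standard construction, with a made-up stand-in decider):

    # Suppose claims_to_decide(p, x) is any fixed program that always answers
    # True ("halts") or False ("doesn't halt") about program p on input x.
    # Then `diagonal` is a concrete program it must be wrong about or stay
    # silent on; seeing what `diagonal` actually does then supplies the extra
    # fact that the more widely applicable P' knows and P does not.
    def claims_to_decide(program_source, program_input):
        return True          # stand-in for the hypothetical decider P

    def diagonal(program_source):
        # Do the opposite of whatever the decider predicts about us.
        if claims_to_decide(program_source, program_source):
            while True:      # predicted to halt, so loop forever
                pass
        return None          # predicted to loop, so halt immediately

    # Feeding `diagonal` its own source text makes the prediction self-refuting.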
Penrose indeed, believes both in the stronger claim - a program can't simulate humans and the weaker claim, a knowably correct program can't simulate humans.
The weaker claim being unassailable firstly shows that most of the usual objections are not valid, and secondly that it is hard to split the difference, i.e. to generate the output of HC using a program which is not knowably correct: a program whose binary is uninterpretable but which by magic only generates true theorems. Current AI systems including LLMs don't even come close.
This argument would fall apart if we could simulate a human mind, and there are good reasons to think we could. Human brains and their biological neurons are part of the physical world, they obey the laws of physics and operate in a predictable manner. We can replicate them in computer simulations, and we already have although not on the scale of human minds (see [1]).
It's on Penrose and dualists to show why simulated neurons would act differently than their physical counterparts. Hand-waving about supposed quantum processes in the brain is not enough, as even quantum processes could be emulated. So far, all seems to indicate that accurate models of biological neurons behave like we expect them to.
It stands to reason then, that if a human mind can be simulated, computers are capable of thought too.
I've read Hofstadter's "I Am a Strange Loop", which goes around those ideas too. The point is how you define consciousness (he does it in a more or less computable way, a sort of self-referential loop), so it may be within the reach of what we are doing with AIs.
But in any case, it is about definitions, not having very strict ones for consciousness, intelligence and so on, and human perception and subjectivity (the Turing Test is not so much about "real" consciousness but if an observer can decide if is talking with a computer or a human).
Any theory which purports to show that Roger Penrose is able to "see" the truth of the consistency of mathematics has got to explain Edward Nelson being able to "see" just the opposite.
Consciousness, at its simplest, is awareness of a state or object either internal to oneself or in one's external environment.
AI research is centered on implementing human thinking patterns in machines. While human thought processes can be replicated, claiming that consciousness and energy awareness cannot be similarly emulated in machines does not seem like a reasonable argument.
If the Universe is computable, then human thinking is computable. All due respect to Penrose for his stellar achievements, but frankly the implications of Turing Complete, the halting problem, Church/Turing hypothesis and the point of Godel's Theorem seem to be things he does not fully understand.
I know this sounds cheeky but we all have brains that are good at some things and have failure modes as well. We are certainly seeing shadows of human-type fallibility in neural nets, which somehow seem to have a lot of similarities to human thinking.
Brains evolved in the physical world to solve problems and help organisms survive, thrive, and reproduce. Evolution is the product of a massive search over potential physical arrangements. I see no reason why the systems we develop would operate on drastically different premises.
I'm really looking forward to the point where we can put 3d glasses on a person and give them a simulated reality that is indistinguishable from reality, but composed entirely of ML-driven identities. We can already make photorealistic images on computers, produce convincing text, video, and audio, complex behavior, goal-seeking, etc, and one major trend in ML is combining all of those into models that could, in principle, run realtime inference.
I don't worry about philosophical zombies, dualism, quantum consciousness, or anything like that. I just want to get to the point past the uncanny valley - call it the spooky jungle - that cannot be distinguished from reality.
OK, try this for size, bearing in mind that it is a heuristic argument.
No one can "know", with certainty, the location of any particle. Or, to be slightly more accurate, the more we know of its location, the less we know of its movement. This is essentially Heisenberg/QM 101.
But we see the results of "computation" all around us, all the time: Any time a chemical or physical reaction settles to an observable result, whether observed by one of us, that is, a human, or another physical entity, like a tree, a squirrel, a star, etc. This is essentially a combination of Rovelli's Relational QM and the viewing of QM through an information centric lens.
In other words, we can and do have solid reality at a macro level without ever having detailed knowledge (whatever that might mean) at a micro/nano/femto level.
Having said that, I read your comment as implying that "the human mind" (in quotes because that is not a well defined concept, at least not herein; if we can agree on an operational definition, we may be able to go quite far) is somehow disconnected from physical reality, that is, that you are suggesting a dualist position, in which we have physics and physical chemistry and everything we get from them, e.g., genetics, neurophysiology, etc., all based ultimately on QM, and we have "consciousness" or "the mind" as somehow being outside/above all of that.
I have no problem with that suggestion. I don't buy it, and am mostly a reductionist at heart, so to speak, but I have no problem with it.
What I'd like to see in support of that position would be repeatable, testable statements as to how this "outside/above" "thing" somehow interacts with the physical substrate of our biological lives.
Preferably without reference to the numinous, the ephemeral, or the magical.
Honestly, I really would like to see this. It would represent one of the greatest advances in knowledge in human history.
I'm only talking about the physical world - phenomena that don't correspond to something computable, which are very common, and include the next five seconds of the amplifier noise heard on your headphones, are dealt with by being ignored or averaged out. Collective motion is somewhat predictable and includes things like popular opinion or temperature, but individual deviations aren't covered.
The problem with translating that into proof of dualism is that everything outside the computable looks the same. A hypothesis is something you can assume to compute a prediction, so if any hypothesis is true, the phenomenon must be computable. If the phenomenon is not computable, no computable hypothesis will match. The second you ascribe properties to a soul that can distinguish it from randomness, or properties of randomness that distinguished it from free will you've made one or the other computable, and whichever is computable won't match reality, if we suppose we're looking for something outside of rational explanation and not a "second material."
Here's a concrete example. If you had access to a Halting oracle, it would only be checkable on Turing machines that you yourself could decide the halting problem for. Any answers beyond those programs wouldn't match any conceivable hypothesis.
honestly this whole argument about penrose, gödel, non-computability etc feels way too complicated for what seems pretty obvious to me, humans are complex biology with basic abstractions: we take in sensory data, process it moment by moment, store it with varying abstraction levels (memories, ideas, feelings), and evolve continuously based on new inputs, evolution itself is just genetic programming responding to external conditions. It looks random sometimes but that's because complexity makes it difficult to simulate fully, small variations explode into chaos and we call it randomness, doesn't mean it fundamentally is. The whole thing about consciousness being somehow outside computation just feels like confusion between being the system (external view) and experiencing it from within (internal subjective view), doesn't break computation...there’s no fundamental contradiction or noncomputability introduced by subjective experience, randomness, or complexity, just different perspectives within the system. If you want to understand say genius for example, go into neurology and look at raw horse power and abstract thinking.. Neurological capacity (the hardware), Neurodiversity (the software style), Nurture (the training data and tuning environment) - (Mottron & Dawson, Baron-Cohen, Jensen & Deary & Haier). It's part of why I personally think we're really at the point that spiritual/godly exploration should be the most important thing, but that sounds woo woo and crazy, I suppose. (I probably just over simplified a bunch of stuff I don't fully understand)
Penrose is a dualist, he believes the mind is detached from the material world.
He has been desperately seeking proof of quantum phenomena in the brain, so he may have something to point to when asked how this mind, supposedly external to the physical realm, can pilot our bodies.
I am not a dualist, and I don't think what Penrose has to say about AI or consciousness holds much value.
> He has been desperately seeking proof of quantum phenomena in the brain, so he may have something to point to when asked how this mind, supposedly external to the physical realm, can pilot our bodies.
I have never seen anyone with this approach try to tackle how something non-physical controls or interacts with the physical without also being what we normally call physical, at least not in a rigorous approach to the issue. It always seems to lead to inconsistency or reformulation of existing definitions and meanings without producing anything new.
I imagine what they really mean is that there's something that can't be built by us (and maybe that can't even be "plugged into" by things we build), that we can't build anything that carries out the same function, and that this is what determines "human consciousness".
What is intelligence if not computation? Even if it turns out our brains require quantum computation in microtubules (unlikely, imho), it's still computation.
Sure, it has limits and runs into paradoxes, so what? The fact that 'we can see' the paradox but somehow maths can't, is just a Chinese-room type argument, conflating different 'levels' of the system.
If you ever had the misfortune to read The Emperor's New Mind (published 1989) you would know that he has not had a plot to lose for quite some time now.
I read it around that time and thoroughly enjoyed it; IIRC it's basically a pop-sci intro to number theory, physics, cosmology, biology, etc., with only the last couple of chapters attempting to tie it all together into the quantum-consciousness stuff.
He seems to have been stuck in that groove ever since, though.
If an elderly but distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong.
Daniel Dennett thoroughly debunks Penrose's argument in Chapter 15 of Darwin's Dangerous Idea, quoting reviewers of a Penrose paper: "quite fallacious," "wrong," "lethal flaw" and "inexplicable mistake," "invalid," "deeply flawed." "The AI community [of 1995] was, not surprisingly, united in its dismissal of Penrose's argument."
It strikes me as quite arrogant to assume that those are the only possibilities. People, even experts in a field, disagree about topics and the implications of evidence all the time. Arguing that honest disagreement must reduce down to one of the three categories you list is basically saying "my point of view is so obviously correct that only bad thinkers or bad people could disagree". But that's almost certainly not the case.
It is trivially true that a Turing machine can simulate a human mind - this follows from the quantum Church-Turing thesis. Since a Turing machine can solve any arbitrary system of Schrödinger equations, it can solve the system describing every atom in the human body.[1]
The problem is that this might take more energy than the Sun for any physical computer. What is far less obvious is whether there exist any computable higher-order abstractions of the human mind that can be more feasibly implemented. Lots of layers to this - is there an easily computable model of neurons that encapsulates cognition, or do we have to model every protein and mRNA?
It may be analogous to integration: we can numerically integrate almost anything, but most functions are not symbolically integrable and most differential equations lack closed-form solutions. Maybe the only way to model human intelligence is "numerical."
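To make the integration analogy concrete, here is a minimal Python sketch (my own illustration, not part of the comment's argument): exp(-x^2) has no elementary antiderivative, yet a plain composite Simpson's rule pins the integral down numerically.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    if n % 2:
        n += 1
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

approx = simpson(lambda x: math.exp(-x * x), 0.0, 1.0)
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)  # "closed form" only via the special function erf
print(approx, exact)  # both ~0.746824...
```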
In fact I suspect higher-order cognition is not Turing computable, though obviously I have no way of proving it. My issue is very general: Turing machines are symbolic, and one cannot define what a symbol actually is without using symbols - which means it cannot be defined at all. "Symbol" seems to be a primitive concept in humans, and I don't see how to transfer it to a Turing machine / ChatGPT reliably. Or, as a more minor point, our internal "common sense physics simulator" is qualitatively very powerful despite being quantitatively weak (the exact opposite of Sora/Veo/etc), which again does not seem amenable to a purely symbolic formulation: consider "if you blow the flame lightly it will flicker, if you blow hard it will go out." These symbols communicate the result without any insight into the computation.
[1] This doesn't have anything to do with Penrose's quantum consciousness stuff, it just assumes humans don't have metaphysical souls.
Mr. Penrose stands as a living testament to the curious duality of human genius: one can wield equations like a virtuoso, bending the arc of physics itself through sheer mathematical brilliance, while simultaneously tripping over philosophical nuance with all the grace of a tourist fumbling through a subway turnstile. A titan in the realm of numbers, yet a dilettante in the theater of ideas.
ps: i'd like to take a moment to thank DeepSeek for helping me with the specific phrasing of this critique
Three criticisms of Penrose's argument:
1. I don't think human reasoning is consistent in the technical sense, which makes the incompleteness theorem inapplicable regardless of what you think about us and Turing machines.
2. The human brain is full of causal cycles at all scales. Even if you think human reasoning is axiomatisable, it's not at all obvious to me that the set of axioms would be finite or even computable. Again this rules out any application of Gödel's theorem.
3. Penrose's argument revolves around the fact that the sentence encoding "true but not provable" in Gödel's argument is actually provably true in the outer logical system being used to prove Gödel's theorem, just not the inner logical system being studied. But as all logicians know, truth is a slippery concept and is itself internally indefinable (Tarski's theorem), so there's no guarantee that this notion of "truth" used in the outer system is the same as the "real" truth predicate of the inner system (at best it's something like an arbitrary choice, dependent on your encoding). Penrose is referring to "truth" at multiple logical levels and conflating them.
In other words: you can't selectively chose to apply Gödel's theorem to the situation but not any of the other results of mathematical logic.
> it's not at all obvious to me that the set of axioms would be finite or even computable
The reasoning is representable with and by a finite number of elementary physical particles and so must itself be finite. Because it is finite it is computable.
Said another way, you would need an infinitely large brain (or an infinitely deep one) to create infinite reasoning.
I think that doesn't work, because we don't know how to represent and predict the state of a cloud of elementary particles to that level of detail. You could argue that the mathematics proves that this is possible in principle, but I counter that you have no idea whether the theory extrapolates to such situations in real life because it is way out of humanity's compute budget to test. Like the rest of physics, I expect new regimes would come with new phenomena that we don't understand.
> Because it is finite it is computable.
Busy Beaver numbers are finite, but not computable.
The Busy Beaver game is finite in space, but infinite in time. If you restrict the execution to a finite amount of runtime, it becomes computable.
An immortal human might be able to produce incomputable reasoning, but I would say it's more reasonable to talk about humans with finite runtime.
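A small Python sketch of that restriction (my own illustration, not anything the commenters wrote): with a hard step budget, "the best any 2-state, 2-symbol machine can do" becomes a finite brute-force search.

```python
from itertools import product

STATES = ["A", "B"]
SYMBOLS = [0, 1]
MOVES = [-1, 1]
TARGETS = STATES + ["HALT"]

def run(table, max_steps):
    """Run one machine for at most max_steps; return ones written if it halts, else None."""
    tape, pos, state = {}, 0, "A"
    for _ in range(max_steps):
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        state = nxt
        if state == "HALT":
            return sum(tape.values())
    return None  # did not halt within the budget

def bounded_bb(max_steps):
    best = 0
    entries = list(product(SYMBOLS, MOVES, TARGETS))
    keys = [(s, sym) for s in STATES for sym in SYMBOLS]
    for choice in product(entries, repeat=len(keys)):
        score = run(dict(zip(keys, choice)), max_steps)
        if score is not None:
            best = max(best, score)
    return best

print(bounded_bb(100))  # prints 4, the classic value of Sigma(2): the most 1s a halting 2-state machine leaves
```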
True but not relevant. In this case "it" is the number of states of a finite volume that we believe to be fundamentally quantised.
Borealid isn't saying that any finite output is computable, but that outputs of this specific thing is computable because as far as we know it has a finite number of states.
This implies that brains can't compute the general nth BB function which is also true as far as we know.
The Busy Bever numbers may be finite, but the machine (specifically its tape) that produces them is not. If the Busy Bever is running on a Turing machine with a finite tape length the number becomes computable.
Turning it around, the answer to "can a machine of infinite size do things a finite computer can't" is "yes". That answer ends up being the reason many things aren't computable, including the halting problem.
The halting problem is a trick in disguise. The trick is: no one said the program you are checking halts had to have finite code, or finite storage. Once you see the trick the halting problem looses a lot of its mystique.
I'm not sure what you mean here - the Turing machine that represents a particular BB number halts by definition, which means that it can only visit a finite segment of the tape. Nevertheless BB numbers are incomputable in general.
On your second point - allowing infinitely many steps of computation lets you solve the halting problem for regular Turing machines, but you still get an infinitary version of the halting problem that's incomputable (same proof more or less). So I don't think that's really the issue at stake.
I'm not sure it makes sense to apply Gödel's theorem to AI. Personally, I prefer to think about it in terms of basic computability theory:
We think, that is a fact.
Therefore, there is a function capable of transforming information into "thinked information", or what we usually call reasoning. We know that function exists, because we ourselves are an example of such a function.
Now, the question is: can we create a smaller function capable of performing the same feat?
If we assume that that function is computable in the Turing sense then, kinda, yes: there are an infinite number of Turing machines that, given enough time, will be able to produce the expected results. Basically we need to find something between our own brain and the Kolmogorov complexity limit. That lower bound is not computable, but given that my cats understand when we are discussing taking them to the vet, then... maybe we don't really need a full-sized human brain for language understanding.
We can run Turing machines ourselves, so we are at least Turing equivalent machines.
Now, the question is: are we at most just Turing machines or something else? If we are something else, then our own CoT won't be computable, no matter how much scale we throw at it. But if we are, then it is just a matter of time until we can replicate ourselves.
Many philosophical traditions which incorporate a meditation practice emphasize that your consciousness is distinct from the contents of your thoughts. Meditation (even practiced casually) can provide a direct experience of this.
When it comes to the various kinds of thought-processes that humans engage in (linguistic thinking, logic, math, etc) I agree that you can describe things in terms of functions that have definite inputs and outputs. So human thinking is probably computable, and I think that LLMs can be said to "think" in ways that are analogous to what we do.
But human consciousness produces an experience (the experience of being conscious) as opposed to some definite output. I do not think it is computable in the same way.
I don’t necessarily think that you need to subscribe to dualism or religious beliefs to explain consciousness - it seems entirely possible (maybe even likely) that what we experience as consciousness is some kind of illusory side-effect of biological processes as opposed to something autonomous and “real”.
But I do think it’s still important to maintain a distinction between “thinking” (computable, we do it, AIs do it as well) and “consciousness” (we experience it, probably many animals experience it also, but it’s orthogonal to the linguistic or logical reasoning processes that AIs are currently capable of).
At some point this vague experience of awareness may be all that differentiates us from the machines, so we shouldn’t dismiss it.
> It's very difficult to find some way of defining rather precisely something we can do that we can say a computer will never be able to do. There are some things that people make up that say that, "While it's doing it, will it feel good?" or, "While it's doing it, will it understand what it's doing?" or some other abstraction. I rather feel that these are things like, "While it's doing it, will it be able to scratch the lice out of its hair?" No, it hasn't got any hair nor lice to scratch from it, okay?
> You've got to be careful when you say what the human does, if you add to the actual result of his effort some other things that you like, the appreciation of the aesthetic... then it gets harder and harder for the computer to do it because the human beings have a tendency to try to make sure that they can do something that no machine can do. Somehow it doesn't bother them anymore, it must have bothered them in earlier times, that machines are stronger physically than they are...
- Feynman
https://www.youtube.com/watch?v=ipRvjS7q1DI
"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra
Maybe we can swap out "think" with "experience consciousness"
> When it comes to the various kinds of thought-processes that humans engage in (linguistic thinking, logic, math, etc) I agree that you can describe things in terms of functions that have definite inputs and outputs.
Function can mean inputs-outputs. But it can also mean system behaviors.
For instance, recurrence is a functional behavior, not a functional mapping.
Similarly, self-awareness is some kind of internal loop of information, not an input-output mapping. Specifically, an information loop regarding our own internal state.
Today's LLMs are mostly not very recurrent. So they might be said to be becoming more intelligent (better responses to complex demands), but not necessarily more conscious. An input-output process has no ability to monitor itself, no matter how capable it is of generating outputs. Not even when its outputs involve symbols and reasoning about concepts like consciousness.
So I think it is fair to say intelligence and consciousness are different things. But I expect that both can enhance the other.
Meditation reveals a lot about consciousness. We choose to eliminate most thought, focusing instead on some simple experience like breathing, or a concept of "nothing".
Yet even with this radical reduction in general awareness, and our higher level thinking, we remain aware of our awareness of experience. We are not unconscious.
To me that basic self-awareness is what consciousness is. We have it, even when we are not being analytical about it. In meditation our mind is still looping information about its current state, from the state to our sensory experience of our state, even when the state has been reduced so much.
There is not nothing. We are not actually doing nothing. Our mental resting state is still a dynamic state we continue to actively process, that our neurons continue to give us feedback on, even when that processing has been simplified to simply letting that feedback of our state go by with no need to act on it in any way.
So consciousness is inherently at least self-awareness, in the sense of internal access to our own internal activity, plus the fact that we retain a memory of doing this minimal active or passive self-monitoring even after we resume more complex activity.
My own view is that is all it is, with the addition of enough memory of the minimal loop, and a rich enough model of ourselves, to be able to consider that strange self-awareness looping state afterwards. Ask questions about its nature, etc.
This is what I wrote while I was thinking about the same topic before I came across your excellent comment; as if it's a summary of what you just said:
Consciousness is nothing but the ability to have internal and external senses, being able to enumerate them, recursively sense them, and remember the previous steps. If any of those ingredients are missing, you cannot create or maintain consciousness.
When I was a kid, I used to imagine if that society ever developed AI, there would be widespread pushback to the idea that computers could ever develop consciousness.
I imagined the Catholic Church, for example, would be publishing missives reminding everyone that only humans can have souls, and biologists would be fighting a quixotic battle to claim that consciousness can arise from physical structures and forces.
I'm still surprised at how credulous and accepting societies have been of AI developments over the last few years.
> I think that LLMs can be said to "think" in ways that are analogous to what we do. ... But human consciousness produces an experience (the experience of being conscious) as opposed to some definite output. I do not think it is computable in the same way.
"We've all been dancing around the basic issue: does Data have a soul?" -- Captain Louvois. https://memory-alpha.fandom.com/wiki/The_Measure_Of_A_Man_(e...
That may be an illusion, and easily outputtable in the same way: function calling in the output to release certain hormones, etc.
I for one (along with many thinkers) define intelligence as the extent to which an agent can solve a particular task. I choose the definition to separate it from issues involving consciousness.
Both matter of course.
And that's a useful and pragmatic definition, because it's very hard to measure the other definition even just for other humans.
>it seems entirely possible (maybe even likely) that what we experience as consciousness is some kind of illusory side-effect of biological processes as opposed to something autonomous and “real”.
I've heard this idea before but I have never been able to make head or tail of it. Consciousness can't be an illusion, because to have an illusion you must already be conscious. Can a rock have illusions?
I think it’s more apt to say that free will is an illusion.
Well, it entirely depends on how you even define free will.
Btw, Turing machines provide some inspiration for an interesting definition:
Turing (and Gödel) essentially say that you can't predict what a computer program does: you have to run it to even figure out whether it'll halt. (I think in general, even if you fix some large fixed step size n, you can't even predict whether an arbitrary program will halt after n steps or not, without essentially running it anyway.)
Humans could have free will in the same sense, that you can't predict what they are doing, without actually simulating them. And by an argument implied by Turing in his paper on the Turing test, that simulation would have the same experience as the human would have had.
(To go even further: if quantum fluctuations have an impact on human behaviour, you can't even do that simulation 100% accurately, because of the no cloning theorem.
To be more precise: I'm not saying, like Penrose, that human brains use quantum computing. My much weaker claim is that human brains are likely a chaotic system, so even a very small deviation in starting conditions can quickly lead to differences in outcome.
If you are only interested in approximate predictions, identical twins show that just getting the same DNA and approximation of the environment gets you pretty far in making good predictions. So cell level scans could be even better. But: not perfect.)
There is no you to have the illusion.
To state it's a Turing machine might be a bit much, but there might be a map between substrates to some degree, and computers can have a form of consciousness, an inner experience (basically the hidden layers, and clearly the input of senses), but it wouldn't be the same qualia as a mind. I suspect it has more to do with chemputation and is dependent on the substrate doing the computing, as opposed to being a faculty independent of it, up to some accuracy limit; we can only detect light we have receptors for, after all. To have qualia distinct to another being you need to compute on a substrate that can accurately fool the computation, fake sugar instead of sugar for example.
What we have and AI don't are emotions. After all, that's what animates us to survive and reproduce. Without emotions we can't classify and therefore store our experiences, because there's no reason to remember something we are indifferent about. This includes everything not accessible by our senses. Our abilities are limited to what is needed for survival and reproduction, because all the rest would consume our precious resources.
The larger picture is that our brains are very much influenced by all the chemistry that happens around our units of computation (neurones); especially hormones. But (maybe) unlike consciousness, this is all "reproducible", meaning it can be part of the algorithm.
We don't know that LLMs generating tokens for scenarios involving simulations of consciousness don't already involve such experience. Certainly such threads of consciousness would currently be much less coherent, and more fleeting, than the human experience, but I see no reason to simply ignore the possibility. To whatever degree it is even coherent to talk about the conscious experience of others than yourself (p-zombies and such), I expect that as AIs' long-term coherency improves and AI minds become more tangible to us, people will settle into the same implicit assumption afforded to fellow humans that there is consciousness behind the cognition.
The very tricky part then is to ask if the consciousness/phenomenological experience that you postulate still happens if, say, we were to compute the outputs of an LLM by hand… while difficult, if every single person on earth did one operation per second, plus some very complicated coordination and results gathering, we could probably predict a couple of tokens for an LLM at some moderate frequency… say, a couple of tokens a month? a week? A year? A decade? Regardless… would that consciousness still have an experience? Or is there some threshold of speed and coherence, or coloration that would be missing and result in failure for it to emerge?
Impossible to answer.
Btw I mostly think it’s reasonable to think that there might be consciousness, phenomenology etc are possible in silicon, but it’s tricky and unverifiable ofc.
> would that consciousness still have an experience?
If the original one did, then yes, of course. You're performing the exact same processing.
Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain. The individual humans performing the simulation are now comparable to the individual neurons in a real brain. Similarly, in your scenario, the humans are just the computer hardware running the LLM. Apart from that it's the same LLM. Anything that the original LLM experiences, the simulated one does too, otherwise they're not simulating it fully.
You are assuming that consciousness can be reproduced by simulating the brain. Which might be possible but it's by no means certain.
Yes that’s my main point - if you accept the first one, then you should accept the second one (though some people might find the second so absurd as to reject the first).
> Imagine if instead of an LLM the billions of people instead simulated a human brain. Would that human brain experience consciousness? Of course it would, otherwise they're not simulating the whole brain.
However, I don't really buy "of course it would," or in other words the materialist premise - maybe yes, maybe no, but I don't think there's anything definitive on the matter of materialism in philosophy of mind. As much as I wish I was fully a materialist, I can never fully internalize how sentience can uh emerge from matter… in other words, to some extent I feel that my own sentience is fundamentally incompatible with everything I know about science, which uh sucks, because I definitely don't believe in dualism!
It would certainly, with sufficient accuracy, honestly say to you that it's conscious and believe it wholeheartedly, but in practice it would need to be able to describe external sense data a priori, as it's not necessarily separate from the experiences, which intrinsically requires you to compute in the world itself, otherwise it would only be able to compute on it; in a way it's like having edge compute at the skin's edge. The range of qualia available at each moment will be distinct to each experiencer with the senses available, and there will likely be some overlap in interpretation based on your computing substrate.
We in a way can articulate the underlying chemputation of the universe mediated through our senses, reflection and language, turn a piece off (as it is often non continuous) and the quality of the experience changes.
But do you believe in something constructive? Do you agree with Searle that computers calculate? But then numbers and calculation are immaterial things that emerge from matter?
> We think, that is a fact.
It likely is a fact, but we don't really know what we mean by "think".
LLMs have illuminated this point from a relatively new direction: we do not know if their mechanism(s) for language generation are similar to our own, or not.
We don't really understand the relationship between "reasoning" and "thinking". We don't really understand the difference between Kahneman's "fast" and "slow" thinking.
Something happens, probably in our brains, that we experience and that seems causally prior to some of our behavior. We call it thinking, but we don't know much about what it actually is.
I think we have a pretty good idea that we are not stochastic parrots - sophisticated or not. Anyone suggesting that we’re running billion parameter models in order to bang out a snarky comment is probably trying to sell you something (and crypto’s likely involved.)
I think you’re right, LLMs have demonstrated that relatively sophisticated mathematics involving billions of params and an internet full of training data is capable of some truly, truly, remarkable things. But as Penrose is saying, there are provable limits to computation. If we’re going to assume that intelligence as we experience it is computable, then Gödel’s theorem (and, frankly, the field of mathematics) seems to present a problem.
I've never had any time for Penrose. Gödel’s theorem "merely" asserts that in any system capable of a specific form of expression there are statements which are true but not provable. What this has to do with (a) limits to computation or (b) human intelligence has never been clear to me, despite four decades or more of interest in the topic. There's no reason I can see why we should think that humans are somehow without computational limits. Whether our limits correspond to Gödel’s theorem or not is mildly interesting, but not really foundational from my perspective.
At the end of the day Penrose's argument is just Dualism.
Humans have a special thingy that makes consciousness. Computers do not have the special thingy. Therefore computers cannot be conscious.
But Dualism gets you laughed at these days so Dualists have to code their arguments and pretend they aren't into that there Dualism.
Penrose's arguments against AI have always felt to me like special pleading that humans (or, to stretch a bit further, carbon-based lifeforms) are unique.
> I think we have a pretty good idea that we are not stochastic parrots - sophisticated or not. Anyone suggesting that we’re running billion parameter models
On the contrary, we have 86B neurons in the brain; the weighting of the connections is the important thing, but we are definitely 'running' a model with many billions of parameters to produce our output.
The theory by which the brain mainly works by predicting the next state is called predictive coding theory, and I would say that I find it pretty plausible. At the very least, we are a long way from knowing for certain that we don't work in this way.
> I think we have a pretty good idea that we are not stochastic parrots
what if we are?
and our brain is the "billion parameter model", continuously "training", that takes input and spits out output
I understand we as a species might be reluctant to admit that we are just matter and our thinking/consciousness is just electricity flowing
btw, I'm not selling anything :)
I don't think it's useful or even interesting to talk about AI in relation to how humans think, or whether or not they will be "conscious", whatever that might mean.
AIs are not going to be like humans because they will have perfect recall of a massive database of facts, and be able to do math well beyond any human brain.
The interesting question to me is, when will we be able to give AI very large tasks, and when will it to be able to break the tasks down into smaller and smaller tasks and complete them.
When will it be able to set its own goals, and know when it has achieved them?
When will it be able to recognize that it doesn't know something and do the work to fill in the blanks?
I get the impression that LLMs don't really know what they are saying at the moment, so don't have any way to test what they are saying is true or not.
Worth pointing out that we aren't Turing equivalent machines - infinite storage is not a computability class that is realizable in the universe, so far as we know (and such a claim would require extraordinary evidence).
As well, perhaps, worth noting that when a subset of the observable universe is performing some function, it is an assumption that there is some finite or digital mathematical function equivalent to it; a reasonable assumption, but still an assumption. Most models of the quantum universe involve continuously variable values, not digital values. Is there a Turing machine that can output all the free parameters of the standard model?
> Is there a Turing machine that can output all the free parameters of the standard model?
Sure, just hard code them.
> As well, perhaps, worth noting that because a subset of the observable universe is performing some function, then it is an assumption that there is some finite or digital mathematical function equivalent to that function; a reasonable assumption but still an assumption. Most models of the quantum universe involve continuously variable values, not digital values.
Things seem to be quantised at a low enough level.
Also: interestingly enough quantum mechanics is both completely deterministic and linear. That means even if it was continuous, you could simulate it to an arbitrary precision without errors building up chaotically.
(Figuring out how chaos, as famously observed in the weather, arises in the real world is left as an exercise to the reader. Also a note: the Copenhagen interpretation introduces non-determinism to _interpret_ quantum mechanics but that's not part of the underlying theory, and there are interpretations that have no need for this crutch.)
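A toy numerical illustration of that linearity point, assuming NumPy is available (a single-qubit rotation stands in for generic unitary evolution): the gap between a state and a slightly perturbed copy never grows.

```python
import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)  # a unitary on one qubit

psi = np.array([1.0, 0.0], dtype=complex)
psi_perturbed = psi + np.array([1e-10, 1e-10], dtype=complex)   # tiny "simulation error"
psi_perturbed /= np.linalg.norm(psi_perturbed)

for _ in range(10_000):
    psi = U @ psi
    psi_perturbed = U @ psi_perturbed

print(np.linalg.norm(psi - psi_perturbed))  # still ~1e-10: unitarity keeps the gap from blowing up
```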
> there is a function capable of transforming information into "thinked information", or what we usually call reasoning. We know that function exists, because we ourselves are an example of such function.
We mistakenly assume they are true, perhaps because we want them to be true. But we have no proof that either of these claims is true.
Descartes disagrees.
And? Mind-body dualism as Descartes imagined has been practically disproven on almost every front.
Mind-body dualism has nothing to do with this. The point is that, as Descartes observed, the fact that I myself am thinking proves that I exist. This goes directly against what northern-lights said, when he said that we have no proof that reasoning exists or that we do it.
Kant addressed this Cartesian duality in the "The paralogisms of pure reason" section of the Transcendental Dialectic within his Critique of Pure Reason. He points out that the "I" in "I think, therefore I am" is a different "I" in the subject part vs the object part of that phrase.
Quick context: His view of what constitutes a subject, which is to say a thinking person in this case, is one which over time (and time is very important here) observes manifold partial aspects about objects through perception, then through apprehension (the building of understanding through successive sensibilities over time) the subject schematizes information about the object. Through logical judgments, from which Kant derives his categories, we can understand the object and use synthetic a priori reasoning about the object.
So for him, the statement "I am" means simply that you are a subject who performs this perception and reasoning process, as one's "existence" is mediated and predicated on doing such a process over time. So then "I think, therefore I am" becomes a tautology. Assuming that the "I" in "I am" exists as an object, which is to say a thing of substance, one which other thinking subjects could reason about, becomes what he calls "transcendental illusion", which is the application of transcendental reasoning not rooted in sensibility. He calls this metaphysics, and he focuses on the soul (the topic at hand here), the cosmos, and God as the three topics of metaphysics in his Transcendental Dialectic.
I think that in general, discussion about epistemology with regard to AI would be better if people started at least from Kant (either building on his ideas or critical of them), as his CPR really shaped a lot of the post-Enlightenment views on epistemology that a lot of us carry with us without knowing. In my opinion, AI is vulnerable to a criticism that empiricists like Hume applied to people (viewing people as "bundles of experience" and critiquing the idea that we can create new ideas independent of our experience). I do think that AI suffers from this problem, as estimating a generative probability distribution over data means that no new information can be created that is not simply a logically ungrounded combination of previous information. I have not read any discussion of how Kant's view of our ability to make new information (application of categories grounded by our perception) might influence a way to make an actual thinking machine. It would be fascinating to see an approach that combines new AI approaches as the way the machine perceives information and then combines it with old AI approaches that build on logic systems to "reason" in a way that's grounded in truth. The problem with old AI is that it's impossible to model everything with logic (the failure of logical positivism should have warned them), however it IS possible to combine logic with perception like Kant proposed.
I hope this makes sense. I've noticed a lack of philosophical rigor around the discussion of AI epistemology, and it feels like a lot of American philosophy research, being rooted in modern analytical tradition that IMO can't adapt easily to an ontological shift from human to machine as the subject, hasn't really risen to the challenge yet.
“Cogito ergo cogito”?
And don't forget Church-Turing, Gödel Numbers, and all the other stuff. Programming is math and Gödel did essential work on the theory of computation. It would be weird NOT to include his work in this conversation.
But this is a great question. Many believe no. Personally I'm unsure, but lean no. Penrose is a clear no but he has some wacky ideas. Problem is, it's hard to tell a bad wacky idea from a good wacky idea. Rephrasing Clarke's Second Law: Genius is nearly indistinguishable from insanity. The only way to tell is with time. But look into things like NARS and Super Turing machines (Hypercomputation). There's a whole world of important things that are not often discussed when it comes to the discussion of AGI. But for those that don't want to dig deep into the math, pick up some Sci-Fi and suspend your disbelief. Star Trek, The Orville and the like have holographic simulations and I doubt anyone would think they're conscious, despite being very realistic. But The Doctor in Voyager or Isaac in The Orville are good examples of the contrary. The Doctor is an entity you see become conscious. It's fiction, but that doesn't mean there aren't deep philosophical questions. Even if they're marked by easy to digest entertainment. Good stories are like good horror, they get under your skin, infect you, and creep in.
Edit:
I'll leave you with another question. Regardless of our Turing or Super-Turing status; is a Turing machine sufficient for consciousness to arise?
> Regardless of our Turing or Super-Turing status; is a Turing machine sufficient for consciousness to arise?
In addition to the detectability problem, I wrote in the adjacent comment, this question can be further refined.
A Turing machine is an abstract concept. Do we need to take into account material/organizational properties of its physical realization? Do we need to take into account computational complexity properties of its physical realization?
Quantum mechanics without Penrose's Orch OR is Turing computable, but its runtime on classical hardware is exponential in, roughly, the number of interacting particles. So, theoretically, we can simulate all there is to simulate about a given person.
But to get the initial state of the simulation we need to either measure the person's quantum state (thus losing some information) or teleport his/her quantum state into a quantum computer (the no-cloning theorem doesn't allow copying it). The quantum computer in this case is a physical realization of an abstract Turing machine, but we can't know its initial state.
The quantum computer will simulate everything there is to simulate, except the interaction of a physical human with the initial state of the Universe via photons of the cosmic microwave background, which may deprive the simulated one of "free will" (see "The Ghost in the Quantum Turing Machine" by Scott Aaronson). Or maybe we can simulate those photons too, I'm not sure about it.
Does all of it have anything to do with consciousness? Yeah, those are interesting questions.
There's no evidence that hypercomputation is anything that happens in our world, is there? I'm fairly confident of the weaker claim that there's no evidence of hypercomputation in any biological system. (Who knows what spinning, charged black holes etc. are doing?)
> Regardless of our Turing or Super-Turing status; is a Turing machine sufficient for consciousness to arise?
A Turing machine can in principle beat the Turing test. But so can a giant lookup table, if there's any finite time limit (however generous) placed on the test.
The 'magic' would be in the implementation of the table (or the Turing machine) into something that can answer in a reasonable amount of time and be physically realised in a reasonable amount of space.
Btw, that's an argument from Scott Aaronson's https://www.scottaaronson.com/papers/philos.pdf
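For concreteness, the lookup-table idea is literally just this (a toy Python sketch with a hypothetical, absurdly incomplete table; a real one would need an entry for every possible finite conversation prefix):

```python
# Key every conversation prefix (up to some finite length) to a canned reply.
# With a finite time limit on the test, the table is finite too: astronomically
# large, but conceptually nothing more than this dictionary.
TABLE = {
    (): "Hello. Shall we begin?",
    ("Hello. Shall we begin?", "Are you a machine?"): "What makes you ask that?",
    # ... one entry for every possible finite history ...
}

def reply(history):
    """history: tuple of alternating utterances so far."""
    return TABLE.get(tuple(history), "Could you rephrase that?")

print(reply([]))
print(reply(["Hello. Shall we begin?", "Are you a machine?"]))
```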
> What's the relevance of the Turing test? It's been beaten for over half a century.
I would be very interested if you have any sources on anyone beating the Turing test in anything close to Turing's original adversarial formulation.
> Regardless of our Turing or Super-Turing status; is a Turing machine sufficient for consciousness to arise?
Another question. How do you go about detecting whether consciousness has arisen?
It's a great question. Best we got is ~~RLHF~~ Potter's definition: I know it when I see it
All of this is a fine thought experiment, but in practice there are physical limitations to digital processors that don’t seem to manifest in our brains (energy use in the ability to think vs running discrete commands)
It’s possible that we haven’t found a way to express your thinking function digitally, which I think is true, but I have a feeling that the complexity of thought requires the analog-ness of our brains.
If human-like cognition isn't possible on digital computers, it certainly is on quantum ones. The Deutsch-Church-Turing principle asserts that a quantum Turing machine can efficiently simulate any physically realizable computational process.
> certainly is on quantum ones
I don’t think that’s a certainty.
In 50% of possible worlds, it’s 100% certain!
Does that math still check out if math works differently in other worlds?
It is a big mistake to think that most computability theory applies to AI, including Gödel's Theorem. People start off wrong by talking about AI "algorithms." The term applies more correctly to concepts like gradient descent. But the inference of the resulting neural nets is not an algorithm. It is not a defined sequence of operations that produces a defined result. It is better described as a heuristic, a procedure that approximates a correct result but provides no mathematical guarantees.
Another aspect of ANNs that shows Gödel doesn't apply is that they are not formal systems. A formal system is a collection of defined operations. The building blocks of ANNs could perhaps be built into a formal system; Petri nets have been demonstrated to be computationally equivalent to Turing machines. But this is really an indictment of the implementation. It's the same as using your PC, which implements a formal system (its instruction set), to run a heuristic computation. Formal systems can implement informal systems.
I don’t think you have to look at humans very hard to see that humans don’t implement any kind of formal system and are not equivalent to Turing machines.
AI is most definitely an algorithm. It runs on a computer, what else could it be? Humans didn't create the algorithm directly, but it certainly exists within the machine. The computer takes an input, does a series of computing operations on it, and spits out a result. That is an algorithm.
As for humans, there is no way you can look at the behavior of a human and know for certain it is not a Turing machine. With a large enough machine, you could simulate any behavior you want, even behavior that would look, on first observation, to not be coming from a Turing machine; this is a form of the halting problem. Any observation you make that makes you believe it is NOT coming from a Turing machine could be programmed to be the output of the Turing machine.
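As a concrete illustration of "a series of computing operations" (a toy two-layer network with made-up weights, not any real model): the forward pass is fixed arithmetic on the input, which is an algorithm in the plain sense.

```python
import math

W1 = [[0.5, -0.2], [0.1, 0.4]]   # hypothetical "trained" weights
b1 = [0.0, 0.1]
W2 = [0.3, -0.7]
b2 = 0.05

def forward(x):
    """One forward pass: a fixed, finite sequence of arithmetic steps."""
    hidden = []
    for row, b in zip(W1, b1):
        pre = sum(w * xi for w, xi in zip(row, x)) + b
        hidden.append(max(0.0, pre))          # ReLU
    logit = sum(w * h for w, h in zip(W2, hidden)) + b2
    return 1 / (1 + math.exp(-logit))         # sigmoid output

print(forward([1.0, 2.0]))  # same input, same steps, same output every time
```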
> But the inferences of the resulting neural nets is not an algorithm.
Incorrect.
The comment above confuses some concepts.
Perhaps this will help: consider a PRNG implemented in software. It is an algorithm. The question of the utility of a PRNG (or any algorithm) is a separate thing.
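For example, a textbook linear congruential generator (constants taken from Numerical Recipes; just an illustration): a fixed rule applied to a seed, unambiguously an algorithm, whatever you think of the "randomness" of its output.

```python
def lcg(seed, n):
    """Generate n pseudo-random floats in [0, 1) with a linear congruential generator."""
    m, a, c = 2**32, 1664525, 1013904223
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)
    return out

print(lcg(42, 5))  # deterministic: the same seed always yields the same sequence
```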
This.
Heuristic or not, AI is still ultimately an algorithm (as another comment pointed out, heuristics are a subset of algorithms). AI cannot, to expand on your PRNG example, generate true random numbers; an example that, in my view, betrays the fundamental inability of an AI to "transcend" its underlying structure of pure algorithm.
On one level, yes you’re right. Computing weights and propagating values through an ANN is well defined and very algorithmic.
On the level where the learning is done and knowledge is represented in these networks there is no evidence anyone really understands how it works.
I suspect maybe at that level you can think of it as an algorithm with unreliable outputs. I don’t know what that idea gains over thinking it’s not algorithmic and just a heuristic approximation.
"Heuristic" and "algorithmic" are not antipodes. A heuristic is a category of algorithm, specifically one that returns an approximate or probabilistic result. An example of a widely recognized algorithm that is also a heuristic is the Miller-Rabin primality test.
https://xlinux.nist.gov/dads/HTML/heuristic.html
https://xlinux.nist.gov/dads/HTML/millerRabin.html
https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality...
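A compact Python sketch of Miller-Rabin as described in those references (standard textbook form; the error bound in the comment is the usual one, not anything from this thread):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: a heuristic (probabilistic, bounded error) that is still plainly an algorithm."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True           # probably prime (error probability < 4**-rounds)

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))  # False: divisible by 3
```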
“Algorithm” just means something which follows a series of steps (like a recipe). It absolutely does not require understanding and doesn’t require determinism or reliable outputs. I am sympathetic to the distinction that (I think) you’re trying to make but ANNs and inference are most certainly algorithms.
> On the level where the learning is done and knowledge is represented in these networks there is no evidence anyone really understands how it works.
It is hard to assess the comment above. Depending on what you mean, it is incorrect, inaccurate, and/or poorly framed.
The word “really” is a weasel word. It suggests there is some sort of threshold of understanding, but the threshold is not explained and is probably arbitrary. The problem with these kinds of statements is that they are very hard to pin down. They use a rhetorical technique that allows a person to move the goal posts repeatedly.
This line of discussion is well covered by critics of the word “emergence”.
> But the inferences of the resulting neural nets is not an algorithm
It is a self-delimiting program. It is an algorithm in the most basic sense of the definition of “partial recursive function” (total in this case) and thus all known results of computability theory and algorithmic information theory apply.
> Formal system is a collection of defined operations
Not at all.
> I don’t think you have to look at humans very hard to see that humans don’t implement any kind of formal system and are not equivalent to Turing machines.
We have zero evidence of this one way or another.
—
I’m looking for loopholes around Gödel’s theorems just as much as everyone else is, but this isn’t it.
Heuristics implemented within a formal system are still bound by the limitations of the system.
Physicists like to use mathematics for modeling the reality. If our current understanding of physics is fundamentally correct, everything that can possibly exist is functionally equivalent to a formal system. To escape that, you would need some really weird new physics. Which would also have to be really inconvenient new physics, because it could not be modeled with our current mathematics or simulated with our current computers.
To be fair, I muddled concepts of formal/informal systems versus completeness and consistency. I think if you start from an assumption that an ANN is a formal system (not a given), you must conclude that it is necessarily inconsistent. The AI we have now hallucinates way too much to trust any truth derived from its "reasoning."
But surely any limits on formal systems apply to informal systems? By this, I am more or less suggesting that formal systems are the best we can do, the best possible representations of knowledge, computability, etc., and that informal systems cannot be "better" (a loaded term herein, for sure) than formal systems.
So if Gödel tells us that either formal systems will be consistent and make statements they cannot prove XOR be inconsistent and therefore unreliable, at least to some degree, then surely informal systems will, at best, be the same, and, at worst, be much worse?
I suspect that if formal systems were unequivocally "better" than informal systems, our brains would be formal systems.
The desirable property of formal systems is that the results they produce are proven in a way that can be independently verified. Many informal systems can produce correct results to problems without a known, efficient algorithmic solution. Lots of scheduling and packing problems are NP-complete, but that doesn't stop us from delivering heuristic-based solutions that work well enough.
Edit: I should probably add that I'm pretty rusty on this. Gödel's theorem tells us that if a (sufficiently powerful) formal system is consistent, it will be incomplete. That is, there will be true statements that cannot be proven in the system. If the system is complete, that is, all true/false statements can be proven, then the system will be inconsistent. That is, you can prove contradictory things in the system.
AI we have now isn’t really either of these. It’s not working to derive truth and falsehood from axioms and a rule system. It’s just approximating the most likely answers that match its training data.
All of this has almost no relation to the questions we’re interested in like how intelligent can AI be or can it attain consciousness. I don’t even know that we have definitions for these concepts suitable for beginning a scientific inquiry.
Yeah I don’t know why GP would think computability theory doesn’t apply to AI. Is there a single example of a problem that isn’t computable by a Turing machine that can be computed by AI?
It does apply to AI in the sense that the computers we run neural networks on may be equivalent to Turing machines, but the ANNs themselves are not. If you did reduce an ANN down to a formal system, you would likely find, in terms of Gödel's theorem, that it is sufficiently powerful to prove a falsehood, thus not meeting the consistency property we would like in a system used to prove things.
“Organisms are Algorithms.” --Yuval Noah Harari
Excuse me, what are you talking about? You think there is any part of computability theory that doesn't apply to AI? With all respect, and I do not intend this in a mean way, I think this is simply nonsense. There is a fundamental misunderstanding here of computability theory, Turing machines, the Church-Turing thesis, etc.; any standard text on the subject should clear this up.
> Now, the question is: can we create a smaller function capable of performing the same feat?
Where does this question come from? Especially where does the 'smaller' requirement come from?
> If we assume that that function is computable in the Turing sense
This is a big assumption. I'm not saying it's wrong, but I am saying it's not reasonable to just handwave and assume that it's right.
Gödel’s incompleteness theorem, and, say, the halting problem seem to fall squarely into the bucket of “basic computability theory” in precisely the way that “we think, that is a fact”, does not (D.A. hat tip)
You’re arguing that we know artificial reasoning exists because we are capable of reasoning. This presupposes that reasoning is computable and that we ourselves reason by computation. But that’s exactly what Penrose is saying isn’t the case - you’re saying we’re walking Turing machines, we’re intelligent, so we must be able to effectively create copies of that intelligence. Penrose is saying that intelligence is poorly defined, that it requires consciousness which is poorly understood, and that we are not meat-based computers.
Your last question misses the point completely. "If we are something else, then our CoT won't be computable…" It's like you're almost there but you can't let go of "we are meat-machines, everything boils down to computation, we can cook up clones". Except, "basic computability theory" says that's not even wrong.
Penrose is a dualist: he does not believe that function can be computed in our physical universe. He believes the mind comes from another realm and "pilots" us through quantum phenomena in the brain.
Interesting. Does that fit with the simulation hypothesis? That the world's physics are simulated on one computer, but us characters are simulated on different machines, with some latency involved?
It's all pop pseudoscience. Things exist. Anything that exists has an identity. Physics exists, and other things that exist (simulations, computing, etc.) are subject to those physics. To say that it happens the other way around is poor logic and/or lacks falsifiability.
>Things exist
oh, cogito existo sum! checkmate theists!
dude, the simulation hypothesis does not mean things don't exist, it means they don't necessarily exist in the way you have, rather unimaginatively, imagined, and you have no way to tell.
and Occam's Razor does not solve the problem.
> … and you have no way to tell.
This is exactly my point. If we have no way to tell, what experiment could you possibly use to test whether we're in a simulation or not? The simulation hypothesis lacks falsifiability and is pseudoscience.
eh, it's possible that weird shit is happening in physics. however, there is no evidence that this is the case. it's just vibes, really.
Which is—to use the latest philosophy lingo—dumb. To be fair to Penrose, the “Gödel’s theory about formal systems proves that souls exist” is an extremely common take; anyone following LLM discussions has likely seen it rediscovered at least once or twice.
To pull from the relevant part of Hofstadter's incredible I am a Strange Loop (a book that also happens to more rigorously invoke Gödel for cognitive science):
Highly recommend it for anyone who liked Gödel, Escher, Bach, but wants more explicit scientific theses! He basically wrote it to clarify the more artsy/rhetorical points made in the former book.
It feels really weird to say that Roger Penrose is being dumb.
It's accurate. But it feels really weird.
It's not uncommon for great scientists to be totally out of their depth even in nearby fields, and not realize it. But this isn't the hard part of either computability or philosophy of mind.
https://en.m.wikipedia.org/wiki/Nobel_disease
No, Penrose is not dumb. He gives a very good argument in his books on limitations of AI, which is almost always misrepresented including in most of this thread. It is worth reading "Shadows of the Mind".
We really ought to stop mythicizing people into superhuman heroes/geniuses.
Penrose is an authority in some fields of theoretical physics, but that doesn't give any value to what he has to say on consciousness or AI.
On that topic, he has clearly adopted an unscientific approach: he wants to believe the soul exists and is immaterial and seeks evidence for it.
he's a damn good mathematician. it is indeed weird to experience him not breaking down the exact points of assumption he makes in arriving at his conclusion. he is old though, so...
he starts with "consciousness is not computable". You cannot just ignore that central argument without explaining why your preference for thinking of it in terms of basic computability theory makes more sense than his.
What's more, whatever you like to call the transforming of information into "thinked information" by definition cannot be a (mathematical) function, because it would require all people to process the same information in the same way, and this is plainly false.
>> What's more, whatever you like to call the transforming of information into "thinked information" by definition cannot be a (mathematical) function, because it would require all people to process the same information in the same way, and this is plainly false.
No this isn't the checkmate you think it is. It could still be a mathematical function. But every person transforming information into "thinked information" could have a different implementation of this function. Which would be expected as no person is made of the same code (DNA).
I think the complication here is that brains are probabilistic, which admits the possibility that they can't be directly related to non-probabilistic computability classes. I think there's a paper I forget the name of that says quantum computers can decide the halting problem with some probability (which makes sense because you could always just flip a coin and decide it with some probability) - maybe brains are similar.
>Therefore, there is a function capable of transforming information into "thinked information", or what we usually call reasoning. We know that function exists, because we ourselves are an example of such function.
"Thinked information" is a colour not an inherent property of information. The fact that information has been thought is like the fact it is copyrighted. It is not something inherent to the information, but a property of its history.
https://ansuz.sooke.bc.ca/entry/23
No, I mean, it's nice but I don't think any of that works. You say "Therefore, there is a function capable ..."; that is a non sequitur. But, setting that aside, I think the key point here is about Turing machines and computability. Do you really think your mind and thought-process is a Turing machine? How many watts of power did it take to write your comment? I think it is an absolute certainty that human intelligence is not like a Turing machine at all. Do you find it much more troublesome to think about continuous problems, or is it ironically more troublesome to discretize continuous problems in order to work with them?
Not every fact is computable. We are not Turing machines.
We don't know every fact, either, so I don't know how you can use that idea to say that we're not Turing machines. Apart, of course, from the trivial fact that we are far more limited than a Turing machine...
With sufficient compute capacity, a complete physical simulation of a human should be possible. This means that, even though we are fallible, there is nothing that we do which can't be simulated on a Turing machine.
May still only yield a philosophical zombie. You can simulate gravity but never move something with its simulation.
If I have a 9-DOF sensor in meatspace and am feeding that to a "simulation" that helps a PID converge faster, then my simulation can move something. When I tell my computer to simulate blackbody radiation...
What you said sounds good, but I don't think it's philosophically robust.
I think you misunderstood my point. A simulation is never the actual simulated phenomenon. When you understand consciousness as a "physical" phenomenon (e.g. as in most forms of panprotopsychism), believing in being able to create it by computation is like believing in being able to generate gravity by computation.
we're out of my wheelhouse but it feels to me that entropy and gravity are fundamentally linked and a quick search shows i'm not alone: https://en.wikipedia.org/wiki/Entropic_gravity
read this as: literally creating gravity by simulating it hard enough.
I don't see how computation itself can be a plausible cause here. The physical representation of the computation might be the cause, but the computation itself is not substrate-independent in its possible effect. That is again my point.
I'm arguing with an AI about this too, because my firm belief is that the act of changing a 1 to a 0 in a computer must radiate heat - a 1 is a voltage, it's not an abstract "idea", so that "power" has to go somewhere. It radiates out.
I'm not really arguing with you, I just think if I simulate entropy (entropic processes, "CSRNG", whatever) on my computer ...
I agree and the radiation/physical effect is in my opinion the only possibility a normal computer may somehow be able to cause some kind of consciousness.
Wow, we were at an impasse, but somehow I managed to make myself understood. Happy Mardi Gras!
To be clear, the topic of philosophical zombies has to do with consciousness.
You can simulate a black hole on a mainframe, but that doesn't mean the math is going to escape and eat your solar system.
We don't know if consciousness is computable, because we don't know what consciousness is.
There are suggestions it isn't even local, never mind Turing-computable.
the heat radiated is entropy and will eventually wind up in a black hole. we missed a step.
The entire p-zombie concept presumes dualism (or something close enough to it that I'm happy to lump it all into the generic category of "requires woo"). Gravity having an effect on something is measurable and provable, whereas qualia are not.
Why should a complete simulation be possible? In fact there are plenty of things we can do that can't be simulated on a Turing machine. Just one example: the Busy Beaver problem is uncomputable for large N, so by definition it cannot be computed, and yet humans can prove properties like "BB(n) grows faster than any computable function".
Proving properties and computing values are quite different things, and proofs can absolutely be done on Turing machines, e.g. with proof assistants like Lean.
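For a flavour of what that looks like in practice, here is a minimal sketch in Lean 4 (deliberately trivial statements; the point is only that checking a proof is itself an ordinary computation):

```lean
-- Two deliberately trivial theorems, checked mechanically by Lean's kernel.
theorem two_plus_two : 2 + 2 = 4 := rfl

theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```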
Well, you try feeding the Busy Beaver problem with large N to Lean then, and see what comes out.
Do you think a machine proof of "BB(n) grows faster than any computable function" would require that?
No, see, the problem is that the machine needs a well-defined problem. "BB(n) grows faster than any computable function" is well defined, but you would not come up with an insight like that by executing the BB(n) function. That insight requires a leap out of the problem into a new area, and then sure, after it is defined as a new problem you enter the computability realm again in a different dimension. But if the machine tries to come up with an insight like that by executing the BB(n) function, it will get stuck in infinite loops.
As long as you take the assumption that the universe is finite, follows a fixed set of laws, and is completely deterministic, then I think it follows (if not perfectly, then at least to a first order) that anything within the universe could be simulated using a theoretical computer, and you could also simulate a smaller universe on a real computer, although a real computer that simulated something of this complexity would be extremely hard to engineer.
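As a toy version of that idea, here is a minimal sketch (Python): a tiny, finite, deterministic "universe" (an elementary cellular automaton on a ring of cells) stepped forward on an ordinary computer. The rule number and sizes are arbitrary choices for illustration.

```python
# A tiny finite deterministic "universe": elementary cellular automaton rule 110
# on a ring of cells, simulated step by step.

def step(cells: list[int], rule: int = 110) -> list[int]:
    n = len(cells)
    # Each cell's next state is looked up from the rule number using its
    # (left, self, right) neighbourhood as a 3-bit index.
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

world = [0] * 30 + [1] + [0] * 30   # one live cell in the middle
for _ in range(8):
    print("".join("#" if c else "." for c in world))
    world = step(world)
```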
It's not entirely clear, though, that the universe is deterministic; our best experiments suggest there is some remaining and relevant nondeterminism.
Turing machines, Goedel incompleteness, Busy Beaver Functions, and (probably) NP problems don't have any relevance to simulating complex phenomena or hard problems in biology.
I feel like Penrose presupposes the human mind is non computable.
Perhaps he and other true geniuses can understand things transcendently. Not so for me. My thoughts are serialized and obviously countable.
And in any case: any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable. So I’m not convinced I could be convinced without a computable proof.
And finally just like computable numbers are dense in the reals, maybe computable thoughts are dense in transcendence.
This is accurate from his Emperor's New Mind. Penrose essentially takes for granted that human brains can reason about or produce results that are otherwise uncomputable. Of course, you can reduce all (historical) human reasoning to a computable heuristic as it is finite, but for some reason he just doesn't see this.
His intent at the time was to open up a physical explanation for free will by taking recourse to quantum microtubules magnifying true randomness to the level of human cognition. As much as I'm also skeptical that this actually moves the needle on whether or not we have free will (...vs occasionally having access to statistically-certain nondeterminism? Ok...), the computable stuff was just in service of this end.
I strongly suspect he just hasn't grasped how powerful heuristics are at overcoming general restrictions on computation. Either that or this is an ideological commitment.
Kind of sad; Penrose tilings hold a special place in my heart.
> His intent at the time was to open up a physical explanation for free will by taking recourse to quantum microtubules magnifying true randomness to the level of human cognition. As much as I'm also skeptical that this actually moves the needle on whether or not we have free will (...vs occasionally having access to statistically-certain nondeterminism? Ok...), the computable stuff was just in service of this end.
Free will is a useful abstraction. Just like life and continuity of self are.
> I strongly suspect he just hasn't grasped how powerful heuristics are at overcoming general restrictions on computation.
Allowing approximations or "I don't know" is what's helpful. The bpf verifier can work despite the halting problem being unsolvable, not because it makes guesses (uses heuristics) but because it's allowed to lump in "I don't know" with "no".
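A toy illustration of that "lump 'I don't know' in with 'no'" strategy (this is not how the real BPF verifier works; the instruction format and the rule here are invented for the sketch):

```python
# A deliberately dumb, conservative checker: it accepts a program only when it can
# bound the work up front, and rejects everything it cannot reason about.
def conservatively_accepts(instructions: list[str], max_len: int = 4096) -> bool:
    if any(op.startswith("jump_back") for op in instructions):
        return False               # could loop forever: "don't know" is treated as "no"
    return len(instructions) <= max_len   # straight-line and short enough: definitely halts

print(conservatively_accepts(["load", "add", "store"]))       # True
print(conservatively_accepts(["load", "jump_back start"]))    # False, even if it would in fact halt
```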
> Free will is a useful abstraction. Just like life and continuity of self are.
I think it’s more useful to think of them as language games (in the Wittgenstein sense) than abstractions.
You could reasonably consider free will to be an abstraction or a language game, but it is closely linked to moral responsibility, prisons, punishment, etc, which are very much not language games.
I don't think free will exists because I don't think supernatural phenomena exist, and there's certainly no natural explanation for free will (Penrose was correct about that). But I have a very non-nihilistic view on things [1].
[1] https://sunshowers.io/posts/there-is-no-free-will/
I suppose if you really wanted to you could view condensing useful abstractions out of a highly detailed system as a kind of language game, but I'm not convinced that that's useful in the context of investigating particular abstractions rather than investigating the nature of the process of making abstractions.
I think the very concept of an abstraction is a language game, one that speaks to the old Platonic ideals of Greek philosophy - with all the good and bad that implies. This specific language game takes on a very concrete meaning to programmers that can be quantitatively analyzed by a compiler but epistemologically it’s just another abstract concept (I hate philosophical inception).
If stories are to be believed real geniuses can tap into God’s mind. (See Ramanujan)
If so then it really comes down to believing something not because you can prove it but because it is true.
I’m just a mediocre mathematician with rigor mortis. So I won’t be too hard on Penrose.
Those stories are not to be believed.
> Penrose essentially takes for granted that human brains can reason about or produce results that are otherwise uncomputable.
That's Penrose's old criticism. We're past that. It's the wrong point now.
Generative AI systems are quite creative. Better than the average human at art. LLMs don't have trouble blithering about advanced abstract concepts. It's concrete areas where these systems have trouble, such as arithmetic. Common sense is still tough. Hallucinations are a problem. Lying is a problem. None of those areas are limited by computability. It's grounding in the real world that's not working well.
(A legit question to ask today is this: We now know how much compute it takes to get to the Turing test level of faking intelligence. How do biological brains, with such a slow clock rate, do it? That was part of the concept behind "nanotubules". Something in there must be running fast, right?)
> Something in there must be running fast, right?
Nah. It just needs to be really wide. This is a very fuzzy comparison, but a human brain has ~100 trillion synaptic connections, which are the closest match we have to "parameters" in AI models. The largest such models currently have on the order of ~2 trillion parameters. (edit to add: and this is a low end estimate of the differences between them. There might be more stuff in neurons that effectively acts as parameters, and should be counted as such in a comparison.)
So AI models are still at least two orders of magnitude off from humans in pure width. In contrast, they run much, much faster.
IMO creativity is beside the point, I mean, it is just one of those things that the human brain happens to be good at, so we identify it with consciousness (in the sense that it falls out of the same organ). But really all sorts of stuff can look creative. I mean, just to use an example that is clearly identified as a creative product: a Jackson Pollock painting is clearly a creative work, but a lot of the beauty comes from purely physical processes, the human influences the toss but after that it’s just a jumble of fluid dynamics.
We wouldn’t call the air creative, right? Or if we do, we must conclude that creativity doesn’t require consciousness.
> Better than the average human at art.
Given that we struggle with even a basic consensus about which humans are better at art than others, I don't think this sentence carries any meaning whatsoever.
No, we struggle with consensus about which expert-level humans are better than others at art.
They’re better than your average human at producing jpegs. I think if you put any of your average humans in a closed room with nothing but canvases, paint, and a mirror, within a month or two they’d be producing pretty interesting paintings without having been fed an image of every single artwork humanity has ever made.
> It's concrete areas where these systems have trouble, such as arithmetic. Common sense is still tough. Hallucinations are a problem. Lying is a problem
Gestures broadly at humanity
> How do biological brains, with such a slow clock rate, do it?
And much lower energy expenditure.
A brain consumes something like 20 W while working.
> any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable.
This is a fallacy. Just because you need to serialize a concept to communicate it doesn't mean the concept itself is computable. This is established and well proven:
https://en.wikipedia.org/wiki/List_of_undecidable_problems
The fact that we can come up with these kinds of uncomputable problems is a big plus in support of Penrose's idea that consciousness is not computable and goes way beyond computability.
That's how I understood Penrose's reasoning too. He differentiated between the computer and whatever is going on in our brain. Computers are just "powerful" enough to encode something that mimics intelligence on the surface (the interviewer tried to pin him on that "something new"), but is still the result of traditional computation, without the involvement of consciousness (his requirement for intelligence).
Deciding an undecidable problem is, well, undecidable. But describing it is clearly not; otherwise we would not have been able to write about it.
Well, that is the "beyond computable". We are somehow able to say that this function will not halt, and we wouldn't be able to do that if we only had computable power to simulate it, because that would prove that the problem was decidable in the first place.
How you communicate it does not alter the nature of the problem.
> My thoughts are serialized and obviously countable.
You might want to consider doing a bit of meditation... anyone who describes their thoughts as 'serialized' and 'obviously countable' has not spent much time actually looking at their thoughts.
> any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable
Are you aware of how little of modern mathematics has been formalised? As in, properly formalised on a computer. Not just written up into a paper that other mathematicians can read and nod along to.
Mathematics might seem very formal and serialised (and it is, compared to most other human endeavours) but that’s actually quite far from the truth. Really, it all exists in the mind of the mathematician and a lot of it is hard, if not currently impossible, to pin down precisely enough to enter into a formal system.
I think you probably do understand some things ‘transcendently’! Almost by definition they’re the things you’re least aware of understanding.
Experience is what's hard to square with computability. David Chalmers calls this the hard problem. As long as you're talking about producing speech or other behaviors, it's easy to see how that might be a computation (and nothing more).
It's harder (for me) to see how it's possible to say that pain is just a way of describing things, i.e. that there's in principle no difference between feeling pain and computing a certain function.
> I feel like Penrose presupposes the human mind is non computable
I would be a bit more aggressive: Penrose asserts without evidence.
So do devotees of computability.
Remember - there is no such thing as an objective consciousness meter.
Emulating the behaviours we associate with consciousness - something that still hasn't been achieved - solves the problem of emulation, not the problem of identity.
The idea that an emulation is literally identical to the thing it emulates in this instance only is a very strange belief.
Nowhere else in science is a mathematical model of something considered physically identical and interchangeable with the entity being modelled.
> Nowhere else in science is a mathematical model of something considered physically identical and interchangeable with the entity being modelled
you can make the argument that everything in science is a mathematical model... if you measure a basketball arcing through the sky, you are not actually privy to any existential sensing of the ball, you are proxying the essence of the basketball using photons, and even collecting those photons is not really "collecting those photons", etc.
> Perhaps he and other true geniuses can understand things transcendently. Not so for me. My thoughts are serialized and obviously countable.
You needn't be a genius. Go on a few vipassana meditation retreats and your perception of all this may shift a bit.
> any kind of theorem or idea communicated to another mathematician needs to be serialized into language which would make it computable
Hence the suggestion by all mystical traditions that truth can only be experienced, not explained.
It may be possible for an AI to have access to the same experiences of consciousness that humans have (around thought, that make human expressions of thought what they are) - but we will first need to understand the parts of the mind / body that facilitate this and replicate them (or a sufficient subset of them) such that AI can use them as part of its computational substrate.
This is not a presupposition for Penrose, but a conclusion. The argument for the conclusion is the subject of several of his books.
Secondly, the issue is not being a genius, but an ability to reflect. What can be shown, uncontroversially, is that given a formal computer system which is knowably correct, a human (or indeed a separate machine which is not the original system) can know something (like a mathematical theorem) which is not accessible to the system. This is due to a standard diagonalization argument used in logic and computability.
The important qualifier is 'knowably correct', which doesn't apply to LLMs, which are famous for their hallucinations. But this is not a solid argument for LLMs being able to do everything that humans can do, because correctness need not refer to immediate outputs, but to outputs which are processed through several verification systems.
would you mind summarizing what the main argument is? I've watched several of his interviews (but not read the books), and I don't really understand why he concludes that consciousness is not computable.
I watched half of the video. He keeps appealing to the idea that Goedel applies to AI because AI doesn't understand what it's doing. But I seriously doubt that we humans really know what we're doing, either.
IIRC, his Goedel argument against AI is that someone could construct a Goedel proposition for an intelligent machine which that machine could reason its way through to hit a contradiction. But, at least by default, humans don't base their epistemology on such reasoning, and I don't see why a conscious machine would either. It's not ideal, but frankly, when most humans hit a contradiction, they usually just ignore whichever side of the contradiction is most inconvenient for them.
The argument does not need to involve typical human behaviour with its faults. Because if a computer can simulate humans, it can also simulate humans with error correction and verification mechanisms. So, the computer should also be able to simulate a process where a group of humans write down initial deductions and then verify it extensively for logical errors using both computers and other humans.
Most of the objections have been covered in his book "Shadows of the Mind".
Also, the fact that most human behaviour is not about deducing theorems isn't relevant as that is used as a counterexample which attacks the 'computers can simulate humans' hypothesis. This particular behaviour is chosen, as it is easy to make reflective arguments precise.
If they can imagine things transcendently they can only assert those things are true. They can't prove it.
If they prove it then they have either shown that the idea is not transcendent or that Gödel's theorem is false.
That's the same as saying "I know the answer, when you are speculating"
Penrose does not require transcendent insights, but merely the ability to examine a finite, knowably correct system and arrive at a correct statement which is not provable by the system. In fact, the construction of a Gödel statement is mechanical, but this does not mean that the original system can see its truth. It is a bit like being given a supposed finite list of all primes: multiply them together and add 1, and any prime factor of the result is a prime missing from the list. This construction is a simple computation.
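To make the "simple computation" concrete, a small sketch (the factoring helper is only there to exhibit a prime missing from the list):

```python
from math import isqrt

def smallest_prime_factor(n: int) -> int:
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return d
    return n   # n itself is prime

def prime_outside(primes: list[int]) -> int:
    product_plus_one = 1
    for p in primes:
        product_plus_one *= p
    product_plus_one += 1                 # divisible by none of the listed primes
    return smallest_prime_factor(product_plus_one)

print(prime_outside([2, 3, 5, 7, 11, 13]))   # 59: a prime not in the input list
```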
You can always arrive at a correct statement; a random expression generator can occasionally do that. You just can't tell whether it is true. A simple integer counter can generate the Gödel statement, it just doesn't have the ability to identify it. You could take a guess, which is what people are doing when they say they understand: they have applied heuristics to convince themselves. Either the problem is decidable, or they are simply wrong.
> You just can't tell if it is true.
But in the Penrose argument, we can start from a true system and use reflection to arrive at another true statement which is not deducible from the original system.
This is important to the argument as one starts with a proposed program which can perform mathematical reasoning correctly and is not just a random generator. Then, the inability to see the new statement is a genuine limitation.
> I feel like Penrose presupposes the human mind is non computable.
Yes. He has also written books about it.
https://en.wikipedia.org/wiki/Roger_Penrose#Consciousness
From where do those serialised thoughts arise?
I think (but may be wrong) that you are thinking metamathematics is a part of mathematics, which (to my knowledge) it is not.
He explicitly believes that, yes.
What is "understanding transcendently"? Just because Penrose is an authority on some subjects in theoretical physics doesn't mean he is a universal genius and that his ideas on consciousness or AI hold any value.
We gotta stop making infallible superheroes/geniuses of people.
In this particular case, Penrose is a convinced dualist and his theories are unscientific. There are very good reasons to not be a dualist, a minority view in philosophy, which I would encourage anyone to seek if they want to better understand Penrose's position and where it came from.
He’s written and co-written 5 books on the subject, going back nearly 40 years. I think he can, as much as anyone can, be considered an “authority” on something as inherently hard to observe or develop falsifiable theories about as subjective conscious experience.
This isn’t an example of a physicist stumbling into a new field for the first time and saying “oh that’s an easy problem, you just need to…”
The ideas of a very smart person who has spent decades thinking about a problem tend to be very valuable even if you don’t agree with them.
> I feel like Penrose presupposes the human mind is non computable.
He has been very clear in that he claims exactly that and that there is Quantum Mechanics (wave function collapse in particular) involved.
I personally think he's probably wrong, but who really knows?
Isn't there quantum mechanics in every particle interaction though?
Goedel's theorem is only a problem if you assume that intelligence is complete (where complete means: able to determine whether any formal statement is true or false). We know that anything running on a computer is incomplete (e.g. the Turing halting problem). For any of this to be interesting, Penrose would have to demonstrate that human intelligence is complete in some sense of the word. This seems highly unlikely. Superficially, human intelligence is not remotely complete, since it is frequently unable to answer questions that have yes or no answers, and even worse, is frequently wrong. So not consistent, either.
Anything a single human can do, reasoning wise, AI will eventually be able to do.
Anything emerging out of a collective of humans interacting and reasoning (or interacting without reasoning or flawed reasoning) the AIs (plural) will eventually be able to do.
The only thing is that machine-kind does not need sleep, does not get tired, etc., so it will fail to fully emulate human behavior, with all the pros and cons of that for us to benefit from and deal with.
I'm not sure what is the point of a theoretical discussion beyond this.
Some people (e.g. Penrose) reject this, believing that there's something mystical about human thought-- perhaps a soul, perhaps some quantum magic.
Any technology that's advanced enough is indistinguishable from magic, or something along these lines...
But we are working on quantum magic!
Yeah but, so what?
Whether or not there is some magic that makes humans super special really has no bearing on whether or not we can make super duper powerful computers that can be given really hard problems.
In my view it's inevitable that we'll build an AI that is more capable than a human. And that the AI will be able to build better computers, and write better software. That's the singularity.
His argument is that incompleteness means that humans will always be able to do things that non-quantum machines can't. I don't agree with this view.
> In my view its inevitable that we'll build an AI that is more capable than a human.
Seems pretty likely, with a big uncertainty on timeframe.
> And that the AI will be able to build better computers, and write better software. That's the singularity.
This could happen, but I don't agree it's an inevitable consequence of the first.
I'm not sure you have a proof of that, which you would need in order to claim there is no point in theoretical discussion. He has a point that current "AI" isn't conscious, and so far there is no indication that it will be. That doesn't mean it can't happen, either.
Basically we aren't up to "Do Androids Dream of Electric Sheep?" so far.
Have you heard of brain organoids?
A Google search yields "three-dimensional tissues that mimic the human brain and are grown in a lab"
The distinction between natural and what is man-made (aka artificial) is itself artificial.
We are learning how to recreate nature, whether from silicon or in 3D tissue or ab initio (google "synthetic life")
Mimicking brain doesn't necessarily make it conscious. So again, all of that proves nothing so far.
Brain organoids do not mimic the brain, unless you are blind and cannot google. They are the brain.
He sets up a definition where "real intelligence" requires consciousness, then argues AI lacks consciousness, therefore AI lacks real intelligence. This is somewhat circular.
The argument that consciousness can't be computable seems like a stretch as well.
Consciousness is not a result, it cannot be computed. It is a process, and we don't know how it interacts with computation. There are only two things I can really say about consciousness, and both are speculation: I think it isn't observable, and I think it is not a computation. For the first point, I can see no mechanism by which consciousness could affect the world so there is no way to observe it. For the second, imagine a man in a vast desert filled only with a grid of rocks that have two sides, a dark and light side and he has a small book which gives him instructions on how to flip these rocks. It seems unlikely that the rocks are sentient, yet certain configurations of rocks and books could produce the thought computation of the human mind. When does the sentience happen? If the man flips only a single rock according to those rules, would the computer be conscious? I doubt it. Does the consciousness exist between the flips of rock when he walks to the next stone? The idea that computation creates consciousness seems plainly untenable to me.
Indeed, I also think consciousness cannot be reduced to computation.
Here is one more thing to consider. All consciousness we can currently observe is embodied; all humans have a body and identity. We can interact with separate people corresponding to separate consciousnesses.
But if computation is producing consciousness, how is its identity determined? Is the identity of the consciousness based on the set of chips doing the computation? Or is it based on the algorithms used (i.e., does running the same algorithm anywhere animate the same consciousness)?
In your example, if we say that consciousness somehow arises from the computation itself that the man performs, then a question arises: what exactly is conscious in this situation? And what are the boundaries of that consciousness? Is it the set of rocks as a whole? Is it the computation they are performing itself? Does the consciousness have a demarcation in space and time?
There are no satisfying answers to these questions if we assume mere computation can produce consciousness.
Just wanted to point out that I absolutely share your view here. I would like to add that the concept of virtualization and the required representation of computation makes substrate-independent consciousness rather absurd.
To me the only explanation for consciousness I find appealing is panprotopsychism.
I think to argue usefully about consciousness you've got to be able to define what you mean by it. If you use it in the sense of a boxer being knocked unconscious, where he's not aware of anything much, versus conscious, where he knows what's going on and can react and punch back, then AI systems can also be aware or not and react or not.
If you say it's all about the feelings and machines can't feel that way then it gets rather vague and hard to reason about. I mean they don't have much in the way of feelings now but I don't see why they shouldn't in the future.
I personally feel both those aspects of consciousness are not woo but the results of mechanisms built by evolution for functional purposes. I'm not sure how they could have got there otherwise, unless you are going to reject evolution and go for divine intervention or some such.
Consider a universe purely dictated by the mathematical laws of physics. It would be indistinguishable from our own to an observer, but such a universe would effectively be a fixed 4D structure, a statue incapable of experience. You have experience, yes? You think therefore you are. There exists something beyond maths and physics, experiencing our universe, and you are that thing. How could such an entity develop from physical processes?
> no mechanism by which consciousness could affect the world
Where would the placebo effect fit in this thought experiment?
> a grid of rocks that have two sides, a dark and light side and he has a small book
Where did the book come from?
[dead]
Penrose believes that consciousness originates from quantum mechanics and the collapse of the wavefunction. Obviously you couldn't (effectively) simulate that with a classical computer. It's a very unconventional position, but it's not circular.
https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind
https://en.wikipedia.org/wiki/Shadows_of_the_Mind
I do not see the “circularity”, it may lack foundation, but that is a different argument.
Where’s the circle?
The fundamental result of Gödel's theorem is that logical completeness and logical consistency are complementary (you can have one or the other, not both, for sufficiently expressive systems); if a logical system has consistent rules then it will contain statements that are unprovable by the rules but true nonetheless, so it is incomplete. Alternately, if there is a proof available for all true statements via the rules, then the rules used are inconsistent.
I think this means that "AGI" is limited as we are. If we build a machine that proves all true statements then it must use inconsistent rules, implying it is not a machine we can understand in the usual sense. OTOH, if it is using consistent rules (that do not contain contradiction) then it cannot prove all true statements, so it is not generally intelligent, but we can understand how it works.
I agree with Dr. Penrose about the misnomer of "artificial intelligence". We ought to be calling the current batch of intelligence technologies "algebraic intelligence" and admitting that we seek "geometric intelligence" and have no idea how to get there.
The issue isn't the mere existence of two thinking modes (algebraic vs. geometric), but that we’ve culturally prioritized and trained mostly algebraic modes (linear language, math, symbolic logic). This has obscured our natural geometric capacity, especially the neurological pathways specialized in visually processing and intuitively understanding phenomena, particularly light itself (photons, vision, direct visual intuition of physics).

Historically, algebraic thinking was elevated culturally around the Gnostic period (200 BCE forward), pushing aside the brain's default "geometric mode". Heck, during that period of history, people actively and forcefully campaigned against overly developing the analytical use of the mind. We should be actively mapping neurological pathways specialized for direct intuitive visual-physical cognition (understanding light intuitively at a neurological level, not symbolically or algebraically) for that to happen.

Also: understanding or explainability is not directly linked to consistency in the logical sense. A system can be consistent yet difficult to fully understand, or even inconsistent yet still partially understandable. We are talking right now here because we were put here through a series of historical events. Go back to 200 BCE and play out the Gnostic or Valentinus path to 2025.
Thanks for the reply.
When I think about understanding, in principle I require consistency not completeness. In fact, understandability is predicated on consistency in my view.
If I liken the quest for AGI to the quest for human flight, wherein we learned that the shape of the wing provides nearly effortless lift, while wing flapping only provides a small portion of the lift for comparatively massive energy input, then I suspect we are only doing the AGI equivalent of wing flapping at this point.
the human mind itself isn't fully consistent...or at least, consistency isn't necessarily how we operate internally (lots of contradictions, ambiguities, simultaneous beliefs that don't neatly align). Yet we still manage to "understand" things deeply. Complete logical consistency isn't strictly required for understanding in a practical, real world sense. We are totally "flapping" right now with AI, brute forcing algebraic intelligence and missing that elegant "geometric" insight. My point is simply that our brains already have that built in "wing shape" neurologically, we just haven't mapped it out or leveraged it fully yet. The real leap isn't discovering a new wing design, it's understanding we already have one, we just have to leverage it. :) :)
Now I think you come to the crux of it.
What does geometric thinking mean?
Good question; perhaps it's best to start with what I mean by algebraic intelligence, then the contrast will be clearer. Algebraic intelligence uses the simple idea of equality to produce numerical unknowns from the known via standard mechanistic operations. So algebraic intelligence is mechanistic, operational, deductive, and quantitative. In contrast, geometric intelligence is concerned with the higher-level abstract concepts of congruity and scale.
To return to my previous analogy, algebraic intelligence is wing flapping while geometric intelligence is the shape of the wing. The former is arduous, time-consuming, and energy-inefficient, while the latter is effortless, and unreasonably effective.
I compliment Penrose for his indifference to haters and harsh skeptics.
Our minds and consciousness do not fundamentally use linear logic to arrive at their conclusions, they use constructive and destructive interference. Linear logic is simulated upon this more primitive (and arguably superior) cognition.
It is true that any outcome of any process may be modeled in serialized terms or computational postulations, this is different than the interference feedback loop used by intelligent human consciousness.
Constructive and destructive interference is different and ultimately superior to linear logic on many levels. Despite this, the scalability of artificial systems may very well easily surpass human capabilities on any given task. There may be an arguable energy efficiency angle.
Constructive/destructive interference builds holographic renderings which work sufficiently when lacking information. A linear logic system would simulate the missing detail from learned patterns.
Constructive/destructive interference does not require intensive computation.
An additive / reduction strategy may change the terms of a dilemma to support a compromised (or alternatively superior) “human” outcome which a logic system simply could not “get” until after training.
There is more, though these are a worthy start.
And consciousness is the inflection (feedback reverberation if you like) upon the potential of existential being (some animate matter in one’s brain). The existential Universe (some part of matter bound in the neuron, those micro-tubes perhaps) is perturbed by your neural firings. The quantum domain is an echo chamber. Your perspectives are not arranged states, they are potentials interfering.
Also, “you all” get intelligence and “will” wrong. I’ll pick that fight on another day.
I swear this was on the front page 2 minutes ago and now it’s halfway down page 2.
Anyway, I’m not really sure where Penrose is going with this. As a summary, incompleteness theorem is basically a mathematical reformulation of the paradox of the liar - let’s state this here for simplicity as “This statement is a lie” which is a bit easier than talking about “ All Cretans are liars”, which is the way I first heard it.
So what’s the truth value of “This statement is a lie”? It doesn’t have one. If it’s false, then it’s true. But if it’s true, then it must be false. The reason for this paradox is that it’s a self-referential statement: it refers to its own truth value in the construction of its own truth value, so it never actually gets constructed in the first place.
You can formulate the same sort of idea mathematically using sets, which is what Gödel did.
Now, the thing about this is that as far as I am aware (and I’m open to being corrected on this) this never actually happens in reality in any physical system. It seems to be an artefact of symbolic representation. We can construct a series of symbols that reference themselves in this way, but not an actual system. This is much the same way as I can write “5 + 5 = 11” but it doesn’t actually mean anything physically.
The closest thing we might get to would be something that oscillates between two states.
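A toy version of that oscillation (a sketch): if you try to evaluate "this statement is false" by repeatedly updating its assigned truth value, it never settles.

```python
# "This statement is false": updating the assigned truth value just flips it forever.
def update(value: bool) -> bool:
    return not value   # the sentence asserts the opposite of its current value

value = True
history = []
for _ in range(6):
    value = update(value)
    history.append(value)

print(history)   # [False, True, False, True, False, True] -- no fixed point
```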
We ourselves also don’t have a good answer to this problem as phrased. What is the truth value of “This statement is a lie”? I have to say “I don’t know” or “there isn’t one”, which is a bit like cheating. Am I incapable of consciousness as a result? And if I am instead conscious because I can make such a statement rather than simply saying “True” or “False”, well, I’m sure that an AI can be made to do likewise.
So I really don’t think this has anything to do with intelligence, or consciousness, or any limits on AI.
(for the record, I think the Penrose take on Gödel and consciousness is mostly silly and or confused)
I think your understanding of the incompleteness theorem is a little, well, incomplete. The proof of the theorem does involve, essentially, figuring out how to write down "this statement is not provable" and using liar-paradox-type-reasoning to show that it is neither provable nor disprovable.
But the incompleteness theorem itself is not the liar paradox. Rather, it shows that any (consistent) system rich enough to express arithmetic cannot prove or disprove all statements. There are things in the gaps. Gödel's proof gives one example ("this statement is not provable") but there are others of very different flavors. The standard one is consistency (e.g. Peano arithmetic alone cannot prove the consistency of Peano arithmetic, you need more, like much stronger induction; ZFC cannot prove the consistency of ZFC, you need more, like a large cardinal).
And this very much does come up for real systems, in the following way. If we could prove or disprove each statement in PA, then we could also solve the halting problem! For the same reason there's no general way to tell whether each statement of PA has a proof, there's no general way to tell whether each program will halt on a given input.
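A rough sketch of that reduction (Python, with two hypothetical helpers that no real program can provide, which is exactly the point):

```python
def encode_halting_claim(program_source: str, input_data: str) -> str:
    """Hypothetical: encode 'this program halts on this input' as a sentence of arithmetic."""
    raise NotImplementedError("illustrative placeholder")

def provable_in_PA(sentence: str) -> bool:
    """Hypothetical oracle deciding provability in Peano Arithmetic."""
    raise NotImplementedError("no such program exists")

def would_halt(program_source: str, input_data: str) -> bool:
    """If PA proved or refuted every sentence, this would decide the halting problem."""
    claim = encode_halting_claim(program_source, input_data)
    if provable_in_PA(claim):
        return True
    if provable_in_PA("NOT " + claim):
        return False
    # If PA were complete, one of the branches above would always fire, so reaching
    # this line is only possible because PA leaves some claims undecided.
    raise RuntimeError("undecided by PA")
```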
Nice reply. I don’t know anything about Peano arithmetic, or how it applies to the halting problem, so I can’t really evaluate this. All I know is the description of the proof that I read some time ago. Maybe there’s more to dig into on it, but as you say at the start of your post, likely none of it has anything to do with what Penrose is arguing for.
> I swear this was on the front page 2 minutes ago and now it’s halfway down page 2.
It set off the flamewar detector. I've turned that off now.
Thanks!
I think all the debunkings of Penrose's argument are rather overcomplicated, when there is a much simpler flaw:
Which operation can computers (including quantum computers) not perform, that human neurons can? If there is no such operation, then a human-brain-equivalent computer can be built.
I agree the arguments tend to be overcomplicated, but I think Penrose's argument is basically
>it is argued that the human mind cannot be computed on a Turing Machine... because the latter can't see the truth value of its Gödel sentence, while human minds can
And the debunk is that both Penrose and an LLM can say they see the truth value, and we have no strong reason to think one is correct and the other is wrong. Either or both could be confused. Hence the argument doesn't prove anything.
Or even simpler: if cells are just machines, then there is no reason why a computer couldn't perform the same operations. I'm not a philosopher, but I believe this comes down to materialism vs a belief in the supernatural.
Having read about Penrose's positions before, this is indeed what he is proposing in a roundabout way: that there is an origin to "consciousness" that is for all intents and purposes metaphysical. In the past he pushed the belief that microtubules in the brain (which are a structural component of cells) act like antennas that receive cosmic consciousness from the surrounding field.
In my opinion this is also Penrose's greatest sin: using his status as a scientist to promote spiritual opinions that are indistinguishable from quantum woo disguised as scientific fact.
You know that sinking feeling when you’re in a coffee shop and the ex girlfriend who dumped you walks in with another guy?
Compute the operation of that feeling.
You raise the best objection - indeed, we have no idea how consciousness/qualia could arise from physical processes, nor can we even non-circularly define what consciousness is [1]. But assuming it arises purely through physical processes of the human brain, there is no reason to think it could not be reproduced on a different substrate.
In other words, computing that feeling is equally mysterious whether it is done by neurons, or by transistors.
[1] There are attempts, like vague implications it has something to do with information processing - but that is not actually defining what it is, just what it is associated with and how it might arise. There are other problems with these attempts, such as the fact that the weather can be thought of as an "information processing" system, reacting to changes in pressure and humidity and temperature... so is it conscious? But that is tangential.
One can argue that the described feeling is a product of suppressed instinctive behavior. Or, in other words, a detail of a particular implementation of intelligence in a certain species of mammals.
Hard problems like that can take a long time to solve, multiple phds and/or breakthroughs. Maybe a good longbet[1].
[1] https://longbets.org/
Is anyone aware of some other place where Penrose discusses AI and consciousness? Unfortunately here, the interviewer seems well out of their depth and repeatedly interrupts with non sequiturs.
The Emperors New Mind - Roger Penrose - published 1989
Shadows Of The Mind - Roger Penrose - published 1994
https://en.wikipedia.org/wiki/Penrose%E2%80%93Lucas_argument
It's painful, but listening to Penrose is worth it and (in the bits I watched) he somehow manages to politely stick to his thread despite the interruptions.
This is the most frustrating interview I have ever tried to watch
The emperor’s new mind is his work on this (not specifically on LLMs obviously).
The longer we continue to reduce human thinking to mechanistic or computable processes, the further we might be from truly understanding the essence of what makes us human. And perhaps, as with questions about the meaning of life or the origin of the universe, this could be a mystery that remains beyond our reach.
Many years ago now I sat in on (I was a PhD student, so I didn't need to sit exams etc) a Cognitive Science intro course run by Prof. Stevan Harnad.
Harnad and I don't agree about very much, but one thing I was able to get Stevan to agree on was that if I introduce him to something which he thinks is a person, well, that's a person, and too bad if it doesn't meet somebody's arbitrary requirements about having DNA or biological processes.
The generative AIs can't quite do that, but they're much closer than I'd be comfortable with if, like Stevan and Penrose, I didn't believe that Computation is all there is. "But doesn't it feel like something to be you?" they ask me, and I wonder why on Earth anybody could ask that question and not consider that perhaps it also feels like something to be a spoon or a leaf.
I wonder if this is an example of "it works in practice but the important question is whether it works in theory."
Perhaps Penrose is right about the nature of intelligence and the fact that computers cannot ever achieve that (for some tight definition of the term). But in a practical sense, these LLMs that are popular are doing things that we generally considered "intelligent". Perhaps it's faking it well but it's faking it well enough to be useful and that's what people will use. Not the theoretical definition.
LLMs (our current "AI") don't use logical or mathematical rules to reason, so I don't see how Gödel's theorem would have any meaning there. They are not a rule-based program that would have to abide by non-computability - they are non-exact statistical machines. Penrose even mentions that he hasn't studied them, and doesn't exactly know how they work, so I don't think there's much substance here.
Despite appearances, they do: training, neurons, transformers and all, ultimately it is a program running on a Turing machine.
But it is only a program computing numbers. The code itself has nothing to do with the reasoning capabilities of the model.
Nothing to do with it? You certainly don’t mean that. The software running an LLM is causally involved.
Perhaps you can explain your point in a different way?
Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.
You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.
Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.
Well, if you break everything down to the lowest level of how the brain works, then so do humans. But I think there's a relevant higher level of abstraction in which it isn't -- it's probabilistic and as much intuition as anything else.
Pick a model, a seed, a temperature and fix some floating-point annoyances and the output is a deterministic algorithm from the input.
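A toy version of that point (not a real LLM; the three-logit "model" below is made up, but the mechanics are the same): fix the seed and the temperature and the sampled output is a pure function of the input.

```python
import numpy as np

def fake_next_token_logits(context: list[int]) -> np.ndarray:
    # Stand-in for a network forward pass; any deterministic function of the context works.
    return np.array([len(context) % 3, 1.0, 0.5])

def generate(seed: int, temperature: float, steps: int = 5) -> list[int]:
    rng = np.random.default_rng(seed)
    context = [0]
    for _ in range(steps):
        logits = fake_next_token_logits(context) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        context.append(int(rng.choice(len(probs), p=probs)))
    return context

print(generate(seed=42, temperature=0.7) == generate(seed=42, temperature=0.7))   # True
```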
A lot of people look towards non-determinism as a source for free will. It's often what underlies people's thinking when they discount the ability of AI to be conscious. They want to believe they have free will and consider determinism to be incompatible with free will.
Events are either caused, or uncaused. Either can be causes. Caused events happen because of the cause. Uncaused events are by definition random. If you can detect any real pattern in an event you can infer that it was caused by something.
Relying on decision making by randomness over reasons does not seem to be a good basis of free will.
If we have free will it will be in spite of non-determinism, not because of it.
That's true with any neural network or ML model. Pick a few points, use the same algorithm with the same hyperparameters and random seed, and you'll end up with the same result. Determinism doesn't mean that the "logic" or "reason" is an effect of the algorithm doing the computations.
The logic or reason is emergent in the combinations of the activations of different artificial neurons, no?
Maybe consciousness is just what lives in the floating-point annoyances
Not really possible. The models work fine once you fix them, it's just making sure you account for batching and concurrency's effect on how floating point gives very (very) slightly different answers based on ordering and grouping and etc.
> LLMs (our current "AI") don't use logical or mathematical rules to reason.
I'm not sure I can follow... what exactly is decoding/encoding if not using logical and mathematical rules?
Good point, I meant the reasoning is not encoded as explicit logical or mathematical rules. All the neural networks and related parts rely on e.g. matrix multiplication, which works by mathematical rules, but the models won't answer your questions based on pre-recorded logical statements, like "apple is red".
If it is running on a computer/Turing machine, then it is effectively a rule-based program. There might be multiple steps and layers of abstraction until you get to the rules/axioms, but they exist. The fact they are a statistical machine, intuitively proves this, because - statistical, it needs to apply the rules of statistics, and machine - it needs to apply the rules of a computing machine.
The pumping lemma "debunks" the myth that computers can parse nested parentheses. Yet for all practical purposes, computers can parse nested-parenthesis expressions.
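For concreteness, a tiny sketch: the pumping lemma only constrains finite-state recognizers, and a single unbounded counter (never mind a real computer) already gets past it.

```python
def balanced(s: str) -> bool:
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False       # closed more than we opened
    return depth == 0

print(balanced("((()))"))   # True
print(balanced("(()"))      # False
```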
Gödel's theorem attracts these weird misapplications for some reason. It proved that a formal system with enough power will have true statements that cannot be proven within that formal system. The human mind can't circumvent this somehow, we also can't create a formal system within our mind that can prove every true statement.
There's very little to see here with respect to consciousness or the nature of the mind.
Penrose's argument does not require humans to prove every true statement. It is of the form: "Take a program P which can do whatever humans do, and let's generate a single statement which P cannot prove, but humans can."
The core issue is that P has to be seen to be correct. So, the unassailable part of the conclusion is that knowably correct programs can't simulate humans.
He says humans can transcend the rules in a way that Gödel's theorem shows is impossible for computers.
This argument by Penrose using Godel's theorem has been discussed (or, depending on who you ask, refuted) before in various places, it's very old. The first time I've seen it was in Hofstadter's "Godel, Escher, Bach", but a more accessible version is this lecture[1] by Scott Aaronson. There's also an interview with Aaronson with Lex Friedman where he talks about it some more[2].
Basically, Penrose's argument hinges on Godel's theorem showing that a computer is unable to "see" that something is true without being able to prove it (something he claims humans are able to do).
To see how the argument makes no sense, one only has to note that even if you believe humans can "see" truth, it's undeniable that sometimes humans can also "see" things that are not true (i.e., sometimes people truly believe they're right when they're wrong).
In the end, stripping away all talk about consciousness and other stuff we "know" makes humans different from machines, and confining the discussion entirely to what Godel's theorem can say about this stuff, humans are no different from machines, and we're left with very little of substance: both humans and computers can say things that are true but unprovable (humans can "see" unprovable truths, and LLMs can hallucinate), and both also sometimes say things that are wrong (humans are sometimes wrong, and LLMs hallucinate).
By the way, "LLMs hallucinate" is a modern take on this: you just need a computer running a program that answers something that is not computable (to make it interesting, think of a program that randomly responds "halts" or "doesn't halt" when asked whether some given Turing machine halts).
(ETA: if you don't find my argument convincing, just read Aaronson's notes, they're much better).
[1] https://www.scottaaronson.com/democritus/lec10.5.html
[2] https://youtu.be/nAMjv0NAESM?si=Hr5kwa7M4JuAdobI&t=2553
I think you're being overly dismissive of the argument. Admittedly my recollection is hazy but here goes:
Computers are symbol manipulating machines and moreover are restricted to a finite set of symbols (states) and a finite set of rules for their transformation (programs).
When we attempt to formalize even a relatively basic branch of human thinking, simple whole-number arithmetic, as a system of finite symbols and rules, then Goedel's theorem kicks in. Such a system can never be complete - i.e. there will always be holes or gaps where true statements about whole-number arithmetic cannot be reached using our symbols and rules, no matter how we design the system.
We can of course plug any holes we find by adding more rules but full coverage will always evade us.
The argument is that computers are subject to this same limitation. I.e. no matter how we attempt to formalize human thinking using a computer - i.e. as a system of symbols and rules, there will be truths that the computer can simply never reach.
> Computers are symbol manipulating machines and moreover are restricted to a finite set of symbols (states) and a finite set of rules for their transformation (programs).
> [...] there will be truths that the computer can simply never reach.
It's true that if you give a computer a list of consistent axioms and restrict it to only output what their logic rules can produce, then there will be truths it will never write -- that's what Godel's Incompleteness Theorem proves.
But those are not the only kinds of programs you can run on a computer. Computers can (and routinely do!) output falsehoods. And they can be inconsistent -- and so Godel's Theorem doesn't apply to them.
Note that nobody is saying that it's definitely the case that computers and humans have the same capabilities -- it MIGHT STILL be the case that humans can "see" truths that computers will never be able to. But this argument involving Godel's theorem simply doesn't work to show that.
I don’t see the logic of your argument. The fact that you can formulate inconsistent theories - where all falsehoods will be true - does not invalidate Gödel’s theorem. How does the fact that I can take the laws of basic arithmetic and add the axiom “1 = 0” to my system mean that Gödel doesn’t apply to basic arithmetic?
Godel's theorem only applies to consistent systems. From Wikipedia[1]:
If a system is inconsistent, the theorem simply doesn't have anything to say about it. All this means is that an "inconsistent" program is free to output unprovable truths (and obviously also falsehoods). There's no great insight here, other than trivially refuting Penrose's claim that "there are truths that no computer can ever output".
[1] https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
You’re equating computer programs producing “wrong results” and the notion of inconsistency - a technical property of formal logic systems. This is not what inconsistency means. An inconsistent formalization of human knowledge in the form of a computer program is trivial and uninteresting - it just answers “yes that’s true” to every single question you ask it. Such formalizations are not interesting or even relevant to the discussion or argument.
I think much of the confusion arises from mixing up the object language (computer systems) and the meta language. Fairly natural since the central “trick” of the Gödel proof itself is to allow the expression of statements at the meta level to be expressed using the formal system itself.
> An inconsistent formalization of human knowledge in the form of a computer program is trivial and uninteresting - it just answers “yes that’s true” to every single question you ask it.
That's only true if you make the program answer by following the rules of some logic that contains the principle of explosion. Not all systems of logic are like that. A computer could use fuzzy logic. It could use a system we haven't thought of yet.
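For instance, a minimal fuzzy-logic sketch (illustrative only): truth values live in [0, 1], so a "contradiction" is just another value and nothing forces every other statement to become true.

```python
def f_not(a: float) -> float:
    return 1.0 - a

def f_and(a: float, b: float) -> float:
    return min(a, b)

a = 0.55   # "A" is a bit more true than false
print(f_and(a, f_not(a)))   # 0.45: "A and not A" is partly true, and no explosion follows
```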
You're imposing constraints on how a computer should operate, and at the same time allowing humans to "think" without similar constraints. If you do that, you don't need Godel's theorem to show that a human is more capable than a computer -- you just built computers that way.
I’m not imposing any constraints - the point is that inconsistent formulations are not interesting or relevant to the argument no matter what system of rules you look at. This has nothing to do with any particular formalism. I think the difficulty here is that words like completeness and inconsistency have very specific meanings in the context of formal logic - which do not match their use in everyday discussion.
I think we're talking past each other at this point. You seem to have brushed past without acknowledging my point about systems without the principle of explosion, and I'm afraid I must have missed one or more points you tried to make along the way, because what you're saying doesn't make much sense to me anymore.
This is probably a good point to close the discussion -- I'm thankful for the cordial talk, even if we ultimately couldn't reach common ground.
Yes! I think this medium isn’t helpful for understanding here but it’s always pleasant to disagree while remaining civil. It doesn’t help that I’m trying to reply on my phone (I’m traveling at moment) - in an environment which isn’t conducive to subtle understanding. All the best to you!
> We can of course plug any holes we find by adding more rules but full coverage will always evade us.
So suppose clever software can automate the process of plugging these holes. Is it then like the human mind? Are there still holes that cannot be plugged, not due to lack of cleverness in the software but due to limitations of the hardware, sometimes called the substrate?
> The argument is that computers are subject to this same limitation. I.e. no matter how we attempt to formalize human thinking using a computer - i.e. as a system of symbols and rules, there will be truths that the computer can simply never reach.
If computers are limited by their substrate, it seems like humans might be limited by their substrate too, though the limits might be different.
Yes I think this is one way to attack the argument but you have to break the circularity somehow. Many of the dismissals of the Hofstadter/Penrose argument I’ve read here, I think, do not appreciate the actual argument.
Penrose is claiming there is new physics which is not computable, but to my knowledge Penrose offers no experimental evidence for it.
> (from 11:43) "...but new physics of a particular kind. What I'm claiming from the Gödel argument, you see (this is the plot which I think has got lost), what I claim is that the physics that is involved in conscious thinking has to be non-computable physics. Now the physics we know (there's a little bit of a glitch here because it's not completely clear), but as far as we can see the physics we know is computable. You see, what about general [...]"
link for 11:43: https://youtu.be/biUfMZ2dts8?si=Epe3gmfCzwhj_g41
Without Penrose giving solid evidence, people making counter-arguments tend to get dismissive and then sloppy. Why put in the time to make well-tuned arguments filled with evidence when the other side does not bother, after all?
He misrepresents Penrose's argument. I remember Scott Aaronson met Penrose later on, and there was a clarification, though they still don't agree.
In any case, here's a response to the questions (some responses are links to other comments in this page).
> Why does the computer have to work within a fixed formal system F?
The hypothesis is that we are starting with some fixed program which is assumed to be able to simulate human reasoning (just as one starts with a supposed largest prime, assuming there are finitely many primes, in order to show that there are infinitely many). Of course, one can augment it to make it more powerful, and this augmentation is in fact how we show that the original system is limited.
Note that even a self-improving AI is itself a fixed process; we apply the reasoning to this program, including its self-improvement capability.
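To make the primes analogy concrete, here is a tiny sketch (Python, purely my own illustration, not anything from Penrose) of the same pattern: take an object assumed to be complete and fixed, and use it to construct a witness that lies outside it.

    def beyond(primes):
        # 'primes' is claimed to be the complete, finite list of all primes.
        # The product of the list plus one is not divisible by any prime in
        # the list, so its smallest nontrivial divisor is a prime outside
        # the list: the claimed completeness refutes itself.
        n = 1
        for p in primes:
            n *= p
        n += 1
        for d in range(2, n + 1):
            if n % d == 0:
                return d

    print(beyond([2, 3, 5, 7]))  # 211, a prime missing from the "complete" list

The Gödel version swaps "complete list of primes" for "fixed formal system F" and "product plus one" for the sentence G(F), but the shape of the argument is the same.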
> Can humans "see" the truth of G(F)?
https://news.ycombinator.com/item?id=43238449
> one only has to note that even if you believe humans can "see" truth, it's undeniable that sometimes humans can also "see" things that are not true
https://news.ycombinator.com/item?id=43238417
The first question is not the question I'd like answered. What I want to know is this:
> Why does the computer have to work within a CONSISTENT formal system F?
Humans are allowed to make mistakes (i.e., be inconsistent). If we don't give the computer the same benefit, you don't need Godel's theorem to show that the human is more capable than the computer: it is so by construction.
Take a group of humans who each make observations and deductions, possibly faulty. Then they do extensive checking of their conclusions by interacting with other humans, computer proof assistants, etc. Let us call this process HC.
A program which can simulate individual humans should also be able to simulate HC - ie. generate proofs which are accepted by HC.
---
Penrose's conclusion in the book is weaker - that a knowably correct process cannot simulate humans.
We now have LLMs which hallucinate and so are not knowably correct. But with reasoning-based methods they can try to check their output and arrive at better conclusions, as is happening currently in popular models. This is fine, and is allowed by Penrose's argument; the argument is applied to the 'generate, check, correct' process as a whole.
I don't have a problem with that.
(I don't see how that relates to Godel's theorem, though. If that's the current position held by Penrose, I don't disagree with him. But seeing the post's video makes me believe Penrose still stands behind the original argument involving Godel's theorem, so I don't know what to say...)
That a knowably correct program can't simulate human reasoning is basically Godel's theorem. One can use a diagonalization argument, similar to Godel's proof, for programs which try to decide which Turing machines halt. Given a program P which is only partially applicable but always correct, we can use diagonalization to construct P', a more widely applicable and still correct program, ie. P' can say that some Turing machine will not halt where P is undecided. So this doesn't involve any logic or formal systems, but it is more general - Godel's result is a special case, as the fact that a Turing machine halts can be encoded as a theorem, and provability in a formal system can be encoded as a Turing machine.
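Here is a toy sketch of that diagonal step (Python; everything below is mine and drastically simplified, not the actual construction). Start from a checker P that is partial but never wrong, build a program D that consults P about its own source, and note that the true fact about D which P misses can be bolted on to get a strictly stronger P'. In this toy, P answers "unknown" explicitly, so the gained fact is that D halts; in the version described above, where an undecided P simply never answers, the gained fact is that D never halts. Same trick either way.

    def P(source):
        # Toy partial-but-correct halting checker: it only commits itself
        # when the program text obviously contains no loop at all.
        if "while" not in source and "for" not in source:
            return "halts"
        return "unknown"

    # The diagonal program D: ask P about D's own source and do the opposite
    # of any definite answer. Because P is correct, it can only say "unknown"
    # here, so D falls through the if-statement and halts.
    D_source = '''
    verdict = P(D_source)
    if verdict == "halts":
        while True:
            pass
    '''

    def P_prime(source):
        # P' is P plus the one fact the diagonal argument hands us.
        if source == D_source:
            return "halts"
        return P(source)

    print(P(D_source))        # unknown
    print(P_prime(D_source))  # halts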
Penrose, indeed, believes both the stronger claim - that a program can't simulate humans - and the weaker claim, that a knowably correct program can't simulate humans.
The weaker claim being unassailable firstly shows that most of the usual objections are not valid, and secondly that it is hard to split the difference, ie. to generate the output of HC using a program which is not knowably correct: a program whose binary is uninterpretable but which, as if by magic, only generates true theorems. Current AI systems, including LLMs, don't even come close.
This argument would fall apart if we could simulate a human mind, and there are good reasons to think we could. Human brains and their biological neurons are part of the physical world; they obey the laws of physics and operate in a predictable manner. We can replicate them in computer simulations, and we already have, although not on the scale of human minds (see [1]).
It's on Penrose and dualists to show why simulated neurons would act differently than their physical counterparts. Hand-waving about supposed quantum processes in the brain is not enough, as even quantum processes could be emulated. So far, all seems to indicate that accurate models of biological neurons behave as we expect them to.
It stands to reason, then, that if a human mind can be simulated, computers are capable of thought too.
[1] https://openworm.org/
I've read Hofstadter's "I Am a Strange Loop", which goes around those ideas too. The point is how you define consciousness (he does it in a more or less computable way, as a sort of self-referential loop), so it may be within the reach of what we are doing with AIs.
But in any case, it is about definitions - we don't have very strict ones for consciousness, intelligence and so on - and about human perception and subjectivity (the Turing Test is not so much about "real" consciousness as about whether an observer can decide if they are talking with a computer or a human).
Any theory which purports to show that Roger Penrose is able to "see" the truth of the consistency of mathematics has got to explain Edward Nelson being able to "see" just the opposite.
Consciousness, at its simplest, is awareness of a state or object either internal to oneself or in one's external environment.
AI research is centered on implementing human thinking patterns in machines. While human thought processes can be replicated, claiming that consciousness and energy awareness cannot be similarly emulated in machines does not seem like a reasonable argument.
shorter roger penrose (tl/dw or tl/hi):
1. assume consciousness is not computable. therefore computing machines cannot be conscious.
2. Corollary: assume intelligence requires consciousness. therefore computing machines cannot be AI
If the Universe is computable, then human thinking is computable. All due respect to Penrose for his stellar achievements, but frankly the implications of Turing completeness, the halting problem, the Church/Turing hypothesis, and the point of Godel's Theorem seem to be things he does not fully understand.
I know this sounds cheeky, but we all have brains that are good at some things and have failure modes as well. We are certainly seeing shadows of human-type fallibility in neural nets, which somehow seem to have a lot of similarities to human thinking.
Brains evolved in the physical world to solve problems and help organisms survive, thrive, and reproduce. Evolution is the product of a massive search over potential physical arrangements. I see no reason why the systems we develop would operate on drastically different premises.
It's sad to see the interviewer wasting the opportunity to interview Penrose. I found Lex Fridman does a much better job: https://www.youtube.com/watch?v=hXgqik6HXc0
Comically unqualified interviewer - where'd they find this guy?
It’s actually Sacha Baron Cohen in a new mask.
I'm really looking forward to the point where we can put 3d glasses on a person and give them a simulated reality that is indistinguishable from reality, but composed entirely of ML-driven identities. We can already make photorealistic images on computers, produce convincing text, video, and audio, complex behavior, goal-seeking, etc, and one major trend in ML is combining all of those into models that could, in principle, run realtime inference.
I don't worry about philosophical zombies, dualism, quantum consciousness, or anything like that. I just want to get to the point past the uncanny valley - call it the spooky jungle - that cannot be distinguished from reality.
Sounded like Roger was trying to make the "Chinese room argument"[0]. And here is a humorous counterpoint[1].
[0] https://en.m.wikipedia.org/wiki/Chinese_room
[1] https://www.reddit.com/r/maybemaybemaybe/comments/10kmre3/ma...
If anyone thinks the human mind is computable, tell me the location of even one particle.
OK, try this for size, bearing in mind that it is a heuristic argument.
No one can "know", with certainty, the location of any particle. Or, to be slightly more accurate, the more we know of its location, the less we know of its momentum. This is essentially Heisenberg/QM 101.
But we see the results of "computation" all around us, all the time: Any time a chemical or physical reaction settles to an observable result, whether observed by one of us, that is, a human, or another physical entity, like a tree, a squirrel, a star, etc. This is essentially a combination of Rovelli's Relational QM and the viewing of QM through an information centric lens.
In other words, we can and do have solid reality at a macro level without ever having detailed knowledge (whatever that might mean) at a micro/nano/femto level.
Having said that, I read your comment as implying that "the human mind" (in quotes because that is not a well defined concept, at least not herein; if we can agree on an operational definition, we may be able to go quite far) is somehow disconnected from physical reality, that is, that you are suggesting a dualist position, in which we have physics and physical chemistry and everything we get from them, e.g., genetics, neurophysiology, etc., all based ultimately on QM, and we have "consciousness" or "the mind" as somehow being outside/above all of that.
I have no problem with that suggestion. I don't buy it, and am mostly a reductionist at heart, so to speak, but I have no problem with it.
What I'd like to see in support of that position would be repeatable, testable statements as to how this "outside/above" "thing" somehow interacts with the physical substrate of our biological lives.
Preferably without reference to the numinous, the ephemeral, or the magical.
Honestly, I really would like to see this. It would represent one of the greatest advances in knowledge in human history.
I'm only talking about the physical world. Phenomena that don't correspond to anything computable are very common (the next five seconds of the amplifier noise heard on your headphones, for instance) and are dealt with by being ignored or averaged out. Collective motion is somewhat predictable and includes things like popular opinion or temperature, but individual deviations aren't covered.
The problem with translating that into proof of dualism is that everything outside the computable looks the same. A hypothesis is something you can assume in order to compute a prediction, so if any hypothesis is true, the phenomenon must be computable. If the phenomenon is not computable, no computable hypothesis will match. The second you ascribe properties to a soul that distinguish it from randomness, or properties of randomness that distinguish it from free will, you've made one or the other computable, and whichever is computable won't match reality - if we suppose we're looking for something outside of rational explanation and not a "second material."
Here's a concrete example. If you had access to a Halting oracle, it would only be checkable on Turing machines that you yourself could decide the halting problem for. Any answers beyond those programs wouldn't match any conceivable hypothesis.
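A toy version of that point (Python, all names made up): model programs as generators that yield once per step, and suppose some black box claims to be a halting oracle. The only verdicts we can ever confirm are "halts" verdicts on programs we could have run to completion ourselves; a "never halts" verdict survives any finite amount of testing without ever being verified.

    def counts_to_ten():
        for _ in range(10):
            yield

    def loops_forever():
        while True:
            yield

    def claimed_oracle(program):
        # Opaque black box claiming to solve the halting problem; we have no
        # insight into how it reaches its verdicts.
        return "halts" if program is counts_to_ten else "never halts"

    def check(oracle, program, budget=100_000):
        verdict = oracle(program)
        steps = 0
        for _ in program():
            steps += 1
            if steps > budget:
                # Out of patience: consistent with "never halts", but that
                # is a failure to refute, not a confirmation.
                return "inconclusive"
        return "confirmed" if verdict == "halts" else "refuted"

    print(check(claimed_oracle, counts_to_ten))   # confirmed
    print(check(claimed_oracle, loops_forever))   # inconclusive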
honestly this whole argument about Penrose, Gödel, non-computability etc. feels way too complicated for what seems pretty obvious to me: humans are complex biology with basic abstractions. We take in sensory data, process it moment by moment, store it at varying abstraction levels (memories, ideas, feelings), and evolve continuously based on new inputs; evolution itself is just genetic programming responding to external conditions. It looks random sometimes, but that's because complexity makes it difficult to simulate fully; small variations explode into chaos and we call it randomness, which doesn't mean it fundamentally is. The whole thing about consciousness being somehow outside computation just feels like confusion between being the system (external view) and experiencing it from within (internal subjective view). That doesn't break computation: there's no fundamental contradiction or non-computability introduced by subjective experience, randomness, or complexity, just different perspectives within the system. If you want to understand, say, genius, go into neurology and look at raw horsepower and abstract thinking: neurological capacity (the hardware), neurodiversity (the software style), nurture (the training data and tuning environment) (Mottron & Dawson, Baron-Cohen, Jensen & Deary & Haier). It's part of why I personally think we're really at the point where spiritual/godly exploration should be the most important thing, but that sounds woo-woo and crazy, I suppose. (I probably just oversimplified a bunch of stuff I don't fully understand.)
> The interviewer is barely treading water in the ocean of Penrose's thought. He mistakes his spasmodic thrashing for swimming.
The comments below this video are utterly insane. Roger Penrose seems to have a fanatical cult attached to him.
Penrose is a dualist, he believes the mind is detached from the material world.
He has been desperately seeking proof of quantum phenomena in the brain, so he may have something to point to when asked how this mind, supposedly external to the physical realm, can pilot our bodies.
I am not a dualist, and I don't think what Penrose has to say about AI or consciousness holds much value.
> He has been desperately seeking proof of quantum phenomena in the brain, so he may have something to point to when asked how this mind, supposedly external to the physical realm, can pilot our bodies.
I have never seen anyone with this approach try to tackle how something non-physical controls or interacts with the physical without also being what we normally call physical, at least not in a rigorous treatment of the issue. It always seems to lead to inconsistency, or to a reformulation of existing definitions and meanings without producing anything new.
I imagine what they really mean is that there's something that can't be built by us (and maybe that can't even be "plugged into" by things we build), that we can't build anything that carries out the same function, and that this is what determines the "human consciousness".
So, to my eyes, typical baseless speculation
Well someone has lost the plot.
What is intelligence if not computation? Even if it turns out our brains require quantum computation in microtubules (unlikely, imho), it's still computation.
Sure, it has limits and runs into paradoxes, so what? The fact that "we can see" the paradox but somehow maths can't is just a Chinese-room type argument, conflating different "levels" of the system.
If you ever had the misfortune to read The Emperor's New Mind (published 1989) you would know that he has not had a plot to lose for quite some time now.
I read it around that time and thoroughly enjoyed it; IIRC it is basically a pop-sci intro to number theory, physics, cosmology, biology, etc., with only the last couple of chapters attempting to tie it all together into the quantum-consciousness stuff.
He seems to have been stuck in that groove ever since, though.
If an elderly but distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong.
- Arthur C Clarke
For those of us without time to watch a video - what is the most important AI myth?
The question is answered in the full title:
> "Gödel's theorem debunks the most important AI myth. AI will not be conscious"
Same statement from Penrose here with Lex Fridman: "Consciousness is Not a Computation" [1].
[1] https://www.youtube.com/watch?v=hXgqik6HXc0
that carbon chauvinism isn't real.
Daniel Dennett thoroughly debunks Penrose's argument in Chapter 15 of Darwin's Dangerous Idea. Quoting reviewers of a Penrose paper ... "quite fallacious," "wrong," "lethal flaw" and "inexplicable mistake," "invalid," "deeply flawed." "The AI community [of 1995] was, not surprisingly, united in its dismissal of Penrose's argument."
It strikes me as quite arrogant to assume that those are the only possibilities. People, even experts in a field, disagree about topics and the implications of evidence all the time. Arguing that honest disagreement must reduce down to one of the three categories you list is basically saying "my point of view is so obviously correct that only bad thinkers or bad people could disagree". But that's almost certainly not the case.
I suspect you're conflating the concepts of intelligence and consciousness. It is completely unsurprising that a Turing Machine can have intelligence.
It is trivially true that a Turing machine can simulate a human mind - this follows from the quantum Church-Turing thesis. Since a Turing machine can solve any arbitrary system of Schrödinger equations, it can solve the system describing every atom in the human body.[1]
The problem is that this might take more energy than the Sun for any physical computer. What is far less obvious is whether there exist any computable higher-order abstractions of the human mind that can be more feasibly implemented. Lots of layers to this - is there an easily computable model of neurons that encapsulates cognition, or do we have to model every protein and mRNA?
It may be analogous to integration: we can numerically integrate almost anything, but most functions are not symbolically integrable and most differential equations lack closed-form solutions. Maybe the only way to model human intelligence is "numerical."
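For instance (a small Python sketch of the analogy, nothing more): exp(-x^2) has no elementary antiderivative, so no closed form will hand you its integral, yet a crude numerical rule gets as close as you like. The suggestion above is that the mind might only be capturable in that second, brute-force sense.

    import math

    def simpson(f, a, b, n=1000):
        # Composite Simpson's rule with n (even) subintervals.
        h = (b - a) / n
        total = f(a) + f(b)
        for i in range(1, n):
            total += (4 if i % 2 else 2) * f(a + i * h)
        return total * h / 3

    # There is no symbolic antiderivative for exp(-x^2), but numerically:
    print(simpson(lambda x: math.exp(-x * x), 0.0, 1.0))  # ~0.746824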
In fact I suspect higher-order cognition is not Turing computable, though obviously I have no way of proving it. My issue is very general: Turing machines are symbolic, and one cannot define what a symbol actually is without using symbols - which means it cannot be defined at all. "Symbol" seems to be a primitive concept in humans, and I don't see how to transfer it to a Turing machine / ChatGPT reliably. Or, as a more minor point, our internal "common sense physics simulator" is qualitatively very powerful despite being quantitatively weak (the exact opposite of Sora/Veo/etc), which again does not seem amenable to a purely symbolic formulation: consider "if you blow the flame lightly it will flicker, if you blow hard it will go out." These symbols communicate the result without any insight into the computation.
[1] This doesn't have anything to do with Penrose's quantum consciousness stuff, it just assumes humans don't have metaphysical souls.
Mr. Penrose stands as a living testament to the curious duality of human genius: one can wield equations like a virtuoso, bending the arc of physics itself through sheer mathematical brilliance, while simultaneously tripping over philosophical nuance with all the grace of a tourist fumbling through a subway turnstile. A titan in the realm of numbers, yet a dilettante in the theater of ideas.
ps: i'd like to take a moment to thank DeepSeek for helping me with the specific phrasing of this critique