In re-addressing points made by Roger Penrose about Artificial Intelligence in his book The Emperor’s New Mind (1989), a new view arises that gives formal structure to a new model of neurological systems.
In responding to statements on Artificial Intelligence made by Roger Penrose in his book The Emperor’s New Mind (1989), I hope to narrow some of these arguments and, from that, give substance to a new model of neurological systems. I will focus on three specific areas: 1. the Turing model of intelligence in computers, 2. Gödel’s Theorem and numbers, and 3. organic and non-organic systems.
In his paper “Computing Machinery and Intelligence” (1950), Alan Turing proposes the ‘imitation game’, in which a computer’s ability to pass for a human being serves as the standard for machine intelligence. Penrose finds the case for the Turing test as a ‘valid indication of the presence of thought, intelligence, understanding, or consciousness’ to be ‘actually quite a strong one’ (Penrose, 1989: 9), and states: ‘Thus I am, as a whole, prepared to accept the Turing test as a roughly valid one in its chosen context’ (Penrose, 1989: 10). While Penrose is careful to point out the weaknesses of the Turing test, such as the fact that a computer must be able to imitate a human while a human does not have to imitate a computer (Penrose, 1989: 9), he has still accepted it as a test of a machine’s capacity for the following human properties: thought, intelligence, understanding, and consciousness.
In my paper “The Turing Machine: A Question of Linguistics?” I argue that the Turing test is a test of language rather than of intellect, and that factors such as the type of language used and the physical and cultural abilities and knowledge of a human subject may cause a person to ‘fail’ this type of intelligence test (Tice, 1997/2004). Penrose has placed too much weight on a test burdened with such extraneous factors and seems to miss the point that real intelligence is something beyond a game.
Penrose uses Gödel’s Theorem as a ‘proof’ that mathematical insight is by nature non-algorithmic (Penrose, 1989: 416). Unfortunately, Penrose conflates two distinct claims: because Gödel’s Theorem states that not all propositions of an axiomatic system can be proved within that system, he takes the thought process behind such reasoning to be non-algorithmic; he then treats mathematical ‘insight’, the act of recognizing that a process is non-algorithmic, as itself non-algorithmic, and offers this as the reason to suppose that human thought is non-algorithmic (Penrose, 1989: 417 and 429).
In my book Formal Constraints to Formal Languages (In Press) I address the question of Gödel’s Theorem and Hilbert’s axiomatic foundations, noting that Hilbert’s program did not provide an ‘absolute’ guarantee of the provability of propositions of number theory (Tice, In Press: 9). There the notion of a Universal Truth Machine [UTM] is used to present the basic procedure behind Gödel’s Incompleteness Theorem (Tice, In Press: 13). An interesting result occurs when I substitute the words ‘will never’ with the word ‘may’ in the following sentence from the UTM argument:
UTM will never say G is true.
This yields the following sentence:
UTM may say G is true.
What results is that the Universal Truth Machine [UTM] becomes genuinely universal, and the primary strength of Gödel’s Incompleteness Theorem shifts: some propositions can be axiomatically proved and some may not be, yet the robustness of the axiomatic system stays intact because it has accounted for such variants.
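The UTM procedure referred to above can be sketched informally in code. The following is a minimal toy illustration, not the formal argument from the book: the function `utm_says_true` and the string `G` are hypothetical stand-ins for a sound Universal Truth Machine and its Gödel sentence. The sketch shows why, under the original ‘will never’ wording, a sound UTM must stay silent about G and is therefore incomplete.

```python
# Toy sketch of the UTM form of Gödel's Incompleteness Theorem.
# 'utm_says_true' and 'G' are hypothetical names for illustration only.

def utm_says_true(statement: str) -> bool:
    """Stand-in for a sound Universal Truth Machine: it asserts a
    statement only when it can prove it. A sound UTM can never
    assert G without contradicting itself, so it returns False."""
    return False

# The Gödel-style sentence: it asserts its own unprovability by the UTM.
G = "UTM will never say G is true."

# If the UTM asserted G, G would be false and the UTM unsound.
# So a sound UTM stays silent about G; but then G is in fact true,
# and the UTM has missed a truth: it is sound yet incomplete.
utm_asserts_G = utm_says_true(G)
G_is_true = not utm_asserts_G  # G states exactly that the UTM never asserts it

assert not utm_asserts_G and G_is_true
```

Replacing ‘will never’ with ‘may’ in G removes this self-defeating structure: a sentence reading “UTM may say G is true” no longer forces the machine into silence, which is the substitution discussed above.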
Professor Bradley Tice (2005): A Theory on Neurological Systems. Munich: GRIN Verlag. https://www.grin.com/document/206681