Sketch of the line of argumentation, to be developed in a sequence of articles. The plan is to write each article in such a way that it appears to be almost trivial. The argument is broken up into very small steps that can be understood without special knowledge of mathematics or computer science. The line of thought should be presented in a form that shows it is actually simple and trivial (which it is).
Programs as finite texts over finite alphabets. Each program contains only a finite amount of information.
Programming languages – Interpreters – Special purpose languages – Universal programming languages – Turing machines and other mathematical “programming languages”
Computable functions. Programs computing functions. Functions as (infinite) lists of input-output pairs. Programs of computable functions as compressed representations of such lists. Regularity in such lists expressed by the programs.
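The idea of a program as a compressed representation of an infinite list of input-output pairs can be illustrated with a minimal sketch (the function name is illustrative, not from the source):

```python
def square(n: int) -> int:
    """A few characters of program text standing in for the infinite
    list of pairs (0, 0), (1, 1), (2, 4), (3, 9), ..."""
    return n * n

# Any finite prefix of the infinite list can be regenerated from the
# program -- the regularity of the list is captured by the program text.
prefix = [(n, square(n)) for n in range(5)]
print(prefix)  # [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]
```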
Representation of arbitrary data as natural numbers. Representation of programs by natural numbers. Gödel numbers. Results valid for functions (and programs) of natural numbers are valid for functions (and programs) of arbitrary data.
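One standard building block for such encodings is the Cantor pairing function, a bijection between pairs of natural numbers and natural numbers; iterating it encodes arbitrary finite data (strings, trees, program texts) as single numbers. A minimal sketch:

```python
from math import isqrt

def pair(x: int, y: int) -> int:
    """Cantor pairing function: a bijection N x N -> N."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple:
    """Inverse of the Cantor pairing function."""
    w = (isqrt(8 * z + 1) - 1) // 2   # index of the diagonal containing z
    t = w * (w + 1) // 2              # first number on that diagonal
    y = z - t
    return (w - y, y)

# Round trip: every pair gets a unique number, and back.
print(pair(17, 4), unpair(pair(17, 4)))  # 235 (17, 4)
```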
Programs computing total functions of natural numbers are not Turing-enumerable. Proof of this by the diagonal method. Constructive nature of this proof. So every algorithm producing programs computing total functions is incomplete. The diagonalization method can always be used to produce a further total computable function and a program computing it, but although this operation is itself Turing-computable, integrating it into the algorithm yields an incomplete program again. So it must be applied “from the outside”, not under the control of the algorithm itself.
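The diagonal step can be sketched in a few lines. Assume any algorithmic enumeration of programs computing total functions (here modeled as Python functions; `enumerate_programs` is a stand-in, not from the source):

```python
def enumerate_programs(i: int):
    """Stand-in for an algorithmic enumeration p_0, p_1, ... of
    programs, each computing a total function."""
    table = [lambda n: 0, lambda n: n, lambda n: n * n]
    return table[i % len(table)]

def diagonal(n: int) -> int:
    """Total and computable, yet differs from p_n at input n:
    diagonal(n) = p_n(n) + 1 != p_n(n)."""
    return enumerate_programs(n)(n) + 1

# diagonal differs from every enumerated function, so the enumeration
# was incomplete -- and the same construction works for *any* such
# enumeration, which is the constructive content of the proof.
print([diagonal(n) for n in range(3)])  # [1, 2, 5]
```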
Side-step: Turing-enumerability of the programs of a programming language (programming languages are decidable). The halting problem for Turing machines. Impossibility of proving equivalence of arbitrary programs by an algorithm. Impossibility of proving correctness of arbitrary software by an algorithm. Programming is always risky and error-prone.
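The undecidability of halting can be sketched by the standard reductio: assume a total decider `halts` existed (a hypothetical name; no such function can be implemented), and derive a contradiction:

```python
def halts(prog, inp) -> bool:
    """Hypothetical halting decider -- assumed for the sake of the
    argument, not implementable."""
    raise NotImplementedError

def paradox(prog):
    # If prog halts when run on its own text, loop forever; else halt.
    if halts(prog, prog):
        while True:
            pass
    return

# paradox(paradox) would halt if and only if it does not halt --
# a contradiction, so no such `halts` can exist.
```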
The set of programs producing programs computing total functions is again not Turing-enumerable. Sketch of proof. Productive sets and productive functions. The set of such programs is a productive set. Trying to integrate the productive function into the algorithm again yields an incomplete program. So again, the extension process must be applied from the outside, not under the control of the algorithm itself.
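The extension step can be sketched one level up: given any program that emits programs computing total functions, a computable operation yields a total function it misses, and from that a strictly larger producer. All names here are illustrative, not from the source:

```python
def diagonalize(producer):
    """Return a total function missed by `producer` (diagonal step)."""
    def d(n: int) -> int:
        return producer(n)(n) + 1
    return d

def extend(producer):
    """A new producer whose output 0 is the missed function. Applying
    `extend` is itself computable, but its result is again an
    incomplete producer of the same kind -- hence the step must be
    taken from outside the producer."""
    missed = diagonalize(producer)
    def new_producer(i: int):
        return missed if i == 0 else producer(i - 1)
    return new_producer

# Example producer p_i(n) = i * n (all total), then one extension step:
base = lambda i: (lambda n: i * n)
better = extend(base)
print(better(0)(3))  # diagonal value: base(3)(3) + 1 = 10
```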
Definition of creative systems. Creative systems cannot be algorithms.
Because of the possibility of Gödelization (mapping of data onto natural numbers), all these results are valid for programs processing arbitrary types of data.
Any kind of knowledge can be viewed as programs computing total functions or as programs producing such programs. Declarative knowledge can be viewed as programs formulated in a special-purpose programming language and run by procedures acting as its interpreter. Applying such knowledge can be viewed as the production and subsequent execution of programs. All these programs halt after some time, so they can be viewed as programs computing total functions.
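The view of declarative knowledge as a program in a special-purpose language, run by an interpreter, can be sketched with a tiny forward-chaining rule system (rules and names are illustrative, not from the source):

```python
# Knowledge in a minimal declarative language: "if all conditions
# hold, the conclusion holds."
rules = [
    ({"rain"}, "wet_street"),
    ({"wet_street", "freezing"}, "slippery"),
]

def interpret(rules, facts):
    """Forward-chaining interpreter. It terminates because the set of
    derivable facts is finite, so this 'program' computes a total
    function of its input."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(interpret(rules, {"rain", "freezing"})))
# ['freezing', 'rain', 'slippery', 'wet_street']
```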
Creativity (adding new programs to a set of programs that is not Turing-enumerable) is the core of general intelligence. A generally intelligent system cannot be an algorithm but must be a creative system. Any algorithm (even an algorithm producing programs) is limited. It contains a limited amount of knowledge that has a limited reach. General intelligence requires a mechanism to extend the set of programs (the knowledge) but this cannot be part of the system as far as it can be viewed as an algorithm.
Algorithms and formal theories are equivalent notions. There cannot be formal theories of creative systems. If science is about describing systems with fixed laws, creative systems are outside its scope. They are inside the scope of the wider field of “Wissenschaft” (scholarship in the broad German sense), however.
Artificial intelligence may be possible but truly intelligent systems cannot be algorithms. They must contain an extension mechanism not under the control of their algorithmic part.
It is interesting to note that the basic results from computability theory were already known in the 1950s and 1960s (and even earlier) when the traditional AI paradigm was created. The traditional AI paradigm ignored these insights. This is the reason it developed into a dead end. All contemporary “AI” systems can be described as algorithms. Where they contain learning mechanisms, these are limited. It would be interesting to work out the history of early AI to see how this happened. Why were the results of people like Gödel, Turing, Kleene etc. ignored by AI, instead of being turned into the core of the discipline, defining its aim as the development of creative systems, i.e. systems that can go beyond algorithms? Has this been worked out by any historian of science already?