In Search of a New Philosophical Term

I am looking for a new philosophical term: a short (perhaps one-, two- or three-syllable) word for an entity that exists but cannot be described completely by any single formal theory (or algorithm). I believe human beings, their societies and cultures are such entities, but I think many physical systems are such entities as well. The word “system” actually does not fit here because it has a connotation of something systematic, that is, something that can be captured completely by some theory. I am thinking of physical entities for which the set of equations describing them cannot be solved except for special cases, i.e. where the mathematical description contains functions that are not Turing-computable. In mathematics, such entities are known: entities for which it can be shown that every formal theory of them is incomplete. You may always be able to extend a given theory, but the resulting theory will be incomplete again. Such entities cannot be exhaustively described in terms of a formal theory. If physical “systems” of this kind exist, they cannot be perfectly simulated by any algorithm. They would generate new information (new with respect to any given theory). Something as simple as a set of three bodies moving around each other might already be such a system (it looks like the “three-body problem” cannot be solved exactly). Kurt Ammon called (a certain kind of) such objects “creative systems”, but I want to avoid the term “system”. Any suggestions? It might be a synthetic neologism, but it should capture the idea in such a way that it has a chance to catch on.

Much of science is built on the tacit assumption that everything can be described in terms of formal theories. Everything is a “system” in this sense. But this is just a hypothesis, and I think it is wrong. In mathematics, there are mathematical entities that are not completely formalizable (i.e. they have more true properties than can be derived in any single theory about them). If such things exist in mathematics, there is no a priori reason they cannot exist in physical reality as well. What exists and what can be formalized is not necessarily the same. I want a short and crisp term for the unformalizable. The hypothesis that everything that exists is formalizable is built into our language. There is no short, simple word for the non-formalizable (yet). There is a large range of possibilities we cannot see because our language has been restricted.


Project Sketch

Sketch of the line of argumentation, to be developed in a sequence of articles. The plan is to write each article in such a way that it appears to be almost trivial. The argument is broken up into very small steps that can be understood without special knowledge of mathematics or computer science. The line of thought should be presented in a form that shows it is actually simple and trivial (which it is).

Programs as finite texts over finite alphabets. Each program only contains a finite amount of information.
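As a minimal illustration (a toy sketch, nothing more): a program really is nothing but a finite string, so its information content is bounded by its length.

```python
# A program is just a finite text over a finite alphabet. Here the
# "program" is a Python source string; the information it carries is
# bounded by its length in bits.
source = "def double(n):\n    return 2 * n\n"
print(len(source), "characters -", len(source) * 8, "bits at most")
```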

Programming languages – Interpreters – Special purpose languages – Universal programming languages – Turing machines and other mathematical “programming languages”

Computable functions. Programs computing functions. Functions as (infinite) lists of input-output pairs. Programs of computable functions as compressed representations of such lists. Regularity in such lists expressed by the programs.
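A small sketch of the two views (`square` is an arbitrary example function, chosen only for illustration):

```python
# A computable function, viewed two ways: as an (infinite) list of
# input-output pairs, and as a short program that generates that list.
def square(n):
    """The short program: a compressed representation of the list."""
    return n * n

# The "list" view: we can only ever materialize a finite prefix of it.
pairs = [(n, square(n)) for n in range(10)]
print(pairs)  # [(0, 0), (1, 1), (2, 4), (3, 9), ...]
```

The regularity in the list (every output is the square of its input) is exactly what the few characters of the program express.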

Representation of arbitrary data as natural numbers. Representation of programs by natural numbers. Gödel numbers. Results valid for functions (and programs) of natural numbers are valid for functions (and programs) of arbitrary data.
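A hedged sketch of one such encoding (byte-level rather than the classical prime-power Gödel numbering, but it makes the same point: any finite text maps to a natural number and back):

```python
# Encode any finite text as a natural number and decode it again, so
# results about functions of natural numbers carry over to programs
# and arbitrary data.
def godel_number(text: str) -> int:
    return int.from_bytes(text.encode("utf-8"), "big")

def decode(n: int) -> str:
    length = (n.bit_length() + 7) // 8
    return n.to_bytes(length, "big").decode("utf-8")

program = "def double(n): return 2 * n"
n = godel_number(program)
assert decode(n) == program
print(n)  # one (large) natural number standing for the whole program
```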

Turing-enumerability.
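To make the notion concrete, a minimal sketch: the set of all finite texts over a fixed alphabet is Turing-enumerable, because an algorithm can list them in length-lexicographic order (the two-letter alphabet is an arbitrary choice):

```python
# Turing-enumerability: a set is Turing-enumerable if some algorithm
# lists all of its members, one after another.
from itertools import count, product

def enumerate_texts(alphabet="ab"):
    for length in count(0):                      # lengths 0, 1, 2, ...
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)

gen = enumerate_texts()
print([next(gen) for _ in range(7)])  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```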

Programs computing total functions of natural numbers are not Turing-enumerable. Proof of this by the diagonal method. Constructive nature of this proof. So every algorithm producing programs computing total functions is incomplete. The diagonalization method can always be used to produce another computable function and the program computing it, but although this operation is Turing-computable itself, integrating it into an algorithm yields an incomplete program again. So it must be applied “from the outside”, not under the control of the algorithm itself.
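A toy, finite stand-in for the diagonal construction (in the real argument, `listed` would be a computable enumeration of all the programs the algorithm produces, not a hard-coded Python list):

```python
# Given any listing of total functions, build one that differs from
# each of them at the diagonal.
listed = [lambda n: 0, lambda n: n, lambda n: n * n]  # toy "enumeration"

def diagonal(n: int) -> int:
    return listed[n](n) + 1   # differ from the n-th function at input n

# diagonal disagrees with every listed function at its own index,
# so it cannot occur anywhere in the listing:
for i, f in enumerate(listed):
    print(diagonal(i), "!=", f(i))
```

Note that `diagonal` is itself computable, and total whenever all listed functions are total; that is the sense in which the extension step is computable yet cannot be captured by the enumerating algorithm itself.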

Side-step: Turing-enumerability of the programs of a programming language (programming languages are decidable). The halting problem for Turing machines. Impossibility of proving the equivalence of arbitrary programs with an algorithm. Impossibility of proving the correctness of arbitrary software by an algorithm. Programming is always risky and error-prone.
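The classic contradiction behind the halting problem, as a sketch; `halts` below stands for an assumed decider that cannot actually exist:

```python
def halts(program_text: str, input_text: str) -> bool:
    """Assumed total halting decider - no algorithm can implement this."""
    raise NotImplementedError

def paradox(program_text: str) -> None:
    # Feed a program to itself: whatever `halts` answers, it is wrong.
    if halts(program_text, program_text):
        while True:   # halts() said "halts" -> run forever instead
            pass
    # halts() said "runs forever" -> halt immediately instead
```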

The set of programs producing programs computing total functions is again not Turing-enumerable. Sketch of proof. Productive sets and productive functions. The set of such programs is a productive set. Trying to integrate the productive function into the algorithm again yields an incomplete program. So again, the extension process must be applied from the outside, not under the control of the algorithm itself.
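For reference, the standard textbook definition (writing W_e for the e-th recursively enumerable set; the notation comes from computability theory, not from the sketch above):

```latex
% A set P of natural numbers is productive if a single computable
% function uniformly witnesses the incompleteness of every attempted
% enumeration of P:
\[
  P \text{ is productive} \iff \exists g \text{ (total computable)}\;
  \forall e:\; W_e \subseteq P \;\Longrightarrow\; g(e) \in P \setminus W_e
\]
```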

Definition of creative systems. Creative systems cannot be algorithms.

Because of the possibility of Gödelization (mapping of data onto natural numbers) all these results are valid for programs processing arbitrary types of data.

Any kind of knowledge can be viewed as programs calculating total functions or as programs producing such programs. Declarative knowledge can be viewed as programs formulated in a special purpose programming language and interpreted by some procedures that act as the interpreter. Applying such knowledge can be viewed as the production and subsequent execution of programs. All these programs halt after some time, so they can be viewed as programs computing total functions.
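A minimal sketch of this view, under toy assumptions of my own (the rule format and the facts are illustrative): rules are plain data in a “special purpose language”, a small procedure acts as the interpreter, and applying the knowledge is running a program that provably halts.

```python
# Declarative knowledge as data plus an interpreter.
rules = [
    ("penguin", "bird"),      # "a penguin is a bird"
    ("bird", "has_feathers"),
]

def derive(fact: str) -> set:
    """Interpreter: close the fact under the rules. This halts because
    the rule set is finite, so the derived set stops growing."""
    known = {fact}
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print(derive("penguin"))  # {'penguin', 'bird', 'has_feathers'}
```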

Creativity (adding new programs to a set of programs that is not Turing-enumerable) is the core of general intelligence. A generally intelligent system cannot be an algorithm but must be a creative system. Any algorithm (even an algorithm producing programs) is limited. It contains a limited amount of knowledge that has a limited reach. General intelligence requires a mechanism to extend the set of programs (the knowledge) but this cannot be part of the system as far as it can be viewed as an algorithm.

Algorithms and formal theories are equivalent notions. There cannot be formal theories of creative systems. If science is about describing systems with fixed laws, creative systems are outside its scope. They are inside the scope of a wider area of “Wissenschaft”, however.

Artificial intelligence may be possible but truly intelligent systems cannot be algorithms. They must contain an extension mechanism not under the control of their algorithmic part.

It is interesting to note that the basic results from computability theory were already known in the 1950s and 1960s (and even earlier) when the traditional AI paradigm was created. The traditional AI paradigm ignored these insights. This is the reason it developed into a dead end. All contemporary “AI” systems can be described as algorithms. Where they contain learning mechanisms, these are limited. It would be interesting to work out the history of early AI to see how this happened. Why were the results of people like Gödel, Turing, Kleene etc. ignored by AI, instead of being turned into the core of the discipline, defining its aim as the development of creative systems, i.e. systems that can go beyond algorithms? Has this been worked out by any historian of science already?

Thoughts about Intelligence and Creativity

Some unordered notes (to be worked out further) on some general principles and limits of intelligence.

Reality has more features than we can perceive. What we perceive is more than what we understand. And our understanding has several levels, from perceiving shapes to conceptual interpretation and deep analysis. On each level, we can capture only a fraction of the information of the level before it. (See also https://creativisticphilosophy.wordpress.com/2015/02/19/dividing-the-stream-of-perceptions/)

The primary sense data are processed quickly, by neuronal systems with a high degree of parallelism. However, the level of analysis is rather shallow. To process large amounts of data quickly, you have to have an algorithm, a fixed way of processing the data. Such an algorithm can only recognize a limited range of structures. An algorithm limits the ways in which the bits of data are combined. An algorithm is a restriction. It prevents universality. The data could be combined in so many ways that you would get what is known as a combinatorial explosion if you did not limit it somehow. The system, having only a limited processing capacity, would be overwhelmed by the hyper-astronomically growing number of possibilities. Therefore a system processing a large amount of data must restrict the way it combines the data. As a result, it can process large amounts of data quickly but will be blind to a lot of the regularity that is contained in the data and could theoretically be discovered.
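The numbers behind “combinatorial explosion”, as a one-line calculation: even counting only the subsets of n data items already gives 2^n possibilities.

```python
# Unrestricted combination is hopeless long before n reaches
# everyday data sizes: the subset count alone doubles with each item.
for n in (10, 100, 1000):
    print(n, "items ->", 2**n, "subsets")
# 10 items -> 1024 subsets
# 100 items -> 1267650600228229401496703205376 subsets
# 1000 items -> a number with over 300 digits
```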

In order to discover such hidden features, you cannot process large amounts of information at once, because this would lead to a combinatorial explosion. You would, instead, have to process small amounts of information at any given time, trying to find some pattern. Only when you discover a pattern can you try to scan large amounts of data for it, essentially applying a newly found algorithm to the data. But that algorithm will in turn be blind to other regularity the data might contain. Each algorithm you may use to analyze data is incomplete, because it has to limit the way data is combined, or it will not be efficient, leading to combinatorial explosions again.
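A toy sketch of this two-phase idea (the “pattern language” here, a fixed stride, is my own minimal choice, not a proposal from the text): deep analysis of a small sample discovers a regularity, which then runs as a fast, shallow scanner over a large stream.

```python
def find_stride(sample):
    """Deep analysis of a small sample: is it an arithmetic progression?"""
    diffs = {b - a for a, b in zip(sample, sample[1:])}
    return diffs.pop() if len(diffs) == 1 else None

def scan(stream, stride):
    """Shallow, fast pass: count where the learned pattern holds."""
    return sum(1 for a, b in zip(stream, stream[1:]) if b - a == stride)

sample = [3, 5, 7, 9]
stride = find_stride(sample)              # discovered regularity: +2
big_stream = list(range(0, 10_000, 2))
print(stride, scan(big_stream, stride))   # 2 4999
```

The scanner is efficient precisely because it is restricted: it will never notice any regularity other than the one stride it was built around.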

Intelligence could be defined as the ability to find new instances of regularity in data, regularity that was not known before. It can therefore be defined as the ability to construct new knowledge (new algorithms). This is only possible, in principle, by analyzing small amounts of data at any given time. Any algorithm you may use to analyze larger amounts of data will be limited and may be missing some of the structure that is there (i.e. it will restrict the generality of the intelligence). (See also https://creativisticphilosophy.wordpress.com/2015/05/16/how-intelligent-can-artificial-intelligence-systems-become/ and https://denkblasen.wordpress.com/2015/05/25/a-note-on-cognitive-limits/).

This limit to intelligence should be valid for single human beings but also for groups of human beings, like scientific communities or cultures. It would also hold for any artificial intelligence system. Such systems cannot be made arbitrarily intelligent. One could try to do so by putting many small intelligent systems in parallel (something like an artificial intelligent community), but since such systems would not be limited by any algorithm (or formal theory), they could develop in totally different directions, disagree with each other and suffer from misunderstandings if one tried to connect them together. If you connect them in a way that limits the possibility of misunderstandings in their communication or that stops them from disagreeing or from developing in totally different directions, you end up with a parallel algorithm again that can harmoniously analyze large amounts of data but is limited in what it can do.

You either get shallow processing of large amounts of data or deep analysis of small amounts of data with the potential of new discoveries, but you cannot have both at once. As a result, there is a limit to how intelligent a system can become.

There is no limit to what can be discovered by an intelligent system: if a structure is present in a set of data, it can be found if the system doing the analysis is not an algorithm (i.e. a system describable in terms of a finite formal theory – an algorithmic system, on the other hand, will necessarily be systematically blind to some structures). On the other hand, an artificial superintelligence is not possible. Processes of intelligent data analysis in such a system might be faster than they are in a human being, but they will not be much more sophisticated. Higher sophistication by adding smart algorithms leads to limitations, i.e. to systematic blind spots. Higher sophistication by attempting to process more data at a time leads to combinatorial explosions which cannot be managed by whatever additional speed or processing power one might add. (See also http://asifoscope.org/2013/01/18/on-algorithmic-and-creative-animals/ and also http://asifoscope.org/2015/01/20/turning-the-other-way/)

For shallow analysis you need algorithms. Speed in terms of amount of data (bits) processed per time (seconds) may be high, but the depth of processing is limited. If the goal of cognition is to find regularity (and thus compress data), the algorithmic system will not find all regularity that is there. It cannot compress data optimally in all instances. Such a system will have blind spots.

Finding all regularity may be viewed as the ability to find the smallest self-expanding program that can produce the data (i.e. an optimal compression of the data). If an algorithm analyzes a stream of data, i.e. it parses the data, and the stream of data is longer than the algorithm itself, the algorithm may be seen as a compression of the data. If the compression is loss-free, i.e. the algorithm can reproduce the original data, then the data must contain some redundancy if it is longer than the algorithm. The data will then not exhaust the information-carrying capacity of the information channel. Therefore, it must be possible to add some information to that channel that is not parsed by the given algorithm. Hence the algorithm must be incomplete, since there is data it cannot parse. It systematically has a blind spot.
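This can be made tangible with an off-the-shelf compressor (zlib here is merely a convenient stand-in for “an algorithm that exploits regularity”): a regular stream compresses well because it is redundant, a random-looking one hardly at all. By a simple counting argument, there are fewer short descriptions than long strings, so no lossless algorithm can compress everything.

```python
import os
import zlib

regular = b"ab" * 5000        # highly redundant: one simple regularity
random_ = os.urandom(10_000)  # no structure for the compressor to exploit

print(len(zlib.compress(regular)))  # small: the regularity is captured
print(len(zlib.compress(random_)))  # roughly 10000: nothing to exploit
```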

Therefore, an intelligent system able to find arbitrary regularity cannot itself be an algorithm. Instead, it must be a system that can produce new knowledge (and thus does not have a fixed representation as a finite text, and does not have a Gödel number). It must be changing over time, incorporating information that enters it from the analyzed information stream. This information reprograms the system, so it changes the way the system works. The system cannot have a fixed way in which it is working, because then it would be an algorithm and would have a blind spot.

The possibility that the system self-destructs (becomes mad) cannot be excluded. That is a risk involved in intelligence/creativity.

Sophisticated knowledge has a high efficiency but a low universality. It is special and will “miss” many features of the data it processes (i.e. it has blind spots). On the other hand, it is efficient, which means that it allows large amounts of data to be processed. The processing of large amounts of data in a short time means that only a limited subset of the properties of that data can be considered, making analysis shallow.

Simple knowledge, on the other hand, has a high universality but a low efficiency. It allows for new features of data to be discovered. It therefore has the potential of a deep analysis that does not miss properties, but it has a low efficiency and can only process small amounts of data at a time, since applying it to large sets of data leads to combinatorial explosions.

The simple knowledge is what is called “reflection basis” in K. Ammon’s dissertation (see Ammon, Kurt: “The Automatic Development of Concepts and Methods”, Doctoral Dissertation, University of Hamburg, 1987).

New knowledge forms by incorporating information from data into the knowledge base. This might occasionally happen through the application of sophisticated knowledge, but most of the time it is the result of applying simple knowledge to small amounts of data, leading to the discovery of novel (from the system’s point of view) properties of the data. As a result, new, more sophisticated knowledge forms. This knowledge is special and more efficient.

The small amounts of data that are processed by simple knowledge might be input data from the input stream, but might also be chunks of knowledge that are experimentally plugged together in different ways and then experimentally applied to the input stream (perhaps in its entirety). This might occasionally lead to sudden changes of perception (e.g. changing from two-dimensional vision to three-dimensional vision). Successful (i.e. efficient) structures are then retained. This is a way of incorporating information from the environment into the system.

The total universality of a creative system lies in the emptiness of its core (i.e. there is no fixed, i.e. unchangeable, special knowledge restricting what it can do).

The trade-off between efficiency and generality is a special case of (or another way of expressing) the trade-off between explicitness/exactness and generality described in https://creativisticphilosophy.wordpress.com/2013/10/12/the-trade-off-between-explicitness-and-generality/. A result of it is that there is a fundamental limit to how intelligent a system can become.

Sophisticated knowledge can be used to filter out expected components from the data stream, leaving the surprising parts that can then be analyzed by less sophisticated knowledge. The end result might be more comprehensive sophisticated knowledge where the previously surprising data becomes expected data.

(A lot of this is already contained in K. Ammon’s dissertation in one form or another).

The Perfect Fire

[Image: a fire (Fire02.jpg)]

In https://embassyofthefuture.wordpress.com/2015/04/18/civilizations/, I have defined civilizations as creative dissipative systems. It looks like not many people have understood what I meant by that and how terrible a thing it is.

A dissipative system keeps its structure (or grows) by using up some resource. It produces entropy, and it gets rid of this entropy by using a low-entropy resource and shedding high-entropy waste. For example, a plant takes up low-entropy solar radiation and radiates high-entropy thermal radiation. Another example is a flame. A fire turns a low-entropy fuel into high-entropy end products and heat radiation.
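Rough orders of magnitude for the plant example (the temperatures are illustrative round numbers, not measurements): entropy per joule of heat scales like 1/T, so absorbing at solar temperature and re-radiating at ambient temperature exports far more entropy than comes in.

```python
# Entropy flux per joule is approximately Q/T for heat at temperature T.
Q = 1.0                        # one joule of energy passing through
T_sun, T_ambient = 5800.0, 300.0
entropy_in = Q / T_sun         # ~0.00017 J/K received with the sunlight
entropy_out = Q / T_ambient    # ~0.0033 J/K radiated away as heat
print(entropy_out / entropy_in)  # ~19x: room to pay for internal order
```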

A creative system is a system that can change the way it works. It can reprogram itself. It can find new resources and new ways of using them. It can find new technologies.

Think of a fire that, when it has used up its fuel, does not burn down but changes its chemistry or its technology and starts burning something else. An intelligent fire that will find every available resource (including those unavailable at first).

Within the process of that fire burning, there will be some selection: the faster-burning parts of it that have learnt more tricks will outcompete the slower parts.

Think of putting such a fire on the surface of a planet. Come back some time later and you will only find a desert.

We are such a fire. A creative dissipative system. The perfect fire. That’s us.

(The picture is from https://commons.wikimedia.org/wiki/File:Fire02.jpg).

Visiting an Old Philosopher

While I was looking for a destination for a little evening bicycle tour after work, a recent comment on https://nosignofit.wordpress.com/2015/05/27/body-person-community-watsuji-tetsuro-and-the-ground-of-ethics/ reminded me of Max Scheler, providing an idea for a little tour. So I decided to ride along the river Rhine, then through some parks (Volksgarten and Vorgebirgspark), to visit Cologne’s Südfriedhof (southern cemetery), where the old philosopher is residing.

[Image: the Scheler family grave (Max, Maria, Max Georg) at the Cologne Südfriedhof]

Unfortunately, I was a bit late, and the old man did not receive me. When I arrived, I just saw the gardener locking the estate for the night, so I might have to return another day (although a visit to the library might be more fruitful in intellectual terms). From there, however, I had a nice ride home through a series of parks left and right of Militärringstraße.

Scheler is considered the founder of a special development in German philosophy of the early 20th century called philosophical anthropology, about which I might write something at a later time, especially about some ideas developed by Scheler and the other philosophers who are considered to belong to it, notably Helmuth Plessner and Arnold Gehlen. These thinkers, while differing massively in their political opinions, share certain ideas I am interested in, especially a concept of “world-openness” (Weltoffenheit) of humans (to use a term introduced by Scheler), a concept that shows similarities and connections to the notion of creativity I am investigating in some of my own philosophical studies. But exploring that connection will have to wait until a later time.

(The picture is from http://commons.wikimedia.org/wiki/File:Grab_Scheler_K%C3%B6ln_S%C3%BCdfriedhof_%28Max,_Maria,_Max_Georg%29_nah.JPG).