Why Philosophy is Going to Stay

In a nutshell, philosophy deals with those subjects that cannot be completely formalized. The sciences are about areas of knowledge for which complete formal theories are possible. Scientism includes the belief that what is formalizable and what is real are the same, i.e. that everything can be described in terms of formal theories. Analytic philosophy is trying to turn philosophy into a science, but if everything can be formalized, philosophy is (as some scientists state) unnecessary, so analytic philosophy is making itself obsolete.

However, if, as I think, reality cannot be completely formalized, science is inherently incomplete and philosophy is not going away. In particular, human cognitive processes cannot be completely formalized in principle. Each formal description of such processes is incomplete and partial. Cognition develops historically (something formalizable systems don’t do). Cognitive science then turns out not to be a science but a historical discipline. Human thinking does not follow fixed laws.

As a result, there is no complete and at the same time exact formal description of cognition, and its products, like society, culture, and even science itself, cannot be described completely in terms of a single formal theory. Philosophy is not going away. As long as you do “normal science” in the Kuhnian sense, you don’t need philosophy, but if you are working in any field of the humanities, or in psychology or the “social sciences”, you permanently need philosophy. Here, you do not have a fixed methodology. You have to be reflexive and look at what you are doing from a meta- (and meta-meta-…) level all of the time. You have to look at what you are doing critically all of the time. In the sciences, you also have to do that, but only occasionally, when you bump into anomalies and have to shift your paradigm.

In mathematics, there are entities for which we can prove that a complete formal description is impossible. If such entities exist in mathematics, there is no a priori reason why they should not also exist in physical reality. Human beings and their societies and cultures seem to be such entities for which a complete formalization is impossible. If that is so, philosophy is not going to go away.

The Core of Philosophy

In a way (that I am going to explore in some articles on Creativistic Philosophy), one could say that computability theory (which could be called “formalizability theory”), as one can find it in the works of Post, Kleene, Turing, Church and Gödel, forms the very core of philosophy. From here, one can investigate why philosophy still exists, why it will not go away, and what the nature of the analytic/continental divide and the science/humanities divide is.

Estimating the Complexity of Innate Knowledge

[Image: diagram of the genetic code]

The following is a very crude estimate of the informational complexity of the innate knowledge of human beings. To be more exact, it is a crude estimate of an upper limit on the information content of this knowledge. It might be off by an order of magnitude or so, so this is a “back of an envelope” or “back of a napkin” kind of calculation. It just gives a direction in which to look for a more accurate calculation.

According to the human proteome project (http://www.proteinatlas.org/humanproteome/brain), about 67 % of the human genes are expressed in the brain. Most of these genes are also expressed in other parts of the body, so they probably form part of the general biochemical machinery of all cells. However, 1223 genes have an elevated level of expression in the brain. In one way or another, the brain-specific structures must be encoded in these genes, or mainly in these genes.

There are about 20,000 genes in the human genome, so the 1223 brain-elevated genes amount to about 6.115 % of our genes. Probably, we share many of these with primates and other animals, like rodents, so the really human-specific part of the brain-specific genes might be much smaller. However, I am only interested here in an order-of-magnitude result for an upper limit.

I have no information about the total length of these brain-specific genes, so I can only assume that they have average length.

According to https://en.wikipedia.org/wiki/Human_genome, the human genome has 3,095,693,981 base pairs (of course, there is variation here). Only about 2 % of this is coding DNA. There is also some non-coding DNA that has a function (in regulation, or in the production of some types of RNA), but let us assume that the functional part of the genome is maybe 3 %. That makes something in the order of 92–93 million base pairs with a function (probably less), i.e. 30 to 31 million triplets. If the brain genes have average length, 6.115 % of this would be brain-specific, which is something like 1.89 million triplets.

The triplets code for 20 amino acids. There are also start and stop signals. The exact information content of a triplet would depend on how often it appears, and the triplets are definitely not equally distributed, but let us assume that each of them codes for one out of 20 possibilities (calculating the exact information content of a triplet would require much more sophisticated reasoning and specific information, but for our purposes, this is enough). The information content of a triplet can then be estimated as the binary logarithm of 20 (you need 4 bits to encode 16 possibilities and 5 bits to encode 32 possibilities, so this should be between 4 and 5 bits). The binary logarithm of 20 is about 4.322. Multiplying this by the number of triplets, we get 8,200,549 bits. This is 1,025,069 bytes, or roughly a megabyte (something like 200–300 times the length of this blog article).
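For concreteness, here is the whole back-of-the-envelope calculation as a small Python script. All input figures are the rough assumptions from above (3 % functional share, 1223 of about 20,000 genes, log2(20) bits per triplet), so the output is an order-of-magnitude upper limit, nothing more.

```python
from math import log2

genome_size      = 3_095_693_981    # base pairs in the human genome
functional_share = 0.03             # assumed functional fraction (~3 %)
brain_share      = 1223 / 20_000    # brain-elevated genes / total genes (~6.115 %)

functional_bp  = genome_size * functional_share   # ~92.9 million base pairs
triplets       = functional_bp / 3                # ~31 million triplets
brain_triplets = triplets * brain_share           # ~1.89 million triplets
bits_per_trip  = log2(20)                         # ~4.322 bits per triplet

bits = brain_triplets * bits_per_trip
print(f"{bits:,.0f} bits = {bits / 8 / 1e6:.2f} megabytes")
# -> about 8.2 million bits, i.e. roughly one megabyte
```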

So the information content of the brain-coding genes that determine the structure of the brain is in the order of a megabyte (smaller than many pieces of commercial software). The structure of the brain is generated by the information contained in these genes. This is probably an overestimate because many of these genes might not be involved in encoding the connective pattern of the neurons, but, for example, in the glial immune system of the brain or other brain-specific, “non-neuronal” structures.

If the brain’s structure is encoded in these genes, the information content of that structure cannot be larger than the information content of the genes. Since there are many more neurons (and vastly more connections) than there are bits in these genes, a lot of their connectivity must be repetitive. Indeed, the cortex consists of neuronal columns that show a lot of repetitive structure. If one described the innate brain circuitry, i.e. the circuitry found in a newborn (or developing in the small child through processes of maturation), and compressed that description to the smallest possible size, thereby determining its information content, that information content could not be larger than the information content of the genes involved in its generation. The process of transcribing those genes and building brain structures from them can be viewed as a transformation of information, but it cannot create new information not contained in those genes (deterministic processing cannot increase information content). The brain structure might contain random elements (i.e. new information created by random processes) and information taken up from the environment through processes of perception, experimentation and learning, but this additional information is, by definition, not part of the innate structures. So the complexity of the innate structures or the innate knowledge, i.e. the complexity of the innate developmental core of cognition, must be limited by the information content of the genes involved in generating the brain.

The above calculation shows that this should be on the order of a megabyte or so.

This also means that the minimum complexity of an artificial intelligent system capable of reaching human-type general intelligence cannot be larger than that.

We should note, however, that human beings who learn and develop their intelligence are embedded in a world they can interact with through their bodies and senses, and that they are embedded in societies. These societies are the carriers of cultures whose information content is larger by many orders of magnitude. The question would be whether it is possible to embed an artificial intelligent system into a world and a culture or society in a way that enables it to reach human-like intelligence. (This also raises ethical questions.)

In other words, the total complexity of the innate knowledge of humans can hardly exceed the amount of information contained in an average book, and is probably much smaller. It cannot be very sophisticated or complex.

(The picture, displaying the genetic code, is from https://commons.wikimedia.org/wiki/File:GeneticCode21-version-2.svg)

A Note on Analytic Philosophy and Phenomenology

Around the beginning of the 20th century, philosophy split into two directions, “analytic” and “continental” philosophy. Or so the story goes, as told by some philosophers of the “analytic camp”. Elsewhere I have argued that I don’t think these are valid terms. There were a number of different directions of philosophy, and what we now call “analytic philosophy” is just one of them and does not have a special status.

Some of the directions of thought lumped together as “continental philosophy” might also have more in common with the “analytic” ones than the proponents of analytic philosophy realize. Take, for example, phenomenology, as proposed by Husserl, on the one hand, and some strands of analytic philosophy that are connected to classical “artificial intelligence” and “cognitive science”, on the other. The people working within those paradigms might not see the similarity, but there is one:

In those parts of analytic philosophy that are trying to develop models of the human mind and language (and that feed into AI and cognitive science), an attempt is made to arrive at a scientific description of the human mind by finding the laws according to which the mind works. This is based on the assumption that such fixed laws of thought and perception, or more generally of cognitive processes, do exist.

This results in (and from) an ahistorical view of the human mind. The human mind is not viewed as something that develops historically but as something that has a fixed structure (which is determined by genes and only develops by means of genetic, i.e. biological changes).

In phenomenology, as put forward by Husserl, an attempt is made to arrive at objective descriptions of things as experienced. In order to do so, and to ward off any psychologism, the human mind and its foundations in psychology, neurology or biology are left out of the description. Husserl starts with some “transcendental mind” that is itself not analyzed or described. In opposition to the current of “philosophical anthropology” of the 1920s and 1930s, there is something like a proscription of anthropology in his approach. He starts with some concept of rationality that is not itself analyzed.

However, such an approach would only work if the human mind were ahistorical and not developing. In a way, Husserl steps into the same trap of the European rationalistic tradition as the analytic tradition does.

If the human mind is culture- and time-specific and changes historically, this exclusion of anthropology does not work. If the mind is “programmable”, i.e. if information taken up from the environment is integrated into it, then there is no invariable rationality or “Vernunft” that can be excluded from view (as in Husserl’s approach) or that can be described by means of formalisms (as in cognitive science, AI and related strands of analytic philosophy).

Any formal description of the human mind is then incomplete and extensible, and any description of the experienced world is specific to a certain time and culture, or even to an individual or a certain stage in an individual’s life. Fixing human rationality, either to exclude it from view or to describe it scientifically, does not work if the mind does not have a fixed border and information from outside can be integrated into it in a process akin to programming.

The real rift in early 20th-century philosophy would then not be between “analytic philosophy” and “continental philosophy”, but between a rationalistic tradition viewing the mind as stable and fixed and another tradition that views it as something that is historically developing and world-open. This latter direction would contain some brands of historicism (one could put Dilthey here, although he is a borderline case)[1] and the mentioned philosophical anthropologists (Plessner, Gehlen, Scheler), who described the human being as “world-open”. One could probably also include pragmatism here.

There are obviously different ways to group the different currents of philosophy. We should not just take the analytic/continental division as given. Reality is much more complicated.

___________________

[1] Dilthey criticized Kant’s a priori as rigid and dead because it was ahistorical. However, he made several attempts to develop a psychology and never finished any of these projects. This might have been an impossible project, because a complete description of psychology is impossible if the mind is historical. The topic seems to have melted away under his hands, but he seems not to have come to the conclusion that he was attempting something impossible.

Thoughts about Intelligence and Creativity

Some unordered notes (to be worked out further) on some general principles and limits of intelligence.

Reality has more features than we can perceive. What we perceive is more than what we understand. And our understanding has several levels, from perceiving shapes to conceptual interpretation and deep analysis. On each level, we can capture only a fraction of the information of the level before it. (See also https://creativisticphilosophy.wordpress.com/2015/02/19/dividing-the-stream-of-perceptions/)

The primary sense data are processed quickly, by neuronal systems with a high degree of parallelism. However, the level of analysis is rather shallow. To process large amounts of data quickly, you have to have an algorithm, a fixed way of processing the data. Such an algorithm can only recognize a limited range of structures. An algorithm limits the ways in which the bits of data are combined. An algorithm is a restriction. It prevents universality. The data could be combined in so many ways that you would get what is known as a combinatorial explosion if you did not limit it somehow; a system with only a limited processing capacity would be overwhelmed by the hyper-astronomically growing number of possibilities (a small example of this growth is sketched below). Therefore, a system processing a large amount of data must restrict the way it combines the data. As a result, it can process large amounts of data quickly but will be blind to a lot of the regularity that is contained in the data and could theoretically be discovered.
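To make the hyper-astronomical growth concrete, here is a tiny Python calculation (my own illustrative numbers, not from the text above): even if we count only unordered subsets of a modest data set, the number of possible combinations quickly explodes beyond any processing capacity.

```python
from math import comb

# Number of ways to pick an unordered subset of k items out of n data items.
# Even for a tiny data set (n = 1000), the counts grow hyper-astronomically.
n = 1000
for k in (2, 5, 10, 20):
    print(f"subsets of size {k:2d}: {comb(n, k):.1e}")

# -> roughly 5.0e+05, 8.2e+12, 2.6e+23, 3.4e+41 possibilities
```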

In order to discover such hidden features, you cannot process large amounts of information at once, because this would lead to a combinatorial explosion. You would, instead, have to process small amounts of information at any given time, trying to find some pattern. Only when you discover a pattern can you try to scan large amounts of data for it, essentially applying a newly found algorithm to the data. But that algorithm will in turn be blind to other regularity the data might contain. Each algorithm you may use to analyze data is incomplete, because it has to limit the way data is combined, or it will not be efficient, leading to combinatorial explosions again.

Intelligence could be defined as the ability to find new instances of regularity in data, regularity that was not known before. It can therefore be defined as the ability to construct new knowledge (new algorithms). This is only possible, in principle, by analyzing small amounts of data at any given time. Any algorithm you may use to analyze larger amounts of data will be limited and may miss some of the structure that is there (i.e. it will restrict the generality of the intelligence). (See also https://creativisticphilosophy.wordpress.com/2015/05/16/how-intelligent-can-artificial-intelligence-systems-become/ and https://denkblasen.wordpress.com/2015/05/25/a-note-on-cognitive-limits/).

This limit to intelligence should be valid for single human beings but also for groups of human beings, like scientific communities or cultures. It would also hold for any artificial intelligent system. Such systems cannot be made arbitrarily intelligent. One could try to do so by putting many small intelligent systems in parallel (something like an artificial intelligent community), but since such systems would not be limited by any algorithm (or formal theory), they could develop in totally different directions, disagree with each other and suffer from misunderstandings if one tried to connect them together. If you connect them in a way that limits the possibility of misunderstandings in their communication, or that stops them from disagreeing or from developing in totally different directions, you end up with a parallel algorithm again: one that can harmoniously analyze large amounts of data but is limited in what it can do.

You either get shallow processing of large amounts of data or deep analysis of small amounts of data with the potential of new discoveries, but you cannot have both at once. As a result, there is a limit to how intelligent a system can become.

There is no limit to what can be discovered by an intelligent system: if a structure is present in a set of data, it can be found, provided the system doing the analysis is not an algorithm (i.e. not a system describable in terms of a finite formal theory; an algorithmic system, on the other hand, will necessarily be systematically blind to some structures). On the other hand, an artificial superintelligence is not possible. Processes of intelligent data analysis in such a system might be faster than they are in a human being, but they will not be much more sophisticated. Higher sophistication by adding smart algorithms leads to limitations, i.e. to systematic blind spots. Higher sophistication by attempting to process more data at a time leads to combinatorial explosions that cannot be managed by whatever additional speed or processing power one might add. (See also http://asifoscope.org/2013/01/18/on-algorithmic-and-creative-animals/ and http://asifoscope.org/2015/01/20/turning-the-other-way/)

For shallow analysis you need algorithms. Speed in terms of amount of data (bits) processed per time (seconds) may be high, but the depth of processing is limited. If the goal of cognition is to find regularity (and thus compress data), the algorithmic system will not find all regularity that is there. It cannot compress data optimally in all instances. Such a system will have blind spots.

Finding all regularity may be viewed as the ability to find the smallest self-expanding program that can produce the data (i.e. an optimal compression of the data). If an algorithm analyzes a stream of data, i.e. parses it, and the stream of data is longer than the algorithm itself, the algorithm may be seen as a compression of the data. If the compression is loss-free, i.e. the algorithm can reproduce the original data, then the data must contain some redundancy if it is longer than the algorithm. The data will then not exhaust the information-carrying capacity of the information channel. Therefore, it must be possible to add some information to that channel that is not parsed by the given algorithm. Hence the algorithm must be incomplete, since there is data it cannot parse. It systematically has a blind spot.
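As an illustration (an analogy using an off-the-shelf compressor, not a proof of the argument above): a compressor like zlib embodies one fixed algorithm for finding regularity, namely repeated substrings. Data generated by a tiny program, whose regularity is of a different kind, looks almost random to it. The Python sketch below shows this; the specific byte streams are my own arbitrary choices.

```python
import random
import zlib

# Two streams of 8000 bytes, both generated by tiny programs
# (i.e. both have very low Kolmogorov complexity):
repetitive = b"abcdefgh" * 1000                       # literal repetition
rng = random.Random(0)                                # deterministic PRNG
pseudo = bytes(rng.randrange(256) for _ in range(8000))

for name, data in (("repetitive", repetitive), ("pseudo-random", pseudo)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed to {ratio:.1%} of original size")

# The repetitive stream shrinks to a fraction of a percent; the pseudo-random
# stream does not compress at all, although a three-line program produces it.
# zlib's fixed algorithm is simply blind to that kind of regularity.
```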

Therefore, an intelligent system able to find arbitrary regularity cannot itself be an algorithm. Instead it must be a system that can produce new knowledge (and thus does not have a fixed representation as a finite text, and does not have a Gödel number). It must change over time, incorporating information that enters it from the analyzed information stream. This information reprograms the system, so it changes the way the system works. The system cannot have a fixed way of working, because then it would be an algorithm and would have a blind spot.

The possibility that the system self-destructs (becomes mad) cannot be excluded. That is a risk involved in intelligence/creativity.

Sophisticated knowledge has a high efficiency but a low universality. It is special and will “miss” many features of the data it processes (i.e. it has blind spots). Its efficiency means that it allows large amounts of data to be processed, but processing large amounts of data in a short time means that only a limited subset of the properties of that data can be considered, making the analysis shallow.

Simple knowledge, on the other hand, has a high universality but a low efficiency. It allows for new features of data to be discovered. It therefore has the potential for a deep analysis that does not miss properties, but it can only process small amounts of data at a time, since applying it to large sets of data leads to combinatorial explosions.

The simple knowledge is what is called the “reflection basis” in K. Ammon’s dissertation (see Ammon, Kurt: “The Automatic Development of Concepts and Methods”, Doctoral Dissertation, University of Hamburg, 1987).

New knowledge forms by incorporating information from data into the knowledge base. This might occasionally happen through the application of sophisticated knowledge, but most of the time it is the result of applying simple knowledge to small amounts of data, leading to the discovery of novel (from the system’s point of view) properties of the data. As a result, new, more sophisticated knowledge forms. This knowledge is special and more efficient.

The small amounts of data that are processed by simple knowledge might be input data from the input stream, but might also be chunks of knowledge that are experimentally plugged together in different ways and then experimentally applied to the input stream (perhaps in its entirety); a toy sketch of this generate-and-test process follows below. This might occasionally lead to sudden changes of perception (e.g. changing from two-dimensional to three-dimensional vision). Successful (i.e. efficient) structures are then retained. This is a way of incorporating information from the environment into the system.
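Here is a minimal generate-and-test sketch in Python. It is only an illustration under my own simplifying assumptions (the “chunks” are small numeric functions, the “input stream” is a number sequence, and “success” is perfect prediction of the next element); it is not meant to reproduce Ammon’s system.

```python
from itertools import product

# Simple knowledge: a handful of small "chunks" (unary functions).
chunks = {
    "inc":    lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

# Input stream; the hidden rule is: next = square(x) + 1.
stream = [1, 2, 5, 26, 677]

def score(f, data):
    """Fraction of consecutive pairs (x, y) for which f(x) == y."""
    pairs = list(zip(data, data[1:]))
    return sum(f(x) == y for x, y in pairs) / len(pairs)

# Experimentally plug chunks together in pairs and apply each composition
# to the whole stream; retain the successful (i.e. predictive) structures.
retained = [
    f"{nb} o {na}"
    for (na, fa), (nb, fb) in product(chunks.items(), repeat=2)
    if score(lambda x, fa=fa, fb=fb: fb(fa(x)), stream) == 1.0
]
print(retained)   # -> ['inc o square']
```

The retained composition can from then on be applied efficiently as a new, more sophisticated algorithm, with the blind spots that this entails.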

The total universality of a creative system lies in the emptiness of its core (i.e. there is no fixed, unchangeable special knowledge restricting what it can do).

The trade-off between efficiency and generality is a special case of (or another way of expressing) the trade-off between explicitness/exactness and generality described in https://creativisticphilosophy.wordpress.com/2013/10/12/the-trade-off-between-explicitness-and-generality/. A result of it is that there is a fundamental limit to how intelligent a system can become.

Sophisticated knowledge can be used to filter out expected components from the data stream, leaving the surprising parts that can then be analyzed by less sophisticated knowledge. The end result might be more comprehensive sophisticated knowledge where the previously surprising data becomes expected data.

(A lot of this is already contained in K. Ammon’s dissertation in one form or another).

A Note on Computational Models of Cognition

[Image: Lego model of Nyhavn]

Another set of draft notes to be worked into more elaborate articles:

In https://creativisticphilosophy.wordpress.com/2014/06/23/a-note-on-analytic-philosophy/ I have already stated what I generally think about analytic philosophy. Cognition can always work in more ways than any of the formalisms developed inside analytic philosophy or “Artificial Intelligence” (AI) describes.

The AI people are trying to develop computational models of human cognition. But their idea of “computation” is very limited. I have seen a lot of software during my life (I am a programmer myself), but the only software I have ever seen that worked according to AI principles was, well, a piece of artificial intelligence software (as far as I remember, it was based on what they called “semantic networks”, and the results were not very impressive – I turned away from that field of research). There is a lot of different software working in many different ways. For example, there is software that controls airplanes or the brakes of your car. There is image-processing software processing your photographs. There is software playing music to you. There is the software of internet applications like the WordPress blogging platform. And so on and so on. None of this software works in terms of conceptual hierarchies, semantic networks, etc.

I could describe my own ideas about cognition as “computational”, but the approaches I see in AI and analytic philosophy are rather ridiculous. Computation (or software) is a much more ductile, pliable, plastic “material” than these people think. It is not even restricted to fixed representational languages and fixed algorithms. The models of AI and analytic philosophy look as if somebody were trying to model the whole world out of Lego bricks. Reality is far more complex. It simply does not work that way. These models seem to come from a philosophical tradition that started in the 17th century (or even earlier?), a tradition providing a simplistic model of how thinking and language work.

It is obvious that our processes of perception, thinking and acting are just that: processes. Something is happening. And one can describe them as processes in which information is processed and stored. In that very general sense, one can think of cognition as information processing or computation (although not necessarily digital computation). In this sense, it makes sense to me to think about it in computational terms. However, we should not buy into the simplistic models described in http://plato.stanford.edu/entries/mental-representation/. If we buy into those limited and restricted notions of computation, we have, in a sense, already fallen prey to those theories. There might be some thinking processes that work in terms of concepts, propositions, logical inferences and the like, but that is just a fraction of the whole story (just as with our computers, where the majority of applications do not work in such ways).

I would classify my own approach of thinking about cognition as “computational”, but not in the sense this term is used in classical AI.

(The picture, showing a scene from Legoland in Denmark, is from https://commons.wikimedia.org/wiki/File:Nyhavn_lego.jpg).

Does Meditation Reduce Creativity?

Just a question. I have no answer to this and answering it would require some serious scientific research:

Meditation is, first and foremost, a training of attention. As far as I understand, regular practice of meditation can, to a great extent, improve attention by promoting the ability to reduce the uncontrolled straying of thoughts. My question is: does this have a negative impact on creativity? What one learns to suppress through meditation seems to be the activity of what is called the “default network” of the brain.

My personal experience of the “default mode” of the mind is that my thoughts wander around. It looks like I am analyzing all kinds of problems. At the same time, my impression is that in this default state, I am having my best ideas. So it seems to me that this is the “creative mode” of the mind.

While in a state of concentration, I can work on a specific problem and apply known methods; however, my creativity, i.e. the ability to generate new ideas and to move out of the scope of known methods of thinking, seems to be highest in the state in which the mind is wandering uncontrolled, when brain activity is strongest in the default network.

As far as I understand, it seems to be this default activity that is reduced in meditation (if I am not wrong on this). So could it be that people who practice a lot of meditation gain a highly improved ability to concentrate, but at the same time lose some creativity? If the highly improved attention gained through practicing meditation were only advantageous, our brains would likely have developed in such a way that attention would be better from the start. This, however, has not happened. Our thoughts stray around without control. The reason might simply be that this uncontrolled straying is necessary for developing new ideas and that the resulting creativity was selected for. The way our thinking and perception work, with less than optimal attention and with thoughts being distracted and wandering, might be a compromise between the advantages and disadvantages of attention and concentration on one side and creativity and innovativeness on the other.

So I suggest that researchers working on meditation and its effects try to design experiments investigating a possible (negative) effect of meditation on creativity.