Why Philosophy is Going to Stay

In a nutshell, philosophy deals with those subjects that cannot be completely formalized. The sciences are about areas of knowledge for which complete formal theories are possible. Scientism includes the belief that the formalizable and the real are the same, i.e. that everything can be described in terms of formal theories. Analytic philosophy tries to turn philosophy into a science, but if everything could be formalized, philosophy would be (as some scientists claim) unnecessary, so analytic philosophy would be making itself obsolete.

However, if, as I think, reality cannot be completely formalized, then science is inherently incomplete and philosophy is not going away. In particular, human cognitive processes cannot be completely formalized, even in principle. Every formal description of such processes is incomplete and provisional. Cognition develops historically (something formalizable systems don’t do). Cognitive science then turns out not to be a science but a historical discipline. Human thinking does not follow fixed laws.

As a result, there is no complete and at the same time exact formal description of cognition, and its products (society, culture, and even science itself) cannot be described completely in terms of a single formal theory. Philosophy is not going away. As long as you do “normal science” in the Kuhnian sense, you don’t need philosophy, but if you are working in any field of the humanities, or in psychology or the “social sciences”, you permanently need philosophy. Here you do not have a fixed methodology. You have to be reflexive and look at what you are doing from a meta-level (and a meta-meta-…level) all the time. You have to look at what you are doing critically all the time. In the sciences you also have to do that, but only occasionally, when you bump into anomalies and have to shift your paradigm.

In mathematics, there are entities for which we can prove that a complete formal description is impossible. If such entities exist in mathematics, there is no a priori reason why they should not also exist in physical reality. Human beings and their societies and cultures seem to be such entities, for which a complete formalization is impossible. If that is so, philosophy is not going to go away.

Philosophical Excavations

I have started a new blog to publish some research into the history of philosophy, as well as some reflections and meta-level thoughts about the results of that research. I have published an introductory article, Starting to Dig, and a first research article on the little-known Austrian philosopher Karl Faigl (more articles on him are planned). My first “philosophical digging campaign” concentrates on some (predominantly right-wing) philosophy from early 20th century Germany and Austria. If you are interested in this project, just follow that blog. I will only publish there occasionally, since the time I can spend on this project is currently quite limited, but I hope that bit by bit I will be able to present some interesting stuff here (about this particular direction of philosophy as well as some others).

There is no Science of Science

There is history of science and there is philosophy of science. There is no science of science. The reason is that every formal method is limited (as can be demonstrated mathematically, in computability theory), and as a result there is no formal or algorithmic way to produce arbitrary scientific knowledge. Every methodology of science must necessarily be incomplete. Therefore, the methodology of science involves creativity, i.e. the ability to go from one formal system to another, and the totality of these processes cannot be described within any single formal theory or algorithm.
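To make the kind of limit meant here concrete, here is a minimal sketch of the classic diagonal argument behind the halting problem (my own illustration, not part of the original post; the names `halts` and `diagonal` are hypothetical):

```python
# Sketch of Turing's diagonal argument (illustrative only).
# Suppose, for contradiction, that some function halts(prog, arg) could decide
# whether prog(arg) terminates, for every possible program prog.

def halts(prog, arg):
    """Hypothetical total decision procedure -- cannot actually exist."""
    raise NotImplementedError("no such algorithm exists")

def diagonal(prog):
    # Do the opposite of what halts() predicts about prog run on itself.
    if halts(prog, prog):
        while True:       # loop forever if halts() says "it halts"
            pass
    return "halted"       # halt if halts() says "it loops"

# Feeding diagonal to itself yields a contradiction either way:
# diagonal(diagonal) halts  iff  halts(diagonal, diagonal) is False.
# Hence halts() cannot exist: no single formal method decides all such questions.
```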

As a result, the meta-disciplines of science are branches of philosophy, and will remain so, and science develops historically. These meta-disciplines thus necessarily belong to the humanities and will always remain there. There are no fixed laws describing what scientists do. Science, if understood as the description of systems following fixed laws, is not applicable to itself.

The Core of Philosophy

In a way (that I am going to explore in some articles on Creativistic Philosophy), one could say that computability theory (which could be called “formalizability theory”), as one finds it in the works of Post, Kleene, Turing, Church and Gödel, forms the very core of philosophy. From here, one can investigate why philosophy still exists, why it will not go away, and what the nature of the analytic/continental divide and of the science/humanities divide is.

Estimating the Complexity of Innate Knowledge

[Figure: the genetic code]

The following is a very crude estimate of the informational complexity of the innate knowledge of human beings. To be more exact, it is a crude estimate of an upper limit on the information content of this knowledge. It might be off by an order of magnitude or so, so this is a “back of an envelope” or “back of a napkin” kind of calculation. It just indicates a direction in which a more accurate calculation could be attempted.

According to the human proteome project (http://www.proteinatlas.org/humanproteome/brain), about 67 % of human genes are expressed in the brain. Most of these genes are also expressed in other parts of the body, so they probably form part of the general biochemical machinery of all cells. However, 1223 genes have an elevated level of expression in the brain. In one way or another, the brain-specific structures must be encoded in these genes, or mainly in these genes.

There are about 20,000 genes in the human genome, so the 1223 brain-elevated genes amount to about 6.115 % of our genes. We probably share many of these with primates and other animals, like rodents, so the really human-specific part of the brain-specific genes might be much smaller. However, I am only interested here in an order-of-magnitude result for an upper limit.

I have no information about the total length of these brain-specific genes, so I can only assume that they have average length.

According to https://en.wikipedia.org/wiki/Human_genome, the human genome has 3,095,693,981 base pairs (of course, there is variation here). Only about 2 % of this is coding DNA. There is also some non-coding DNA that has a function (in regulation, or in the production of some types of RNA), but let us assume that the functional part of the genome is maybe 3 %. That makes something on the order of 92–93 million base pairs with a function (probably less), i.e. about 30–31 million triplets. If the brain genes have average length, 6.115 % of this would be brain-specific, which is something like 1.89 million triplets.

The triplets code for 20 amino acids. There are also start and stop signals. The exact information content of a triplet would depend on how often it appears, and the triplets are definitely not equally distributed, but let us assume that each of them codes for one out of 20 possibilities (calculating the exact information content of a triplet would require much more sophisticated reasoning and specific data, but for our purposes this is enough). The information content of a triplet can then be estimated as the binary logarithm of 20 (you need 4 bits to encode 16 possibilities and 5 bits to encode 32 possibilities, so this should be between 4 and 5 bits). The binary logarithm of 20 is about 4.322. Multiplying this by the number of triplets gives 8,200,549 bits. This is 1,025,069 bytes, or roughly a megabyte (something like 200–300 times the length of this blog article).
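The whole back-of-the-envelope calculation can be condensed into a few lines (a rough reproduction of the figures in the text; the 3 % functional fraction and the equal-length assumption are, as stated above, crude):

```python
# Back-of-the-envelope estimate of the information content of brain-specific genes,
# reproducing the rough figures in the text.
from math import log2

GENOME_BP       = 3_095_693_981   # base pairs in the human genome
FUNCTIONAL_FRAC = 0.03            # assume ~3 % of the genome is functional
BRAIN_GENES     = 1_223           # genes with elevated expression in the brain
TOTAL_GENES     = 20_000          # approximate total number of human genes

functional_triplets = GENOME_BP * FUNCTIONAL_FRAC / 3        # ~31 million triplets
brain_fraction      = BRAIN_GENES / TOTAL_GENES              # ~6.1 %
brain_triplets      = functional_triplets * brain_fraction   # ~1.9 million triplets
bits = brain_triplets * log2(20)        # ~4.32 bits per triplet (20 amino acids)
print(round(bits / 8 / 1e6, 2), "MB")   # ~1 MB
```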

So the information content of the brain-coding genes that determine the structure of the brain is on the order of a megabyte (smaller than many pieces of commercial software). The structure of the brain is generated by the information contained in these genes. This is probably an overestimate, because many of these genes might not be involved in encoding the connective pattern of the neurons but, for example, in the glial immune system of the brain or other brain-specific, “non-neuronal” functions.

If the brain’s structure is encoded in these genes, the information content of these structures cannot be larger than the information content of the genes. Since there are many more neurons than genes, a lot of their connectivity must be repetitive. Indeed, the cortex consists of neuronal columns that show a lot of repetitive structure. If one were to describe the innate brain circuitry, i.e. the circuitry found in a newborn (or developing in the small child through processes of maturation), and compress that description to the smallest possible size, thereby determining its information content, that information content could not be larger than the information content of the genes involved in its generation. The process of transcribing those genes and building the brain structures as a result can be viewed as a process of information transformation, but it cannot create new information not contained in those genes. The brain structure might contain random elements (i.e. new information created by random processes) and information taken up from the environment through processes of perception, experimentation and learning, but this additional information is, by definition, not part of the innate structures. So the complexity of the innate structures or the innate knowledge, i.e. the complexity of the innate developmental core of cognition, must be limited by the information content of the genes involved in generating the brain.
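This argument can be stated compactly in terms of algorithmic information theory (a sketch in my own notation, not the author’s; S, G, D, R and E are labels I introduce here):

```latex
% The innate structure S is produced from the brain-specific genes G by a
% developmental process D, together with random noise R and environmental input E:
\[
  S = D(G, R, E)
\]
% Since D cannot create information about S that is not already in G (given R and E),
% the innate part of S is bounded by the information content of G:
\[
  K(S \mid R, E) \;\le\; K(G) + c_D \;\lesssim\; 8.2 \times 10^{6}\ \text{bits}
  \;\approx\; 1\ \text{MB},
\]
% where K denotes Kolmogorov complexity and c_D is the fixed description length
% of the developmental machinery D.
```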

The above calculation shows that this should be on the order of a megabyte or so.

This also means that the minimum complexity of the innate (built-in) core of an artificial intelligent system capable of reaching human-type general intelligence cannot be larger than that.

We should note, however, that human beings who learn and develop their intelligence are embedded in a world they can interact with through their bodies and senses, and that they are embedded in societies. These societies are the carriers of cultures whose information content is larger by many orders of magnitude. The question would be whether it is possible to embed an artificial intelligent system into a world and a culture or society in a way that enables it to reach human-like intelligence. (This also raises ethical questions.)

In other words, the total complexity of the innate knowledge of humans can hardly exceed the amount of information contained in an average book, and is probably much smaller. It cannot be very sophisticated or complex.

(The picture, displaying the genetic code, is from https://commons.wikimedia.org/wiki/File:GeneticCode21-version-2.svg)

Some Ideas about Evolution

[Figure: old anatomical drawing of the human larynx]

Some draft notes on some ideas about evolution:

In sexually reproducing species, where genes can be exchanged and where partial solutions to problems that are dispersed across a population can come together in some individual, evolution should not be thought of as a linear process.

Instead, evolution may be viewed as a co-evolution of genes within a species. Just as in a symbiosis, where varieties of the participating organisms select each other if they fit together to produce an overall system that works well, the genes within a species may be seen as co-evolving species, provided the species reproduces sexually, which enables those genes to be combined in different ways. An organism can thus be viewed as a symbiosis of one-gene species that co-evolve.

Even single genes might be the result of co-evolution, if the process of crossing over, as the analogue of sexuality at the level of single genes, allows them to exchange bits of genetic sequence that code for different domains of proteins.

For a new feature to evolve, what is needed then is an initial coupling of co-operating genes. This initial coupling, i.e. their co-operation in producing some functionality in the phenotype, might be very weak. But as soon as it is there, mutual selection of gene variants that fit together might set in, causing the population to quickly “zoom in” or “converge” on an optimized version of the new feature. One can think of this as a process of mutual filtering of gene variants.
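As an illustration of this “mutual filtering”, here is a toy simulation (my own sketch, not from the original notes): two loci contribute to one feature, fitness rewards allele values that fit together, and sexual recombination lets variants of the two genes filter each other.

```python
# Toy model of mutual filtering of co-operating genes (illustrative sketch).
# Two loci, A and B, contribute to one feature; fitness is highest when
# the two allele values match, i.e. when they "fit together".
import random

def fitness(a, b):
    return -abs(a - b)  # smaller mismatch = higher fitness

def evolve(pop_size=200, generations=30, mutation=0.02):
    # Each individual carries one allele value per locus, initially random.
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the better-fitting half of the population.
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]
        # Sexual reproduction: A- and B-alleles from different parents are
        # recombined, so variants of the two genes are tested against each
        # other -- this is the mutual filtering.
        pop = [(random.choice(parents)[0] + random.gauss(0, mutation),
                random.choice(parents)[1] + random.gauss(0, mutation))
               for _ in range(pop_size)]
    return sum(abs(a - b) for a, b in pop) / pop_size  # mean mismatch

print(evolve())  # the mean mismatch shrinks within a few dozen generations
```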

For example, in a wind-pollinated plant species, there is a co-evolution between the genes controlling the properties of the pollen and those controlling the properties of the pollen-catching organs of the female flower. While in this example you literally have some “mutual filtering” of genes, the idea can be applied much more widely.

As a result of such a co-evolution of genes, starting from a simple initial coupling, new features may evolve very quickly, within a few generations, while the resulting forms might then be stable for long times, since “aberrations” (variants diverging from the optimal cooperation) will be selected away by the other genes belonging to the group of co-operating genes. The whole group of genes forms part of the evolutionary environment of every gene taking part. This stability will last as long as the environmental conditions remain stable, or until a genetic innovation creates a new coupling of features that drives the process somewhere else.

New genes might be included in the process even if they offer only tiny optimizations. It is even possible that pieces of genetic material that at one point were non-coding, like introns, become coding, even if their protein products have only very tiny effects initially, and then develop into important genes by being “guided” in a co-evolutionary process of mutual filtering of gene variants.

The co-operating and co-evolving genes will together form some aspect of the phenotype. As a result, in many instances properties of organisms will not be controlled by a single gene but by a multitude of genes.

The genes coding for a feature might be replaced by others within such a cooperation. Some genes might be drawn into a cooperative complex while others disappear from it. As a result, similar phenotypical features in closely related species might have a very different genetic basis (comparable to the reimplementation of a feature in a software system, where the surface remains similar although the implementation might become completely different). If a feature is lost in evolution due to an environmental change, but the overall structure remains, it might be redeveloped later on the basis of other genes (e.g. secondary shells in some sea turtles).

In organisms that have a culture, i.e. learned behavior that is passed from one generation to the next (e.g. migratory birds learning a route of migration by following the flock), this learned behavior can become part of a genetic coupling as if there were an underlying gene causing it. One could think of it as a virtual gene becoming part of the coupling of a group of genes, or even starting a new coupling. In this way, invented and culturally transmitted behavior can trigger new spurts of evolution (and as a result, the behavior might become genetic through the selection of genes that make its learning easier).

In the evolution of humans, such processes might have played a role in driving the development of the human brain. However, the direction taken by evolution here was not towards the development of specialized behaviors but towards de-specialization, through alternating increases in the complexity of culture (including language) and in the cognitive capacity of the brain. The trigger might have been a rather unspecialized body with a versatile hand that enabled the development of a large diversity of behaviors.

Language development might have started based only on general intelligence (plasticity), without any language-specific adaptation in the brain or in any other structure (note that all the organs involved initially had other functions: the tongue, lips, teeth etc.). Even the glottis, although already used for communicative sound production in apes, might initially just have had a function in coughing, i.e. in cleaning the bronchial tubes. Secondary adaptation to language led to more elaborate fine motor skills of the speech organs, a higher resolution of the auditory system in the range of language frequencies and volumes, and probably a higher processing capacity of some brain areas. There might also have been some specific adaptations for handling complex grammar, but I guess these are overestimated in classical Chomskyan linguistics. In any case, language was invented first, and then the neural system, auditory system and speech organs adapted to it. It did not emerge at once as a fully developed whole through a single genetic mutation. There might have been a co-evolution of a group of genes optimizing language skills and thus the bandwidth of communication. The culturally invented language might have played the role of the phenotype of a virtual gene (or of a piece of the environment) in this coupling of cooperating genes.

(The picture, showing an old anatomical drawing of the human larynx, is from https://commons.wikimedia.org/wiki/File:Gray960.png)

Ripping Apart the Vacuum

And another one of those brain waves: I am not convinced that what appears to us as the big bang was really the beginning. We might never know; it seems to be an event that destroys any information about what was there before. However, here is one idea I had about it. I am not a physicist, so my ideas about this topic might be total nonsense.

There is this theory that the world will end in a “big rip”, with everything flying apart faster and faster, because dark energy, whatever that is, keeps increasing. Now, if things like planets, stars and atomic nuclei are ripped apart, energy has to be put into them in order to do so. So the big rip is putting energy into the universe (one could view this dark energy as an external source of energy). At some point, when the level of dark energy has reached a high enough value, virtual particles separated by the ripping force will be endowed with enough energy to become real particles. So instead of the universe becoming emptier and emptier, new matter and radiation would be created, starting with long-wavelength, low-energy stuff and then moving to higher and higher energies. One could think of this as the vacuum itself being ripped apart.

As a result, the universe would be filled with extremely hot and dense radiation and matter. So the big rip would lead to a refill of the universe. Any information from the previous universe would be diluted and dispersed by the expansion, so you get a state with very low entropy. Also, the resulting universe would be very flat.

If there are fluctuations in the dark energy, it could drop to near zero in some areas. These areas would suddenly cease to expand quickly. From the inside, such areas would look like universes after a big bang. So the big rip would lead to a similar result as the eternal inflation scenario, with universes arising as bubbles in a quickly expanding environment of radiation and matter.

Another way of looking at this, which I guess is equivalent, is this: the further away something is from us in the expanding universe, the faster it moves away from our point of view, because the space in between is expanding. At some point, the speed of moving away is higher than the speed of light. From our point of view, there is an event horizon at that distance. If the speed of expansion increases, this event horizon comes nearer. It behaves similarly to the event horizon of a black hole. Some pairs of virtual particles are separated, with one member of the pair disappearing behind the event horizon and the other one moving our way. The nearer the event horizon comes, the hotter it becomes. In the big rip, the event horizon around each point becomes very near and very small (in fact, microscopic) and the radiation (akin to Hawking radiation) becomes very intense. Information (entropy) from the previous universe disappears behind the event horizons, so you get a high-temperature, high-density, flat, low-entropy starting condition.
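To give a rough sense of scale for that horizon (my own back-of-the-envelope sketch, not part of the original post): the distance at which the recession speed reaches the speed of light is roughly the speed of light divided by the expansion rate, about 14 billion light years for today’s expansion rate of roughly 70 km/s per megaparsec, and it shrinks in proportion as the expansion rate grows.

```python
# Distance at which the recession speed equals the speed of light, d = c / H.
# Illustrative round numbers; H = 70 km/s/Mpc is an assumed present-day value.
C_KM_S = 299_792.458        # speed of light in km/s
LY_PER_MPC = 3.262e6        # light years per megaparsec

def horizon_distance_gly(H_km_s_per_mpc):
    d_mpc = C_KM_S / H_km_s_per_mpc          # distance in megaparsecs
    return d_mpc * LY_PER_MPC / 1e9          # in billions of light years

print(horizon_distance_gly(70))      # ~14 Gly at today's expansion rate
print(horizon_distance_gly(70e6))    # a million times faster -> a million times nearer
```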

I guess both descriptions are equivalent. The expansion creates energy and matter, and does so ever more intensely the faster it gets. Dark energy (of which there seems to be an inexhaustible amount) turns into normal energy. As a result, new universes are created. The big bang would then be a consequence of the big rip of the previous universe.

Warning: I am not a physicist. I arrived at these ideas in a completely intuitive, non-mathematical way, and I am not good enough at math to try to turn them into a mathematical theory. In fact, these ideas might turn out to be nonsense. Also, the big rip idea put forward by some physicists might turn out to be totally wrong, so one should not take me too seriously. It is also possible that these ideas are not new at all, but I have not seen them anywhere. I am outside my area of expertise here, and these ideas might be rooted in misunderstandings. I leave it to the physicists to decide whether this is crap.