The Universon

[Image: "The incomplete circle of everything.svg" – a graphic representation of the standard model of elementary particles]

Modern science has restricted the area where one can wildly speculate to some very narrow fields. Gone are the days when natural philosophers could allow their imagination to roam unhindered. Somebody like Leibniz (see my previous post) could still come up with rather crazy ideas. New windows into reality, like the microscopes of Antoni van Leeuwenhoek, inspired Leibniz to new speculations about microscopic universes. So while in the long run such inventions put ever tighter restrictions on imagination, for some time they even extended it.

Today, however, only small pockets of speculation are left. One is the era of extremely high energies in the first instants after the big bang. The energy was so high that it will always remain impossible to reach it in any experiment. If there are different ideas about how the world functioned in those first instants, we will never be able to distinguish between them by any experiment and rule out the wrong ones. Here the borderline between physics and metaphysics is blurred, and beyond it you can still indulge in speculation.

The following idea certainly belongs to that realm of speculation. It falls squarely into the “crazy idea of the month” category, and I am not really serious about it.

An unsolved puzzle is why there is matter at all, why matter and antimatter did not cancel each other out completely. Perhaps there is some asymmetry between them. Well, it looks like there is one, but it seems not to be big enough. So what happened?

Perhaps matter and antimatter were created in equal amounts and then separated. What could have led to such a separation? One idea that came to my mind is this: although we think of elementary particles as being small, there is, to my knowledge, no known reason why the mass of an elementary particle should not be very large. That the ones we know are small is due to the fact that our particle accelerators, the technology we have to generate them, are limited in their ability to concentrate a large amount of energy in a very small space. But maybe some “elementary particles” are possible that have a much, much larger mass. Suppose there is such a particle whose mass equals the mass of the whole observable universe, or perhaps more. Suppose a pair consisting of such a particle and its antiparticle was generated in the first instants of the universe, perhaps even many such particle-antiparticle pairs. They are then separated by the expansion of the universe, and then they decay into lighter particles. One of them decays into all the particles that make up our universe; its antiparticle decays equally into the antimatter particles of an anti-universe. Since all the matter is initially bundled into one particle and all the antimatter into another, matter and antimatter are cleanly separated. The result is a set of universes, each consisting predominantly of one type of matter.

I call this speculative kind of particle the “Universon”.

Granted, this is not science but speculative metaphysics. I don’t believe in it, but it is fun to speculate so freely. Maybe the old natural philosophers also did not really believe in all the crazy stuff they invented and published. Did Leibniz believe in his monads and microcosms? Maybe; maybe not. Maybe it was just fun. The restrictions imposed by religion had loosened and the restrictions imposed by science had not yet set in.

Thinking underwent a phase transition, from the solid state of the Scholastic-Aristotelian doctrine to a somewhat liquid or even gaseous state. Later it condensed again and crystallized into the new solid state of modern science. In between everything was possible. The discoveries of the time created enough new information to blow apart the old certainties, but not yet enough constraints to force thinking into new ones.

Speculating about the Universon gives me a glimpse of the intellectual fun possible back in those days.

(The first picture shows a “Graphic representation of the standard model of elementary particles”. It is interesting that an arrangement of the information in the form of a circle is attempted, not unlike the attempts of old alchemists to arrange everything into a neat order, while the circle and its segments do not have any clear semantics in this graphic representation. The second picture shows a depiction of a version of the old geocentric worldview. There was a considerable degree of speculation and variation even back in medieval times (e.g. the idea of the bishop Robert Grosseteste that the universe started as a point of pure light that expanded into everything), but religion put constraints on speculation.)


Why Philosophy is Going to Stay

In a nutshell, philosophy deals with those subjects that cannot be completely formalized. The sciences are about areas of knowledge for which complete formal theories are possible. Scientism includes the belief that what is formalizable and what is real are the same, i.e. that everything can be described in terms of formal theories. Analytic philosophy is trying to turn philosophy into a science, but if everything can be formalized, philosophy is (as some scientists state) unnecessary, so analytic philosophy is making itself obsolete.

However, if, as I think, reality cannot be completely formalized, science is inherently incomplete and philosophy is not going away. In particular, human cognitive processes cannot be completely formalized, even in principle. Each formal description of such processes is incomplete and partial. Cognition develops historically (something formalizable systems don’t do). Cognitive science then turns out not to be a science but a historical discipline. Human thinking does not follow fixed laws.

As a result, there is no complete and at the same time exact formal description of cognition, and its products, like society, culture, and even science itself, cannot be described completely in terms of a single formal theory. Philosophy is not going away. As long as you do “normal science” in the Kuhnian sense, you don’t need philosophy, but if you are working in any field of the humanities, or in psychology or the “social sciences”, you permanently need philosophy. Here, you do not have a fixed methodology. You have to be reflexive and look at what you are doing from a meta- (and meta-meta-…) level all of the time; you have to look at it critically all of the time. In the sciences, you also have to do that, but only occasionally, when you bump into anomalies and have to shift your paradigm.

In mathematics, there are entities for which we can prove that a complete formal description is impossible. If such entities exist in mathematics, there is no a priori reason why they should not also exist in physical reality. Human beings and their societies and cultures seem to be such entities, for which a complete formalization is impossible. If that is so, philosophy is not going to go away.

Philosophical Excavations

I have started a new blog to publish some research into the history of philosophy as well as some reflections and meta-level thoughts about the results of that research. I have published an introductory article, Starting to Dig, and a first research article on the little-known Austrian philosopher Karl Faigl (more articles on him are planned). My first “philosophical digging campaign” is concentrating on some (predominantly right-wing) philosophy from early 20th century Germany and Austria. If you are interested in this project, just follow that blog. I will only publish there occasionally, since the time I can spend on this project is currently quite limited, but I hope that bit by bit I will be able to present some interesting stuff (about this particular direction of philosophy as well as some others).

There is no Science of Science

There is history of science and there is philosophy of science. There is no science of science. The reason for this is that every formal method is limited (as can be demonstrated mathematically, in computability theory) and as a result, there is no formal or algorithmic way to produce arbitrary scientific knowledge. Every methodology of science must necessarily be incomplete. Therefore, the methodology of science involves creativity, i.e. the ability to go from one formal system to another, and the totality of these processes cannot be described within any single formal theory or algorithm.
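To illustrate the kind of mathematical limit I have in mind here (this is only an illustrative sketch, not part of the argument above), the classic halting-problem diagonalization from computability theory can be written down in a few lines of Python; the function names are made up for the illustration:

```python
# Sketch of Turing's diagonalization argument (illustrative only).
# Assume, for contradiction, that a total decision procedure
# halts(program, argument) existed that always predicts whether
# program(argument) eventually stops.

def halts(program, argument):
    """Hypothetical halting oracle: provably cannot be implemented."""
    raise NotImplementedError("no algorithm decides halting for all inputs")

def diagonal(program):
    """Do the opposite of what `halts` predicts for `program` run on itself."""
    if halts(program, program):
        while True:      # predicted to halt -> loop forever
            pass
    else:
        return           # predicted to loop -> halt immediately

# diagonal(diagonal) contradicts any answer halts() could give:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
# if it were False, diagonal(diagonal) would halt. Hence no such `halts` exists,
# and every fixed formal method leaves questions it cannot answer.
```

This is the sense in which no single algorithm can generate all scientific knowledge: any fixed formal system runs into questions of this kind that it cannot settle from within.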

As a result, the meta-discipline of science is a branch of philosophy, and science develops historically. The meta-disciplines of science thus necessarily belong to the humanities and will always remain there. There are no fixed laws describing what scientists do. Science, if understood as the description of systems following fixed laws, is not applicable to itself.

The Core of Philosophy

In a way (that I am going to explore in some articles on Creativistic Philosophy), one could say that computability theory (which could be called “formalizability theory”), as one can find it in the works of Post, Kleene, Turing, Church and Gödel, forms the very core of philosophy. From here, one can investigate why philosophy still exists, why it will not go away, and what the nature of the analytic/continental divide and the science/humanities divide is.

Estimating the Complexity of Innate Knowledge


The following is a very crude estimate of the informational complexity of the innate knowledge of human beings. To be more exact, it is a crude estimate of an upper limit on the information content of this knowledge, and it might be off by an order of magnitude or so. This is a “back of an envelope” or “back of a napkin” kind of calculation; it just gives a direction in which to look for a more accurate calculation.

According to the human proteome project, about 67 % of human genes are expressed in the brain. Most of these genes are also expressed in other parts of the body, so they probably form part of the general biochemical machinery of all cells. However, 1223 genes have an elevated level of expression in the brain. In one way or the other, the brain-specific structures must be encoded in these genes, or mainly in these genes.

There are about 20,000 genes in the human genome, so those 1223 genes make up about 6.115 % of our genes. Probably we share many of these with primates and other animals, like rodents, so the really human-specific part of the brain-specific genes might be much smaller. However, I am only interested here in an order-of-magnitude result for an upper limit.

I have no information about the total length of these brain-specific genes, so I can only assume that they have average length.

The human genome has about 3,095,693,981 base pairs (of course, there is variation here). Only about 2 % of this is coding DNA. There is also some non-coding DNA that has a function (in regulation, or in the production of some types of RNA), but let us assume that the functional part of the genome is maybe 3 %. That makes something in the order of 92 to 93 million base pairs with a function (probably less), or 30 to 31 million triplets. If the brain genes have average length, 6.115 % of this would be brain-specific, which is something like 1.89 million triplets.

The triplets code for 20 amino acids; there are also start and stop signals. The exact information content of a triplet would depend on how often it appears, and the triplets are definitely not equally distributed, but let us assume that each of them codes for one out of 20 possibilities (calculating the exact information content of a triplet would require much more sophisticated reasoning and specific information, but for our purposes this is enough). The information content of a triplet can then be estimated as the binary logarithm of 20 (you need 4 bits to encode 16 possibilities and 5 bits to encode 32 possibilities, so this should be between 4 and 5 bits). The binary logarithm of 20 is about 4.322. So we multiply this by the number of triplets and get 8,200,549 bits. This is 1,025,069 bytes, or roughly a megabyte (something like 200 to 300 times the length of this blog article).
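For what it is worth, here is the whole back-of-the-napkin chain as a few lines of Python. The input figures are just the rough ones quoted above, so the output is only an order-of-magnitude check; slight differences from the numbers in the text come from rounding along the way.

```python
from math import log2

# Rough figures quoted above
genome_base_pairs   = 3_095_693_981   # base pairs in the human genome
functional_fraction = 0.03            # assume ~3 % of the genome is functional
brain_genes         = 1223            # genes with elevated expression in the brain
total_genes         = 20_000          # approximate number of human genes

brain_fraction = brain_genes / total_genes              # ~6.1 %
functional_bp  = genome_base_pairs * functional_fraction
triplets       = functional_bp / 3                      # codons in the functional part
brain_triplets = triplets * brain_fraction              # assuming average gene length

bits_per_triplet = log2(20)           # ~4.32 bits if a codon picks 1 of 20 amino acids
total_bits  = brain_triplets * bits_per_triplet
total_bytes = total_bits / 8

print(f"brain-specific fraction of genes: {brain_fraction:.3%}")
print(f"brain-specific triplets: {brain_triplets:,.0f}")
print(f"information content: {total_bits:,.0f} bits = {total_bytes:,.0f} bytes")
```

Running this gives roughly 1.9 million brain-specific triplets and about a megabyte of information, the same order of magnitude as the figures above.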

So the information content of the brain-coding genes that determine the structure of the brain is in the order of a megabyte (smaller than many pieces of commercial software). The structure of the brain is generated by the information contained in these genes. This is probably an overestimate, because many of these genes might not be involved in encoding the connective pattern of the neurons but, for example, in the glial immune system of the brain or other brain-specific, “non-neuronal” stuff.

If the brain’s structure is encoded in these genes, the information content of these structures cannot be larger than the information content of the genes. Since there are many more neurons than the genes could specify individually, a lot of their connectivity must be repetitive. Indeed, the cortex consists of neuronal columns that show a lot of repetitive structure. If one were to describe the innate brain circuitry, i.e. the circuitry found in a newborn (or developing in the small child through processes of maturation), and compress that description to the smallest possible size, thereby determining its information content, that information content cannot be larger than the information content of the genes involved in its generation. The process of transcribing those genes and building the brain structures as a result can be viewed as a process of information transformation, but it cannot create new information not contained in those genes. The brain structure might contain random elements (i.e. new information created by random processes) and information taken up from the environment through processes of perception, experimentation and learning, but this additional information is, by definition, not part of the innate structures. So the complexity of the innate structures or the innate knowledge, i.e. the complexity of the innate developmental core of cognition, must be limited by the information content of the genes involved in generating the brain.
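One compact way to state this bound (my own formalization, not something spelled out above) is in terms of Kolmogorov complexity K: if the innate circuitry C is computed from the brain-related genes G by a fixed developmental process, then

K(C) \le K(G) + O(1)

where the O(1) term stands for the simplifying assumption that the developmental machinery itself adds only a fixed overhead, and where K(G) is bounded by the roughly one megabyte estimated above, so the same bound applies to the innate structures.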

The above calculation shows that this should be in the order of magnitude of a megabyte or so.

This also means that the minimum complexity of an artificial intelligent system capable of reaching human-type general intelligence cannot be larger than that.

We should note, however, that human beings who learn and develop their intelligence are embedded in a world they can interact with through their bodies and senses and that they are embedded into societies. These societies are the carriers of cultures whose information content is larger by many orders of magnitude. The question would be if it is possible to embed an artificial intelligent system into a world and a culture or society in a way to enable it to reach human-like intelligence. (This also raises ethical questions.)

In other words, the total complexity of the innate knowledge of humans can hardly exceed the amount of information contained in an average book, and is probably much smaller. It cannot be very sophisticated or complex.

(The picture displays the genetic code.)

Some Ideas about Evolution


Some draft notes on some ideas about evolution:

In sexually reproducing species where genes can be exchanged and where partial solutions to problems dispersed in a population can come together in some individual, evolution should not be thought of as a linear process.

Instead, evolution may be viewed as a co-evolution of genes within a species. Just as in a symbiosis the varieties of the participating organisms select each other when they fit together to produce an overall system that works well, the genes within a species may be seen as co-evolving species, provided the species has sexual reproduction, which enables those genes to be combined in different ways. An organism can thus be viewed as a symbiosis of one-gene species that co-evolve.

Even single genes might be the result of co-evolution if a process of crossing over, as the analogue of sexuality on the level of single genes, allows them to exchange bits of genetic sequence that code for different domains of proteins.

For a new feature to evolve, what is needed then is an initial coupling of co-operating genes. This initial coupling, i.e. their co-operation in producing some functionality in the phenotype, might be very weak. But as soon as it is there, mutual selection of gene variants that fit together might set in, causing the population to quickly “zoom in” or “converge” on an optimized version of the new feature. One can think of this as a process of mutual filtering of gene variants.

For example, in a wind-pollinated plant species, there is a co-evolution between the genes controlling the properties of the pollen and those controlling the properties of the pollen-catching organs of the female flower. While in this example you literally have some “mutual filtering” of genes, the idea can be applied much more widely.
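As a purely illustrative toy model (my own sketch, not part of the argument), this mutual filtering can be simulated in a few lines of Python: two loci, one for a “pollen” trait and one for a “stigma” trait, are only highly fit in matching combinations, and the population converges on a matching pair within a few generations:

```python
import random

# Toy model of mutual filtering between two co-evolving loci.
# Fitness is high only when the two variants carried by an individual match.

VARIANTS = [0, 1, 2, 3]      # possible variants per locus
POP_SIZE = 200
GENERATIONS = 15

def fitness(ind):
    pollen, stigma = ind
    return 1.0 if pollen == stigma else 0.1   # matching pairs strongly favored

def next_generation(pop):
    weights = [fitness(ind) for ind in pop]
    children = []
    for _ in range(POP_SIZE):
        # sexual reproduction: each locus comes from a fitness-weighted parent
        p1 = random.choices(pop, weights)[0]
        p2 = random.choices(pop, weights)[0]
        child = [p1[0], p2[1]]
        if random.random() < 0.01:            # rare mutation
            child[random.randrange(2)] = random.choice(VARIANTS)
        children.append(tuple(child))
    return children

population = [(random.choice(VARIANTS), random.choice(VARIANTS))
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    matched = sum(1 for ind in population if ind[0] == ind[1])
    print(f"generation {gen:2d}: {matched / POP_SIZE:.0%} matching pairs")
    population = next_generation(population)
```

Starting from about 25 % matching pairs, the population typically reaches almost 100 % within a handful of generations, because gene variants that fit together keep selecting each other.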

As a result of such a co-evolution of genes, starting from simple ones, new features may evolve very quickly, within a few generations, while the resulting forms might then be stable for a long time, since “aberrations” (diverging from the optimal cooperation) will be selected away by the other genes belonging to the group of co-operating genes. The whole group of genes forms part of the evolutionary environment of every gene taking part. This stability will last as long as the environmental conditions remain stable, or until a genetic innovation creates a new coupling of features that drives the process somewhere else.

New genes might be included in the process even if they offer only tiny optimizations. It is even possible that pieces of genetic material that at one point were non-coding, like introns, become coding, even if their protein products have only very tiny effects initially, and then develop into important genes by being “guided” in a co-evolutionary process of mutual filtering of gene variants.

The co-operating and co-evolving genes will together form some aspect of the phenotype. As a result, in many instances properties of organisms will not be controlled by a single gene but by a multitude of genes.

The genes coding for a feature might be replaced by others in such a cooperation. Some genes might be drawn into a cooperative complex and others might disappear from it. As a result, similar phenotypical features in closely related species might have a very different genetic basis (comparable to the reimplementation of a feature in a software system, where the surface remains similar although the implementation might become completely different). If a feature is lost in evolution due to an environmental change, but the overall structure remains, it might be redeveloped later on the basis of other genes (e.g. secondary shells in some sea turtles).

In organisms that have a culture, i.e. learned behavior that is passed from one generation to the next (e.g. migratory birds learning a route of migration by following the flock), this learned behavior can become part of a genetic coupling as if there were an underlying gene causing it. One could think of it as a virtual gene becoming part of the coupling of a group of genes, or even starting a new coupling. In this way, invented and culturally transmitted behavior can trigger new spurts of evolution (and as a result, the behavior might become genetic through the selection of genes that make its learning easier).

In the evolution of humans, such processes might have played a role in driving the development of the human brain. However, the direction taken by evolution here was not towards the development of specialized behaviors but towards de-specialization, through alternating increases in the complexity of culture (including language) and in the cognitive capacity of the brain. The trigger might have been a rather unspecialized body with a versatile hand that enabled the development of a large diversity of behaviors.

Language development might have started on the basis of general intelligence (plasticity) alone, without any language-specific adaptation in the brain or in any other structure (note that all the organs involved, the tongue, lips, teeth etc., initially had other functions). Even the glottis, although already used for communicative sound production in apes, initially might just have had a function in coughing, i.e. in cleaning the bronchial tubes. Secondary adaptation to language led to more elaborate fine motor skills of the speech organs, a higher resolution of the auditory system in the range of language frequencies and volumes, and probably a higher processing capacity of some brain areas. There might also have been some specific adaptations to handling complex grammar, but I guess these are overestimated in classical Chomskyan linguistics. In any case, language was invented first, and then the neural system, the auditory system and the speech organs adapted to it; it did not emerge at once as a fully developed whole through a single genetic mutation. There might have been a co-evolution of a group of genes optimizing the language skills and thus the bandwidth of communication. The culturally invented language might have played the role of the phenotype of a virtual gene (or of a piece of the environment) in this coupling of cooperating genes.

(The picture shows an old anatomical drawing of the human larynx.)