The Universon

[Image: The incomplete circle of everything.svg]

Modern science has restricted the area where one can wildly speculate to some very narrow fields. Gone are the days when natural philosophers could allow their imagination to roam unhindered. Somebody like Leibniz (see my previous post) could still come up with rather crazy ideas. New ways of opening up windows into reality, like the microscope of Antonie van Leeuwenhoek, inspired Leibniz to new speculations about microscopic universes. So while in the long run such inventions put ever tighter restrictions on imagination, for some time they even extended it.

Today, however, only small pockets of speculation are left. One is the era of extremely high energies in the first instants of time after the big bang. Energies were so high that it will always remain impossible to reach them in any experiment. If there are different ideas about how the world functioned in those first instants, we will never be able to distinguish between them by any experiment and rule out the wrong ones. Here the borderline between physics and metaphysics is blurred, and beyond it, you can still indulge in speculation.

The following idea belongs into that realm of speculation. It certainly falls into the “crazy idea of the month” category, and I am not really serious about it.

An unsolved puzzle is why there is matter at all, i.e. why matter and antimatter did not annihilate each other completely. Perhaps there is some asymmetry between them. Well, it looks like there is one, but it does not seem to be big enough. So what happened?

Perhaps matter and antimatter were created in equal amounts and then separated. What could have led to such a separation? One idea that came to my mind is that, although we think of elementary particles as being small, there is, to my knowledge, no known reason why the mass of an elementary particle should not be very large. That the ones we know are small is due to the fact that our particle accelerators, the technology we have to generate them, are limited in their ability to concentrate a large amount of energy in a very small space. But maybe some “elementary particles” are possible that have a much, much larger mass. Suppose there is such a particle whose mass equals the mass of the whole observable universe, or perhaps more. Suppose a pair consisting of such a particle and its antiparticle is generated in the first instant of the universe, perhaps even many such particle-antiparticle pairs. They are then separated by the expansion of the universe and decay into lighter particles. One of them decays into all the particles that make up our universe; its antiparticle decays equally into the antimatter particles of an anti-universe. Since all the matter is initially bundled into one particle and all the antimatter into another, matter and antimatter end up cleanly separated. The result is a set of universes, each consisting predominantly of one type of matter.

I call this speculative kind of particle the “Universon”.

Granted, this is not science but speculative metaphysics. I don’t believe in it, but it is fun to speculate unabated like that. Maybe the old natural philosophers also did not really believe in all the crazy stuff they invented and published. Did Leibniz believe in his monads and microcosms? Maybe; maybe not. Maybe it was just fun. The restrictions imposed by religion had loosened and the restrictions imposed by science had not yet set in.

Thinking underwent a phase transition, from the solid state of the Scholastic-Aristotelian doctrine to a somewhat liquid or even gaseous state. Later it condensed again and crystallized into the new solid state of modern science. In between everything was possible. The discoveries of the time created enough new information to blow apart the old certainties, but not yet enough constraints to force thinking into new ones.

Speculating about the Universon gives me a glimpse of the intellectual fun possible back in those days.

(The pictures are from and . The first picture shows a “Graphic representation of the standard model of elementary particles”. It is interesting that an arrangement of the information in the form of a circle is attempted, not unlike the attempts of old alchemists to arrange everything into a neat order, while the circle and its segments do not have any clear semantics in this graphic representation. The second picture shows a depiction of a version of the old geocentric worldview. There was a considerable degree of speculation and variation even back in medieval times (e.g. the idea of the bishop Robert Grosseteste that the universe started as a point of pure light that expanded into everything), but religion put constraints on speculation.)


There is no Science of Science

There is history of science and there is philosophy of science. There is no science of science. The reason for this is that every formal method is limited (as can be demonstrated mathematically, in computability theory) and as a result, there is no formal or algorithmic way to produce arbitrary scientific knowledge. Every methodology of science must necessarily be incomplete. Therefore, the methodology of science involves creativity, i.e. the ability to go from one formal system to another, and the totality of these processes cannot be described within any single formal theory or algorithm.

As a result, the meta-disciplines of science are a branch of philosophy and will remain so, and science develops historically. These meta-disciplines thus necessarily belong to the humanities, and always will. There are no fixed laws describing what scientists do. Science, if understood as the description of systems following fixed laws, is not applicable to itself.

The Core of Philosophy

In a way (that I am going to explore in some articles on Creativistic Philosophy), one could say that computability theory (which could be called “formalizability theory”), as one can find it in the works of Post, Kleene, Turing, Church and Gödel, forms the very core of philosophy. From here, one can investigate why philosophy still exists, why it will not go away and what is the nature of the analytic/continental divide and the science/humanities divide.

Project Sketch

Sketch of the line of argumentation, to be developed in a sequence of articles. The plan is to write each article in such a way that it appears to be almost trivial. The argument is broken up into very small steps that can be understood without special knowledge of mathematics or computer science. The line of thought should be presented in a form that shows it is actually simple and trivial (which it is).

Programs as finite texts over finite alphabets. Each program only contains a finite amount of information.

Programming languages – Interpreters – Special purpose languages – Universal programming languages – Turing machines and other mathematical “programming languages”

Computable functions. Programs computing functions. Functions as (infinite) lists of input-output pairs. Programs of computable functions as compressed representations of such lists. Regularity in such lists expressed by the programs.
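The idea of a program as a compressed representation of an infinite list of input-output pairs can be sketched in a few lines of Python (the example function is of course just an illustration):

```python
# A total computable function: a short, finite text over a finite alphabet.
def square(n: int) -> int:
    return n * n

# The function "is" the infinite list (0,0), (1,1), (2,4), (3,9), ...
# The program above is a finite, compressed representation of that list:
# it captures the regularity that each output is the input squared.
table = [(n, square(n)) for n in range(5)]
print(table)  # [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16)]
```

No finite list of pairs determines the function; only the program, i.e. the expressed regularity, does.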

Representation of arbitrary data as natural numbers. Representation of programs by natural numbers. Gödel numbers. Results valid for functions (and programs) of natural numbers are valid for functions (and programs) of arbitrary data.
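One concrete way to map arbitrary data onto natural numbers, sketched in Python; this byte-based encoding is just one of many possible Gödel numberings:

```python
# Any piece of data (here: a text, which could itself be a program)
# can be mapped to a unique natural number and back.
def godel_number(text: str) -> int:
    # Interpret the UTF-8 bytes of the text as one big natural number.
    return int.from_bytes(text.encode("utf-8"), "big")

def decode(n: int) -> str:
    length = (n.bit_length() + 7) // 8
    return n.to_bytes(length, "big").decode("utf-8")

source = "def f(x): return x + 1"
n = godel_number(source)
assert decode(n) == source  # the mapping is invertible
```

Because such a mapping is invertible and computable, any result about functions of natural numbers transfers to functions of texts, programs, or any other data.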


Programs computing total functions of natural numbers are not Turing-enumerable. Proof of this by the diagonal method. Constructive nature of this proof. So every algorithm producing programs computing total functions is incomplete. The diagonalization method can always be used to produce another computable function and the program computing it, but although this operation is Turing-computable itself, integrating it into an algorithm yields an incomplete program again. So it must be applied “from the outside”, not under the control of the algorithm itself.
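The diagonal construction can be sketched in Python; the finite list here is just a stand-in for a (necessarily incomplete) enumeration, and `diagonal` is only defined on the indices of that list:

```python
# Suppose an algorithm enumerates programs computing total functions.
# A finite list stands in for that enumeration here.
enumerated = [
    lambda n: n,        # f_0
    lambda n: n * n,    # f_1
    lambda n: 2 ** n,   # f_2
]

# The diagonal construction: a new total function that differs
# from every enumerated f_i at the argument i.
def diagonal(n: int) -> int:
    return enumerated[n](n) + 1

# diagonal differs from each f_i at i, so it cannot be in the list:
for i, f in enumerate(enumerated):
    assert diagonal(i) != f(i)
```

Note that `diagonal` is itself computable, so one could add it to the enumeration, but the same construction then applies to the extended list, which is the point made above.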

Side-step: Turing-enumerability of the programs of a programming language (programming languages are decidable). The halting problem for Turing machines. Impossibility of proving equivalence of arbitrary programs with an algorithm. Impossibility of proving correctness of arbitrary software by an algorithm. Programming is always risky and error-prone.
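The standard self-reference argument behind the halting problem can be written down as Python; `halts` is the hypothetical decider whose existence the argument refutes (it is deliberately left unimplemented, since no implementation can exist):

```python
def halts(program, arg):
    """Hypothetical total decider: True iff program(arg) halts.
    The construction below shows no such function can exist."""
    raise NotImplementedError("no such decider can exist")

def trouble(program):
    # Do the opposite of whatever the decider predicts for
    # `program` run on its own text.
    if halts(program, program):
        while True:       # predicted to halt -> loop forever
            pass
    return "halted"       # predicted to loop -> halt

# If halts(trouble, trouble) returned True, trouble(trouble) would loop
# forever; if it returned False, trouble(trouble) would halt. Either way
# the decider is wrong about `trouble`, so `halts` cannot exist.
```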

The set of programs producing programs that compute total functions is again not Turing-enumerable. Sketch of proof. Productive sets and productive functions. The set of such programs is a productive set. Trying to integrate the productive function into the algorithm again yields an incomplete program. So again, the extension process must be applied from the outside, not under the control of the algorithm itself.

Definition of creative systems. Creative systems cannot be algorithms.

Because of the possibility of Gödelization (mapping of data onto natural numbers) all these results are valid for programs processing arbitrary types of data.

Any kind of knowledge can be viewed as programs calculating total functions or programs producing such programs. Declarative knowledge can be viewed as programs formulated in a special purpose programming language and interpreted by some procedures that act as the interpreter. Applying such knowledge can be viewed as the production and subsequent execution of programs. All these programs halt after some time, so they can be viewed as programs computing total functions.

Creativity (adding new programs to a set of programs that is not Turing-enumerable) is the core of general intelligence. A generally intelligent system cannot be an algorithm but must be a creative system. Any algorithm (even an algorithm producing programs) is limited. It contains a limited amount of knowledge that has a limited reach. General intelligence requires a mechanism to extend the set of programs (the knowledge) but this cannot be part of the system as far as it can be viewed as an algorithm.

Algorithms and formal theories are equivalent notions. There cannot be formal theories of creative systems. If science is about describing systems with fixed laws, creative systems are outside its scope. They are inside the scope of a wider area of “Wissenschaft”, however.

Artificial intelligence may be possible but truly intelligent systems cannot be algorithms. They must contain an extension mechanism not under the control of their algorithmic part.

It is interesting to note that the basic results from computability theory were already known in the 1950s and 1960s (and even earlier), when the traditional AI paradigm was created. The traditional AI paradigm ignored these insights. This is the reason it developed into a dead end. All contemporary “AI” systems can be described as algorithms. Where they contain learning mechanisms, these are limited. It would be interesting to work out the history of early AI to see how this happened. Why were the results of people like Gödel, Turing, Kleene etc. ignored by AI, instead of being turned into the core of the discipline, defining its aim as the development of creative systems, i.e. systems that can go beyond algorithms? Has this been worked out by any historian of science already?

Estimating the Complexity of Innate Knowledge


The following is a very crude estimate of the informational complexity of the innate knowledge of human beings. To be more exact, it is a crude estimate of an upper limit on the information content of this knowledge. It might be off by an order of magnitude or so, so this is a “back of an envelope” or “back of a napkin” kind of calculation. It just indicates a direction in which a more accurate calculation could be attempted.

According to the Human Proteome Project, about 67 % of the human genes are expressed in the brain. Most of these genes are also expressed in other parts of the body, so they probably form part of the general biochemical machinery of all cells. However, 1223 genes have an elevated level of expression in the brain. In one way or another, the brain-specific structures must be encoded in these genes, or mainly in these genes.

There are about 20,000 genes in the human genome, so the 1223 brain-specific genes amount to about 6.115 % of our genes. Probably we share many of these with primates and other animals, like rodents, so the really human-specific part of the brain-specific genes might be much smaller. However, I am only interested here in an order-of-magnitude result for an upper limit.

I have no information about the total length of these brain-specific genes, so I can only assume that they have average length.

The human genome has about 3,095,693,981 base pairs (of course, there is variation here). Only about 2 % of this is coding DNA. There is also some non-coding DNA that has a function (in regulation, or in the production of some types of RNA), so let us assume that the functional part of the genome is maybe 3 %. That makes something in the order of 92-93 million base pairs with a function (probably less), or 30 to 31 million triplets. If the brain-specific genes have average length, 6.115 % of this would be brain-specific, which is something like 1.89 million triplets.

The triplets code for 20 amino acids; there are also start and stop signals. The exact information content of a triplet would depend on how often it appears, and the triplets are definitely not equally distributed, but let us assume that each of them codes for one out of 20 possibilities (calculating the exact information content of a triplet would require much more sophisticated reasoning and specific information, but for our purposes, this is enough). The information content of a triplet can then be estimated as the base-2 logarithm of 20 (you need 4 bits to encode 16 possibilities and 5 bits to encode 32 possibilities, so this should be between 4 and 5 bits). The base-2 logarithm of 20 is about 4.322. So we multiply this by the number of triplets and get about 8,200,549 bits. This is about 1,025,069 bytes, or roughly a megabyte (something like 200-300 times the length of this blog article).
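The arithmetic of this estimate can be sketched in a few lines of Python; the fractions and counts are the assumptions made above, not measured values:

```python
from math import log2

genome_bp = 3_095_693_981        # base pairs in the human genome
functional_fraction = 0.03       # assumption: ~3 % of the genome is functional
brain_fraction = 1223 / 20_000   # 1223 brain-elevated genes out of ~20,000

triplets = genome_bp * functional_fraction / 3   # ~31 million codons
brain_triplets = triplets * brain_fraction       # ~1.89 million codons
bits = brain_triplets * log2(20)                 # ~4.32 bits per codon
megabytes = bits / 8 / 1_000_000

print(f"{brain_triplets:,.0f} triplets, {bits:,.0f} bits, {megabytes:.2f} MB")
```

Running this reproduces the order of magnitude claimed above: roughly eight million bits, i.e. about one megabyte.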

So the information content of the brain coding genes that determine the structure of the brain is in the order of a megabyte (smaller than many pieces of commercial software). The structure of the brain is generated by the information contained in these genes. This is probably an overestimate because many of these genes might not be involved in the encoding of the connective pattern of the neurons, but, for example, in the glial immune system of the brain or other brain specific, “non-neuronal” stuff.

If the brain’s structure is encoded in these genes, the information content of these structures cannot be larger than the information content of the genes. Since there are vastly more neurons and connections than that, a lot of their connectivity must be repetitive. Indeed, the cortex consists of neuronal columns that show a lot of repetitive structure. If one described the innate brain circuitry, i.e. the circuitry found in a newborn (or developing in the small child through processes of maturation), and compressed that description to the smallest possible size, thereby determining its information content, that information content could not be larger than the information content of the genes involved in its generation. The process of transcribing those genes and building the brain structures as a result can be viewed as a process of information transformation, but it cannot create new information not contained in those genes. The brain structure might contain random elements (i.e. new information created by random processes) and information taken up from the environment through processes of perception, experimentation and learning, but this additional information is, by definition, not part of the innate structures. So the complexity of the innate structures or the innate knowledge, i.e. the complexity of the innate developmental core of cognition, must be limited by the information content of the genes involved in generating the brain.

The above calculation shows that this should be in the order of magnitude of a megabyte or so.

This means also that the minimum complexity of an artificial intelligent system capable of reaching human-type general intelligence cannot be larger than that.

We should note, however, that human beings who learn and develop their intelligence are embedded in a world they can interact with through their bodies and senses and that they are embedded into societies. These societies are the carriers of cultures whose information content is larger by many orders of magnitude. The question would be if it is possible to embed an artificial intelligent system into a world and a culture or society in a way to enable it to reach human-like intelligence. (This also raises ethical questions.)

In other words, the total complexity of the innate knowledge of humans can hardly exceed the amount of information contained in an average book, and is probably much smaller. It cannot be very sophisticated or complex.

(The picture, displaying the genetic code, is from

Thoughts about Form and Emptiness

Cognition has no fixed form. Its form is transient and in flux. New cognitive structures (methods of thinking and representations of information) might arise all of the time (by “representation” I do not necessarily mean a conceptual structure; that is just one possibility). When we are born, we start with some very simple form of cognition. This is then modified and dissolved, so that in later life, we will probably no longer use most of the cognitive structures we used when we were born. So it does not make much sense to talk of a “human nature” or a “nature of the mind”. There is no fixed core structure because every structure can be changed.

The core of cognition is empty. The core of language is empty. The core of perception is empty, although there might be some “hard-wired” structures in the lower stages of the processing of sensory information. When I say “the core of perception is empty”, I mean that if these “hard-wired” – or better: “pre-formed” – structures were not there, we could develop them, although it would then take us more time to learn anything initially. We do not need pre-existing categories and sensory forms in the Kantian sense; we could develop them from experience (and to an extent, we probably do). A lot of what Kant thought of as necessary prerequisites of gaining knowledge can be gained from experience. Pre-formed structures might exist in the newborn or might become active later in our development, but if they were not there, we could do without them; they just give us a faster start. And they do not remain unchanged during our development.

The quest of cognitive science to find the core of cognition is misled because such a core does not exist. Form exists, but only at a given moment. At any moment, the mind is implemented in terms of a certain physical system. But if one tries to abstract over all cognitive processes, nothing remains.
The overall form of all cognition is empty (this might or might not be connected to the Buddhist teachings about the non-duality of form and emptiness – I am not an expert on Buddhist and related philosophies but maybe there is a connection here).

The classical model of cognitive science and AI assumes that there is a fixed and unchanging core of cognitive processes, an unchanging machine with an unchanging language, and that you can find out its structure. This does not work. It does not even work for technical systems like computers. You might have a fixed hardware core, but its structure is irrelevant. You can use a computer with a certain machine language to implement the language and interfaces of another one, i.e. you can implement virtual machines. A system described on a higher level can be implemented on machines with different physical structures. You can invent new representations (think of all the different graphics and sound formats) and you can invent new ways of using the machine. Computing has no fixed form and it has an empty core. In a way, at the core you have a programming language, but you can use that to implement any other programming language. This does not necessarily mean that we have to think of cognition as something computational (although that is a possibility), but we might use this as a metaphor: a system that has no fixed structure, so any momentary structure it has can be changed, and that, as a result, has an empty core (if you want to call that a core). Computation does not have a fixed form; its core is empty. Technology does not have a fixed form; its core is empty. Science does not have a fixed form; its core is empty. The internet has no fixed structure; it has an empty core. Human language does not have a fixed form; its core is empty. Human culture does not have a fixed form; its core is empty. There is no complete formal theory for any of these things.
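As a toy illustration of the virtual-machine point, here is a sketch in Python: one “machine” (the Python interpreter) interpreting programs of a small, made-up stack language. All the names and the instruction set are invented for this example:

```python
# A toy "virtual machine": Python (one machine language) interpreting
# programs written in another, invented stack language.
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])    # put a literal value on the stack
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
        else:
            raise ValueError(f"unknown instruction: {op}")
    return stack.pop()

# (2 + 3) * 4, written in the toy language:
result = run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)])
print(result)  # 20
```

The point is that the “machine” running these programs has no fixed form either: a few more branches in `run` would give it a different instruction set, and the toy language could in turn be used to implement yet another machine.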

Some Ideas about Evolution


Some draft notes on some ideas about evolution:

In sexually reproducing species where genes can be exchanged and where partial solutions to problems dispersed in a population can come together in some individual, evolution should not be thought of as a linear process.

Instead, evolution may be viewed as a co-evolution of genes within a species. Just as in a symbiosis, where varieties of the organisms taking part select each other if they fit together to produce an overall system that works well, the genes within a species may be seen as co-evolving species, provided the species has sexual reproduction, which enables those genes to be combined in different ways. An organism can thus be viewed as a symbiosis of one-gene species that co-evolve.

Even single genes might be the result of coevolution if a process of crossing over, as the analogue of sexuality on the level of single genes, allows them to exchange bits of genetic sequences that code for different domains of proteins.

For a new feature to evolve, what is needed then is an initial coupling of co-operating genes. This initial coupling, i.e. their co-operation in producing some functionality in the phenotype, might be very weak. But as soon as it is there, mutual selection of gene variants that fit together might set in, allowing the population to quickly “zoom in” or “converge” on an optimized version of the new feature. One can think of this as a process of mutual filtering of gene variants.

For example, in a wind-pollinating plant species, there is a co-evolution between the genes controlling the properties of the pollen and the properties of the pollen-catching organs of the female flower. While in this example you literally have some “mutual filtering” of genes, the idea can be applied much more widely.

As a result of such a co-evolution of genes starting with simple couplings, new features may evolve very quickly, within a few generations, while the resulting forms might then be stable for a long time, since “aberrations” (divergences from the optimal cooperation) will be selected away by the other genes belonging to the group of co-operating genes. The whole group of genes forms part of the evolutionary environment of every gene taking part. This stability will last as long as the environmental conditions remain stable, or until a genetic innovation creates a new coupling of features that drives the process somewhere else.

New genes might be included into the process even if they offer only tiny optimizations. It is even possible that pieces of genetic material that at one point were non-coding, like introns, become coding, even if their protein products have only very tiny effects initially, and then develop into important genes by being “guided” in a co-evolutionary process of mutual filtering of gene variants.

The co-operating and co-evolving genes will together form some aspect of the phenotype. As a result, in many instances properties of organisms will not be controlled by a single gene but by a multitude of genes.

The genes coding for a feature might be replaced by others in such a cooperation. Some genes might be drawn into a cooperative complex and others disappear from it. As a result, in many instances similar phenotypical features in closely related species might have a very different genetic basis (comparable to the reimplementation of a feature in a software system, where the surface remains similar although the implementation might become completely different). If a feature is lost in evolution due to an environmental change, but the overall structure remains, it might be redeveloped later on the basis of other genes (e.g. secondary shells in some sea turtles).

In organisms that have a culture, i.e. learned behavior that is passed from one generation to the next (e.g. migratory birds learning a route of migration by following the flock), this learned behavior can become part of a genetic coupling as if there was an underlying gene causing it. One could think of it as a virtual gene becoming part of the coupling of a group of genes or even starting a new coupling. In this way, invented and culturally transmitted behavior can trigger new spurts of evolution (and as a result, the behavior might become genetic by the selection of genes that make its learning easier).

In the evolution of humans, such processes might have played a role in driving the development of the human brain. However, the direction taken by evolution here was not towards the development of specialized behaviors but towards de-specialization, through alternating increases in the complexity of culture (including language) and in the cognitive capacity of the brain. The trigger might have been a rather unspecialized body with a versatile hand that enabled the development of a large diversity of behaviours.

Language development might have started based on general intelligence (plasticity) alone, without any language-specific adaptation in the brain or in any other structure (note that all the organs involved initially had other functions: the tongue, lips, teeth etc.). Even the glottis, although already used for communicative sound production in apes, initially might just have had a function in coughing, i.e. in cleaning the bronchial tubes. Secondary adaptation to language then led to more elaborate fine motor skills of the speech organs, a higher resolution of the auditory system in the range of language frequencies and volumes, and probably a higher processing capacity of some brain areas. There might also have been some specific adaptations to handling complex grammar, but I guess these are overestimated in classical Chomskyan linguistics. In any case, language was invented first, and then the neural system, auditory system and speech organs adapted to it. It did not emerge at once as a fully developed whole by a single genetic mutation. There might thus have been a co-evolution of a group of genes optimizing the language skills and thus the bandwidth of communication. The culturally invented language might have played the role of the phenotype of a virtual gene (or of a piece of environment) in this coupling of cooperating genes.

(The picture, showing an old anatomical drawing of the human larynx, is from