Notes on Conditions for the Possibility of Knowledge

Kant thought that certain categories and certain „Anschauungsformen“ (forms of intuition, literally “forms of looking at”) must exist in the human mind as preconditions of the possibility of gaining knowledge. One of the forms of intuition he thought must be there is that of space.

Think of an artificial system that combines sensors with some processing of the information those sensors yield, for example a self-driving car. Such a system contains a representation of space. Spatial information, both the “map” information the car carries and what it “perceives” through its cameras and other sensors (e.g. GPS receivers, radar, accelerometers), is represented by means of data structures. These data structures are interpreted by programs, and it is this interpretation that makes them a representation of space.
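To make this concrete, here is a minimal sketch (in Python; all names are hypothetical, not taken from any real driving system) of such a data structure: an occupancy grid. The array by itself is just numbers; only the functions that read and write it in terms of metric coordinates make it a representation of space.

```python
import numpy as np

class OccupancyGrid:
    """A 2-D grid of cells; each cell stores an occupancy value."""

    def __init__(self, width_m, height_m, cell_size_m):
        self.cell_size = cell_size_m
        self.grid = np.zeros((int(height_m / cell_size_m),
                              int(width_m / cell_size_m)))

    def world_to_cell(self, x_m, y_m):
        # Interpretation step: map metric coordinates to array indices.
        return int(y_m / self.cell_size), int(x_m / self.cell_size)

    def mark_obstacle(self, x_m, y_m):
        # A sensor reading ("obstacle at (x, y)") updates the structure.
        row, col = self.world_to_cell(x_m, y_m)
        self.grid[row, col] = 1.0

    def is_free(self, x_m, y_m):
        row, col = self.world_to_cell(x_m, y_m)
        return self.grid[row, col] < 0.5

grid = OccupancyGrid(width_m=100.0, height_m=100.0, cell_size_m=0.5)
grid.mark_obstacle(12.3, 45.6)   # e.g. a radar return
print(grid.is_free(12.3, 45.6))  # False: that cell now counts as occupied
```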

So in such a system, a representation of space is a piece of software. It can be programmed into the system (parts of it might be implemented in hardware to gain speed, but it could be implemented entirely as software). The system can be brought from a state where it does not contain this structure to a state where it does by a process of programming: some information or data (the program) is entered into it, changing the way the system works. We can think of the program as a piece of knowledge.
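A toy sketch of this point (hypothetical, not a real system): the “program” below is itself an ordinary piece of data, a function object, and installing it moves the system from a state without a spatial representation to a state with one.

```python
import math

class Vehicle:
    def __init__(self):
        self.interpret = None  # initial state: no representation of space

    def install(self, program):
        # "Programming": information is entered into the system,
        # changing the way it works.
        self.interpret = program

def spatial_interpreter(reading):
    # Interprets a raw (angle, distance) pair as a point in the plane.
    angle, distance = reading
    return (distance * math.cos(angle), distance * math.sin(angle))

car = Vehicle()
car.install(spatial_interpreter)
print(car.interpret((0.5, 10.0)))  # the raw pair now denotes a location
```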

If it can be programmed, it should also be possible for such a structure to be learned. We can think of a learning process as one in which a learning system finds regularities in some data and generates a piece of knowledge capturing those regularities. By applying that knowledge, the data can be simplified, i.e. compressed. A spatial representation, for example, can be used to interpret a stream of input data as two-dimensional images and to interpret these in turn as projections of a three-dimensional world. What is a piece of software in an artificial system like the self-driving car can be viewed as a piece of knowledge in an animal or human being. Like any piece of perceptive knowledge, it captures some regularity in the sense data. I can see no reason why such a piece of knowledge should not be the result of a learning process.
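The link between regularity and compression can be shown with a toy example (hypothetical, and much simpler than vision): once a learner has captured the regularity in a stream, it only needs to store the model plus small residuals instead of the raw values.

```python
import numpy as np

t = np.arange(100)
data = 3.0 * t + 2.0 + np.random.normal(0, 0.1, size=t.shape)

# "Learning": find the regularity (here, a line) from the data itself.
slope, intercept = np.polyfit(t, data, deg=1)

# "Applying the knowledge": the stream becomes model + tiny residuals.
residuals = data - (slope * t + intercept)

# The residuals have far less spread than the raw data, so they can be
# stored in far fewer bits: the knowledge compresses the data.
print(np.std(data), np.std(residuals))
```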

This means I have no problem imagining a system that learns a spatial representation, starting from a state of development where it does not have one. For example, it should be possible to build a learning self-driving robot that drives around in a sufficiently rich environment, generates in this way a sequence of inputs that allows for a spatial interpretation, and develops a spatial representation through a learning process.
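A sketch of such a setup (purely illustrative; every detail is an assumption): a simulated robot wanders a small two-dimensional world and records pairs of movements and sensor readings. The code only generates the input stream; the point is that the stream contains a regularity, namely that returning to the same place yields the same reading, which a learner could exploit to build a spatial representation.

```python
import random

# A 10x10 world where each position has a fixed sensor value.
world = {(x, y): random.random() for x in range(10) for y in range(10)}

pos = (0, 0)
stream = []
for _ in range(50):
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    pos = ((pos[0] + dx) % 10, (pos[1] + dy) % 10)
    stream.append(((dx, dy), world[pos]))  # (action, observation) pair

# Regularity a learner could discover: whenever the movements bring the
# robot back to the same cell, the sensor reading repeats exactly.
```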

In humans and many animals, some form of spatial representation is probably pre-formed in the brain: learning it from scratch would take too much time and would be dangerous. However, such a learning process would be possible. If such a structure is already there by means of evolution, i.e. if it could be discovered by a process of evolution, then it should be possible to discover it in a learning process instead.

I would, therefore, state the following postulates:

  1. Cognitive structures that can emerge through evolution can also emerge through learning.
  2. In artificial systems, cognitive structures that can be programmed into the system can also emerge through learning.

A remark about 1: in the course of evolution, learning and evolution can cooperate, since learnt structures that are passed on culturally become part of the environment in which evolution takes place. Learnt structures might therefore be underpinned by genetically formed structures afterwards.

A second remark about 1: What is needed for the emergence of humans from non-human animals is then not the development of a set of special-purpose cognitive structures for higher thinking processes, but only the development of a larger processing capacity of unspecialized, plastic neural networks. Everything else may happen by means of cultural development (including the development of language). Roughly speaking, the brains just had to become larger.

A remark about 2: If any programmable structure can be learnt, then there is no necessary, indispensable basis for cognition, no necessary core structure, and no necessary special knowledge representation language other than a programming language (or, in the case of neuronal systems, networks of neurons, which seem to have an expressive power comparable to that of programming languages). Kant would have been absolutely wrong in stating that certain structures (“Anschauungsformen” and “Verstandesbegriffe”) have to be there a priori. A system might start its development with a given pre-formed or pre-programmed core, but this core could be modified and fall out of use in later developments of the system (so it is not stable), and it would be possible to start with an even simpler (though less efficient) core that can learn whatever was programmed into the more elaborate core. In this sense, cognition has an empty core and no general form; every formal theory of it is incomplete. The forms of intuition and the categories are learnable and are not necessary preconditions of learning processes.
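Postulate 2 can be illustrated with the classic XOR example (a toy case, not meant as evidence): the structure can be entered by a programmer in one line, and the very same input-output behaviour can also emerge by learning in a small, unspecialised network.

```python
import numpy as np

def xor_programmed(a, b):
    return a ^ b  # the structure entered directly by a programmer

# Learned version: a minimal 2-8-1 sigmoid network trained by
# full-batch gradient descent on the four XOR examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)            # hidden layer
    out = sigmoid(h @ W2 + b2)          # network output
    d_out = (out - y) * out * (1 - out)  # output-layer gradient
    d_h = (d_out @ W2.T) * h * (1 - h)   # backpropagated gradient
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel()))  # typically [0. 1. 1. 0.]: learned, not programmed
```

The network's architecture is entirely generic; nothing XOR-specific was built in, which is the sense in which the same structure was once programmed and once learned.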

What is required is a sufficiently rich environment with which the learning/evolving system interacts. Information from that environment is integrated into the knowledge of the system.
