The essays of Montaigne show, in their structure, an echo of the scholastic treatise. The author of a scholastic treatise first compiled the opinions and teachings of earlier authors before explaining his own position on the topic. Montaigne likewise often starts with citations of several classical authors before developing his own ideas. Perhaps the mottos or epigraphs at the start of some modern essays are a reflection of this tradition.
I call a system (natural or artificial, organic or not) that can store information about itself or about another system an observer. Such a system might have a layered structure with several layers of implementation, and hence several layers of description. The observing system defines an ontology or phenomenology of the observed system. The entities defined by it are real from its point of view. If such a system can observe itself in terms of a higher layer of implementation, then the system on that level of description exists from its own point of view. My hypothesis (if I may call it that) is that consciousness is the reality thus established. There is no higher or bird’s eye view from which one level of description or of reality has precedence over another.
Viewed from the outside, such an observer might not exist; only a lower-level description of the system (e.g. an electronic or neuronal one) might be real from that outside view. From its own point of view, however, such an observer would be real. I call this an internal observer.
See also https://creativisticphilosophy.wordpress.com/2015/05/03/homo-in-machina-ethics-of-the-ai-soul-a-response/ and the discussions following that article, https://creativisticphilosophy.wordpress.com/2014/07/02/generating-objects-towards-a-procedural-ontology/, and https://creativisticphilosophy.wordpress.com/2013/04/08/an-open-letter-to-the-human-brain-project/.
(I don’t expect anybody to understand this rather cryptic and half-baked stuff; more work is needed to make it clearer. This is just a note for myself.)
In https://creativisticphilosophy.wordpress.com/2015/05/16/how-intelligent-can-artificial-intelligence-systems-become/ I have argued that there is a limit to intelligence, since a creative system can only process a very limited amount of information at any time without running into a combinatorial explosion. Large amounts of data require pre-existing knowledge of how to process them, i.e. algorithms; but if such knowledge does not yet exist and the system is trying to find out how to process some novel data, only small amounts of information can be processed.
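The combinatorial explosion mentioned here can be illustrated with a toy calculation (the numbers and the framing as sequences of primitive operations are purely illustrative assumptions, not part of the original argument): if a system must discover a procedure by blindly combining primitives, the candidate space grows exponentially with the length of the procedure.

```python
# Toy sketch of a combinatorial explosion (hypothetical numbers):
# a system searching for a way to process novel data by trying
# sequences of primitive operations faces n ** k candidates for
# sequences of length k drawn from n primitives.

def search_space_size(n_primitives: int, length: int) -> int:
    """Number of candidate operation sequences of the given length."""
    return n_primitives ** length

if __name__ == "__main__":
    for k in (2, 4, 8, 16):
        # With only 10 assumed primitives, depth 16 already yields
        # ten quadrillion candidates -- far beyond exhaustive search.
        print(f"length {k:2d}: {search_space_size(10, k):,} candidates")
```

This is why, on this view, unguided (creative) processing is confined to very small amounts of information, while large data volumes are tractable only where a ready-made algorithm already prunes the search.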
Since our senses produce large amounts of data that tend to contain some novel information, we can only analyze small parts of that information. Automatic processing of the primary sense data using known algorithms (or neuronal networks that can be modelled as algorithms) can only provide a limited depth of analysis. As a result, only a small fraction of the information can be analyzed deeply at any given time, so the phenomenal overflow discussed, for example, in https://scientiasalon.wordpress.com/2015/05/18/ned-block-on-phenomenal-consciousness-part-i/ and https://scientiasalon.wordpress.com/2015/05/20/ned-block-on-phenomenal-consciousness-part-ii/ might be inevitable for any such system, natural or artificial. A mechanism of attention might therefore be necessary.