Let A be any information processing system (an animal, a human being, a computer, a group of people, an institution…). If A takes up a bit of information, call it b, and then processes any bit of information, call it c, and b influences the way A processes c, then A (or the part of it involved in applying b to c) can be said to be an embodiment or implementation of a semantics of b. This semantics of b describes the effect of b on c (or on other information), as processed by A.
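The core relation above, a taken-up bit b changing how A later processes a bit c, can be sketched in a few lines of Python (a minimal sketch; all names are illustrative assumptions, not part of the text's claim):

```python
# Illustrative sketch: a system A stores taken-up bits of information b;
# each stored b influences how A processes a later bit c. The observable
# effect of b on c is a fragment of the "semantics" of b embodied by A.

class System:
    def __init__(self):
        self.taken_up = []            # bits of information b taken up so far

    def take_up(self, b):
        """Take up b; it becomes part of A and shapes later processing."""
        self.taken_up.append(b)

    def process(self, c):
        """Process c; every previously taken-up b influences the result."""
        result = c
        for b in self.taken_up:       # here each b acts on c as a function
            result = b(result)
        return result

A = System()
b = lambda c: c.upper()               # one possible bit b
A.take_up(b)
print(A.process("hello"))             # prints "HELLO": the effect of b on c
```

Here the “semantics of b” is nothing over and above what A, having taken up b, does to c.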
b might, for example, be an utterance or text heard or read by A, a piece of art, or the view of a passing car or a tree. c might be any information taken up by A later on or already stored inside A; b might also influence subsequent actions of A.
A can be viewed as an interpreter (like an interpreter of a programming language in computer science). The semantics of a programming language is the specification of its interpreter. Generalizing this, one may view any system in which one piece of information (b) influences the processing of another (c) as an “interpreter” which embodies a semantics for a “language” to which b belongs. However, the semantics does not need to be pre-defined (in the sense of a formal specification to which the interpreter is built). Rather, it might be a description of what the system is doing that can only be given afterwards, and not necessarily completely for all possible b (i.e. the “language” to which b belongs does not need to be fixed or well defined).
b might be viewed as a modification of A, so that A together with b forms a new, extended system, B. Likewise, the processing of c might result in a further modification, and so on. In this sense, A is programmed by b. It is a programmable or extensible system, and the “language” it represents is not fixed but extensible.
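Assuming the programming-language analogy, this extensibility can be sketched as a tiny interpreter whose “language” grows as it takes up new bits of information (the names and commands below are invented for illustration only):

```python
# Illustrative sketch: taking up a bit b = (name, effect) "programs" the
# system, extending the "language" it can interpret; A together with b is
# the new, extended system B of the text.

class ExtensibleInterpreter:
    def __init__(self):
        self.commands = {}            # the currently understood "language"

    def take_up(self, name, effect):
        """b modifies the system: afterwards it understands `name`."""
        self.commands[name] = effect

    def process(self, c):
        """Interpret c according to the semantics embodied so far."""
        word, *args = c.split()
        if word in self.commands:
            return self.commands[word](*args)
        return None                   # c lies outside the current "language"

B = ExtensibleInterpreter()
B.take_up("double", lambda x: 2 * int(x))   # the system is now extended
print(B.process("double 21"))               # prints 42
print(B.process("triple 7"))                # prints None: not (yet) defined
```

Note that the “language” accepted by this system has no fixed boundary: each further take_up shifts what counts as interpretable input.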
The “meaning” of b (with respect to A) is the totality of its potential effects on other bits of information c. It might be unstable or vague, since it can be modified at any time by other bits of information. However, under such an approach it is possible to describe semantics without using a term like “meaning” at all.
(There are also simple examples without any c, where b just causes a (re-)action, e.g. clicking the shut-down button on a computer.)
Propositions or other forms of statements are only special cases. b and c can be any kind of data.
If there are several agents A1, A2, A3, and so on who are exchanging information, they could develop something like a language, i.e. a system of conventions (more or less vague and potentially shifting) about the effects of certain pieces of data b on their actions, utterances, their processes of information processing, etc. “Natural” languages are instances of such systems. The development of such a shared system of communication would require a shared world of some kind.
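As a toy illustration of such convention-forming (a crude variant of a “naming game”; every detail here is an assumption made for the sketch, not a model of real languages), agents that copy one another's reaction to a signal b eventually end up sharing a single convention:

```python
import random

# Toy sketch: each agent assigns the signal b some effect (an action).
# Repeated pairwise interactions, in which the hearer adopts the speaker's
# assignment, drive the population toward one shared convention for b.

random.seed(0)

ACTIONS = ["go", "stop", "wait"]
agents = [{"effect_of_b": random.choice(ACTIONS)} for _ in range(10)]

def conventions():
    """The set of distinct effects currently assigned to b."""
    return {agent["effect_of_b"] for agent in agents}

while len(conventions()) > 1:
    speaker, hearer = random.sample(agents, 2)
    hearer["effect_of_b"] = speaker["effect_of_b"]   # align with the speaker

print(conventions())   # a single shared effect of b remains
```

The “shared world” requirement shows up even in this sketch: the agents can only converge because they interact over the same signal and the same repertoire of actions.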