
(by Misha Naimark)

Comments and examples written in italics can be skipped when reading.

 

Model and definitions


 

Let us consider a model of the brain (denoted BN below) as a set of N elementary agencies {A1 , A2 , A3 , ... , AN }, numbered by integers from 1 to N. By "elementary" we mean that the agencies are binary units: each of them can be found in one of two states, either excited (St(A) = 1) or depressed (St(A) = 0). An agency Ak can have a certain number of in-going links, called "detectors" {D1k , D2k , D3k , ... , Dmk }; each detector Dmk is either connected to another elementary agency Ap (in which case Dmk transmits to its host agency Ak the information about the current state of Ap), or it detects the presence or absence of a certain factor F in the external world (EW) (in which case we call Dmk a "receptor" for the factor F).

One can visualize EW factors by some practical analogies: the presence or absence of a certain chemical in the air, as detected by a single receptor ending in the nose; the presence or absence of a certain frequency in the sound spectrum; or a certain color in a certain spot of the visual field. We now assume that a factor carries only binary information (presence/absence); all amplitudes and quantitative differences are represented either by the number of receptors excited simultaneously or by the frequency with which the same receptor is excited over time.

The state of Ak (0 or 1) is always uniquely determined by the current information supplied by all its detectors and by its own history. The state of the whole brain BN at a given moment can then be described by a binary word WN of length N, having 0 or 1 at the n-th position according to the current state of each An. So WN is also uniquely determined by the current set of EW factors and their history.
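As a minimal sketch of this definition (the function names are my own, and I assume a feed-forward evaluation order, whereas the model itself allows arbitrary connections), each elementary agency can be represented as a boolean function of the units its detectors are connected to, and the whole brain state read off as the word WN:

```python
# Sketch: each elementary agency is a 0/1-valued function of the already
# computed agency states and of the EW factors its detectors see; the
# state of the whole brain is the binary word WN.

def brain_state(agencies, factors):
    """Evaluate agencies in index order (a feed-forward simplification:
    each detector may only look at earlier agencies or at factors)."""
    states = []
    for f in agencies:
        states.append(f(states, factors))
    return "".join(str(s) for s in states)   # the word WN

# Two factors, three agencies: A1 is a receptor for F1, A2 for F2,
# A3 is excited only when both A1 and A2 are excited (an "AND" unit).
agencies = [
    lambda s, f: f[0],
    lambda s, f: f[1],
    lambda s, f: 1 if (s[0] and s[1]) else 0,
]
print(brain_state(agencies, [1, 1]))   # -> "111"
print(brain_state(agencies, [1, 0]))   # -> "100"
```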

This model is, of course, very common in science; such "elementary agencies" are often called "ideal neurons" and the like, their "detectors" representing dendrites with the receiving parts of synapses. The body of an "ideal neuron" is thought to collect all the information from its in-going synapses and process it, reducing it to a one-bit output. But here I prefer to call them "elementary agencies" to emphasize that their nature is just the same as the nature of the bigger, non-elementary agencies they are capable of organizing themselves into, and of still bigger and more complex agencies. (The whole brain BN is, under normal circumstances, a big agency too. But in some mental disorders, such as personality splitting, a brain can contain two or more separate top-level agencies.)

The whole picture we are going to study now is totally still and frozen: we are not taking the flow of real time into consideration, and the states of agencies are not changing. This allows us to suppose that the state of an agency An does not depend on its history, but only on the current signals coming from its detectors. This is certainly not the case in a real brain, and the only thing that can justify such an incredible simplification is that we are going to study, in fact, just one very peculiar agency called "mathematics" (MATH), responsible for mathematical thinking. The rest of the brain outside this agency we are going to consider an external world EWMATH with respect to MATH; all the An's to which MATH is connected by its detectors are now considered factors in EWMATH. In a way, we are now thinking of MATH as a smaller brain inside the bigger one, BN (in this sense we can call any agency inside a brain a "subbrain").

This technique of marking out an agency of interest and assigning the rest of the brain to the EW is a common and convenient device in the present approach. For example, when we watch two brains communicating by means of oral language, we might be interested in "meanings", "senses", etc., so we mark out the agencies responsible for these things in both brains; but, say, the concrete way a brain controls the pharynx or deciphers the sound frequency spectrum does not interest us now. We do not even care whether they speak English or Russian; we only know that they speak the same language and understand each other. In this case we can conveniently assign all language agencies in both brains to the EW. It will be quite an amazing EW: "meaning" and "sense" agencies in different brains will find themselves connected to each other by a kind of virtual link, just as if they had suddenly developed real synapses connecting some of their neurons to the partner's neurons. Excitement of a certain meaning agency in one brain will automatically lead to the excitement of a certain other agency in the other brain, just as through a synapse connection. The physical nature of this link can vary: it could be diffusion of acetylcholine, or sound waves plus the work of pharynx and ears and some brain parts, or, if the people are talking by mobile phones via satellite, it can involve many more physical phenomena as well as the work of other people's brains. The result is just the same; we can represent it on a graph by directed links between the agencies. Borders between different brains become quite formal things, and we can now depict all agencies in all brains of a society of people on a single graph, as one continuous neuron network.
We can expect some agency-like structures in this immense network to go far beyond a single brain and to involve many agencies from different brains: these are nothing else but social organizations and foundations, or AGENCIES in the social sense of the word.

There are two considerations justifying our taking up the still-frozen model of the MATH agency:

First: As opposed to concrete concepts like "apple" (which usually have some properties connected to the concept of "time": lifetime, the time when it grew or was eaten, etc.), all abstract mathematical concepts and images (i.e. agencies belonging to MATH) have absolutely no time properties; they exist outside time. Concepts like "two", or "three", or a "number", or a "triangle" can have nothing like a "lifetime"; they never change and have no history. One cannot even say when an absolutely concrete triangle ABC, appearing in an absolutely concrete problem No.123 in a textbook, came into existence and when it is going to disappear again. This is one of the greatest abstractions forming the foundation of mathematics. The agencies belonging to MATH seem to have no direct links to the agency of TIME. (Of course, some of them have links to the mathematical model of time, which is an agency organized in a strikingly similar way to the agencies "real number" or "straight line".)

Second: When we separate the rest of the brain, together with the real EW, out into the new EWMATH with respect to MATH, we get quite a remarkable environment for MATH to work in. Among other things, EWMATH is capable of artificially keeping all factors influencing MATH (i.e. all the EWMATH agencies to which MATH is connected by its detectors; these detectors can now be called "receptors" of MATH in this new external world EWMATH) stable and frozen for as long as necessary for solving the current problem. For example, when we try to solve problem No.123, we artificially keep imagining the triangle ABC for as long as it takes our "mathematical thinker" to come to the necessary conclusion. In terms of our model, this means that EWMATH keeps the states of all MATH's receptors unchanged for long enough for all internal links in MATH to work off and for all MATH's agencies to reach stable states.

So: since the states of all internal agencies in MATH are not liable to change with the flow of time on their own, have no history, and are uniquely determined by the states of MATH's receptors; and since, on the other hand, the states of these receptors are artificially kept stable by EWMATH during the whole process of solving a problem, we can try this frozen model of MATH.

The above speculations are extremely important for all the following theories, but so far they lack mathematical rigor. The problem is very delicate and needs further development. For now we are going to leave this question open and try to check whether this approach can bring any practical results.

Thus we take up the following model of MATH to study some questions of the theory of probabilities:

It is a set MATHK of K numbered elementary agencies { MATH1 , MATH2 , MATH3 , ... , MATHK }, whose states do not depend on their history and do not change with time. Their states are uniquely determined by the signals coming from their detectors; the state of MATHK as a whole can be described by a binary word WMATH of length K. Outside this set (i.e. in EWMATH) there exists another set of L numbered elementary agencies FL, called "factor agencies"; they play the role of model factors, and their states are kept constant during any of our mental experiments. The state of this whole set is described by the binary word WF. Any MATHi can only have its detectors connected to some of these factor agencies (such detectors we call MATHK's receptors) and to some other members of MATHK itself. So the state of MATHK is uniquely determined by the state of the factor agencies: WMATH is uniquely determined by WF.

Each of the model factors Fi appears (i.e. gets excited) with a certain probability P(Fi). Let us require the factors to be independent of each other according to the classic definition of independent events: the conditional probability of a factor, given the presence of any other factor, is equal to its unconditional probability:

P( Fi / Fj ) = P(Fi) for all permissible i ≠ j ;

or by the equivalent formula:

P( Fi & Fj ) = P(Fi) * P(Fj ) ;

For instance, we can take for these L factors the results of L coin-tossing experiments: heads in the i-th experiment means the presence of the i-th factor, tails means its absence (the coin here is not necessarily symmetrical, and the probability can differ from 1/2).
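The product rule for such coin factors can be verified by exact enumeration of all 2^L simple events; a small sketch with hypothetical coin probabilities of my own choosing:

```python
from itertools import product

# Sketch: L biased coins with assumed probabilities p[k] of "heads"
# (presence of factor Fk). Enumerate all 2^L simple events exactly and
# check the product rule P(Fi & Fj) = P(Fi) * P(Fj).

p = [0.3, 0.5, 0.8]          # hypothetical asymmetrical coins
L = len(p)

def prob(word):
    """Probability of the simple event described by binary word `word`."""
    r = 1.0
    for k, bit in enumerate(word):
        r *= p[k] if bit else 1 - p[k]
    return r

def P(event):
    """P(E): sum of the probabilities of the simple events favorable for E."""
    return sum(prob(w) for w in product([0, 1], repeat=L) if event(w))

for i in range(L):
    for j in range(L):
        if i != j:
            joint = P(lambda w: w[i] and w[j])
            assert abs(joint - p[i] * p[j]) < 1e-12
```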

But in the above case the factors are not physically independent: we were using one and the same coin in all the experiments, so they are in a way physically linked through the coin itself. This link results in the equality of the probabilities of all our factors: P(Fi) = P(Fj) for all permissible i, j. Now if we take L different asymmetrical coins for our tossing experiments, the probabilities of the factors will, in general, differ from each other. For our convenience, let us write down the

definition of physically-independent events: two events A, B are physically-independent, if

  1. they are independent in the classic sense: P(A/B) = P(A);
  2. they have no material links -- no change in the conditions of event A can cause the probability of B to change, and vice versa.

This definition lacks rigor, too. Moreover, the first condition, about classic independence, is most probably excessive: two events having no physical links must always be independent in the classic sense.

This looks evident, but it is not so easy to state rigorously and prove. There is no use considering physical links in the real world: they are too many and too intricate; it helps more to study the links between model factors inside the brain. The following text contains some attempts to clear this up, but it cannot avoid relying on a set of physically-independent basic factors, taken without a rigorous definition.

Below, we always assume our factors to be physically-independent. Also, we need a

definition of what is an event: an event Ei is the situation when agency MATHi is excited. In other words, our agencies { MATH1 , MATH2 , MATH3 , ... , MATHK } are responsible for the concepts of events {E1 , E2 , E3 , ... , EK } correspondingly.

Some of these agencies can have their detectors imposing contradictory conditions upon the states of the factors; such agencies can never be excited, and we call them impossible events.

Two agencies can impose identical conditions on the factors' states (even though their links look different on the graph); they are then always excited or depressed simultaneously, and we call them equal events.

The set of factors FL can be found in one of 2^L different states, described by different words WF. We can easily build an agency MATHi that will be responsible for one and only one of the states WF: just connect MATHi's detectors to all the factor agencies and only to them, and let the body of MATHi perform the logic operation "AND" or "AND NOT" on each detector signal, according to the state (1 or 0) of the corresponding factor in WF. Such an agency we name a simple event agency; there can exist 2^L unequal simple events, and we denote them { WF1 , WF2 , WF3 , ... , WF2^L }, since they unambiguously correspond to the words WF.
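The AND/AND-NOT construction of a simple event agency can be sketched directly (the function names here are illustrative, not from the text):

```python
# Sketch of a simple event agency: its detectors are connected to all L
# factor agencies; the body "AND"s each detector signal where the target
# word WF has 1 and "AND NOT"s it where WF has 0.

def simple_event_agency(target_word):
    """Return an agency excited exactly in the one factor state
    described by target_word (a tuple of 0/1)."""
    def agency(factor_states):
        return 1 if all(bool(s) == bool(t)
                        for s, t in zip(factor_states, target_word)) else 0
    return agency

w = simple_event_agency((1, 0, 1))
print(w([1, 0, 1]))   # -> 1: the single state this agency answers for
print(w([1, 1, 1]))   # -> 0
```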

Any thinkable event E either happens or does not happen when a certain simple event WFi takes place. So we can also build an agency MATHE responsible for this event just by connecting its detectors to those simple event agencies among { WF1 , WF2 , WF3 , ... , WF2^L } together with which E happens (and only to them), and letting MATHE's body perform the logic operation "OR": since exactly one simple event agency is excited at any moment, MATHE is excited exactly when E happens. So any thinkable event can be represented by an agency; the total number of unequal events cannot exceed 2^(2^L).

The above can also be visualized as a "truth table" for each event E, having 2^L columns for all simple events and a row of zeros and ones indicating the corresponding states of E.

One more definition: a factor Fi is called an influencing factor for an event E if there exists at least one pair (WFm and WFn) of simple events, coinciding in all factors but Fi, such that St(E) in the case of the simple event WFm is not equal to St(E) in the case of WFn. In other words, Fi is an influencing factor for E if there exists at least one situation when a change in St(Fi), and only in it, necessarily causes a change in St(E). In terms of graphs this means that, in order to build a graph of the agency E, one has to connect at least one detector to the factor agency Fi.
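This definition can be tested mechanically from an event's truth table; a sketch (the example event is a hypothetical one of my own):

```python
from itertools import product

# Sketch: decide from an event's truth table whether factor Fi is an
# influencing factor for event E.  `E` maps a factor word (tuple of 0/1)
# to the state of E.

def influences(E, i, L):
    """True iff flipping factor i alone can change St(E)."""
    for w in product([0, 1], repeat=L):
        flipped = list(w)
        flipped[i] ^= 1
        if E(w) != E(tuple(flipped)):
            return True
    return False

E = lambda w: w[0] and not w[2]    # hypothetical event on 3 factors
print([influences(E, i, 3) for i in range(3)])   # -> [True, False, True]
```

As expected, F2 is not influencing: no detector of this E needs to be connected to the factor agency F2.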

Last definition: Two events A and B are called truly-independent of each other if P(A/B) ≡ P(A) identically, while the probabilities of the factors P(Fi) assume all possible values between 0 and 1 (here we always suppose the factors to be physically-independent).

This is a kind of variation of the definition of physically-independent events. It is less general, but perfectly rigorous this time.

Perhaps the only practical outcome of this approach so far is the realization that the formula P(A/B) = P(A) can mean two entirely different things. When it is understood as an identity of functions, it means that the graph depicting the two events can be split into two separate graphs having no links between each other. Since we assumed the factors to have no physical links between each other, this means a total absence of links between the events, both in the brain and in the real physical world. Here is an example of two such events in a truth table:

Number of simple event | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
-----------------------+---+---+---+---+---+---+---+---
F1                     | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1
F2                     | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0
F3                     | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 1
A                      | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1
B                      | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 1

For the three factors one can take three coin-tossing experiments with three different asymmetrical coins. Then A represents heads in the first experiment, and B heads in the third experiment. Direct calculation will show their independence whatever the asymmetry of the coins.
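The direct calculation can be sketched as follows (here A coincides with the row of F1 and B with the row of F3, as in the table; the probability values tried are arbitrary):

```python
from itertools import product

# Direct calculation for the first truth table, with arbitrary assumed
# coin asymmetries: A is excited exactly when F1 is, B exactly when F3 is.

def check(p1, p2, p3):
    p = (p1, p2, p3)
    def prob(w):
        r = 1.0
        for k, bit in enumerate(w):
            r *= p[k] if bit else 1 - p[k]
        return r
    words = list(product([0, 1], repeat=3))
    A = lambda w: w[0]            # row A of the table equals row F1
    B = lambda w: w[2]            # row B of the table equals row F3
    PA  = sum(prob(w) for w in words if A(w))
    PB  = sum(prob(w) for w in words if B(w))
    PAB = sum(prob(w) for w in words if A(w) and B(w))
    return abs(PAB - PA * PB) < 1e-12

# independence holds whatever the asymmetry of the coins:
print(all(check(a, b, c) for a in (0.1, 0.5, 0.9)
                         for b in (0.2, 0.7)
                         for c in (0.3, 0.6)))   # -> True
```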

When the formula P(A/B) = P(A) is understood as a mere numerical equality, the events can be linked together on the graph, their independence being just an accidental coincidence of numbers. If we take one and the same symmetrical coin for our factor experiments, we can build an example of two such events with P(A/B) = P(A) = P(B) = P(B/A) = 1/2. It is shown in the truth table:

Number of simple event | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
-----------------------+---+---+---+---+---+---+---+---
F1                     | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1
F2                     | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 0
F3                     | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 1
A                      | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0
B                      | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0
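A direct calculation for this second table (a sketch; the favorable simple events are read off from the rows A and B above) confirms that the independence is only a numerical coincidence: it holds at p = 1/2 and breaks for any other probability.

```python
# Direct calculation for the second truth table: with one symmetrical coin
# (all p = 1/2) the equality P(A & B) = P(A) * P(B) holds, but it breaks
# as soon as p moves away from 1/2.

A_words = {(1,1,1), (1,1,0), (1,0,0), (0,0,0)}   # columns where row A is 1
B_words = {(1,0,0), (0,0,0), (0,0,1), (0,1,1)}   # columns where row B is 1

def gap(p):
    """P(A & B) - P(A) * P(B) when every factor has probability p."""
    def prob(w):
        r = 1.0
        for bit in w:
            r *= p if bit else 1 - p
        return r
    PA  = sum(prob(w) for w in A_words)
    PB  = sum(prob(w) for w in B_words)
    PAB = sum(prob(w) for w in A_words & B_words)
    return PAB - PA * PB

print(abs(gap(0.5)) < 1e-12)   # -> True: coincidental independence
print(abs(gap(0.3)) < 1e-12)   # -> False: the dependence reappears
```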

First statement to be proved: If an event A and a factor event EFi (whose agency we denote MFi; it has only one "AND" detector, connected to the factor agency Fi, so that the events EFi and MFi are equal: EFi = MFi) are truly-independent, then the factor Fi does not influence the event A.

Proof: Let us suppose the contrary: Fi influences A. Then let us consider a pair of simple events WFm and WFn differing only at Fi; for definiteness, we assume WFm(i) = 0 and WFn(i) = 1. We are allowed to choose any values for the probabilities of the factors, so let

P(Fk) = 1 for every k ≠ i when WFm(k) = WFn(k) = 1 ;

and

P(Fk) = 0 for every k ≠ i when WFm(k) = WFn(k) = 0 ;

and

P(Fi) ≠ 0 ; P(Fi) ≠ 1 ;

Then the probability of all simple events except for WFm and WFn will be zero:

P(WFk) = 0 for every k ≠ m, k ≠ n ;

and

P(WFn) = P(Fi) ; P(WFm) = 1 - P(Fi) ;

First, we may study the case when A(WFn) = 1 and A(WFm) = 0 ; then,

P( A / EFi ) = 1 ;

P(A) = P(WFn) = P(Fi) ;

so, P( A / EFi ) ≠ P(A) ; {Q.E.D.}

(The case when A(WFn) = 0 and A(WFm) = 1 can be studied in exactly the same way.)
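The construction in this proof can be illustrated numerically. In the sketch below the event A = F1 AND F2 is a hypothetical example of my own (F2 is then an influencing factor for A); the other factor probabilities are frozen at the extreme values chosen in the proof:

```python
from itertools import product

# Numeric illustration of the proof: A = F1 AND F2, so F2 influences A.
# Freeze P(F1) = P(F3) = 1 (the extreme values from the proof) and leave
# P(F2) = p strictly between 0 and 1; then P(A | EF2) != P(A).

p = 0.4
P = (1.0, p, 1.0)                      # (P(F1), P(F2), P(F3))

def prob(w):
    r = 1.0
    for k, bit in enumerate(w):
        r *= P[k] if bit else 1 - P[k]
    return r

words = list(product([0, 1], repeat=3))
A   = lambda w: w[0] and w[1]
EF2 = lambda w: w[1]
PA     = sum(prob(w) for w in words if A(w))
PEF2   = sum(prob(w) for w in words if EF2(w))
PA_and = sum(prob(w) for w in words if A(w) and EF2(w))
print(PA)                  # -> 0.4   ( = P(F2) )
print(PA_and / PEF2)       # -> 1.0   ( = P(A / EF2), not equal to P(A) )
```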

Second statement to be proved: If two events A and B are truly-independent, then two non-intersecting subsets FA and FB can be separated out of the set of factors FL such that A is influenced only by factors belonging to FA, and B only by factors belonging to FB.

 

 

Proof: The probability P(E) of any event E can be calculated as the sum of the probabilities of all simple events favorable for E. The probability P(WFi) of a simple event, in its turn, can be found as the following product:

P(WFi) = Π(k=1..L) P(Fk)^WFi(k) * (1 - P(Fk))^(1 - WFi(k)) ;

so P(E) is a polynomial of degree at most L in the L variables P(Fk), where each of these variables appears in a degree not higher than one.
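This multilinearity can be checked numerically: a polynomial of degree at most one in P(Fi) is an affine function of it, and an affine function f satisfies f(1/2) = (f(0) + f(1)) / 2. A sketch with a hypothetical example event:

```python
from itertools import product

# Sketch: P(E), as a function of one factor's probability with the others
# fixed, has degree at most one, i.e. it is affine in that variable.

E = lambda w: (w[0] and w[1]) or w[2]    # an arbitrary example event

def P_of_E(p):                           # p = (P(F1), P(F2), P(F3))
    total = 0.0
    for w in product([0, 1], repeat=3):
        r = 1.0
        for k, bit in enumerate(w):
            r *= p[k] if bit else 1 - p[k]
        if E(w):
            total += r
    return total

# vary P(F1) only, holding P(F2) and P(F3) fixed:
f = lambda x: P_of_E((x, 0.3, 0.8))
print(abs(f(0.5) - (f(0.0) + f(1.0)) / 2) < 1e-12)   # -> True: affine
```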

Suppose the contrary: there exists a factor Fi influencing both events. Then P(Fi) must appear in both polynomials P(A) and P(B) in degree one. Let us calculate the probability of the event A&B. Since A and B are independent,

P(A&B) = P(A) * P(B) ;

The degree of this product P(A) * P(B) in the variable P(Fi) is two, while P(A&B), being the probability of an event, cannot contain P(Fi) in a degree higher than one. !!Contradiction!!

This proof is due to Vladimir Hinich, Katherine Naimark, and Maria Gorelik, at the Weizmann Institute of Science. So far this Society of Mind approach has only been able to supply us with guiding notions; all the proofs had to be done in the classical way. Some attempts to do them by graphs and the like were too bulky and impractical. They would at least require a complete theory of what numbers, sums, and products are in terms of agencies, and of what kind of agencies are responsible for "probabilities" and why a probability can be represented by a number. Some ready parts of such a theory look good enough, but we cannot claim to have it complete.



