Multiple Dynamics in Causal Systems and Their Relevance
for Cognition
George Kampis
Fujitsu Visiting Professor of Complex Systems
Japan Advanced Institute of Science and Technology
and
Chairman, Department of History and Philosophy of Science
Eotvos University, Budapest, Hungary
Abstract
Dynamic models of the mind have enjoyed recent popularity, and they promise
to converge with brain models. Yet the concept of dynamics, if understood
in terms of a single set of equations, or as a behavior generated by some
single rule system, is too narrow for both. Causal systems are more complex:
they can support multiple dynamics that change over time, so that one system
is replaced by another. I discuss the relevance of this idea for cognition,
together with some of the evidence that supports the existence of
related systems in the mind and in the brain.
This is part of an ongoing project that will probably lead to a book.
Part of this material was presented at various other meetings recently.
'Brain for Mind' Thesis
By thesis I mean a set of statements which I do not want to prove
here, or even argue for. Rather, I take it for granted,
and I only want to motivate it.
We study the brain because of the mind.
This should be commonplace, but surprisingly few people
share this view.
Yet it cannot be otherwise, or there would be little
reason to study this organ so closely.
More specifically:
Brain science looks for the mind in the brain.
Brain theory is a version of 'mind theory'.
The mind is (introspectively, but largely cross this
out, and) functionally known.
The brain is, in the first place, structurally known;
the bridge is functional brain theory, which goes for functions we know.
And we know them from the psychological and philosophical
theories of the mind: i.e. memory, learning, perception, etc.
Theory of mind first; build brains for that mind.
Conceptions of learning, representation, perception,
or even consciousness determine (or condition) the style of brain research.
Active Representations Thesis
The mind is a device that uses active representations.
Representations in cogsci and AI are "passive".
Representation is a 1000-year-old topic, from Lullus to ... Andy
Clark and other proponents of neural nets for the mind.
(More precisely, it is 2500 years old, from Plato and the Old Testament.)
What is usually understood by a representation is a passive representation.
But what is usually meant by the properties of a representation are
properties of active representations.
Some typical examples of representations
(representation, def.: something that stands for something else):
- representations in classical AI/cogsci: inherently passive, like text
in the mind; they need a reader/processor
(e.g. explicit representations: Simon and Newell, Fodor, etc.)
- linguistics in mind/brain: Pinker
- representations in NN: they are not much livelier;
'stored' in the parameter space of a dynamical system (i.e. in slow variables)
A key feature of representations, which every passive representation
misses, is irreversibility:
a Dollo's Law of representations; they cannot be erased or undone by later
events.
Representations in minds give the psychological sense to paradigms.
In other words, representations in the mind 'work' in the sense that
they determine what future representations are possible
(cf. this is usually what we mean by 'knowledge').
[cf Chomsky against behaviorism, ie associationism]
Expressed in philosophical jargon:
new representations should be consequences of the semantic properties
of existing ones
[well known; cf. Fodor's inferential role semantics, etc.]
Are 'active' representations possible? For future use I take dynamical
systems as an example.
All dynamic models (i.e. NN, neurodynamics, etc.) have a deficit here:
anything is learnable in them ('associations');
there is no feedback from the learning process to the dynamics itself.
Active representations must exist.
Active representations are like the philosopher's stone. Nobody has ever
seen them, but they are intensively hunted.
The reason is that it is difficult to imagine how to do without them.
Mind as Matter: A Basic Model
A model of the mind which supports active representations, together with
a specific class of dynamical systems:
(1) Objects
(2) Mental models
(3) Causality
combined together.
Objects
A philosophical and AI concept that helps understand perception and
reasoning.
In the technical sense, an object (or ontology) is a re-usable
representation, e.g. a database of context-independent (cross-contextual)
concepts, entities, etc.
Machine translation is an example that requires them.
Machine translation is impossible beyond a certain level if based on
grammar and lexicon alone.
The famous and truly standard example is metaphor, or non-literal meaning.
Take the fact that recent theories (G. Lakoff, M. Johnson, D. Draaisma,
E. Thelen, etc.) assume that most (if not all) meanings are metaphoric.
In order to process metaphors and other complex linguistic forms,
the mind needs built-in knowledge of the real world,
in terms of its objects and properties, in a detailed, text-independent
form: an embedded ontology.
Mental ontologies have combinatorial properties,
as real-world objects do: two apples are a pair (as a specific set),
an apple and a pear are two fruits,
a set of stones arranged in a bow is a bridge, etc.
Combination is not just hierarchy but also context that needs to be
reproduced.
E.g. translating the expression 'on the top' involves the fact that a
bridge is self-supporting, etc.; people 'see' or 'imagine' this
(more about the mind's eye later).
Combinatorial properties lead to the explosion, or open-endedness,
of the system.
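The open-endedness admits a back-of-the-envelope count. The sketch below is purely illustrative (the object and relation lists are my own invention): even a handful of objects and relations yields a large number of combined concepts, and letting the combinations recombine makes the growth explosive.

```python
# Illustrative only: toy objects and relations, not a claim about
# the actual contents of any mental ontology.
from itertools import combinations

objects = ["apple", "pear", "stone", "bridge", "fox", "thorn"]
relations = ["pair-of", "on-top-of", "arranged-in-a-bow"]

# Pairs of objects under each relation: already 3 * C(6,2) = 45 combinations.
pairs = [(r, a, b) for r in relations for a, b in combinations(objects, 2)]
print(len(pairs))  # 45

# If combined concepts can themselves recombine, one further round
# already squares the count, and so on: open-endedness.
print(len(pairs) ** 2)  # 2025
```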
The usual solution is (generative) language;
it sounds like we are about to recapitulate one of Fodor's notorious
arguments.
But we are aiming at a different game here.
Can we build such ontologies in a dynamical system?
Mental Models
“Mental model” - origin of the notion: K. Craik 1943, N. Goodman
1978, P. N. Johnson-Laird 1983, etc.
A mental model is a token "small-scale universe" in the mind.
E.g. Johnson-Laird: his approach is to natural deduction and problem
solving based on representations of sets by their members (i.e. tokens);
a perhaps more didactic notion would be 'pebbles of the mind'.
Example: the sentence "All A are B" is represented in the mind by several
copies of A's and B's.
A mental model constitutes a mock-up world (an analogy is a toy train).
Mental processes such as reasoning involve using parts of mental
models in different arrangements.
The success of the J-L theory is the explanation of 'failed' deductions
by this feature:
you can arrange A's and B's in a number of ways; some support correct
readout, some don't.
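The point about arrangements can be made concrete with a minimal token model. The sketch is my own construction, not Johnson-Laird's actual procedure: a model is a list of tokens, each a set of properties, and "readout" simply inspects the tokens.

```python
# A minimal token-model sketch (my illustration of the idea, not
# Johnson-Laird's algorithm): tokens are sets of properties.

def holds(model, ant, cons):
    """Read off 'All ant are cons' from a token model."""
    return all(cons in tok for tok in model if ant in tok)

# One arrangement of 'All A are B': every A-token is also a B-token.
model1 = [{"A", "B"}, {"A", "B"}]
# A fleshed-out arrangement adds a B-token that is not an A.
model2 = [{"A", "B"}, {"A", "B"}, {"B"}]

print(holds(model1, "A", "B"))  # True: the premise reads off correctly
print(holds(model1, "B", "A"))  # True: the invalid converse also 'holds' here
print(holds(model2, "B", "A"))  # False: fleshing out the model corrects it
```

The 'failed deduction' is exactly the reader who stops at the first arrangement and never constructs the fleshed-out one.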
How do mental models work in the J-L theory?
- subconsciously
- despite vividness (such as the personification of
Bankers, etc.), taken as abstract entities
- subject to formal operations
- require separate processing
Causality
Causality is one of the most important concepts in science
(not equations etc., which are empty without a natural,
i.e. causal, background);
fundamental, yet notoriously difficult to characterize.
Some candidates:
- "function" (B. Russell): but it can boil down to mere
correlation (E. Sober, contra Reichenbach and others)
- counterfactual dependence (if not A then not B) to express
"A evokes B", but subject to the same problem: maybe correlation
- manipulability of B by A: a different idea; note that it
is no longer formal! An anthropocentric definition; what about nature?
- depth:
Causality is a relationship
between (the occurrence of) events which involves further events
for which the same (i.e. an equivalent) relation holds.
Of special interest are not causal chains (i.e. temporally
disjoint sets of consequences), but the existence of simultaneous causal
relations:

C1 R1 E1        (C = cause, R = relation, E = effect)

which involves:

C2 R2 E2
...
Cn Rn En

such that R1 holds iff R2, ..., Rn hold.
A simple example is causality at static supervenient levels.
Taking medicine to heal an illness "is the same" as
blocking receptors to fend off antigens.
These are different ways of approaching "the same phenomenon";
in fact it is NOT the same phenomenon, but the same, and deep,
causal relation.
Supervenience is but an example. It has asymmetric dependence between
the constituting relations, and fixed (typically 2) layers, such as
molecular/mental, etc.
Depth is a modal (indexical) property, which is a euphemism
for "implicit", "hidden", "tacit", "contextual", "non-formal".
But, then, causality has never been entirely formal.
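The depth schema (R1 holds iff R2, ..., Rn hold) can be rendered as a toy data structure. Everything below is my own illustration; the class names and the medicine example encoding are invented for the sketch.

```python
# A toy rendering (my construction) of 'depth': a relation is deep when
# it holds exactly when its constituent relations hold.
from dataclasses import dataclass, field

@dataclass
class CausalRelation:
    name: str
    holds: bool
    involves: list = field(default_factory=list)  # sub-relations R2..Rn

    def is_deep_consistent(self):
        """Check the iff condition: R1 holds iff all of R2..Rn hold."""
        if not self.involves:
            return True
        return self.holds == all(r.holds for r in self.involves)

# Medicine heals the illness 'is the same' as receptors blocking antigens:
blocking = CausalRelation("blocking receptors fends off antigens", True)
healing = CausalRelation("taking medicine heals the illness", True, [blocking])
print(healing.is_deep_consistent())  # True: both levels hold together

blocking.holds = False
print(healing.is_deep_consistent())  # False: R1 cannot hold without R2
```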
Judea Pearl (2000): Causality: Models, Reasoning, and Inference, Cambridge
University Press.
Ian Hacking (1983): Representing and Intervening, Cambridge University
Press.
Mind as Matter: A summary
The mind functions as a complex system, capable of supporting special
structures, which are: object-like, having combinatorial (i.e.
generative) properties; arranged in small-scale world models;
animated or self-executing, with irreversible consequences.
In other words, in every important respect, the mind behaves like a
material system.
Skeptical interlude: what else?
This is the wrong question. Virtually every theory
of the mind, including the most radically anti-Cartesian,
assumes that the mind is purely, or mostly, functional;
probably because the mind is functionally construed.
Chemistry of the mind: an analogy from molecules.
- a chemical system is a changing set of components
whose history is path-dependent, i.e. irreversible
- it is not one single dynamical system, but many
different ones (I will discuss this)
- molecules are "representational"
(especially within, or together with, the larger system that integrates
them), productive (i.e. active, in a strong, immediate sense), and causal
A Byway : The Torchlight Model of Consciousness
Consciousness is a sequential scanning and triggering activity supervening
on the massively parallel mental process.
What is the Consciousness that can support a causal mind?
No category mistake here. A theory is relevant if
it is both adequate and interpretable. Adequacy means, by and large,
replying to the right question; interpretability means fitting
into what is otherwise known.
A relevant, hence interpretable, theory of consciousness
must stay in agreement with the theory of mind, in particular the causal
mind.
Consciousness can have no specific causal effect (the mind needs and
allows no consciousness).
- the causal mind is causally complete (which makes overdetermination a
children's tale; but there are true tales, like Hänsel und Gretel)
- cf. qualia inversion; cf. pain ("it hurts but I don't mind"), etc.
- timing and other neurological data
- multiplicity (parallel causation and linear subjective
time imply that consciousness is selective over a massive number of
autonomous processes)
- problem solving: the mental process happens autonomously
and informs consciousness (e.g. visually), but it is not consciousness
that sets up the stage
- the importance of the unconscious/subconscious in
the light of later mental events
- etc.
Consciousness may increase or intensify mental activity, or certain
mental activities,
much as light can have an effect on chemical processes,
to complete the crude metaphor:
STM, STM/LTM transcription, and several other mental functions.
Dynamical Systems
From the philosophy of science point of view, dynamical systems need
justification.
Why consider dynamical systems at all?
- trendy, strongly developing;
there is the recent "dynamical hypothesis"
- support anti-representationist representations
(there are representations, but...)
- "emergent", i.e. inherently bound to the
idea of temporal unfolding, which is necessary for... this and that
- but above all, the closest candidate
for the mind in the style of the brain (the strongest alternative,
computers, are structurally dissimilar)
A (not so) side remark
Consider the "Brain for the Mind" thesis for a while. It implies that
brain science is conditioned by concepts about the mind.
Now here is a stronger version: science in general is strongly conditioned
by, and done in the style of, concepts about the mind,
which are traditionally believed to be symbolic, abstract,
"description-like".
Recall expressions like "complete description of
the world", or just that "science describes the world". Hence a temptation
to directly link temporal processes to dynamic descriptions,
and ultimately, dynamical brain models to dynamical models of the mind.
How well are dynamical systems suited for the task? The inadequacy
of dynamics.
Part of this is already behind us. In a dynamical system there is:
- no active representation
- no explicit representation structure (or it is difficult to achieve)
- no depth
- no irreversibility, in most cases (past attractors'
reachability is not constrained, or does not depend on representation
content)
Multiple Dynamics
What follows here is a variation on an old theme (Kampis, G. 1991:
Self-Modifying Systems, Pergamon, Oxford), to which I thought
I would not return from causality and other, more recent, interests.
Dynamical systems (as we know them) are special structures for describing
temporal behavior.
A dynamical system is a master description of a set of
homogeneous/homogenized events.
It is an example of a covering law in a deductive-nomological explanation
scheme.
A deductive-nomological explanation scheme is a scheme for complexity
reduction:
given a set of explananda, find a single rule that generates them.
There is some unsharpness here, because of the lacking definition of what
counts as a single rule.
The traditional view is the equivalence of singular and multiple
dynamics.
Multiple dynamics (segment dynamics): for n time segments, n
(or 1 < k < n) pieces of separate dynamics.
Singular dynamics (for lack of a better word; it has nothing to do with
singularities): an encompassing master system for every t.
Is there a difference? Epistemologically, yes; otherwise, no. It is
always possible to lump the segments together into
one larger dynamical system, with a higher-dimensional state space.
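The lumping claim can be checked numerically. In this minimal sketch (the equations, rates, and switching time are my own toy choices) two segment dynamics, integrated one after the other, reproduce the same trajectory as a single master system in which the switching is folded into the right-hand side.

```python
# Toy demonstration (my construction): segment dynamics vs. master dynamics.
def euler(f, x, t0, t1, n=1000):
    """Fixed-step Euler integration of dx/dt = f(x, t) on [t0, t1]."""
    dt = (t1 - t0) / n
    for i in range(n):
        x = x + dt * f(x, t0 + i * dt)
    return x

# Segment dynamics: growth on [0, 1), decay on [1, 2].
f1 = lambda x, t: 0.5 * x
f2 = lambda x, t: -0.3 * x
x_seg = euler(f2, euler(f1, 1.0, 0.0, 1.0), 1.0, 2.0)

# Master dynamics: one rule for every t; the switch is folded into
# the right-hand side (at the price of an extra dependence on t).
f_master = lambda x, t: (0.5 if t < 1.0 else -0.3) * x
x_master = euler(f_master, 1.0, 0.0, 2.0, n=2000)

print(abs(x_seg - x_master) < 1e-9)  # True: the trajectories coincide
```

The epistemic point survives the numerical identity: in the master version the segment boundary is no longer a manipulable object, only a constant buried in the equations.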
The manipulability of segments is different from that of master dynamics.
Example: start a chemical system with 6 (kinds of) molecules, and end up
with the biosphere.
Chemical systems are efficient ways of generating new dynamical systems,
i.e. new reactions.
No a priori master equation is possible, because of path-dependence
(i.e. the future depends on past combinations).
A phenomenon of interest is the path-dependent control of the production
of these new dynamical systems.
This phenomenon is lost from sight in an a posteriori master equation.
However, it is a real phenomenon which can be used in the operational
sense.
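A toy artificial chemistry can illustrate the point. Everything below is my own invention for the sketch: each combination event creates a new species and, with it, a new reaction, i.e. a new rate equation; which equations exist at the end depends on which combinations actually occurred.

```python
# Toy artificial chemistry (my illustration): the equation set grows
# path-dependently, so no a priori master equation covers the run.
import random

random.seed(1)  # fix the particular path for reproducibility
species = ["A", "B", "C"]   # the initial kinds of molecules
reactions = []              # (reactant1, reactant2, product) triples

for step in range(5):
    r1, r2 = random.sample(species, 2)
    product = r1 + r2       # a novel species formed by combination
    if product not in species:
        species.append(product)
        reactions.append((r1, r2, product))
        # a new rate equation d[product]/dt = k [r1][r2] would now have
        # to be added: the dynamical system itself has changed

print(len(species), len(reactions))
```

A different seed (a different history) yields a different final set of species and reactions: the path, not a master law, determines which dynamics exists.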
Comparison of the representational power of singular
and multiple dynamics: a second look.
Representational power is identical with respect to deductive explanation
and time prediction:
e.g. given a set of time instances t1, ..., tn,
and given a set of multiple dynamics, or a covering dynamical law,
the task is to describe/explain/predict system states at ti.
But it is not identical with respect to the kind of dependence exemplified
by controllability.
Conclusion: there are different kinds of dependency relations.
Dynamics (singular dynamics, or ordinary dynamics)
is universal with respect to one of these.
(Namely, temporal dependency within a mechanism =
a fully constrained system; but that is a different story.)
The epistemic difference is usually transformed out in science, e.g.
Leibnizian time (real numbers) in dynamical systems.
It is clear that Leibnizian time does not reflect
every property of natural time, but this causes no error.
Canonical interpretation theory ensures correctness
(e.g. time reversibility in the model vs. orbit reversal in the
experiment).
The case now proves to be different for segment versus master dynamics.
Causal systems support multiple dynamics
The previous section was told, non-accidentally, from the perspective
of causality.
Causality can easily be shown to support multiple dynamics. The epistemic
difference has scientific, not just philosophical, content.
Evolution is an example of a causal system with multiple dynamics
(the reason for the example is that I am no expert on the brain).
Consider a fox - hedgehog - parasitic insect ecosystem.
Foxes moved into a territory and exerted pressure on
small hedgehogs, which developed thorns, providing shelter and leading to
an increase in mites and fleas.
(There are several crude simplifications here, as
in all popular bedtime tales of evolution. Let us keep to these
simplifications for now.)
foxes move in --> thorns appear --> thorns provide cavities --> fleas
catch up

dynamics 1 (selection equation) --> dynamics 2 (population dynamics)
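The story can be given a hedged numerical reading. All parameter values and the switching rule below are my own inventions for illustration: segment 1 is a selection equation for the thorny trait; once thorns dominate, a variable that did not exist in the first system's state space (the flea population) appears, with its own population dynamics.

```python
# Toy two-segment reading of the fox-hedgehog-flea tale (my own
# parameters; only the structure matters).
dt = 0.01

# --- dynamics 1: selection equation (replicator dynamics) for the
# frequency p of thorny hedgehogs under fox predation pressure
p = 0.01        # thorns initially rare
s = 2.0         # selection coefficient: foxes favour thorny prey
while p < 0.9:  # run until thorns dominate the population
    p += dt * s * p * (1 - p)

# --- dynamics 2: flea population growth in the cavities the thorns
# provide; this variable was absent from the first system's state space
fleas = 1.0
K = 100.0       # carrying capacity set by the available cavities
for _ in range(2000):
    fleas += dt * 0.8 * fleas * (1 - fleas / K)  # logistic growth

print(round(p, 2), round(fleas, 1))
```

The second system is not a continuation of the first in the same state space: the event 'thorns appear' creates the very variables the later dynamics is written in.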
Brain Theory
Brain theory should integrate mental models and the other elements of
the "material mind".
It is difficult to see how this could be done within (singular) dynamical
systems.
The proposal is that brain theory should "go causal"; phenomenologically
speaking, it should embrace multiple dynamics.
An extending remark: in general, science should go causal; recall its
association with theories of the mind.
A more Baconian science is needed.
(A Speculative Outlook
My recent main interest is in active perception, episodic representation,
and the causal nature of narratives.
This implies that there is, in the mind, a simulated world full of
objects.
An immediate task is the characterization of causal
and representational structures.
My interest is twofold: in special causal systems
called mechanisms, and in causal intentionality and agency.
Here is my layman's speculation.
What we call visual representation is grossly misleading.
This is just how consciousness is informed. Note,
most emphatically, that consciousness is visual almost by definition.
To speak about visual representations is a typical
fallacy of the first-person look, which unduly permeates the theory of
mind.
Really, it is just representations, plus visualization-by-consciousness.
It is here that objects etc. are constructed in the mind.
(Clearly there is nothing like
objects, not even images of objects, in the perceptual input.)
This responds to the puzzle of why "visual" parts of the brain
are so extended and central: they are unrelated to vision (they work in
the same way in the blind).
In other words, the suggestion is that what we see
is directly the representation, the mental model itself.
Consciousness, awareness, etc., "the mind's eye", is
looking at the internal, and not the external, world.
(This has nothing to do with the
issue of idealism versus realism. Since Kant, the constructed world of
perspectivist realism is just the real world.)
)
Conclusions
- causality is a different way of looking at dynamical phenomena that
'goes beyond' the usual notion of dynamics
- it justifies talk about flexible, changing dynamic structures
- multiple dynamics is a tool to capture this in the context of mind --->
brain