22 Feb 2013

Andy Clark. 8.2 of Being There, “What is this Thing Called Representation?,” summary


summary by
Corry Shores



Andy Clark

Being There:
Putting Brain, Body, and World Together Again

Ch.8
Being, Computing, Representing


Part 8.2
What is this Thing Called Representation?



Brief Summary:

On one view in cognitive science, ‘thinking’ systems such as brains and thinking machines harbor internal representations that correlate with features of the external world but that can also be processed without the direct causal influence of external factors.



Summary

Clark will discuss “internal representations”, which cognitive scientists often describe as being housed by brains and computer models. The concept helped bridge connectionism and classical artificial intelligence: both camps agreed that there are such internal representational systems, but they disagreed on their precise nature. (142d) The classicists thought mental contents were “tokened as strings of symbols that could be read, copied, and moved by some kind of inner central processing unit” (143a); they thus believed in a “chunky symbolic” inner economy of mental contents. Connectionists, however, “believed in a much more implicit style of internal representation: one that replaced strings of chunky, manipulable symbols with complex numerical vectors and basic operations of pattern recognition and pattern transformation.” (144a)
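[To picture the contrast in code (my own illustrative sketch, not anything in Clark’s text; all names and values are hypothetical): the same item of content can be carried either as a classical string of manipulable symbols or as a connectionist-style numerical vector processed by pattern transformation.]

```python
import numpy as np

# Classical style: content tokened as a string of discrete symbols that an
# inner "central processor" can read, copy, and move around.
classical_token = ("ON", "CUP", "TABLE")
copied_token = tuple(classical_token)          # trivially read and copied

# Connectionist style: the same content carried implicitly as a numerical
# vector and processed by pattern transformation (a matrix multiplication
# stands in here for a trained network layer).
rng = np.random.default_rng(0)
distributed_code = rng.normal(size=8)          # hypothetical activation vector
weights = rng.normal(size=(8, 8))              # hypothetical learned weights
transformed = np.tanh(weights @ distributed_code)

print(copied_token)
print(transformed.round(2))
```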


Both views see mental contents as internal representations. Haugeland lists the criteria for an internal representational system. [Quoting Clark:]

(1) It must coordinate its behaviors with environmental features that are not always "reliably present to the system."

(2) It copes with such cases by having something else (in place of a signal directly received from the environment) "stand in" and guide behavior in its stead.

(3) That "something else" is part of a more general representational scheme that allows the standing in to occur systematically and allows for a variety of related representational states (see Haugeland 1991, p. 62). [144b]

Now consider plants that track the sun with their leaves. The sun’s changing position itself guides the leaves’ motion. Because the plant system is controlled by an environmental feature that is reliably present to it, it does not satisfy the first criterion. [Consider instead a solar panel programmed to turn with the sun’s position. It need not even ‘see’ where the sun is; it can be driven by astronomical predictions of the sun’s position. These predicted coordinates ‘stand in’ for the direct control that the sun’s actual position would otherwise exert on the panel’s movement.] The second criterion says that instead of the environmental feature itself, something else stands in for it and guides the system’s behavior. [Now consider how, before eating, we see the food and our stomach produces gastric juices in anticipation. In a way,] our gastric juices stand in for the food, and in that way represent its future presence. However, the third criterion says that whatever stands in must be part of a larger representational scheme, and the gastric juices are not. [The sun’s predicted coordinates, by contrast, do fit within a larger system of geometrical representation.] Clark thinks that the role of the decouplability of inner and outer states in determining behavior is overplayed. (144c.d)
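[A minimal sketch of the solar-panel example above (my own illustration, not Clark’s; the azimuth values are invented placeholders, not real ephemeris data): the controller never senses the sun, and a table of predicted positions stands in for the sun’s actual position in guiding the panel.]

```python
# Predicted solar azimuths (degrees) by hour; invented placeholder values.
PREDICTED_AZIMUTH_BY_HOUR = {6: 90.0, 9: 120.0, 12: 180.0, 15: 240.0, 18: 270.0}

def panel_target_angle(hour: int) -> float:
    """Angle the panel should face, read from the inner stand-in
    rather than from any light sensor."""
    # Default to due south (180 degrees) for hours outside the table.
    return PREDICTED_AZIMUTH_BY_HOUR.get(hour, 180.0)

for hour in (6, 12, 18):
    print(hour, panel_target_angle(hour))
```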


But consider the way neurons in a rat’s brain carry coded signals for the direction in which the head is pointing. The system uses a general representational scheme, yet Clark doubts that this part of the system could function if it were decoupled from “the continuous stream of proprioceptive signals from the rat’s body.” (145)
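[A toy simulation of such a head-direction code (my own sketch, not Clark’s or the original study’s model): a ring of simulated cells with bell-shaped tuning to head direction produces a population code, and a downstream ‘reader’ recovers the direction from that activity pattern. Clark’s point is that in the real rat this code is continuously fed by bodily proprioceptive signals rather than running decoupled from them.]

```python
import numpy as np

# Toy head-direction system: each "cell" fires most when the head points at
# its preferred direction, falling off for nearby directions (von Mises tuning).
preferred = np.linspace(0, 2 * np.pi, 36, endpoint=False)   # 36 cells around the ring

def population_response(head_direction: float, kappa: float = 4.0) -> np.ndarray:
    """Firing rates of all cells for a given head direction (radians)."""
    return np.exp(kappa * np.cos(head_direction - preferred))

def decode(rates: np.ndarray) -> float:
    """Read the code back out with a population vector (circular mean)."""
    return float(np.angle(np.sum(rates * np.exp(1j * preferred))) % (2 * np.pi))

true_direction = np.deg2rad(75)
rates = population_response(true_direction)
print(np.rad2deg(decode(rates)))   # ~75 degrees
```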


A strict application of Haugeland’s criteria will not help us understand the flows of information from the body into such neuronal systems. (145)


So Haugeland’s criteria are a bit too restrictive; nonetheless, we still need a way to constrain the applicability of the concept of internal representation. For, we need to rule out simple cases of environmental control over the system’s behavior. Nor is internal complexity in a system by itself enough to qualify it as inner representation. In addition, a mere correlation between an inner state and some environmental parameter is insufficient for inner representation [recall the gastric juice example]. “It is thus important that the system uses the correlations in a way that suggests that the system of inner states has the function of carrying specific types of information.” (146a)


So the fact that the tides correlate with the moon’s position does not mean that either represents the other; for, the correlation was neither designed nor evolved for the purpose of carrying information about the other’s variations. On the other hand, the neuronal activity in the rat’s brain does seem to have the purpose of carrying information about the head’s position. (146b.c)


So what will qualify an inner state as a representation will have to do with the role it plays in the system.

It may be a static structure or a temporally extended process. It may be local or highly distributed. It may be very accurate or woefully inaccurate. What counts is that it is supposed to carry a certain type of information and that its role relative to other inner systems and relative to the production of behavior is precisely to bear such information. (146cd)


So Clark proposes that we

call a processing story representationalist if it depicts whole systems of identifiable inner states (local or distributed) or processes (temporal sequences of such states) as having the function of bearing specific types of information about external or bodily states of affairs. (147a, boldface mine)

Consider such adaptive hook-ups as the sunflower tracking the sun’s position, or a robot seeking light. Clark thinks there is little to gain by calling such adaptive hook-ups representational.

Representation talk gets its foothold, I suggest, when we confront inner states that, in addition, exhibit a systematic kind of coordination with a whole space of environmental contingencies. In such cases it is illuminating to think of the inner states as a kind of code that can express the various possibilities and which is effectively "read" by other inner systems that need to be informed about the features being tracked. Adaptive hookup thus phases gradually into genuine internal representation as the hookup's complexity and systematicity increase. At the far end of this continuum we find Haugeland's creatures that can deploy the inner codes in the total absence of their target environmental features. Such creatures are the most obvious representers of their world, and are the ones able to engage in complex imaginings, off-line reflection, and counterfactual reasoning. Problems that require such capacities for their solution are representation hungry, in that they seem to cry out for the use of inner systemic features as stand-ins for external states of affairs. (147bc)
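[One way to picture this continuum in code (again my own illustrative sketch, not Clark’s): a bare adaptive hookup wires the light sensors straight to the motor, whereas a more representation-like system maintains an inner estimate that other subsystems ‘read’, and that can keep guiding behavior even when the light is occluded.]

```python
from typing import Optional

# Bare adaptive hookup: sensors drive the motor directly; nothing stands in
# for the light itself.
def hookup_step(light_left: float, light_right: float) -> float:
    return light_right - light_left          # steer toward the brighter side

# Representation-like system: an inner estimate of the light's bearing is
# updated when sensing is possible and "read" by the motor system even when
# it is not.
class LightSeeker:
    def __init__(self) -> None:
        self.estimated_bearing = 0.0          # inner state standing in for the light

    def sense(self, measured_bearing: Optional[float]) -> None:
        if measured_bearing is not None:      # light currently visible
            self.estimated_bearing = measured_bearing

    def steer(self) -> float:
        # The motor system consults the inner code, not the world directly,
        # so behavior can continue "off-line" when the light is occluded.
        return self.estimated_bearing

robot = LightSeeker()
robot.sense(30.0)      # light seen at a bearing of 30 degrees
robot.sense(None)      # light now occluded
print(robot.steer())   # still steers toward 30 degrees
```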


Those drawn to dynamical systems theory tend to reject information-processing accounts “that identify specific inner states or processes as playing specific content-bearing roles.” (148b) They thus might endorse this radical thesis:

Thesis of Radical Embodied Cognition  Structured, symbolic, representational, and computational views of cognition are mistaken. Embodied cognition is best studied by means of noncomputational and nonrepresentational ideas and explanatory schemes involving, e.g., the tools of Dynamical Systems theory. (148c)


Many scientists already hold this view. (149a)


But Clark thinks such a strong reaction is unwarranted, and he will explain why in the following sections. (149b)




Andy Clark. Being There: Putting Brain, Body, and World Together Again. Cambridge, Massachusetts/London: MIT, 1997.
