
19 Feb 2013

Andy Clark. Memento’s Revenge: The Extended Mind, Extended, summary


summary by
Corry Shores
[Central Entry Directory]

[Posthumanism Entry Directory]

 


Andy Clark

Memento’s Revenge:
The Extended Mind, Extended
 

Brief Summary:

The debate over Clark and Chalmers’s extended mind hypothesis reveals certain misunderstandings of their basic premises, as well as how little consensus there is over what qualifies as mental.



Summary

Clark recalls the premise of the movie Memento. The main character, Leonard, has “anterograde amnesia,” which means he can no longer record new memories. But he wants to find his wife’s killer despite this limitation. To find the killer, he needs “to build up a stock of new beliefs”, and so he uses notes, annotated photos, and body tattoos to serve a function that biological memory would have served. Another character tells Leonard that he does not know anything and will forget this very conversation in ten minutes. But Leonard believes that he does come to know things each day with his photos, tattoos and other techniques. (43)


Clark and Chalmers’s “The Extended Mind” deals with matters of this nature: can mental processes extend into external systems? [Menary discusses the article and its argumentation here.]

Is the mind contained (always? sometimes? never?) in the head? Or does the notion of thought allow mental processes (including believings) to inhere in extended systems of body, brain, and aspects of the local environment? The answer, we claimed, was that mental states, including states of believing, could be grounded in physical traces that remained firmly outside the head. (43, boldface mine)

 

In this article Clark will defend extended mind against various critiques. (43-44)


First he will review the argument from the paper.


1. Tetris and Otto


One example in the Clark & Chalmers essay “The Extended Mind” is Tetris. A player can rotate pieces [1] with their mind, [2] with a button, or [3] with something like a computer chip implant. [1] is clearly mental, [2] is clearly external, and [3] is hard to classify; but consider example [4], a Martian with natural “biotechnological machinery” that does what [3] does. In that case we would consider it mental.


The Parity Principle:

If, as we confront some task, a part of the world functions as a process which, were it to go on in the head, we would have no hesitation in accepting as part of the cognitive process, then that part of the world is (for that time) part of the cognitive process. (44)


The parity principle says that all the different cases are on a par, even though there are noted differences, such as [2] involving perception. But Clark is less concerned with whether the rotation tool is external than with how available it is for use: in [2] it is limited to the Tetris console, while in [3] and [4] it is carried around with the user. (44)


Clark and Chalmers’s paper then gave the example of Otto and Inga to illustrate how beliefs can be found in external systems.


Inga remembers that the Museum of Modern Art is on 53rd Street, then heads there. Otto has Alzheimer’s, so before going to MOMA, he must first consult the notebook he always carries, where he stores such information. Both Inga and Otto believed MOMA was on 53rd Street, only Otto’s long-term belief was recorded in his trusty notebook rather than in his head. So the two memory/belief systems are on a par. (45)


Also in the paper, C&C argued that their externalism is not like the passive, reference-based externalism of Putnam and Burge. (45)


The authors also allowed that the external parts of cognitive systems might not be conscious; they can nonetheless still be part of the cognitive system. (45-46)


They also offered criteria concerning the reliability of the external part of the cognitive system. (quoting:)

1. That the resource be reliably available and typically invoked. (Otto always carries the notebook and won’t answer that he “doesn’t know” until after he has consulted it).

2. That any information thus retrieved be more or less automatically endorsed. It should not usually be subject to critical scrutiny (unlike the opinions of other people, for example). It should be deemed about as trustworthy as something retrieved clearly from biological memory.

3. That information contained in the resource should be easily accessible as and when required. (46)



So a book in a library would not fulfill these conditions, but the brain implant would. And while mobile access to Google would fail criterion [2], Otto’s notebook satisfies all the criteria. (46)


Clark addresses the ‘Otto two-step’ critique, which says there are two beliefs and not one: the first belief is that the information about MOMA is in the notebook, leading to a second belief, that MOMA is on 53rd Street. (46)


But couldn’t the same be said of Inga’s process of info retrieval? (46)


The difference might be that for Inga the extra step adds unnecessary complexity to the description of the process, since her biological memory storage is transparent to her. But the notebook is transparent for Otto too, and in both cases the two-step account adds unnecessary complexity. (46-47)


Thus

Inga’s biological memory systems, working together, govern her behaviors in the functional ways distinctive of believing. Otto’s biotechnological matrix (the organism and the notebook) governs his behavior in the same sort of way. So the explanatory apparatus of mental state ascription gets an equal grip in each case, and what looks at first like Otto’s action (looking up the notebook) emerges as part of Otto’s thought. Mind, we conclude, is congenitally predisposed to seep out into the world. (47, boldface mine)


2. Intrinsic Content


Clark then addresses Adams and Aizawa’s [A&A] criticisms of the extended mind. They call C&C’s extended mind hypothesis a ‘transcranial’ position, “the view that ‘cognitive processes extend in the physical world beyond the bounds of the brain and the body’ (Adams and Aizawa 2001, p. 43).” [47] They think that while this is logically possible, and conceivable for alien species, it is false for humans; they instead take the position of ‘contingent intracranialism about the cognitive.’


A&A think that Otto’s notebook depends on derived content, whereas inner symbols in the mind have intrinsic content. Clark quotes A&A:

strings of symbols on the printed page mean what they do in virtue of conventional associations. . . . The representational capacity of orthography is in this way derived from the representational capacities of cognitive agents. By contrast, the cognitive states in normal cognitive agents do not derive their meanings from conventions or social practices. (Ibid., p. 48)

And later on:

Whatever is responsible for non-derived representations seems to find a place only in brains. (Ibid., p. 63) [47]

 

Clark is disinclined to think that neural representations have intrinsic contents not found in external inscriptions. But for the sake of argument he will suppose this, and will examine the “fundamental distinction between inscriptions whose meaning is conventionally determined and states of affairs (e.g., neural states) whose meaning-bearing features are thus not parasitic.” (48) Clark does not see the reason to think that all aspects of our mental processing are composed of intrinsic contents. (48)


Clark gives the example of imagining Venn diagrams when solving a problem. Here the contents are a matter of convention. (48)


A&A might object that the Venn diagram image in the head triggers mental processes that use intrinsic contents, and that this is where the understanding consists. However, Otto’s notebook does the same for his head. A&A might also say that Inga’s stored memory has content that is intrinsic throughout. (48)


To answer, Clark offers this thought experiment. Consider Martians who recall from their heads bitmapped images of words stored there, like people with photographic memories who first imagine the page and then read from that image. (48)


In light of this we see that Otto’s use of the notebook involves no less intrinsic content. (48-49)


In fact, A&A even concede that it is unclear to what extent cognitive processes use non-derived content. (49)


3. Scientific Kinds and Functional Similarity


A&A think that there can be no unified science of the extended mind, because the causal processes involved in external and internal cognitive systems are too dissimilar for science to lump them together in its analyses of cognition. (49)


Consider how in the Tetris example the external system uses electrons firing on a cathode ray tube and muscular activity to press the button. Similarly, Otto’s notebook system involves motor processing not found in Inga’s system. There are not many similarities between different external memory systems, and transcranial systems probably do not exhibit interesting scientific law-like regularities, whereas neurological systems do. (49-50)


But complexity theory explains a wide range of phenomena. (50)


So there could someday be a systematic description of transcranial systems. (50)


The problem in A&A’s argument lies in their assertion that the cognitive must be discriminated on the basis of underlying causal processes. (50) But A&A have not proven this impossible for the extended mind. Nor have they shown that all cognitive processes must exhibit the same law-like behavior.


In fact, inner cognitive processes might be heterogeneous.

It is quite possible, after all, that the inner goings-on that Adams and Aizawa take to be paradigmatically cognitive themselves will turn out to be a motley crew, as far as detailed causal mechanisms go, with not even a family resemblance (at the level of actual mechanism) to hold them together. It is arguable, for example, that conscious seeing and nonconscious uses of visual input to guide fine-grained action involve radically different kinds of computational operation and representational form (Goodale and Milner 1992; Milner and Goodale 1996). [51, emphasis mine]


For example, watching sports involves motor elements, but imagining a lake does not. (51)

Thus

In the light of all this, my own suspicion is that the differences between external-looping (putatively cognitive) processes and purely inner ones will be no greater than those between the inner ones themselves. But insofar as they all form parts of a flexible and information-sensitive control system for a being capable of reasoning, of feeling, and of experiencing the world (a “sentient informavore,” if you will) the motley crew of mechanisms have something important in common. It may be far less than we would require of any natural or scientific kind. (51, boldface mine)


So the argument from scientific kinds is doubly flawed: [1] it has a limited conception of what makes a proper scientific enterprise and [2] it wrongly thinks there cannot be some higher-level unification of extended cognitive systems. (51)


In fact such a unifying science has been developing for a while now. Consider for example HCI (human-computer interaction) and also HCC (human-centered computing). (52)


A&A create a false dilemma: either [a] C&C are mistaken about the causal facts of cognition (since they equate two causally different systems), or (more likely) [b] they are closet behaviorists (because they say that Inga’s and Otto’s cognitions are the same on the grounds that their outward behaviors [of going to MOMA] are the same). [52]


But C&C are not saying that Otto’s and Inga’s cognitive processes are identical; in fact, they are very dissimilar in many respects. And C&C are not behaviorists but rather functionalists, because what matters is the way that information guides reasoning. (52)


Dartnall objects that the Otto example relies on an outdated, static model of memory. (52-53)


The problem with this objection is that it regards the notebook itself as a cognitive system rather than a part of a greater dynamic process. (53)


Clark suggests another example, rote learning, which is a lot like a ‘static’ component of one’s memory system. (53)


The functional similarity of Otto’s notebook and Inga’s biological memory is apparent in the role the retrieved info plays in guiding current behavior. (54)


Information obtained from the notebook guides Otto’s current behavior the same way biological memory does. (54)


So even if the information is false, it still guides Otto’s behavior. (54)


Also, back when human cognition was pictured as text- and rule-based, no one concluded that humans were not cognizers. (54)


But the bigger issue is the problem of extending the notion of cognition beyond the normal human expression of it. C&C say that our cognition can extend to external systems. (54-55)


4. On Control


Keith Butler says that computational and cognitive control lie in the head, and that this control is lacking in external systems. (55)


So the brain has the final say, even if there are external aids in its functioning. (55)


There are two issues here: [1] that neural computation is the locus of computational and cognitive control, and [2] that the neural processes at issue are quite distinct from their external counterparts. (55)


But what if only one part of the brain, like the frontal lobes, has the final say? Does that limit cognition to just that part? (55-56)


Also, we should not divorce the control part of cognition from the stored-memory part, because the stored memory influences our beliefs and shapes our behaviors. (56)


So the argument from ultimate control does not touch upon the mark of the mental or upon the source of the self. (56)


5. Perception and Development


Another problem is that Otto needs to use perception to ‘read in’ the information. (56)


So Butler says that because Otto must perceive the world rather than simply introspect like Inga, the two systems are too dissimilar to group together. (56)


But in C&C’s view, Otto’s whole system includes the notebook, thus making his perception of it internal to the system.

But from our point of view, Otto’s inner processes and the notebook constitute a single, extended cognitive system. Relative to this system, the flow of information is wholly internal and functionally akin to introspection. (57, boldface mine)


Davies says that Otto can misperceive the notebook information. But Inga can likewise misremember her info. (57)


Davies also notes that perception is of things that are publicly accessible, whereas memory involves only personal access. (57)


Clark also notes that Otto has a special relationship to his notebook, because he automatically endorses what is written in it. (57)


But what if in the future technology allows someone else to tap into your memories? Does that then make them any less your own? (57)


Chrisley observes that as children we do not regard our memories as objects or resources, because we do not encounter our memories perceptually. “Might it be this special developmental role that decides what is to count as part of the agent and what is to count as part of the (wider) world?” (58)


But a child also perceives its body parts as objects in the world. (58)


If we hold that the child experiences her hands not as external but as internal, then we can also imagine someone experiencing their eyeglasses in the same way. (58)


Thus

The developmental point, though interesting, is thus not conceptually crucial. It points only to a complex of contingent facts about human cognition. What counts in the end, though, is the resource’s current role in guiding reasoning and behavior, not its historical positioning in a developmental nexus. (58)


6. Perception, Deception, and Contested Space


Sterelny’s critique is that our external cognitive resources operate in a common and often contested space, that is, a shared space apt for sabotage and deception by other agents. (58) Also, the development and functional poise of the perceptual systems involved in extended systems are radically different from those of biologically internal channels of information flow. (59)


But Sterelny does accept that external epistemic artifacts aid our cognition. He just denies that the external parts reduce the brain’s load, and he denies that we can couple with them to produce a larger system. (59)


Sterelny thinks that within our biological cognitive systems, information flows within a community of cooperative and coadaptive parts. These systems evolved such that signals became clearer, less noisy, and more reliable. But information gained from the public sphere involves working with unreliable sources, as others can sabotage our information through deception. (59)


To deal with deceptions in the world we perceive, we often use caution or “tools of folk logic.” (59-60)


But we do not always treat our perceptions so cautiously, and when we do not, this is where extended mind seems to have a place. (60)


Magicians’ illusions tell us that our minds are extending into the world, because we are relying on the external scene as a stable and reliable substitute for our internally stored memories. We consider the possibility of being tricked low enough that we are still willing to rely on external factors for our cognition. (60)


If Otto began to mistrust his notebook, then it would cease to be part of his cognitive system. (60)


Thus Sterelny’s critique, in the end, really supports the extended mind.

To decide, in any given case, whether the channel is acting more like one of perception or more like one of internal information flow, look (in part) to the larger functional economy of defenses against deception. The lower the defenses, the closer we approximate to an internal flow. (61)

 

But Sterelny might respond by emphasizing not our guarding against deception but rather our vulnerability to it. Yet we could conceive of an alien magician who, instead of tricking our perceptions, messes with our synapses to get us to believe a falsehood. So we can be internally vulnerable too. There is something like a threshold of tolerance for reliability; Otto thinks the risk of someone tampering with his notebook is low enough that he can rely on it. (61)


Clark does admit that publicly stored beliefs are quite different from internally stored ones; however, there is still not adequate reason to exclude the external beliefs from one’s own dispositional beliefs. (62)


What is essential for the functional poise of stored information to count as the individual’s stock of dispositional beliefs is that

the information be typically trusted and that it guide gross choice, reason, and behavior in roughly the usual ways. To unpack this just a tiny bit further, we can say that it should guide behavior, reason, and choice in ways that would not cause constant misunderstandings and upsets if the agent were somehow able to join with, or communicate with, a human community. (62)


7. An Alternative Ending?


We recall Clark’s response to A&A’s criticism that inner and outer cognitive systems are too dissimilar for a unified scientific analysis of them both as one system. Clark now wonders if maybe this means that “the realm of the mental is itself too disunified to count as a scientific kind”. (62)


Clark and Prinz had abandoned a paper where they explored this possibility. They were going to argue that there is no unified and coherent understanding of the idea of ‘mind’ that is used in various philosophical and scientific projects. (62-63)


They noted how mental predicates have been assigned to a wide variety of cases, spanning from thermostats to language-less animals to computers. Hence there is no consensus on where mentality is located. (63)


The concept of mind is now understood both as rooted in conscious experience and occurrent thoughts and as extending into the realm of non-conscious processes and long-term stored knowledge. (63) Those who stress occurrent processes shrink the mind too small (this might even rule out parts of the brain), while those who reject the extended mind probably find that it expands the mental too far. Clark wonders whether we can eliminate the concept of mind altogether. (63)


But he does not think so.

For as I noted in section 3, despite the mechanistic | motley, we may still aspire to a science of the mind. Granted, this will be a science of varied, multiplex, interlocking, and criss-crossing causal mechanisms, whose sole point of intersection may consist in their role in informing processes of conscious reflection and choice. It will be a science that needs to cover a wide variety of mechanistic bases, reaching out to biological brains, and to the wider social and technological milieus that (I claim) participate in thought and reason. It will have to be that accommodating, since that very mix is what is most characteristic of us as a thinking species (see Clark 2003 ). If we are lucky, there will be a few key laws and regularities to be defined even over such unruly coalitions. But there need not be. The science of the mind, in short, won’t be as unified as physics. But what is? (63-64, emphases mine)

 

In sum, I am not ready to give up on the idea of minds, mentality, and cognition any day soon. The extended mind argument stands not as a reductio but as originally conceived: a demonstration of the biotechnological openness of the very ideas of mind and reason. (64, emphases mine)


Conclusions


Those who reject the extended mind think the mind is merely the activity of the brain. Those who accept it might be ones who “identify the mind with an essentially socially and environmentally embedded principle of informed agency (i.e., the fans of situated cognition).” (64) Clark thinks we have not hit bottom with this debate. There are still matters to explore, like the role of emotions and the body in cognition. It is still unclear what qualifies as mental. (64)


Thus the matter is not settled enough to determine with certainty whether Leonard from Memento increases his stock of beliefs with every new tattoo. (65)



Andy Clark. “Memento’s Revenge: The Extended Mind, Extended.” In Richard Menary (ed.), The Extended Mind. Cambridge/London: MIT Press, 2010.


