11 Mar 2013

Some Recent Scientific Developments in Brain Machine Interface (for Robotic Prosthesis), Neuroplasticity, Neurocomputation, and Whole Brain Emulation

summary by Corry Shores

 

[All boldface is my own]




Some (mostly) Recent Scientific Developments in Brain Machine Interface (for Robotic Prosthesis), Neuroplasticity, Neurocomputation, and Whole Brain Emulation



Brief Summary: New scientific advances support the posthuman vision of robotically enhanced and reconstructed post-humans. Neuroplasticity and brain-machine interfaces (also called brain-computer interfaces) allow brains to control robotic parts just as they control biological ones. Whole brain emulation and cognitive prosthetics could allow brain-implanted chips to replace or enhance our brain functioning, perhaps even completely “uploading” our brain onto a computerized simulation. Progressive replacement of bodily and neural parts with robotic and computerized ones could enable a complete and continuous transition from human to robot.

 




"Brain" In A Dish Acts As Autopilot Living Computer

Explore: Research at the University of Florida

Spring 2005 Vol. 10 No.1

http://www.research.ufl.edu/publications/explore/v10n1/extract2.html


Thomas DeMarse has created a miniature living “brain” in a dish: he cultured rat cortical neurons that grew connections to form a network, and this network can perform tasks in a virtual world.

“It’s essentially a dish with 60 electrodes arranged in a grid at the bottom,” DeMarse said. “Over that we put the living cortical neurons from rats, which rapidly begin to reconnect themselves, forming a living neural network — a brain.”

The brain and the simulator establish a two-way connection, similar to how neurons receive and interpret signals from each other to control our bodies. By observing how the nerve cells interact with the simulator, scientists can decode how a neural network establishes connections and begins to compute, DeMarse said.

When DeMarse first puts the neurons in the dish, they look like little more than grains of sand sprinkled in water. However, individual neurons soon begin to extend microscopic lines toward each other, making connections that represent neural processes. “You see one extend a process, pull it back, extend it out — and it may do that a couple of times, just sampling who’s next to it, until over time the connectivity starts to establish itself,” he said. “(The brain is) getting its network to the point where it’s a live computation device.”

To control the simulated aircraft, the neurons first receive information from the computer about flight conditions: whether the plane is flying straight and level or is tilted to the left or to the right. The neurons then analyze the data and respond by sending signals to the plane’s controls. Those signals alter the flight path and new information is sent to the neurons, creating a feedback system.

“Initially when we hook up this brain to a flight simulator, it doesn’t know how to control the aircraft,” DeMarse said. “So you hook it up and the aircraft simply drifts randomly. And as the data come in, it slowly modifies the (neural) network so over time, the network gradually learns to fly the aircraft.”
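
To make the feedback cycle concrete, here is a minimal toy sketch of that kind of closed loop, with a simple adaptive gain standing in for the living neural network. The one-axis “simulator,” the numbers, and the learning rule are my own illustrative assumptions, not details of DeMarse’s actual system.

```python
import random

# Toy closed-loop sketch of the cycle described above: an adaptive gain stands in
# for the cultured neural network. It receives the plane's roll error, answers
# with a control signal, and the "simulator" feeds the new state back in. The
# numbers and the learning rule are illustrative, not details of DeMarse's setup.

roll = 30.0          # degrees off level at the start
gain = 0.0           # the "network's" learned response strength
learning_rate = 0.0002

for step in range(300):
    error = roll                              # flight condition sent to the network
    command = -gain * error                   # network's signal back to the controls
    roll = roll + command + random.uniform(-1.0, 1.0)   # new attitude plus turbulence
    gain = min(0.9, gain + learning_rate * abs(error))  # crude "learning" rule

print(f"final roll: {roll:+.1f} degrees, learned gain: {gain:.2f}")
```

At first the gain is zero and the plane drifts; as the learning rule strengthens the response, the loop begins holding the plane near level, which is the general shape of the behavior DeMarse describes.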

Although the brain currently is able to control the pitch and roll of the simulated aircraft in weather conditions ranging from blue skies to stormy, hurricane-force winds, the underlying goal is a more fundamental understanding of how neurons interact as a network, DeMarse said.

“There’s a lot of data out there that will tell you that the computation that’s going on here isn’t based on just one neuron. The computational property is actually an emergent property of hundreds or thousands of neurons cooperating to produce the amazing processing power of the brain.”



Monkeys Think, Moving Artificial Arm as Own

By Benedict Carey

New York Times

Published: May 29, 2008

http://www.nytimes.com/2008/05/29/science/29brain.html?_r=0


Two monkeys with brain-controlled prosthetics successfully use their robotic arms to reach for food and feed it to themselves.

Two monkeys with tiny sensors in their brains have learned to control a mechanical arm with just their thoughts, using it to reach for and grab food and even to adjust for the size and stickiness of morsels when necessary, scientists reported on Wednesday.

The report, released online by the journal Nature, is the most striking demonstration to date of brain-machine interface technology. Scientists expect that technology will eventually allow people with spinal cord injuries and other paralyzing conditions to gain more control over their lives.


ALSO reported at MIT Technology Review

Monkey Thinks Robot into Action

A monkey is able to feed itself with a robotic arm.

    By Emily Singer

MIT Technology Review

May 28, 2008

http://www.technologyreview.com/news/410189/monkey-thinks-robot-into-action/


“It’s the first time a monkey–or a human–is directly, with their brain, controlling a real prosthetic arm,” says Krishna Shenoy, a neuroscientist at Stanford University who was not involved in the research. (Singer)




TED

Henry Markram: A brain in a supercomputer
Filmed Jul 2009 • Posted Oct 2009 • TEDGlobal 2009

http://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets.html


Supercomputers are being used to simulate brain activity. The researchers began with animal brains and are moving up to the human brain. They first catalogued neurons and described their interactive behavior, and they can now simulate human neuronal activity on a small scale. Also see:

http://en.wikipedia.org/wiki/Blue_Brain_Project




Rat memory under computer simulation

Eric Mankin

Public release date: 17-Jun-2011

Restoring memory, repairing damaged brains
Biomedical engineers analyze -- and duplicate -- the neural mechanism of learning in rats

EurekAlert!

http://www.eurekalert.org/pub_releases/2011-06/uosc-rmr061211.php


Scientists have developed a way to turn memories on and off—literally with the flip of a switch.

Using an electronic system that duplicates the neural signals associated with memory, they managed to replicate the brain function in rats associated with long-term learned behavior, even when the rats had been drugged to forget.

"Flip the switch on, and the rats remember. Flip it off, and the rats forget," said Theodore Berger of the USC Viterbi School of Engineering's Department of Biomedical Engineering.” (Mankin)



ALSO reported in The New York Times

Memory Implant Gives Rats Sharper Recollection

By Benedict Carey

The New York Times

Published: June 17, 2011

http://www.nytimes.com/2011/06/17/science/17memory.html?_r=0


The authors said that with wireless technology and computer chips, the system could be easily fitted for human use.
(Carey)




New horizons in auditory prostheses

Zeng, Fan-Gang PhD

Hearing Journal

November 2011 - Volume 64 - Issue 11 - pp 24,26,27

http://journals.lww.com/thehearingjournal/Fulltext/2011/11000/New_horizons_in_auditory_prostheses.5.aspx


There are many recent developments in cochlear implants.

All contemporary cochlear implants use similar signal processing that extracts temporal envelope information from a limited number of spectral bands, and delivers these envelopes successively to 12-22 electrodes implanted in the cochlea. As a result, these implants produce similarly good speech performance: 70-80 percent sentence recognition in quiet, which allows an average cochlear implant user to carry on a conversation over the telephone. Interestingly, though, sentence recognition in quiet has essentially remained at this same level since 1994. (Figure 1.)
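
Since the passage describes the common processing strategy (split the signal into spectral bands, extract each band’s temporal envelope, send the envelopes to electrodes), a minimal sketch may help. This is only an illustration of the envelope-extraction idea: the band count, band edges, and smoothing window below are arbitrary choices, not any device’s clinical parameters.

```python
import numpy as np

# Illustrative sketch of the strategy described above: divide the signal into a
# small number of spectral bands, take each band's temporal envelope (rectify +
# smooth), and treat each envelope as the drive for one implanted electrode.
# Band count, edges, and smoothing are arbitrary, not clinical values.

fs = 16000                                    # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
signal = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

band_edges = [(100, 500), (500, 1500), (1500, 4000), (4000, 7000)]
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

envelopes = []
for lo, hi in band_edges:
    band_spec = np.where((freqs >= lo) & (freqs < hi), spectrum, 0)
    band = np.fft.irfft(band_spec, n=len(signal))      # band-limited signal
    rectified = np.abs(band)                            # rectify
    kernel = np.ones(160) / 160                          # ~10 ms moving average
    envelopes.append(np.convolve(rectified, kernel, mode="same"))

for (lo, hi), env in zip(band_edges, envelopes):
    print(f"{lo}-{hi} Hz band: mean envelope {env.mean():.3f}")
```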




Active tactile exploration using a brain–machine–brain interface

Joseph E. O’Doherty, Mikhail A. Lebedev, Peter J. Ifft, Katie Z. Zhuang, Solaiman Shokur, Hannes Bleuler & Miguel A. L. Nicolelis

Nature 479, 228–231 (10 November 2011)

http://www.nature.com/nature/journal/v479/n7372/full/nature10489.html


Monkeys operating a virtual robotic arm received artificial touch feedback delivered directly to their brains.



ALSO reported by The Huffington Post

Is It Possible To Feel Textures Using Just Brain Waves? New Study Shows How

The Huffington Post

Amanda Chan Posted: 10/07/11 11:49 AM ET

http://www.huffingtonpost.com/2011/10/07/brain-touch-texture-feelings-senses_n_996844.html

 

"This is basically one of the holy grails of this field," study researcher Miguel Nicolelis, a neurobiology professor and co-director of the Duke Center for Neuroengineering, told Bloomberg. "No other study has provided an artificial sensory channel directly to the brain of animals. This is really needed to restore in patients that have a spinal cord injury not only their mobility, but their sense of touch." (Chan)




Going mental: Study highlights brain’s flexibility, gives hope for natural-feeling neuroprosthetics

By Sarah Yang, Media Relations

UC Berkeley News Center

March 4, 2012

http://newscenter.berkeley.edu/2012/03/04/brain-flexibility-gives-hope-for-neuroprosthetics/


Researchers at the University of California, Berkeley have shown that neurons used for physical tasks can be retrained for brain-machine interface use. This suggests that neuroprosthetic control could come to feel natural.

“Their new study, to be published Sunday, March 4, in the advanced online publication of the journal Nature, shows that through a process called plasticity, parts of the brain can be trained to do something they normally do not do. The same brain circuits employed in the learning of motor skills, such as riding a bike or driving a car, can be used to master purely mental tasks, even arbitrary ones.

[…]

To clarify these issues, the scientists set up a clever experiment in which rats could only complete an abstract task if overt physical movement was not involved. The researchers decoupled the role of the targeted motor neurons needed for whisker twitching from the action necessary to get a food reward.

The rats were fitted with a brain-machine interface that converted brain waves into auditory tones. To get the food reward – either sugar-water or pellets – the rats had to modulate their thought patterns within a specific brain circuit in order to raise or lower the pitch of the signal.

Auditory feedback was given to the rats so that they learned to associate specific thought patterns with a specific pitch. Over a period of just two weeks, the rats quickly learned that to get food pellets, they would have to create a high-pitched tone, and to get sugar water, they needed to create a low-pitched tone.

If the group of neurons in the task were used for their typical function – whisker twitching – there would be no pitch change to the auditory tone, and no food reward.
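
As a rough illustration of this closed loop (neural activity decoded into a pitch, reward contingent on the pitch), here is a toy sketch. The decoder, firing-rate model, reward threshold, and “learning” increments are all invented for illustration and are not the study’s actual methods.

```python
import random

# Toy sketch of the closed loop described above: a fixed decoder maps the summed
# activity of a small "target ensemble" to a tone pitch, and a reward is given
# only when the pitch crosses a target value. The firing-rate model, threshold,
# and "learning" are invented for illustration; this is not the study's decoder.

def decode_pitch(firing_rates):
    """Map the target ensemble's summed firing rate to a tone frequency (Hz)."""
    return 200.0 + 20.0 * sum(firing_rates)

target_pitch = 750.0     # pitch required for the food-pellet reward (illustrative)
bias = 0.0               # stands in for the rat re-tuning this ensemble over trials

for trial in range(100):
    rates = [max(0.0, random.gauss(5.0 + bias, 1.0)) for _ in range(4)]
    pitch = decode_pitch(rates)
    if pitch >= target_pitch:
        bias += 0.3      # reinforce the activity pattern that earned the reward
    else:
        bias += 0.05     # slow exploratory drift in the rewarded direction

print(f"decoded pitch after training: {decode_pitch([5.0 + bias] * 4):.0f} Hz")
```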

“This is something that is not natural for the rats,” said Costa. “This tells us that it’s possible to craft a prosthesis in ways that do not have to mimic the anatomy of the natural motor system in order to work.”





Simulated brain scores top test marks

First computer model to produce complex behaviour performs almost as well as humans at simple number tasks.

    Ed Yong

Nature | News

29 November 2012

http://www.nature.com/news/simulated-brain-scores-top-test-marks-1.11914


A computer-simulated brain with 2.5 million virtual neurons can perform simple mathematical calculations.




Mind-controlled robot arms show promise

People with tetraplegia use their thoughts to control robotic aids.

    Alison Abbott

Nature | News

16 May 2012

http://www.nature.com/news/mind-controlled-robot-arms-show-promise-1.10652

[AP Report here]

Two tetraplegics use a brain-machine interface to regain some lost abilities.

“Neurosurgeons implanted tiny recording devices containing almost 100 hair-thin electrodes in the motor cortex of their brains, to record the neuronal signals associated with intention to move.” (Abbott)

Cathy can use her thoughts to direct the motion of a robotic arm; she is able to direct it to grab a bottle of coffee and lift it to her lips. Bob also operates the arm successfully. Another subject operates a computer cursor using this interface, as if operating a computer mouse. The subjects used the BrainGate2 brain implant system [image below from the BrainGate wiki page].

[Image: BrainGate model, from the BrainGate wiki page]




Paralyzed Man Uses Thoughts Alone to Control Robot Arm, Touch Friend's Hand, After Seven Years

Science Daily

Feb. 8, 2013 —

http://www.sciencedaily.com/releases/2013/02/130208124818.htm

Based on this journal article

Wei Wang et al.

An Electrocorticographic Brain Interface in an Individual with Tetraplegia. PLoS ONE, 2013; 8 (2): e55344

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0055344


Researchers at the University of Pittsburgh School of Medicine and UPMC describe in PLoS ONE how an electrode array sitting on top of the brain enabled a 30-year-old paralyzed man to control the movement of a character on a computer screen in three dimensions with just his thoughts. It also enabled him to move a robot arm to touch a friend's hand for the first time in the seven years since he was injured in a motorcycle accident. (Science Daily)


ALSO reported by AP

Paralyzed Man Uses Mind-Powered Robot Arm To Touch
Tim Hemmes

By Lauran Neergaard  

10/10/11 10:04 AM ET  

AP

http://www.huffingtonpost.com/2011/10/10/mind-powered-robot-arm_n_1003204.html


"It wasn't my arm but it was my brain, my thoughts. I was moving something," Hemmes says. (Neergaard)




Bionic Eye Implant Approved by U.S. for Rare Disease
By Anna Edney

Bloomberg

Feb 15, 2013 7:01 AM GMT+0200

http://www.bloomberg.com/news/2013-02-14/bionic-eye-implant-approved-by-u-s-for-rare-disease.html


New neuroprosthetic eye implant restores some visual capabilities.

While the $100,000-plus system won’t restore sight, it gives patients the ability to perceive the difference between light and dark. The device consists of a video camera, a transmitter mounted on a pair of eyeglasses and a processing unit that transforms images into electronic data sent to an implanted retinal prosthesis, the FDA said.

[…]

Konstantopoulos, of Glen Burnie, Maryland, said he was diagnosed with retinitis pigmentosa when he was in his early 40s and became completely blind about six months ago. He can see shadows now with the device and tell if the sun is behind a tree. Argus II is comfortable and the surgery was painless, he said.

[…]

A clinical study of 30 people showed the eye device helped patients recognize large letters or words, detect street curbs, walk on a sidewalk without falling and match black, gray and white socks.




Rats With Linked Brains Work Together
Megan Gannon, News Editor

Live Science

Date: 28 February 2013 Time: 12:23 PM ET

http://www.livescience.com/27544-rats-with-linked-brains-work-together.html


Brain plasticity is so great that brains can use information coming from other brains.

Scientists have engineered something close to a mind meld in a pair of lab rats, linking the animals' brains electronically so that they could work together to solve a puzzle. And this brain-to-brain connection stayed strong even when the rats were 2,000 miles apart.

The experiments were undertaken by Duke neurobiologist Miguel Nicolelis, who is best known for his work in making mind-controlled prosthetics.

"Our previous studies with brain-machine interfaces had convinced us that the brain was much more plastic than we had thought," Nicolelis explained. "In those experiments, the brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?"

For the new experiments, Nicolelis and his colleagues trained pairs of rats to press a certain lever when a light went on in their cage. If they hit the right lever, they got a sip of water as a reward.

When one rat in the pair called the "encoder" performed this task, the pattern of its brain activity — something like a snapshot of its thought process — was translated into an electronic signal sent to the brain of its partner rat, the "decoder," in a separate enclosure. The light did not go off in the decoder's cage, so this animal had to crack the message from the encoder to know which lever to press to get the reward.

The decoder pressed the right lever 70 percent of the time, the researchers said.
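
To picture how one animal’s brain activity could inform another’s lever choice, here is a toy sketch of the encoder-to-decoder signal chain. The firing rates, noise level, and decision threshold are invented for illustration; they are not Nicolelis’s actual recording or microstimulation parameters, and the accuracy the sketch prints is just an artifact of these invented numbers.

```python
import random

# Toy sketch of the encoder/decoder arrangement described above: the encoder
# rat's cortical activity on "left" vs. "right" trials is reduced to a firing
# rate, delivered as a stimulation signal, and the decoder rat's lever choice is
# read off a simple threshold. Rates, noise, and threshold are illustrative only.

def encoder_activity(correct_lever):
    """Firing rate recorded while the encoder rat performs the task."""
    base = 20.0 if correct_lever == "right" else 8.0
    return max(0.0, random.gauss(base, 7.0))

def decoder_choice(stimulation_rate, threshold=14.0):
    """Decoder rat's lever press, driven by the delivered stimulation."""
    return "right" if stimulation_rate > threshold else "left"

trials, correct = 1000, 0
for _ in range(trials):
    lever = random.choice(["left", "right"])
    if decoder_choice(encoder_activity(lever)) == lever:
        correct += 1

print(f"decoder rat correct on {100 * correct / trials:.0f}% of trials")
```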

[…]

"We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right," Nicolelis explained in a statement. "

[…]

The connection was not lost even when the signals were sent over the Internet and the rats placed on two different continents, 2,000 miles (3,219 kilometers) apart.






10 Mar 2013

Frederick B. Mills “A Phenomenological Approach to Psychoprosthetics”, summary

summary by Corry Shores


[My own commentary is in brackets. All boldface and underlining is my own. Extra spacing between paragraphs follows the paragraph divisions in the original text.]


Frederick B. Mills


“A Phenomenological Approach to Psychoprosthetics”


Summary

 

Brief Summary

The integration of prosthetic devices into users’ bodies and activities can be seen in terms of Merleau-Ponty’s phenomenology of the body.



Abstract [Quoting]


The phenomenology of human embodiment can advance the practitioner’s understanding of the lived human body and in particular, what it means to incorporate a prosthetic device into one’s body. In order for a prosthesis to be incorporated into the lived body of the patient, the prosthesis must arguably be integrated into the body schema. This article uses the phenomenology of Maurice Merleau-Ponty and others to identify three of the necessary conditions of embodiment that determine the body schema: corporeal understanding, transparency and sensorimotor feedback. It then examines the structure of each of these conditions of embodiment and how they impact the lived body’s incorporation of prostheses and other artifacts. (Mills p.1)



Summary


Introduction


The aim of Mills’ article is to “offer an interpretation of how some of the insights about embodiment contained in Maurice Merleau-Ponty’s work relates to the relatively new health science field of psychoprosthetics. These insights show the value of the phenomenological method to the understanding of human embodiment and introduce practitioners to the practice of phenomenology.” (1Ac)



Methodological considerations


Mills draws from Gallagher et al.’s definition of psychoprosthetics. Psychoprosthetics is “the study of ‘the psychological aspects of prosthetic use and of rehabilitative processes in those conditions that require the use of prosthetic devices’” (1Bd). Psychoprosthetics is concerned with “prosthetic technologies and the behavior and experience of the prosthesis user” (2Aa). Psychoprosthetics makes use of two complementary methodologies: 1) (object-orientation) evidence-based empirical research, which is thus mostly concerned with measurable outcomes; and 2) (subject-orientation) phenomenological efforts, normally concerned more with the qualitative lived experiences of the prosthesis user, and it “includes the systematic use of introspection, testimonials and questionnaires as research tools. Such phenomenological research, when applied to psychoprosthetics, is inter-subjective, empathetic, and focuses on what it is like to be the user of an artificial limb as he or she progresses through the rehabilitative process”. (2Aa.b)


Mills notes the importance of phenomenology in the work of Craig Murray, who claims that clinical intervention informed by phenomenological research makes it more likely that prosthesis users will not give up before their device becomes integrated into their body schema, “until the prosthetic device is integrated into their bodies. This is because the qualitative experience of prosthesis use determines to a significant degree whether a prosthesis is really being progressively integrated into the user’s body. The work of Merleau-Ponty is relevant to this task because it helps us to systematically identify and study the basic experiential features of the incorporation of artifacts into the lived body.” (2Ab citing Murray 2008)


Merleau-Ponty does not ignore the importance of empirical research, and he deals with the question of what it is about human nature that opens it to both subjective and objective study. Merleau-Ponty breaks with the Cartesian substance dualism of mind and extension, which “poses an irresolvable problem of how the activities of the mind and body are nevertheless systematically correlated.” (2Ad)


Merleau-Ponty thinks that when we “reach back to our lived experience prior to reflecting about our bodies as objects, we do not find ourselves divided into two separate worlds (a mind and a body) but as living, sensing and moving bodies.” (2Ba)


Our everyday experiences tell us that we do not normally separate mind and body: when we see someone smile, for example, we do not see it merely as flesh moving, nor merely as an imperceptible emotion. “With this expressive nature of human behavior in mind, we can engage in a phenomenological investigation into the basic features of human embodiment and its extension in artifacts and in particular, prostheses.” (2Bb)



Basic features of embodiment


The body image and the body schema


Human embodiment’s structure includes both body image and body schema. There is no consensus on their meanings, but Mills will work through some distinctions and relations between them. (2Bc)


Our body image is the way we think we look to others. (2Bc)


Body image is also important for prosthesis integration, but this paper focuses more on body schema. (2Bd)


Shaun Gallagher defines body schema as “a system of sensory-motor processes that constantly regulate posture | and movement – processes that function without reflective awareness or the necessity of perceptual monitoring” (Gallagher qtd. in Mills pp.2|3)


Mills broadens this definition. “Body schema” names the general idea, while “body schemas” refer to the system or plurality of such schemas. Mills also broadens the concept of body schemas to include “processes of which we can be marginally and even focally aware.” (3Aa)


Body schemas are what allow us to engage skillfully with the world. Neuronal descriptions are not enough to describe this; we need to see how the structure and meaning of phenomenal experience correlates with neurophysiological processes in the brain. (3Ab)


Body image helps in the development of our body schema. (3Ac)


To elaborate the concept of body schema, Mills quotes Merleau-Ponty from Phenomenology of Perception [p.160-167]

A movement is learned when the body has understood it, that is, when it has incorporated it into its ‘world’, and to move one’s body is to aim at things through it; it is to allow oneself to respond to their call, which is made upon it independently of any representation. (Merleau-Ponty qtd in Mills 3Ac)

Mills gets three basic features of the body schema from the paragraph this quote comes from: 1) corporeal understanding, 2) transparency, and 3) sensory-motor feedback.



Corporeal understanding


Corporeal understanding is not reflective, because “unless we are learning a new skill, we do not normally represent a situation to ourselves prior to enacting the intended behavior; we merely aim at our purpose and the behavior unfolds in | its very enactment”. (3A-B) But our corporeal understanding is not reflexive either, because our behaviors are normally not mechanical responses to stimuli; rather, “Objects call our attention because they have a certain value for us. We are active players in generating our behaviors.” (3Ba)


Corporeal understanding is not something cognitive but is more like what Hubert Dreyfus calls ‘know-how’. We know how an action comes about in the context of a wider activity, like hitting the ‘h’ key while typing, even though we do not have an explicit visual map of the keyboard. (3Bb)


We need to reenact a behavior to recall it, because “corporeal understanding is activated at the lived body-world interface. At the body-world interface, we do not experience our bodies per se, as separate from the world; we experience our bodies as joined with and challenged by the world.” (3Bc)



Transparency (absence and presence)


Corporeal understanding requires that the part of the body that perceptually and kinesthetically engages with the world be transparent to us. Drew Leder notes that insofar as our body brings some part of the world to presence, revealing it to us, the part of the body that does the revealing withdraws from our view, just as our eyes are absent from our field of vision. (3Bc)


Mills quotes Merleau-Ponty to elaborate [Phenomenology of Perception p.104]:

I observe external objects with my body, I handle them, examine them, walk round them, but my body itself is a thing which I do not observe; in order to be able to do so, I should need the use of a second body which itself would be unobservable. (3Bd)


Transparency is a part of all our kinetic and perceptual behaviors. (3-4)


Michael Polanyi notes how the motile and perceptive parts of our body recede and in a sense become transparent; Drew Leder “calls this type of absence focal disappearance”. (4Aa)


Also, according to Leder, we experience the background disappearance of our body; through most of the day we have just a marginal awareness of it. Mills will use ‘transparency’ to refer to both kinds of phenomenal absence. (4Ab)


Intentionality is thinking’s manner of being directed towards objects. For Merleau-Ponty, it is the whole lived embodied person who does the thinking, perceiving and behaving.

As living bodies, we are both perceptually and kinesthetically directed towards and engaged with the world. Merleau-Ponty conceptualizes the ways in which we are directed towards our world as rays of intentionality projecting out of the body schema. These rays constitute our many purposes and invest our surroundings with the meaning of possible behaviors. The door is the way out of the room. The keyboard is the potential to enter data. The light switch is the potential to turn the lights on or off. The road is the way out of the neighborhood. The rays of intentionality are correlated with a network of utilitarian relationships between objects in the world. Merleau-Ponty refers to the totality of these rays as an “arc of intentionality” (4Ac)


Normally, much in our arc of intentionality remains below our thematic awareness until something brings it into focus. (4A.B)

Things can come into focus for a variety of reasons, depending on the context. (4B.ab)


Changes of thematic awareness can be like Gestalt figure/ground shifts. (4B.b)



Sensorimotor feedback


From the physiological perspective, “sensations are neural events and sensorimotor feedback is a dynamic relationship between the adaptive body and environmental stimuli”; but from the phenomenal perspective “sensations are qualitative experiences.” Mills will focus on the phenomenal aspect. (4B.c)


Sensorimotor feedback can be marginal and focal and involve more than one perceptual Gestalt at a time. (4D) This helps us attend to multiple tasks at one time. (5Aa)



Incorporation of artifacts into the lived body


On these bases we can conceptualize the incorporation of artifacts, such as prosthetic devices, into the lived body. This requires a modification of the three basic features of embodiment. (5Ab)


Incorporation of artifacts involves integrating them into our body schema.

When an artifact is integrated into the body schema such that it modifies the corporeal understanding, the body and artifact merge in such a way that together they interface with the world to generate behaviors that are neither reflexive nor conceptually guided but rather skillful or habitual. This means that the body schema comes to include the artifact in its arc of intentionality. (5Abc)


Incorporation also involves a change in the absence-presence dynamic, whereby the artifact becomes absent or transparent.

Ideally the prosthesis can become a part of the body from which a worldly gestalt becomes present and a skillful or habitual behavior becomes possible. Such integration can occur when sensorimotor feedback appears to come from the interface between body-artifact as a unified whole and the world rather than the interface between the body and the artifact. (5Ac)



Examples of the embodiment of artifacts


Merleau-Ponty uses the examples of the walking stick and the typewriter. [At the end of this entry we look at the examples and the passages they come from.] Mills quotes Merleau-Ponty [Phenomenology of Perception 165-166]

The blind man’s stick has ceased to be an object for him, and is no longer perceived for itself; its point has become an area of sensitivity, extending the scope and active radius of touch, and providing a parallel to sight. In the exploration of things, the length of the stick does not enter expressly as a middle term: the blind man is rather aware of it through the position of objects [p.165 | p.166] than of the position of objects through it. (5Ba)

So the walking stick eventually is incorporated into the arc of intentionality and becomes like an extension of the user’s body.

The user literally extends her reach, the stick being lived as part of the extended arm. The stick becomes transparent because the blind person is not focused on her grasp of the walking stick; she is directed towards the ground through the hand-stick combination. Sensorimotor feedback is experienced not as a relation between the hand and movements of the stick but rather as an experience of the texture and location of the ground and other items through the stick as if it were an extension of her arm. (5Bb)


Mills also quotes Merleau-Ponty’s typewriter example [Phenomenology of Perception 166-167]

It is possible to know how to type without being able to say where the letters which make the words are to be found on the banks of keys. To know how to type is not, then, to know the place of each letter among the keys, nor even to have acquired a conditioned reflex for each one, which is set in motion by the letter as it comes before our eye. If habit is neither a form of knowledge nor an involuntary action, what then is it? It is knowledge in the hands, which is forthcoming only when bodily effort is made, and cannot be formulated in detachment from that effort. The subject knows where the letters are on the typewriter as we know where one of our limbs is…. When the typist performs the necessary movements on the typewriter, these movements are governed by an intention, but the intention does not posit the keys as objective locations. It is literally true that the subject who learns to type incorporates the key-bank space into his bodily space. (5Bb.c)

The key, the finger, and other aspects of the body’s positioning and behavior fall into the background. (5Bc)


The corporeal understanding of the skilled typist is not reflexive, because the keys are not stimuli causing her fingers to move. And the typing does not involve representation, because the typist does not make use of a mental model representing the keyboard. “The corporeal understanding is in the hands.”  (5Bc.d)



Embodiment of prosthetic devices


So the incorporation of prosthetic devices requires a modification in our corporeal understanding, transparency, and sensorimotor feedback. (5-6) Mills draws from Craig Murray’s empirical studies of prosthetic incorporations.


For there to be transparency, the device must be affixed properly so that no pain or discomfort is felt, for otherwise it will be noticed rather than disappear from awareness. Practice is also critical for attaining transparency. [The following first quotes Mills, then gives Mills’s quotation of Murray 2004.]

Murray notes that some respondents testify that with practice, walking starts to become natural again:

Walking becomes pretty intuitive after the age of three or four; you don’t think about it, you just do it. Now, I do have to think occasionally, such as when I stand up from a chair. I have to think which foot, is that foot in the right position, is it going to hit anything? You do still have to check for things like that. Occasionally, I’ll get it trapped under a chair as I stand up. So a couple of times it brings it back to you that you have a problem there. But once moving, in general, it’s pretty much a matter of well I want to go from here to there, and I just walk. It’s intuitive now. (6Abc)

Mills continues [again first Mills then Murray 2004]

The prosthesis, in one report, became so transparent, that the user got up from bed without realizing that the artificial limb was not on: “I fell on the floor, landing on the distal end of the stump. It was a very frightening thing. Scary. It hurt like hell, and I stayed off it for about a week. So I guess I have reached a point where I am capable of such foolish acts as that and forget my leg was not on.” (6Ad)


Incorporation could be more likely if sensorimotor feedback occurs “at the interface of the prosthesis and the environment”. (6Ad) Mills notes [second is quote from Murray 2004]

One user in the Murray study reported: “I do sense it [the ground] with the prosthesis on. It is a general awareness of the ground. As I walk, I can feel my heel land, and the foot move forward to the toes”. (6Ba)

There are technologies that can make the feedback appear to come from the environment.


For example, prostheses fixed to bone rather than fitted in sockets tend to give better feedback. [citing Hagberg et al.]

For example, the sensorimotor feedback provided by bone-anchored (osseointegrated) prostheses (OI), osseoperception, seems to be more vivid and detailed than that attained by socket technology. In one study, several patients reported more control over their artificial limb (than with a socket style prosthesis) and the ability to identify the material of the surface they are walking on. Other users report that the osseointegrated prosthesis feels more like a part of the body than did the socket prosthesis. (6Bb)


Another technique for enhancing sensorimotor feedback is targeted re-innervation, which “improves the communication of the surface of the prosthesis with intact nerves on the residual limb.” (6Bb) [In these cases, it seems that the nerves that once served the lost limb are surgically redirected to sites on the residual limb, so that sensations normally felt in the lost limb can still be elicited through these nerves. Sensors in the prosthetic limb then create signals that stimulate the redirected nerves, so that the user feels as if the sensation is coming from within the prosthetic device.]

There are a variety of technologies that seek to improve the communication of the surface of the prosthesis with intact nerves on the residual limb to improve sensorimotor feedback. One of these strategies is targeted re-innervation. In a 2010 study conducted at the Rehabilitation Institute of Chicago and led by Paul D. Marasco, an artificial sense of tactile sensation was created for a prosthetic limb so it felt, to the user, as though sensations were coming from the prosthesis and not merely mediated by the prosthesis. Here is how it works. Sensors are placed on the artificial hand. These sensors send a message to a robotic device that is located on the residual limb in proximity to areas sensitized by the re-innervated nerves. The robotic device responds to the sensor information by stimulating “surgically redirected cutaneous sensory nerves... that once served the lost limb”. I want to emphasize that Marasco used both evidence-based medicine and phenomenology. In particular, he made use of questionnaires and testimony as well as temperature changes in the residual limb. He found that the illusion that sensations were coming from the prosthesis were vivid. Marasco suggests that “this may help amputees to more effectively incorporate an artificial limb into their self image, providing the possibility that a prosthesis becomes not only a tool, but also an integrated body part”. If we couple this technology with recent advances in the kinesthetic response of prosthetic devices, users may benefit even more from enhanced sensorimotor feedback. It is likely that the more such devices mimic organic limbs and provide sensorimotor feedback, the more a corporeal understanding and a natural feel can be achieved. [Mills 6Bb citing Marasco et al.]
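
The passage above describes a sensor-to-stimulator chain: sensors on the artificial hand drive a small robotic stimulator placed over the re-innervated skin. Here is a minimal sketch of that mapping; the sensor names, pressure ranges, and site labels are hypothetical placeholders, not Marasco’s actual hardware interface.

```python
# Toy sketch of the sensor-to-stimulator loop described above: pressure sensors
# on the artificial hand are read, and each reading is mapped to a stimulation
# intensity delivered at the re-innervated skin site that corresponds to that
# part of the lost hand. Sensor names, ranges, and the mapping are illustrative.

SENSOR_TO_SITE = {          # hypothetical sensor -> re-innervated skin site map
    "thumb_tip": "site_A",
    "index_tip": "site_B",
    "palm": "site_C",
}

def pressure_to_stimulation(pressure_n, max_pressure=10.0, max_amplitude=1.0):
    """Scale a sensor reading (newtons) to a normalized stimulation amplitude."""
    return min(pressure_n, max_pressure) / max_pressure * max_amplitude

def update_feedback(sensor_readings):
    """One feedback cycle: return the stimulation to deliver at each skin site."""
    return {SENSOR_TO_SITE[name]: pressure_to_stimulation(p)
            for name, p in sensor_readings.items() if name in SENSOR_TO_SITE}

# Example cycle: the user grips an object mainly with thumb and index finger.
print(update_feedback({"thumb_tip": 4.0, "index_tip": 6.5, "palm": 0.5}))
```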



Conclusion


Not all prosthesis users attain transparency. “However, in those cases where integration is the goal, the insights of Merleau-Ponty on the lived body help us to understand what it means for a prosthetic device to be incorporated into a patient’s lived body.” (6Bd)


Mills concludes “Those rehabilitative strategies that begin to make the prosthesis more transparent to the user; provide finer grained sensorimotor feedback as coming from the (body-prosthesis) – world interface; and restore increasingly more skillful functionality are likely to achieve the maximal prosthetic incorporation.”  (7Ab)



Frederick B. Mills. “A Phenomenological Approach to Psychoprosthetics.” Disability & Rehabilitation, 2012; Early Online: 1–7.


Gallagher P, Desmond D, MacLachlan M. Psychoprosthetics: an introduction. In: Gallagher P, Desmond D, MacLachlan M, editors. Psychoprosthetics. London: Springer-Verlag Limited; 2008. pp 1–10.


Gallagher S. How the body shapes the mind. New York: Oxford University Press; 2005.


Hagberg K, Häggström E, Jönsson S, Rydevik B, Brånemark R. Osseoperception and osseointegrated prosthetic limbs. In: Gallagher P, Desmond D, MacLachlan M, editors. Psychoprosthetics. London: Springer-Verlag Limited; 2008. pp 131–140.


Leder D. The absent body. Chicago: The University of Chicago Press; 1990.

Marasco PD, Kim K, Colgate JE, Peshkin MA, Kuiken TA. Robotic touch shifts perception of embodiment to a prosthesis in targeted reinnervation amputees. Brain; 2011, January 20.


Merleau-Ponty, Maurice. Phenomenology of Perception. Transl. Colin Smith. London/New York: Routledge, 1958.


Murray CD. An interpretative phenomenological analysis of the embodiment of artificial limbs. Disabil Rehabil 2004;26:963–973.


Murray CD. Embodiment and prosthetics. In: Gallagher P, Desmond D, MacLachlan M, editors. Psychoprosthetics. London: Springer-Verlag Limited; 2008. pp 119–129.







8 Mar 2013

Andy Clark. Ch2 Supersizing the Mind “The Negotiable Body”


summary by Corry Shores


[My own commentary is in brackets. All boldface and underlining is my own. Extra spacing between paragraphs follows the paragraph divisions in the original text.]



Andy Clark


Supersizing the Mind:

Embodiment, Action, and Cognitive Extension


Ch.2
The Negotiable Body




Very Brief Summary:
On account of neuroplasticity, our brains can rewire so that our bodily systems may incorporate tools and other technologies, which can then act as extensions of our body and mind.

Brief Summary:
Our minds and bodies are not locked into their current form and manner of operation but can rather incorporate tools and technologies so to extend our cognitive, sensory, and motor systems.

Interfaces are points of contact in a system, but in certain systems the contact is so intimate as to blur the boundary between those parts of the system, which in our case blurs the boundaries between body and world.

There are examples of robotic appendages affixed to humans and other primates where practiced usage led to their becoming transparent equipment. So while there is an interface between body and robotics, together they produce a new systemic whole. They become integrated because the brain’s neuroplasticity allows it to rewire itself so as to function as if the tool were a part of the body it controls.

These integrations can also happen with our senses. Blind people can function as if seeing their surroundings by using devices that map what their head is pointed at onto a grid of tactile sensations. Such sensory extensions become transparent equipment, and the users with their technology form new systemic wholes.

Some might object that transparency need not be a matter of creating new systemic wholes but rather of someone using something else as a tool. Clark notes that when a stick is used, the brain rewires so that the space around the stick’s tip is processed as if it were immediately around the hand holding the stick. The stick then becomes incorporated into the body schema.

Primates (ourselves included) are deeply embodied, which means that we constantly seek opportunities to make the most of our body, our world, and the relation between them by integrating resources deeply into our body schema, and this creates whole new agent-world circuits. Our body is critically important for our problem solving, but because of neuroplasticity and tool incorporation our body is negotiable as well.

 



Summary


2.1 Fear and Loathing


Science fiction writer Bruce Sterling notes how forthcoming robotic technologies could aid the aging in their mobility, but he worries that the people operating these machines will be senile. (30)

Clark thinks technologies will be incorporating into our bodily and cognitive systems.

But such fears are rooted in a fundamentally misconceived vision of our own humanity: a vision that depicts us as “locked-in agents”— as beings whose minds and physical abilities are fixed quantities, apt (at best) for mere support and scaffolding by their best tools and technologies. In contrast to this view, I believe that human minds and | bodies are essentially open to episodes of deep and transformative restructuring in which new equipment (both physical and “mental”) can become quite literally incorporated into the thinking and acting systems that we identify as our minds and bodies (see, e.g., Clark 1997a, 2001b, 2003). [30-31]


When we use a stick [especially as a blind person uses one for ‘seeing’], our place of sensation extends past our hand to the stick’s end.

The typical human agent, circa 2008, feels herself to be a bounded physical entity in contact with the world through a variety of standard sensory channels, including touch, vision, smell, and hearing. It is a common observation, however, that the use of simple tools can lead to alterations in that local sense of embodiment. Fluently using a stick, we feel as if we are touching the world at the end of the stick, not (once we are indeed fluent in our use) as if we are touching the stick with our hand. The stick, it has sometimes been suggested, is in some way incorporated, and the overall effect seems more like bringing a temporary whole new agent-world circuit into being rather than simply exploiting the stick as a helpful prop or tool (see Merleau-Ponty 1945/1962 and Gibson 1979; for some more recent explorations of this theme, see Burton 1993; Reed 1996; Peck et al. 1996; Smitsman 1997; Hirose 2002; Maravita and Iriki 2004; Wheeler 2005). [31a.c]


Such enhancements can create new agent-world circuits.

In thinking about the case of stick-augmented perception, there would seem to be two key interfaces at play: the place where the stick meets the hand and the place where the extended system “biological agent + stick” meets the rest of the world. When we read about new forms of human–machine interface, we are again confronted by a similar duality and an accompanying tension. What makes such interfaces appropriate as mechanisms for human enhancement is, it seems, precisely their potential role in creating whole new agent-world circuits. But insofar as they succeed at this task, the new agent-tool interface itself fades from view, and the proper picture is one of an extended or enhanced agent confronting the (wider) world. [31c]


Clark will begin with the notion of an interface.



2.2 What’s in an Interface?


Clark begins with Haugeland’s (1998) explanation of interfaces. When analyzing interfaces, the “goal is to uncover the underlying principles ‘for dividing systems into distinct subsystems along | nonarbitrary lines’ (211).” [31-32]
components: “those parts of a larger whole that interact through interfaces”. [32a]
interface: “a point of interactive ‘contact’ between components such that the relevant interactions are well-defined, reliable and relatively simple”. [32a]
systems: “relatively independent and self-contained” composites of such interfaced components (Haugeland 1998, 213). [32a]

Clark agrees that interfaces are locations of contact between independent parts.

Haugeland is right to point to the nature of interactions as the key to the location of an interface. We discern an interface where we discern a kind of regimented, often deliberately designed, point of contact between two or more independently tunable or replaceable parts. [32]

But Haugeland is mistaken to say that the flow across an interface must be simple. He needs this point so that he can say that human sensation is too complex for there to be interfaces between mind, body, and world, and hence that there is an intimate intermingling of the three. (32b)


Clark agrees that sensation involves direct agent-environment couplings, but he is not ready to conclude that there are no interfaces. Haugeland thinks that sensation involves high-bandwidth communications, while interfaces involve low-bandwidth ones. But in a computer network with high-bandwidth connections, we have both interfaces and such intimate intermingling that the connected computers work like a single unified resource.

Nonetheless, we still think of it as a web of distinct but interfaced devices. And we do so not because the point of each machine’s contact with the grid is narrow (it isn’t) but because there exist, for each machine on the grid, very well-defined points of potential detachment and reengagement. We discern interfaces at the points at which one machine can be easily disengaged | and another engaged instead, allowing the first to join another grid or to operate in a stand-alone fashion. (32-33)


Thus we can have distinct entities that are nonetheless so intimately interactive that in their operations their boundaries are blurred. This means that the boundary between mind and world can likewise be blurred.

An interface, I conclude, is indeed a point of contact between two items across which the types of performance-relevant interaction are reliable and well defined. But there is no requirement that such interfaces be narrow-bandwidth bottlenecks. The way to argue for cognitive extensions and blurrings of the mind-world boundary is not by casting doubt on the presence of genuine interfaces (there are plenty of these within the brain, too, and that doesn’t stop us from distinguishing parts and roles) but by displaying special features of the flow of information across those interfaces and by stressing the novel properties of the new systemic wholes that result. It is to these tasks that we now turn. (33a.b)



2.3 New Systemic Wholes


Clark gives the example of performance artist Stelarc, who uses a robotic third arm and has become so fluent in using it that it has become transparent equipment. [For more on transparent equipment, see this section in Natural Born Cyborgs.]

Biological systems, from lampreys to primates, display remarkable powers of bodily and sensory adaptability (see Mussa-Ivaldi and Miller 2003; Bach y Rita and Kercel 2003; Clark 2003). The Australian performance artist Stelarc routinely deploys a “third hand,” a mechanical actuator controlled by Stelarc’s brain through commands to muscle sites on his legs and abdomen. Activity at these sites is monitored by electrodes that transmit signals (via a computer) to the artificial hand. Stelarc reports that, after some years of practice and performance, he no longer feels as if he has to actively control the third hand to achieve his goals. It has become “transparent equipment” (recall chap. 1), something through which Stelarc (the agent) can act on the world without first willing an action on anything else. In this respect, it now functions much as his biological hands and arms, serving his goals without (generally) being itself an object of conscious thought or effortful control. (33c.d)


Clark then discusses another example, an experiment in brain-machine interface (BMI) with a monkey and a robotic arm. [To clarify this experiment, we will draw both from Clark’s description and also from the paper itself.]

[Image: from fig. 1 of Carmena et al. 2003]

We see a monkey moving a joystick while looking at a screen. The joystick measures both grip and position, and these translate into motions and changes in the position and size (grip intensity) of dot cursors on a computer screen.

[Image: from fig. 1 of Carmena et al. 2003]

The first task has the monkey using the joystick’s pole to move a yellow dot to a green target dot. In the second task, the monkey need not move the pole, only squeeze it with a targeted amount of pressure: its grip needed to be strong enough to make the yellow circle expand outside the center circle, but not so hard that it went beyond the larger circle. Task three combines the first two: the monkey had to move the cursor to the targeted location and then apply the targeted amount of grip pressure. All the while, the researchers recorded the neural activity of the monkey’s brain, so that they could see which neural behaviors correlate with particular changes in the cursor. All this training happened during the “pole control” mode. During this period, the monkey improves its ability to manipulate the cursor; eventually its ability reaches a maximal level, and its behaviors are coordinated consistently with its neural patterns.

Then the researchers disconnect the joystick wiring (while leaving the joystick in place) and let the monkey’s neural activity control the cursor, on the basis of the correlations they found. When the monkey soon learns that the joystick is not working, the researchers remove it, and the monkey controls the dots using just its cognitive processes. After the monkey becomes accustomed to using just its brain, the researchers then add a robotic arm into the loop. The monkeys are then no longer directly controlling the screen cursors; instead, they are controlling the position and grip of the robotic arm, whose parameters are then secondarily read and displayed on the screen. [A toy sketch of the decoding step at the heart of this setup follows the figure below.]

The results below show how the pole control period involved an increase in fluency; when switching to brain control, fluency initially dropped a little but gradually reached maximal levels. Most notably, when the robotic arm was introduced there was a steep drop in initial performance, followed by a quick return to maximal levels.

[Images: performance curves, from fig. 1 of Carmena et al. 2003]
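
Here is a toy sketch of the decoding step referenced above: during “pole control”, a linear model is fit from recorded ensemble activity to cursor velocity; during “brain control”, that fixed model alone turns new neural activity into cursor motion. The simulated data and the single-step least-squares fit are stand-ins; the study’s own linear models (trained during an initial 30-minute pole-control period, per the passage quoted below) were more elaborate.

```python
import numpy as np

# Minimal sketch of the decoding idea in the experiment summarized above: during
# "pole control", recorded ensemble activity and measured cursor velocity are
# used to fit a linear model; in "brain control", that fixed model alone maps
# new neural activity to cursor velocity. The simulated data and one-step
# least-squares fit are illustrative stand-ins, not the study's actual filters.

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 2000

# Hidden "true" relation used only to generate training data for this sketch.
true_weights = rng.normal(0, 0.2, size=(n_neurons, 2))        # 2 = (vx, vy)
activity = rng.poisson(5, size=(n_samples, n_neurons)).astype(float)
velocity = activity @ true_weights + rng.normal(0, 0.5, size=(n_samples, 2))

# "Pole control" phase: fit the linear decoder by least squares.
weights, *_ = np.linalg.lstsq(activity, velocity, rcond=None)

# "Brain control" phase: the fixed decoder turns new activity into cursor motion.
new_activity = rng.poisson(5, size=(1, n_neurons)).astype(float)
predicted_velocity = new_activity @ weights
print("decoded cursor velocity:", np.round(predicted_velocity, 2))
```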

[Note that the authors write: “Figure 1C shows that because the intrinsic dynamics of the robot produced a lag between the pole movement and the cursor movement, the monkeys' performance initially declined.” But earlier they seemed to say that they removed the pole and the monkey used only brain control. I will quote from the relevant passages.

In each recording session, an initial 30-min period was used for training of these models. During this period, monkeys used a hand-held pole either to move a cursor on the screen or to change the cursor size by application of gripping force to the pole. This period is referred to as “pole control” mode. As the models converged to an optimal performance, their coefficients were fixed and the control of the cursor position (task 1 and 3) and/or size (task 2 and 3) was obtained directly from the output of the linear models. This period is referred to as “brain control” mode. During brain control mode, animals initially produced arm movements, but they soon realized that these were not necessary and ceased to produce them for periods of time. To systematically study this phenomenon, we removed the pole after the monkey ceased to produce arm movements in a session. In each task, after initial training, a 6 DOF (degree-of-freedom) robot arm equipped with a 1 DOF gripper was included in the BMIc control loop. In all experiments, visual feedback (i.e., cursor position/size) informed the animal about the BMIc's performance. When the robot was used, cursor position indicated to the animal the X and Y coordinates of the robot hand. The cursor size provided feedback of the force measured by the sensors on the robot's gripper. The time delay between the output of the linear model and the response of the robot was in the range of 60–90 ms.

[…]

Behavioral Performance during Long-Term Operation of a BMIc

[…] In all three tasks, the levels of performance attained during brain control mode by far exceeded those predicted by a random walk model (dashed and dotted lines in Figure 1C–1E). Moreover, both animals could operate the BMIc without any overt arm movement and muscle activity, as demonstrated by the lack of EMG activity in several arm muscles (Figure 1G). The ratios of the standard deviation of the muscle activity during pole versus brain control for these muscles were 14.67 (wrist flexors), 9.87 (wrist extensors), and 2.77 (biceps).

A key novel feature of this study was the introduction of the robot equipped with a gripper into the control loop of the BMIc after the animals had learned the task. Figure 1C shows that because the intrinsic dynamics of the robot produced a lag between the pole movement and the cursor movement, the monkeys' performance initially declined. With time, however, the performance rapidly returned to the same levels as seen in previous training sessions (Figure 1C). It is critical to note that the high accuracy in the control of the robot was achieved by using velocity control in the BMIc, which produced smooth predicted trajectories, and by the fine tuning of robot controller parameters. These parameters were fixed across sessions in both monkeys. The controller sent velocity commands to the robot every 60–90 ms. Each of these commands compensated for potential position errors of the robot hand that resulted from previous commands. (Carmena et al.)

] The authors write in their conclusion:

Overall, the present findings demonstrate that it is reasonable to envision that a cortical neuroprosthesis for restoring upper-limb movements could be implemented in the future, following the basic BMIc principles described here. We propose that long-term operation of such a device by paralyzed subjects would lead, through a process of cortical plasticity, to the incorporation of artificial actuator dynamics into multiple brain representations. Ultimately, we predict that this assimilation process will not only ensure proficient operation of the neuroprosthesis, but it will also confer to subjects the perception that such apparatus has become an integral part of their own bodies. (Carmena et al.)

Hence we see the affinity between this experiment and Clark’s notion of transparency. Clark describes the experiment by writing:

Recent experimental work reveals more about the kinds of mechanisms that may be at work in such cases. A much publicized example is the work by Miguel Nicolelis and colleagues on a brain-machine interface (BMI) that allows a macaque monkey to use thought control to move a robot arm. In the most recent version of this work, Carmena et al. (2003) implanted 320 electrodes in the frontal and parietal lobes of a monkey. The electrodes allowed a monitoring computer to record neural activity across multiple cortical ensembles while the monkey learned to use a joystick to move a cursor across a computer screen | for rewards. As in previous work, the computer was able to extract the neural activity patterns corresponding to different movements, including direction and grip. Next, the joystick is disconnected. But the monkey is still able to use its neural activity, interpreted through the intervening computer, to directly control the cursor for rewards, and it learns to do so. Finally, these commands are diverted to a robot arm whose actual motions are then translated into on-screen cursor movements, including an on-screen equivalent of forceful gripping. This closes the loop. Instead of the monkey merely moving an unseen robot arm by thought control alone, the movement of the distant unseen arm now yields visual feedback in the form of on-screen cursor motion. (33-34)


As we noted, there was a drop in performance when the monkey began working through the robotic arm. Yet over time the monkey regained fluency, because its brain rewired (there was “cortical reorganization,” as Carmena et al. term it) so that the two worked seamlessly together. This is neuroplasticity. Clark writes:

When the robot arm was inserted into the control loop, the monkey displayed a striking degradation of behavior. It took two full days of practice to reestablish fluent thought control over the on-screen cursor. The reason was that the monkey’s brain now had to learn to factor in the mechanical and temporal “friction” created by the new physical equipment: It had to factor in the mechanical and dynamical properties of the robot arm and the time delays (which were substantial, in the 60–90 millisecond range) caused by interposing the motion of the arm between neural command and on-screen feedback. By the time full fluency was achieved, it is reasonable to conjecture that these properties of the still unseen distant arm were in some sense incorporated into the monkey’s own body schema. In support of this, the experimenters were able to track real long-term physiological changes in the response profiles of frontoparietal neurons following use of the BMI, leading them to comment that

the dynamics of the robot arm (reflected by the cursor movements) become incorporated into multiple cortical representations . . . we propose that the gradual increase in behavioral performance . . . emerged as a consequence of a plastic reorganization whose main outcome was the assimilation of the dynamics of an artificial actuator into the physiological properties of fronto-parietal neurons. (Carmena et al. 2003, 205) [Clark 34b.d]


Certain creatures can incorporate new bodily structures in this way, and Clark calls such creatures “profoundly embodied agents.” They are able, as he puts it, “constantly to negotiate and renegotiate the agent-world boundary itself.” (34d)


But this is natural anyway, as evidenced in child development.

The human | infant must learn (by self-exploration) which neural commands bring about which bodily effects and must then practice until skilled enough to issue those commands without conscious effort. This process has been dubbed “body babbling” (Meltzoff and Moore 1997) and continues until the infant body becomes transparent equipment (see 1.6). Because bodily growth and change continue, it is simply good design not to permanently lock in knowledge of any particular configuration but instead to deploy plastic neural resources and an ongoing regime of monitoring and recalibration […]. (34-35)



2.4 Substitutes


Clark offers another example of such neuroplasticity. Blind subjects have a grid of blunt nails fixed to their backs, and parts of the grid stimulate the subject’s back depending on information received from a head-mounted video camera. Over time it is as if the subjects are able to see the things around them.

As a second class of examples of recalibration and renegotiation, consider the plasticity revealed by work in sensory substitution. Pioneered in the ‘60s and ’70s by Paul Bach y Rita and colleagues, the earliest such systems were grids of blunt “nails” fitted to the backs of blind subjects and taking input from a head-mounted camera. In response to the camera input, specific regions of the grid became active, gently stimulating the skin under the grid. At first, subjects report only a vague tingling sensation. But after wearing the grid while engaged in various kinds of goal-driven activity (walking, eating, etc.), the reports change dramatically. Subjects stop feeling the tingling on the back and start to report rough, quasi-visual experiences of looming objects and so forth. After a while, a ball thrown at the head causes instinctive and appropriate ducking. The causal chain is “deviant”: It runs via the systematic input to the back. But the nature of the information carried, and the way it supports the control of action, is suggestive of the visual modality. Performance using such devices can be quite impressive. In a recent article, Bach y Rita, Tyler, and Kaczmarek (2003) note that Tactile-Visual Substitution Systems (TVSS) have been sufficient to perform complex perception and “eye”-hand co-ordination tasks. These have included face recognition, accurate judgment of speed and direction of a rolling ball with over 95% accuracy in batting the ball as it rolls over a table edge, and complex inspection-assembly tasks. (287) [35b.d]
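The core of such a device is a simple mapping from a camera image to a coarse grid of tactile drive levels. Here is a small illustrative sketch of that mapping (my own toy version; the grid size and scaling are invented, not Bach y Rita’s specifications):

```python
import numpy as np

# Illustrative sketch of the core TVSS mapping (a toy version, not Bach y
# Rita's specification): a grayscale camera frame is reduced to a coarse grid,
# and each grid cell's average brightness drives one tactile actuator.

GRID = 20  # a hypothetical 20 x 20 array of "nails"

def frame_to_tactile(frame, grid=GRID):
    """Downsample a 2-D grayscale frame to a grid of actuator drive levels (0..1)."""
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    blocks = frame[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    return blocks.mean(axis=(1, 3)) / 255.0

# Example: a bright "looming object" in the upper-left of a 200 x 200 frame
frame = np.zeros((200, 200))
frame[20:80, 20:80] = 255.0
drive = frame_to_tactile(frame)
print(drive.shape)   # (20, 20) intensities, one per actuator
```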


What is essential is that the head-mounted camera be under the subject’s control, because this allows the brain to experiment by looking around and coordinating the nail stimulations with experiences of the things around it.

The key to such effective sensory substitution is goal-driven motor engagement. It is crucial that the head-mounted camera be under the subject’s intentional motor control. This meant that the brain could, in effect, experiment through the motor system, giving commands that | systematically varied the input so as to begin to form hypotheses about what information the tactile signals might be carrying. Such training yields quite a flexible new agent-world circuit. Once trained in the use of the head-mounted camera, the motor system operating the camera could be changed (e.g., to a hand-held camera) with no loss of acuity. The touch pad, too, could be moved to new bodily sites, and there was no tactile–visual confusion: An itch scratched under the grid caused no “visual” effects (for these results, see Bach y Rita and Kercel 2003). [35-36]


These technologies have since advanced considerably; they now have greater capabilities and are more compact. (36b)


These technologies can also be used for enhancement. They can give us night vision, and all sorts of signals, including television signals, can be fed directly into the brain, bypassing the sensory peripheries. There is even a suit invented by the US Navy with which an inexperienced pilot can control a helicopter blindfolded: air puffs from the suit tell the pilot how the helicopter is tilting, so she can stabilize and fly it without vision.

While the pilot wears the suit, the helicopter behaves very much like an extended body for the pilot: It rapidly links the pilot to the aircraft in the same kind of closed-loop | interaction that linked Stelarc and the third hand, the monkey and the robot arm, or the blind person and the TVSS system. What matters, in each case, is the provision of closed-loop signaling so that motor commands affect sensory input. What varies is the amount of training (and hence the extent of deeper neural changes) required to fully exploit the new agent-world circuits thus created. (36-37)
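Clark does not spell out how the suit encodes tilt, so the following is only a guessed-at sketch of one way such a closed-loop mapping might work: roll drives puffs on the left or right of the torso, pitch drives puffs on the chest or back, and intensity grows with the tilt angle.

```python
# A guessed-at sketch of one way a tactile flight suit might encode tilt; the
# actual system's encoding is not described in the text, so this mapping is
# purely illustrative. Roll drives puffs on the left/right of the torso, pitch
# drives puffs on the chest/back, and intensity grows with the tilt angle.

def tilt_to_puffs(roll_deg, pitch_deg, max_deg=30.0):
    """Return puff intensities (0..1) for four body regions, given tilt angles."""
    def level(angle):
        return min(abs(angle) / max_deg, 1.0)

    return {
        "left":  level(roll_deg)  if roll_deg < 0 else 0.0,
        "right": level(roll_deg)  if roll_deg > 0 else 0.0,
        "chest": level(pitch_deg) if pitch_deg < 0 else 0.0,  # nose down
        "back":  level(pitch_deg) if pitch_deg > 0 else 0.0,  # nose up
    }

# Example: drifting 10 degrees to the right and 5 degrees nose-up
print(tilt_to_puffs(10, 5))   # the "right" and "back" puffs activate
```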


What is important is that the circuits become transparent.

It is important, in all these cases, that the new agent-world circuits be trained and calibrated in the context of a whole agent engaged in world-directed (goal-driven) activity. One sign of successful calibration is, as we noted earlier, that once fluency is achieved, the specific details of the (old or new) circuitry by which the world is engaged fall “transparent” in use. The conscious agent is then aware of the oncoming ball, not (usually) of seeing the ball or (by the same token) of using a tactile substitution channel to detect the ball. In just this way, the tactile-vest-wearing pilot becomes aware of the aircraft’s tilt and slant, not of the puffs of air. (37b)


We see, then, that humans and other primates are highly capable of the sorts of neuroplasticity that allow us to extend into external things.

In all these diverse ways, humans and other primates are revealed as constantly negotiable bodily platforms of sense, experience, and (as we’ll see in later chapters) reasoning, too. Such platforms are biologically primed so as to fluidly incorporate new bodily and sensory kit, creating brand new systemic wholes. This is just what one would expect of creatures built to engage in what we earlier (sec. 1.1) called “ecological control”: systems evolved so as to constantly search for opportunities to make the most of the reliable properties and dynamic potentialities of body and world. (37bc)



2.5 Incorporation Versus Use


One might object that tool-use transparency is not such a controversial concept; it can be understood simply as a user in command of a tool, rather than as the creation of new systemic wholes. (37d)


Clark will begin by examining research on primate tool use. (37d)


Bimodal neurons have recently been discovered in primate brains. They respond both to somatosensory information from a bodily region and to visual information from the space adjacent to it. (38a)

“For example, some neurons respond to somatosensory stimuli (light touches) at the hand and to visually presented stimuli near the hand so as to yield an action-relevant coding of visual space.” These neurons then seem able to develop sensitivities that extend through objects. [Consider using a stick to feel for things beyond our normal reach. These neurons would coordinate the visual information about what we see the stick touching with the tactile information we feel in our hand, recoding that tactile information so that we can, in effect, feel at the end of the stick.]

In a series of experiments, recordings were taken from bimodal neurons in the intraparietal cortex of Japanese macaques while the macaques learned to reach for food using a rake. The experimenters found that after just five minutes of rake use, the responses of some bimodal neurons whose original vRFs picked out stimuli near the hand had expanded to include the entire length of the tool, “as if the rake was part of the arm and forearm” (Maravita and Iriki 2004, 79). Similarly, other bimodal neurons, which previously responded to visual stimuli within the space reachable by the arm, now had vRFs that covered the space accessible by the arm-rake combination. After surveying a number of other related findings, including some fascinating work in which similar effects are observed after experience of reaching with a virtual arm in an on-screen display, Maravita and Iriki conclude: “Such vRF expansions may constitute the neural substrate of use-dependent assimilation of the tool into the body-schema, suggested by classical neurology” (2004, 80). (38a.c)


Another study shows that our brains distinguish far space from near space, and that when we use a stick as a tool, the brain treats the far space at the stick’s end as if it were space near our hand.

In human subjects suffering from unilateral neglect (in which stimuli from within a certain region of egocentrically coded space are selectively ignored), it has been shown that the use of a stick as a tool for reaching actually extends the area of visual neglect to encompass the space now reachable using the tool (see Berti and Frassinetti 2000). Berti and Frassinetti conclude

that the brain makes a distinction between “far space” (the space beyond reaching distance) and “near space” (the space within reaching distance) [and that] . . . simply holding a stick causes a remapping of far space to near space. In effect the brain, at least for some purposes, treats the stick as though it were a part of the body. (2000, 415) [38c.d]
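The remapping can be put in almost trivially simple terms. As a toy model (my own, with arbitrary numbers): whether a point counts as “near space” depends on the current effective reach, and holding a stick simply extends that reach.

```python
# A toy model of the near/far space remapping (arbitrary numbers, purely
# illustrative): a stimulus counts as "near space" when it falls within the
# current effective reach, and holding a tool simply extends that reach.

ARM_REACH = 0.6  # metres, a hypothetical arm's reach

def in_near_space(distance, tool_length=0.0):
    """True if a stimulus at `distance` falls within currently reachable space."""
    return distance <= ARM_REACH + tool_length

print(in_near_space(0.9))                   # False: beyond the bare arm
print(in_near_space(0.9, tool_length=0.5))  # True: the stick remaps it to near space
```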


So we see from these studies that there is a difference between mere use of an object and true incorporation of it into our body schema. Clark also distinguishes body image (“conscious construct able to inform thought and reasoning about the body”) from body schema (“a suite of neural settings that implicitly (and nonconsciously) define a body in terms of its capabilities for action, for example, by defining the extent of ‘near space’ for action programs”). [39a]


Clark has us imagine beings without the capacity to incorporate tools into their body schema. Instead, they would use conscious calculations and representations of the tool’s features and powers. We might also imagine them being so smart that they can make these calculations so fast that they use the tools just as well as humans would, as if the tools were incorporated into their body schemas when in fact they are not. The difference between humans and these beings would still be that human brains rewire so that the extensions that tool use enables are given automatically.

The contrast that would remain, even in the latter kind of case, would be between (a) the skilled agent’s first explicitly representing the shape, dimensions, and powers of the tool and then inferring (consciously or otherwise) that she can now reach such and such and do such and such and (b) agents whose brains were so constituted that experience with the tool results in, for example, a suite of altered vRFs such that objects within tool-augmented reaching range are now automatically treated as falling within near space. These are surely distinct strategies. The latter strategy might be especially recommended for beings whose bodies (like our own) are naturally subject to growth and change, as it seems designed to support genuine episodes of integration across change: cases that can now be defined as cases in which plastic neural resources become recalibrated (in the context of goal-directed whole agent activity) so as to automatically take account of new bodily and sensory opportunities. In this way, to paraphrase Varela, Thompson, and Rosch (1991), our own embodied activity enacts or brings forth new systemic wholes. [39b.c]



2.6 Toward Cognitive Extension


Clark now addresses the question of whether incorporation really creates new systemic wholes for our cognizing minds or whether it is “just the same old mind with a shiny new tool?” (39d)


Clark thinks “we are not just bodily and sensorily but also cognitively permeable agents,” but it is not so clear what to look for in the way of neural changes for cognitive extensions. So Clark begins by looking at instances of physical and sensory augmentation for clues. (40a)


For one thing, cognitive enhancement does not require that we be aware of the cognitive operations at work. We are not now aware of our own cognitive operations, and yet we think. And when our brain structures change (in growth and maturation), we do not require that the operations of the new structure be intelligible to those of the old structure. Rather, changes can create new wholes that are themselves (and not their prior forms) the determiners of what is intelligible to the agent. So nonbiological tools and structures can become sufficiently integrated into our problem-solving activity that they yield “new agent-constituting wholes.” (40c)


Clark has us consider cases in which our neural systems learn a new, complex problem-solving routine that changes how we handle information around the body, much as tool use changes how our neurons work so that we can feel at the end of a tool.

Consider the case when some existing neural system or systems learn a complex problem-solving routine that makes a variety of deep implicit commitments to the robust bioexternal availability of certain operations and/or bodies of information. This is the cognitive equivalent, I suggest, of the implicit commitments to details of bodily shape and potentials for action made (in the case of the rake) by rapidly retuning the receptive fields of key bimodal neurons and (in the case of the robot arm) by retuning key cortical representations (specifically, populations of frontoparietal neurons). (40d)


Clark mentions a study that he details later in the book. After “masking of motion transients,” subjects were unable to spot large and significant changes, even ones made in their field of focus. But we also think of ourselves as having “rich visual contact with our surroundings,” so how could we miss such things? It may be that because we already feel as if we are in intimate visual touch with the things we see, and because we can obtain details on demand when we need them, changes can slip our attention if we do not recognize the need to focus on those details.


Clark also refers to a block-copying experiment from section 1.3. We find from it that

a problem-solving routine is delicately geared to automatically exploit, on pretty much an equal footing, both internal and (bio)external forms of information storage. Rather than drawing a firm line around the inner encodings, we thus expand the relevant forms of storage and retrieval to include inner biological resources, environmental structure, and the data (and operations) made available by cognitive artifacts such as notebooks and laptops. As we move toward an era of wearable computing and ubiquitous information access, the robust, reliable information fields to which our brains delicately adapt their inner cognitive routines will surely become increasingly dense and powerful, perhaps further blurring the boundaries between the cognitive agent and his or her best tools, props and artifacts. (41d)
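The “equal footing” point can be sketched as a toy retrieval strategy (my own illustration, not the experiment’s actual protocol): the routine pulls the next block’s colour from a small internal store when it can and otherwise simply glances back at the external model, caching only a little along the way.

```python
# A toy rendering of the "equal footing" idea (an illustration, not the
# experiment's protocol): the routine retrieves the next block's colour from a
# small internal store when it can, and otherwise simply "glances" back at the
# external model, caching only a little along the way.

model = ["red", "blue", "green", "blue"]   # the external pattern being copied
working_memory = {}                        # sparse internal store

def next_block(i):
    if i in working_memory:                # internal retrieval
        return working_memory[i]
    colour = model[i]                      # external retrieval: look at the model
    working_memory[i] = colour             # remember a little, but not everything
    return colour

copy = [next_block(i) for i in range(len(model))]
print(copy)   # ['red', 'blue', 'green', 'blue']
```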


2.7 The Grades of Embodiment


There are three grades of embodiment: a) mere embodiment, b) basic embodiment, and c) profound embodiment.
a) mere embodiment: “A merely embodied creature or robot is one equipped with a body and sensors, able to engage in closed-loop interactions with its world, but for whom the body is nothing but a highly controllable means to implement practical solutions arrived at by pure reason.” (42a)
b) basic embodiment: “A basically embodied creature or robot would then be one (we saw several in chap. 1) for whom the body is not just another problem space, requiring constant micromanaged control, but is rather a resource whose own features and dynamics (of sensor placement, of linked tendons and muscle groups, etc.) could be actively exploited allowing for increasingly fluent forms of action selection and control. Much (though by no means all) work in contemporary robotics has explored this middle ground of modest embodiment. Such systems are, however, congenitally unable to learn new kinds of body-exploiting solution ‘on the fly,’ in response to damage, growth, or change.” (42b)
c) profound embodiment: “By contrast, as we have seen, biological systems (and especially we primates) seem to be specifically designed to constantly search for opportunities to make the most of body and world, checking for what is available, and then (at various timescales and with varying degrees of difficulty) integrating new resources very deeply, creating whole new agent-world circuits in the process. A profoundly embodied creature or robot is thus one that is highly engineered to be able to learn to make maximal problem-simplifying use of an open-ended variety of internal, bodily, or external sources of order.” (42bc)


We cannot think of profoundly embodied minds as being like disembodied organs of control, because they are not in any way disembodied.

Rather, they are promiscuously body-and-world exploiting. They are forever testing and exploring the possibilities for incorporating new resources and structures deep into their embodied acting and problem-solving regimes. They are, to use the jargon of Clark (2003), the minds of “natural-born cyborgs”—of systems continuously renegotiating their own limits, components, data stores, and interfaces. (42d)

The body in this sense is critically important because of its role in problem solving, but it is negotiable, because it is a machine constantly in flux.

On this account, the body is both critically important and constantly negotiable. It is critically important as a key player on the problem-solving stage. It is not simply the point at which processes of transduction pass the real problems (now rendered in rich internal representational formats) to an inner engine of disembodied reason. Instead, much of our successful performance depends | on constant and subtle trade-offs among morphology, real-world action and opportunities, and neural control strategies. But this empowering body is constantly negotiable, constructed moment by moment from the flux of willed action and resulting sensory stimulation. (42-43)


Recall from the first section of this chapter how Sterling was afraid of senile minds in control of sophisticated enabling machines. But this misses the point that our minds are not fixed but fluid, and can thus integrate and expand into the technology that they come to incorporate within their systems.

Those first waves of fear and loathing now give way to something more rewarding. Sterling (sec. 2.1) saw frightening scenes of a merely superficially augmented agent within whom “the CPU is a human being: old, weak, vulnerable, pitifully limited, possibly senile.” Such fears play upon a deeply misguided image of who and what we already are. They play upon an image of the human agent as doubly locked in: as a fixed mind (one constituted solely by a given biological brain) and as a fixed bodily presence in a wider world. Fortunately for us, human minds are not old-fashioned CPUs trapped in immutable and increasingly feeble corporeal shells. Instead, they are the surprisingly plastic minds of profoundly embodied agents: agents whose boundaries and components are forever negotiable and for whom body, sensing, thinking, and reasoning are all woven flexibly and repeatedly from the accommodating weave of situated, intentional action. (43a)

Andy Clark. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford / New York: Oxford University Press, 2008.


Carmena JM, Lebedev MA, Crist RE, O'Doherty JE, Santucci DM, et al. (2003) Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates. PLoS Biol 1(2): e42. doi:10.1371/journal.pbio.0000042

http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.0000042

Copyright: © 2003 Carmena et al. This is an open-access article distributed under the terms of the Public Library of Science Open-Access License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.