by Corry Shores
[Search Blog Here. Index-tags are found on the bottom of the left column.]
[Central Entry Directory]
[Posthumanism, Entry Directory]
[Bostrom & Sandberg's "Roadmap", Entry Directory]
[My more developed Deleuzean critique of mental uploading can be found here. The following summarizes Nick Bostrom's & Anders Sandberg's new study on whole brain emulation. My commentary is in brackets.]
Nick Bostrom & Anders Sandberg,
"Whole Brain Emulation: A Roadmap"
Part I: The Concept of Brain Emulation
In whole brain emulation (also known as mental uploading or mental downloading), we
take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain. (7a)
Section I: "Emulation and Simulation"
According to the terminology of computer science, when we simulate something, we only mimic its outward behavior or "results." However, when we emulate, we mimic the way that something operates on the inside, its "internal causal dynamics." Yet just like a simulation, the success of an emulation depends on how well it mimics the outward behavior and results of the original. [Later, Bostrom & Sandberg's definition of emulation comes closer to the description of simulation given here.]
[For example, you want to create a software program that emulates the way your friend plays chess. To test it, first you play a game with your friend, and keep track of all the moves you make, and all the ways she responds. Then you play the software program. If the program responds with the same moves as your friend did, then you successfully emulated her. The program must also mimic the "internal causal dynamics" of your friend's thinking, so we are presupposing here that minds and computers process information by means of the same "mechanical" procedures.]
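The behavioral test in the chess example can be sketched in code. This is a minimal illustration, not anything from the Roadmap; the function name, the opening-book "emulator," and the move sequences are all hypothetical.

```python
# Hypothetical sketch of the behavioral test described above: the emulation
# passes if, given the same sequence of opponent moves, it replies with the
# same moves the original player made.

def passes_behavioral_test(original_replies, emulator, opponent_moves):
    """Return True if the emulator reproduces the original player's replies."""
    for opp_move, expected_reply in zip(opponent_moves, original_replies):
        if emulator(opp_move) != expected_reply:
            return False
    return True

# A trivial stand-in "emulator": a fixed opening book of responses.
opening_book = {"e4": "e5", "Nf3": "Nc6", "Bb5": "a6"}
emulator = opening_book.get

print(passes_behavioral_test(["e5", "Nc6", "a6"], emulator,
                             ["e4", "Nf3", "Bb5"]))  # True
```

Note that this test only checks outward behavior; whether the emulator also shares the original's "internal causal dynamics" is precisely what the test alone cannot decide.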
The theoretical basis for brain emulation is the thesis that any real computer can be replicated as a mathematical model, namely, as a "Turing machine."
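To make the Turing-machine thesis concrete, here is a toy machine written out as a transition table. It is a minimal sketch under my own choice of convention (the machine inverts a binary string); none of this comes from Bostrom & Sandberg.

```python
# Toy Turing machine: a state, a head position, and a transition table
# mapping (state, symbol) -> (symbol to write, head move, next state).
# This machine flips every bit on the tape (0 -> 1, 1 -> 0), then halts.

def run_turing_machine(tape):
    tape = list(tape)
    state, head = "invert", 0
    delta = {
        ("invert", "0"): ("1", +1, "invert"),
        ("invert", "1"): ("0", +1, "invert"),
        ("invert", "_"): ("_", 0, "halt"),  # blank cell: stop
    }
    while state != "halt":
        if head == len(tape):          # extend the tape with a blank
            tape.append("_")
        symbol = tape[head]
        write, move, state = delta[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run_turing_machine("1011"))  # 0100
```

The point of the thesis is that *any* real computer's behavior can in principle be captured by some such table, however enormous; that is what licenses modeling a brain, considered as a physical computing system, in software.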
Bostrom & Sandberg clarify their terminology:
emulation: a 1-to-1 model presenting all relevant properties of a system [that is, an isomorphic copy]
simulation: a model presenting only some relevant properties
[Imagine now that we wanted to emulate a slot machine that is capable of truly producing random results. Say we create a model of it. Then we pull the original's handle numerous times and record the results. Then we set the emulation to the same starting position as the original and pull its handle the same number of times. If we find it produces the same results, would that be an emulation? I ask because, if the original machine truly produced random results, then randomness is its most "relevant property." So we expect its emulation to also produce random results. This means it should produce totally different outcomes. For if they were the same, then the machine's functioning would seem to be predeterminable. I suggest this example as a contrast to cases where two machines that produce predeterminable results are faced with chaotic or noisy "inputs."]
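The slot-machine puzzle can be made concrete with pseudo-random number generators. A deterministic (seeded) generator can be emulated exactly, outcome for outcome; a genuinely random source cannot be. This is only an illustrative sketch using Python's standard library, not an example from the Roadmap.

```python
import random

# Two PRNGs started from the same seed share identical internal dynamics,
# so one is an exact emulation of the other: every "spin" matches.
original = random.Random(42)
emulation = random.Random(42)

spins_a = [original.randint(1, 6) for _ in range(5)]
spins_b = [emulation.randint(1, 6) for _ in range(5)]
print(spins_a == spins_b)  # True: deterministic dynamics replicate exactly

# A source without a shared seed (standing in here for true randomness)
# should NOT be expected to match the original outcome-for-outcome.
other = random.Random()
spins_c = [other.randint(1, 6) for _ in range(5)]
```

The philosophical point survives the sketch: if the machine's outputs can be replicated outcome-for-outcome, it was deterministic all along; if it is truly random, matching outputs is exactly the wrong success criterion.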
Bostrom & Sandberg now have us consider when we have emulations that encounter "noise or intrinsic chaos." In these cases, the emulation and the original will behave differently from each other. [So say you are playing chess with your friend, and your dog jumps to the board and eats one of your pawns. Because you hate using substitute objects for their proper pieces, you insist that the game continue without the pawn. Then, while playing the emulation afterward, at the appropriate point in the game, you also remove that piece from play. The question is not if the emulation continues to play exactly the same way that your friend did. Rather, the criteria are that it can at least continue to play under alternate unexpected circumstances and also that it play one among the set of ways that you would expect your friend to play under these new circumstances. If the emulation does something your friend would never do, then it is not technically an emulation.]
As Bostrom & Sandberg write:
Emulations may behave differently from each other or the original due to noise or intrinsic chaos, but behave within what one would expect from the original if it had experienced the same noise or chaos. (7c)
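The weaker success criterion in this quotation can be sketched as a membership test: under the same perturbation, the emulation need only produce *some* response the original could plausibly have produced, not the identical one. The function and the chess moves below are my own hypothetical illustration.

```python
# Sketch of the criterion: success means the emulation's response falls
# within the set of responses one would expect from the original under
# the same noise or chaos.

def within_expected_behavior(emulation_response, plausible_responses):
    """True if the response is one the original could plausibly have made."""
    return emulation_response in plausible_responses

# Suppose the friend, playing on without the eaten pawn, might castle
# or push the rook pawn, but would never give away the queen.
plausible = {"O-O", "h4"}
print(within_expected_behavior("O-O", plausible))   # True: still an emulation
print(within_expected_behavior("Qxh7", plausible))  # False: not an emulation
```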
Thus a brain emulator would be software that "models the states and functional dynamics of a brain at a relatively fine‐grained level of detail" (7d).
In particular, a mind emulation is a "brain emulator that is detailed and correct enough to produce the phenomenological effects of a mind" (7d).
And specifically, a person emulation is a "mind emulation that emulates a particular mind" (8a).
Bostrom & Sandberg now clarify what constitutes "relevant properties," and this is critical to both their presentation and our critique. Suppose we have an older computer that uses vacuum tubes and wires instead of solid-state circuitry. It will process input mechanically in a much different way than the more sophisticated computers manufactured now. And say we want to use a new computer to emulate the functioning of the much older one. Doing so should not be difficult, and the test for success would be simple. We feed the older computer a series of inputs and record its outputs. Then we feed the same inputs to the emulation program on the more sophisticated computer. If it produces the same outputs, then it is a successful emulation. So for Bostrom & Sandberg, the more sophisticated machine need only emulate the input-output functioning of the older computer. It does not also need to emulate the "lower levels" of its operation. In fact, the inner workings do not interest us as much as the outer behavior. For Bostrom & Sandberg write:
While lower‐level emulation of computers may be possible it would be inefficient and not contribute much to the functions that interest us.
In other words, so long as the emulation isomorphically replicates the original's behavior, it need not replicate its internal mechanisms.
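The input-output test described above can be sketched directly. The function name and the two toy "machines" are hypothetical stand-ins; the point is only that two very different internal mechanisms can be input-output equivalent.

```python
# Hedged sketch of the test: the new machine emulates the old one if it
# maps every test input to the same output, regardless of whether its
# internals are vacuum tubes or transistors.

def is_io_equivalent(old_machine, new_machine, test_inputs):
    return all(old_machine(x) == new_machine(x) for x in test_inputs)

# Two different "mechanisms" computing the same function.
old = lambda x: x + x   # imagine a slow tube-based adder
new = lambda x: x * 2   # a modern implementation

print(is_io_equivalent(old, new, range(100)))  # True
```

Of course, such a test can only ever sample a finite set of inputs, so it establishes behavioral equivalence on the tested cases, not a proof of equivalence in general.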
Say we want our software to emulate a human mind's ability to perform mathematical operations. Digital circuitry is already a sort of computation machine, so we need not attend to much detail when programming the emulation to perform numerical computations. However, something like our vision is not considered a digital process, because there are not always clear yes/no distinctions in our fields of vision. So to create a digital emulation of vision, we need to replicate the details of the process. [For more on the digital/analog debate in artificial intelligence theory, see this entry on analog and digital.]
Sandberg, A. & Bostrom, N. (2008): Whole Brain Emulation: A Roadmap, Technical Report #2008‐3, Future of Humanity Institute, Oxford University.
Available online at:
Nick Bostrom's page, with other publications:
Anders Sandberg's page, also with other publications: