The following is from a conference presentation at the University of Twente, for the Society for Philosophy & Technology, June 2009.
Corry Shores
Do Posthumanists Dream of Pixilated Sheep?
Bostrom and Sandberg's Brain Emulation,
Examined and Critiqued
Enhancement technologies may someday give us capacities far beyond what we dream humanly possible. We could become post-human. Nick Bostrom & Anders Sandberg suggest that we might survive our body's death by living as a computer simulation. They issued a report from a conference where experts in all relevant fields collaborated to determine the path to "whole brain emulation." This technology would at the very least be an effective research tool for the neurosciences, and it could aid philosophical research too. Their "roadmap" defends certain philosophical assumptions required for this technology's success. So by determining the reasons why it succeeds or fails, we can obtain empirical data for philosophical debates regarding our minds and selfhood. I have chosen four issues to discuss: emergentism, analog versus digital, unpredictability, and personal identity.
Brain emulation succeeds if a computer program replicates human neural functioning. Yet for the authors, its success increases when it perfectly replicates one specific person’s brain. She might then survive her body’s death by living as the simulation.
This prospect has posthumanist proponents. Their view presupposes certain traits of human consciousness and selfhood. Hans Moravec, for example, thinks our personal identities exist independently of our bodies.
According to his pattern-identity theory of selfhood, we are no more than the patterns and the processes found in our brains and bodies. William Bainbridge explains that we are neither man nor machine. We are just the dynamic patterns of information that can be realized in a wide variety of materials. Hence our personal patterns might be found in this body or in that computer. Either way we are the same person.
To emulate someone’s neural patterns, we first scan a particular brain to obtain precise detail of its structures and their interactions. Using this data, we program a simulation that will behave essentially the same as the original brain. Now first consider a gnat’s wild flight pattern. It seems irrational and random. But the motion of a whole swarm is smooth, controlled, and intelligent, as though the whole group of gnats has a mind of its own. To simulate the swarm, perhaps we will not need to understand how the whole swarm thinks. We instead just learn the way one gnat behaves and interacts with other ones. When we combine thousands of these simulated gnats, the swarm’s collective intelligence should thereby appear. Whole brain emulation presupposes this principle. The simulation will mimic the human brain’s functioning at the cellular level. Higher and higher orders of organization should then arise spontaneously. Finally, human consciousness might emerge at the highest level of organization.
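To make the principle vivid, here is a minimal sketch in Python, in the spirit of agent-based flocking simulations. Every rule and constant in it is my own illustrative assumption, not anything drawn from Bostrom & Sandberg's roadmap: each simulated gnat follows only local rules, yet the swarm's center of mass drifts smoothly.

    import random

    class Gnat:
        def __init__(self):
            self.pos = [random.uniform(0, 10), random.uniform(0, 10)]
            self.vel = [random.uniform(-1, 1), random.uniform(-1, 1)]

        def step(self, center):
            # Local rule 1: drift gently toward the swarm's center (cohesion).
            self.vel[0] += 0.05 * (center[0] - self.pos[0])
            self.vel[1] += 0.05 * (center[1] - self.pos[1])
            # Local rule 2: jitter at random (the lone gnat's "wild flight").
            self.vel[0] += random.uniform(-0.5, 0.5)
            self.vel[1] += random.uniform(-0.5, 0.5)
            self.pos[0] += 0.1 * self.vel[0]
            self.pos[1] += 0.1 * self.vel[1]

    swarm = [Gnat() for _ in range(500)]
    for t in range(100):
        center = [sum(g.pos[0] for g in swarm) / len(swarm),
                  sum(g.pos[1] for g in swarm) / len(swarm)]
        for g in swarm:
            g.step(center)
    # Each gnat's own path is erratic, yet the center of mass moves smoothly:
    # collective order arising from purely individual rules.

No single gnat is programmed with the swarm's behavior; the higher-order motion simply shows up when enough of them interact.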
Early in this technology's development, we should only expect simpler brain states, like wakefulness and sleep. But in its ultimate form, whole brain emulation would enable us to make back-up copies of our minds. Then we might somehow survive our body’s death.
According to Bostrom & Sandberg, whole brain emulation should replicate all the original brain’s relevant properties so as to produce a one-to-one model of the brain’s functioning. In this sense, the brain and its emulation are black boxes. We feed each one the same sequence of stimuli. If they both respond with the same sequence of reactions, then they are functionally equivalent. In this way, the same mind could be realized in two physically different systems. Hilary Putnam claims that electronic computers can be functionally equivalent to mechanical ones and even to humans using pencil and paper. Their insides may differ drastically, but their outputs are identical.
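The black-box test itself is easy to state in code. In this minimal sketch, the two "systems" are hypothetical stand-ins with deliberately different insides; only their input-output behavior is compared:

    def system_a(stimulus, state):
        state = (state + stimulus) % 7      # one kind of internal machinery
        return state * 2, state             # (response, new internal state)

    def system_b(stimulus, state):
        state = (state + stimulus) % 7      # physically "different insides" ...
        return state + state, state         # ... but identical outward behavior

    def functionally_equivalent(sys1, sys2, stimuli):
        s1 = s2 = 0
        for stim in stimuli:
            r1, s1 = sys1(stim, s1)
            r2, s2 = sys2(stim, s2)
            if r1 != r2:                    # any mismatched reaction fails the test
                return False
        return True

    print(functionally_equivalent(system_a, system_b, range(100)))  # True

Of course, agreeing on one stimulus sequence never proves equivalence on all of them; the test only gathers evidence, which is partly why the philosophical questions stay open.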
There are various levels of emulation success. The highest ones are the most philosophically interesting.
When the technology achieves individual brain emulation, it produces emergent activity characteristic of one particular brain. With further success, we would emulate someone’s personal identity. Perhaps somehow it would be numerically the same person. But at least it would continue on as that person even after her body dies. We achieve such a simulation when it becomes rationally self-concerned for the brain it emulates.
Minds emerge from the brain’s pattern of physical dynamics. If you replicate this pattern-dynamic in some other physical medium, the same mental phenomena should likewise emerge. One mind would then be realized in a multiplicity of different physical embodiments. So whole brain emulation’s success would provide evidence for the theory of multiple realizability.
According to emergentist theories, all reality is made up of a single kind of stuff. But its parts aggregate and assemble into dynamic organizational patterns. The higher levels exhibit properties not found in the lower ones. But there cannot be a higher order without lower ones underlying it.
Consider the H2O molecule. It does not itself bear the properties of liquidity, wetness, and transparency. However, a large enough aggregate of water molecules will exhibit these properties.
In our brains, no single neuron is conscious. Yet our minds emerge from the complex dynamic pattern of all our neurons’ parallel computations. Roger Sperry offers compelling evidence. There are "split brain" patients whose right and left brain hemispheres are disconnected from one another. Nonetheless, they maintain unified consciousness.
William Hasker offers the analogy of magnetic fields, which are distinct from the magnets producing them. The iron atoms themselves need to be organized in alignment in order for a magnetic field to emerge on a higher scale. In a similar way, the particular organization of the brain’s neurons generates a field of ‘consciousness.’ This emergent consciousness-field permeates and haloes our brain-matter, occupying its space and traveling along with it.
Not everyone agrees that the mind emerges from the brain. Todd Feinberg is one example. Now in fact, he does think that consciousness results from the complex interaction of many layers of neural organization. However, he argues that consciousness does not get squirted out from neural activity and thereby obtain a life of its own. Instead, the layers of neural activity are all mutually interdependent and simultaneously cooperative.
Consider for example when we recognize our grandmother. One layer of neurons transmits information about the whole visual field. Another layer picks out lines. Another one, shapes. Finally the information arrives at the grandmother cell, which only fires when it is she that we see. But this does not make the grandmother cell emergently higher. Rather, all the neural layers of organization must work together simultaneously to achieve this recognition. The brain is a vast network of interconnected circuits. So we cannot say that any layer of organization emerges over and above the others.
Feinberg’s objection may prove problematic for whole brain emulation. Bostrom & Sandberg explicitly state that we only need to simulate the lower levels of activity.
But if Feinberg’s holistic theory is correct, we cannot emulate only the lower levels and expect the rest to spontaneously emerge. For we would already need to understand the higher levels in order to program the lower ones. So whole brain emulation’s emergentist assumptions might not express the actual way that consciousness appears.
Notice how in the recent past, many digital technologies have replaced analog ones. It would seem that these two sorts of quantity-representation reside in contrary worlds: the continuous versus the discrete. An abacus is digital. It computes one discrete value or another, but it is blind to the values between its units. When we count to two on our fingers, meaningless empty space spans between our digits. So spreading our fingers further apart does not change their numerical value.
Slide rules, however, are analog. One ruler slides against another continuously. So it may calculate any possible real number along the continuum. It could potentially compute and display irrational numbers like pi or the golden ratio. A digital computer, however, would never cease calculating into ever-lower digit places, for there can be no final figure when rendering irrational numbers into digits. But if you move the slide rule from three to four, you will for one instant display pi. Analog is dense. Between any two values is already a third one, and lying between those are yet even more, and so on infinitely. On account of this infinite divisibility, analog can compute and display an infinity of different values found within a finite range. But like the gaps between our fingers, digital at some point will be blind to a middle value, no matter how precise it is. So, because analog computers can deal with an infinity of values, they have more computing potential for certain applications.
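Digital's "blindness" can be shown in a few lines. This sketch quantizes values onto a fixed grid (the step size is an arbitrary assumption for illustration); any two values closer together than one step collapse into the same digital code:

    def quantize(x, step=0.25):
        # Snap a continuous value onto a discrete grid of width `step`.
        return round(x / step) * step

    print(quantize(3.14159))   # 3.25
    print(quantize(3.2))       # 3.25 -- pi and 3.2 become indistinguishable
    # Shrinking the step refines the grid, but for any fixed step there remain
    # infinitely many distinct analog values that it must lump together.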
Our emulated brain will receive simulated sense-signals. Does it matter if they are digital signals rather than analog? Many audiophiles swear by the unsurpassable superiority of analog. It might be less precise, but it always flows like natural sound waves. Digital, even as it becomes more accurate, still sounds to them artificial or cartoon-like. In other words, there might be a qualitative difference to how we experience analog and digital stimuli, even though it might take a person with extra sensitivities to bring this difference to our explicit awareness.
And if the continuous and discrete are so fundamentally different, then maybe a brain computing in analog would experience a qualitatively different feel of consciousness than if the brain were instead computing in digital.
Perhaps digital emulations might even produce a mental awareness quite foreign to what humans normally experience.
Bostrom & Sandberg’s brain emulation exclusively uses digital computation. But they acknowledge the argument that analog and digital are qualitatively different. And they admit that implementing analog in brain emulation could present profound difficulties. Yet, there is no need to worry, they say.
They pose what is called “the argument from noise.” Analog devices always take some physical form. It is unavoidable that interferences and irregularities, called noise, will make the analog device imprecise. So analog might be capable of taking on an infinite range of variations. However, it will never be absolutely accurate, because noise always causes it to veer off slightly from where it should be. Yet digital has its own inaccuracies. It is always missing values between its discrete steps. Nonetheless, digital is improving. Little by little it is coming to handle more values. It is filling in the gaps. Digital will never be completely dense like analog. Values will always slip through its fingers. And analog will always miss its mark. But soon the distance between digital’s smallest values will equal the distance that analog veers away from its proper course. Digital’s blindness would then match analog’s sloppiness. So, we only need to wait for digital technology to improve enough that it can compute the same values with equivalent precision. Both will be equally inaccurate, but for fundamentally different reasons.
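We can put a rough number on this. In the sketch below, a signal is corrupted once by quantization and once by additive noise; the analog noise level and the digital step size are both assumptions chosen only to illustrate the point where the two error magnitudes meet:

    import math
    import random

    analog_noise_sd = 0.01   # assumed analog imprecision (noise)
    step = 0.02              # assumed digital resolution

    true_values = [random.uniform(0, 1) for _ in range(100000)]
    digital_err = [round(v / step) * step - v for v in true_values]
    analog_err = [random.gauss(0, analog_noise_sd) for _ in true_values]

    def rms(errs):
        return math.sqrt(sum(e * e for e in errs) / len(errs))

    print(rms(digital_err))  # ~0.0058, i.e. step / sqrt(12)
    print(rms(analog_err))   # ~0.0100
    # Shrink the step a little further and the two magnitudes match:
    # equally inaccurate, but for fundamentally different reasons.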
Yet perhaps the argument from noise reduces the analog/digital distinction to a quantitative difference rather than a qualitative one. And analog is so prevalent in neural functioning that we should not so quickly brush it off.
Note first that our nervous system’s electrical signals are discrete pulses, like Morse code. In that sense they are digital. However, the frequency of the pulses can vary continuously. As well, there are many other neural quantities that are analog in this way.
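A minimal sketch displays this dual character, using the common textbook idealization of a Poisson spike train (the rate and time step here are my own assumptions): each spike is an all-or-none event, yet the spike rate can take any real value.

    import random

    def spike_train(rate_hz, duration_s=1.0, dt=0.001):
        # Each millisecond bin either fires (1) or not (0): a digital pulse.
        return [1 if random.random() < rate_hz * dt else 0
                for _ in range(int(duration_s / dt))]

    train = spike_train(rate_hz=37.4)   # the rate itself varies continuously
    print(sum(train))                   # ~37 spikes: digital pulses, analog frequency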
Fred Dretske argues that our memories store information in analog. We might consider an event that occurred somewhere within a 15-minute time-span. But later, we also might search our memory for finer details to indicate more specifically when that event occurred. Because we can always further refine our remembered determinations, it could very well be that our brains record data in analog form.
But suppose anyway that the argument from noise is correct, and that we can dismiss analog’s computational superiority. Would there still be some reason to implement analog technology?
Recent research on neural-network learning supplies an answer. Analog noise interference is significantly more effective than digital at aiding adaptation. Being "wrong" allows neurons to explore new possibilities for computational values and connections. This enables us to learn and adapt to a chaotically changing environment. Using digitally-simulated neural noise might be inadequate. Analog is better. For, it affords our neurons an infinite array of alternate configurations. Hence, in response to Bostrom & Sandberg’s argument from noise, I propose this argument for noise.
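The benefit of being "wrong" is easy to demonstrate with a toy learner. In this sketch (the loss surface, noise levels, and greedy update rule are all my own illustrative assumptions), random perturbation is the only thing that lets the learner escape a poor configuration:

    import random

    def loss(w):
        # A toy surface: a shallow trap near w = 0, a better optimum near w = 3.
        return min(w ** 2 + 0.5, (w - 3) ** 2)

    def learn(noise_sd, steps=5000):
        w = 0.0
        for _ in range(steps):
            candidate = w + random.gauss(0, noise_sd)  # a noisy, possibly "wrong" move
            if loss(candidate) < loss(w):              # keep it only if it helps
                w = candidate
        return round(w, 2), round(loss(w), 3)

    random.seed(1)
    print(learn(noise_sd=0.01))  # stuck in the trap: w ~ 0.0, loss ~ 0.5
    print(learn(noise_sd=1.0))   # noise jumps the barrier: w ~ 3.0, loss ~ 0.0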
Analog’s inaccuracies take the form of continuous variation. In my view, this is precisely what makes it necessary for whole brain emulation.
Neural noise can result from external interferences like magnetic fields. Or internal random fluctuations might make the signals unpredictable. In both cases, chance and chaos reign in our brains. And in fact, these random, indeterminate, and probabilistic events assist our brain’s computations. The brain implements noise to keep us adjusted to the world’s changes and uncertainties.
Some also theorize that noise is essential to the human brain’s creativity. Johnson-Laird claims that creative mental processes are never predictable. On this basis, he suggests a way to make computers think creatively. We make them alter their own functioning by submitting their programs to artificially-generated random variations. According to Daniel Dennett, such indeterminism is precisely what endows us with what we call free will. Likewise, Bostrom & Sandberg suggest we introduce random noise into our simulation by using pseudo-random number generators. These are not truly random, because eventually the pattern will repeat. But if it takes a very long time before the repetitions appear, then it would probably be sufficiently close to real randomness. It would be a major obstacle, Bostrom & Sandberg admit, if artificial noise is not random enough for whole brain emulation.
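Python's standard random module illustrates what is at stake. It uses the Mersenne Twister generator, whose sequence repeats only after 2^19937 - 1 draws, yet it is entirely determined by its seed:

    import random

    random.seed(42)
    first = [random.random() for _ in range(3)]
    random.seed(42)
    second = [random.random() for _ in range(3)]
    print(first == second)  # True: identical seed, identical "noise"
    # The period is astronomically long, but the determinism remains: the open
    # question is whether such noise is random *enough* for brain emulation.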
Research suggests that we may characterize our neural irregularities as pink noise, or what is called 1/f noise. Benoit Mandelbrot classifies 1/f noise as what he terms “wild randomness.”
This sort of randomness might not be so easily simulated. The stock market for example is wildly random. In such natural systems, astronomically improbable fluctuations occur frequently. There is no way to predict when they will appear or how drastic they will be.
For this reason, he considers wild variation to be a state of indeterminism that is qualitatively different from the usual mild variations we encounter at the casino. For, there is infinite variance in the distributions of wild randomness. Anything can happen at any time. He says, “the fluctuation from one value to the next is limitless and frightening.” And this is the wildness of our brains.
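The mild/wild contrast can be seen in a sketch. Gaussian draws have finite variance; the Cauchy distribution (sampled here by the standard ratio-of-normals construction) has infinite variance, so enormous jumps keep arriving no matter how long we watch:

    import random

    random.seed(7)
    gauss_max = max(abs(random.gauss(0, 1)) for _ in range(100000))
    cauchy_max = max(abs(random.gauss(0, 1) / random.gauss(0, 1))
                     for _ in range(100000))
    print(gauss_max)   # ~4 or 5: mild -- no draw strays very far
    print(cauchy_max)  # typically in the tens of thousands: limitless fluctuation

This is not a model of the brain's 1/f noise; it only displays the qualitative gap Mandelbrot has in mind between mild and wild randomness.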
Paul Shepard considers our minds to be wild in an even more literal sense: we are wild animals. He distinguishes tameness from domestication. Cows are domesticated. They have been bred to suit our needs. And now their genes would probably not prepare them to live in the wild without human protections. But the human species has merely been tamed by culture and not domesticated like cows. Genetically, we are still the same wild creatures who hunted the Pleistocene savannas. So to emulate the human brain is to simulate the workings not of a rational machine, but of a wild animal. He writes, “The savage mind is ours! ... as a species we have in us the call of the wild.”
But let’s suppose that the brain’s wild randomness can be adequately simulated. Will brain emulation still attain its fullest success of perfectly replicating a specific person’s own identity? Bostrom & Sandberg recognize that neural noise will prevent precise one-to-one emulation.
However, they think that the noise will not prevent the simulation from producing meaningful brain states. But to pursue the personal identity question further, let’s imagine that we want to emulate a certain casino slot machine.
A relevant property is its unpredictability. So, do we want the emulation and the original to both give consistently the same outcomes? That would happen if we precisely duplicate all the original’s relevant physical properties.
But what about its essential unpredictability? The physically-accurate emulation could predict in advance all the original’s forthcoming read-outs. Or instead, would a more faithful copy of the original produce its own distinct set of unpredictable outcomes? Then we would be replicating the original’s most important relevant property of being governed by chance.
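Pseudo-random generators let us state the dilemma precisely. In this sketch (the seeds are arbitrary), an exact duplicate shares the original's internal state and so foretells its every outcome, while a copy faithful to chance runs its own independent stream:

    import random

    original = random.Random(123)
    exact_copy = random.Random(123)    # duplicates every "relevant physical property"
    chance_copy = random.Random(456)   # preserves unpredictability instead

    print([original.randint(1, 6) for _ in range(5)])
    print([exact_copy.randint(1, 6) for _ in range(5)])   # identical: fully predictable
    print([chance_copy.randint(1, 6) for _ in range(5)])  # same machine, different fate

Which of the two copies is the more faithful emulation is exactly the question at issue.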
The problem is that the brain’s 1/f noise is wildly random. So suppose we emulate some person’s brain perfectly. And suppose further that the original person and her emulation have an identity merger, where each somehow mistakes herself for the other. Yet, if both minds are subject to wild variations, then their consciousness and identity might come to differ more than just slightly. They could veer off wildly. Perhaps our very effort to emulate a specific human brain results in our producing an entirely different mind altogether.
Whether this technology succeeds or fails, it still can advance a number of philosophical debates. It could tell us if our minds emerge from our brains; if the philosophy of artificial intelligence should take analog more seriously. We might learn whether our brain’s randomness is responsible for creativity, adaptation, and free choice; or, if this randomness is the reason our personal identities cannot be duplicated. The only failure, as I see it, is if we neglect this technology’s philosophical potential.