
14 Apr 2009

Argument from Noise, 2. The Power of Computation, in Schonbein, "Cognition and the Power of Continuous Dynamical Systems"


[Nick Bostrom & Anders Sandberg argue for digital rather than analog computation for simulating human cognition. They base their contention in part on the "argument from noise." The entries in this series summarize Schonbein's defense of that argument.]





Whit Schonbein

Cognition and the Power
of Continuous Dynamical Systems

2. The Power of Computation


Our current understanding of computation begins with Turing's introduction of the Turing Machine (TM) in 1936.

A Turing Machine contains these components:
1) a controller,
2) a tape head, and
3) a tape of unbounded length.

The controller is in one of its states at any given stage of the computation, and may be in different states at different stages. The number of states is finite. We will call them q0 - qn.

We divide the tape into discrete squares. Any square may contain a symbol, and the set of symbols we call the 'alphabet.' At any given moment, the tape head is positioned over one of the squares. [Turing images are from Marvin Minsky's Computation: Finite and Infinite Machines]
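To fix ideas, here is a minimal sketch, in Python, of how these components might be represented. It is my own illustration, not Schonbein's, and all of the names and sample values are hypothetical.

# A minimal sketch of the three components of a Turing Machine.
# Illustrative only; the names and sample contents are not from the paper.

BLANK = '_'                       # the symbol for an empty square

states   = {'q0', 'q1', 'q2'}     # a finite set of controller states q0 - qn
alphabet = {'0', '1', BLANK}      # a finite alphabet of tape symbols

# The tape is unbounded, so we model it sparsely: a dict from square
# positions (integers) to symbols; any square not listed counts as blank.
tape  = {0: '1', 1: '1', 2: '0'}
head  = 0                         # the square the tape head is currently over
state = 'q0'                      # the controller's current state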



The machine proceeds through a series of discrete steps. At each step, the tape head writes a symbol on the tape and moves to the left or right. There are different formats for describing the machine's behavior; these are ways of displaying the machine's program or instructions. The instructions take the form of hypotheticals. For example, one might say, "If you are in state q1, and you read a w1 on the tape, then write a w2 over it, move to the right, and change to state q2." One formalized way to render such an instruction is the quintuple: {q1, w1 → w2, R, q2}
All the rules can be given in one "transition table."
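As an illustration, here is how such a transition table might look in Python for a small toy machine of my own devising (it is not an example from the paper): the machine flips each bit of its input and halts at the first blank square, and each dictionary entry encodes one quintuple.

# Illustrative transition table for a hypothetical toy machine.
# Each entry encodes a quintuple {q, w -> w', move, q'}:
#   key   = (current state, symbol read)
#   value = (symbol to write, head movement, next state)

BLANK = '_'
R, L = +1, -1                     # move the head one square right / left

transitions = {
    ('q0', '0'):   ('1', R, 'q0'),     # flip 0 to 1, keep moving right
    ('q0', '1'):   ('0', R, 'q0'),     # flip 1 to 0, keep moving right
    ('q0', BLANK): (BLANK, R, 'halt'), # first blank: enter the halting state
}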



[Consider seeing this entry for Boolos & Jeffrey's explanation and rendition of other display formats, and this entry, where we carry out an interesting Turing computation of the natural numbers.]

A TM computes the value of a function by beginning with some input finitely represented on the tape, and then running according to the deterministic procedure described by the transition table. The output of the calculation is the string remaining on the tape when the machine halts. (59b)
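The following sketch shows this in miniature for the toy bit-flipping machine above; it is my own illustration under those assumptions, not an implementation from the paper. The input is placed on the tape, the machine runs deterministically from its transition table, and the output is the string left on the tape when the halting state is reached.

# Illustrative only: running a toy Turing Machine until it halts.

BLANK = '_'
R, L = +1, -1

# Toy transition table: complement each bit, halt at the first blank square.
transitions = {
    ('q0', '0'):   ('1', R, 'q0'),
    ('q0', '1'):   ('0', R, 'q0'),
    ('q0', BLANK): (BLANK, R, 'halt'),
}

def run(transitions, input_string, start='q0', halting='halt'):
    # Write the input, finitely represented, onto an otherwise blank tape.
    tape = {i: symbol for i, symbol in enumerate(input_string)}
    head, state = 0, start
    while state != halting:
        symbol = tape.get(head, BLANK)                   # read the square
        write, move, state = transitions[(state, symbol)]
        tape[head] = write                               # write over it
        head += move                                     # move left or right
    # The output is the string remaining on the tape when the machine halts.
    return ''.join(tape[i] for i in sorted(tape)).strip(BLANK)

print(run(transitions, '1011'))   # prints '0100', the bitwise complement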

We must note one important feature that Turing Machines and all other computational systems share: their sets of symbols and operations are always finite. Hence these systems are finitely specifiable: "any given TM can be fully described in a finite amount of space and time." (59bc) Now, if instead the machine's states and symbols admitted of a continuum of variation, then they would not be finitely specifiable, for the possible states and symbols would be infinitely many.
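To make the point concrete with the toy machine sketched above (again my illustration, not the paper's): the entire machine, its states, alphabet, and transition table, can be written out as a single finite string.

# Illustrative only: a complete, finite description of the toy machine.

BLANK = '_'
R = +1

machine = {
    'states':   ['q0', 'halt'],
    'alphabet': ['0', '1', BLANK],
    'table': {
        ('q0', '0'):   ('1', R, 'q0'),
        ('q0', '1'):   ('0', R, 'q0'),
        ('q0', BLANK): (BLANK, R, 'halt'),
    },
}

description = repr(machine)   # one finite string specifying the whole TM
print(len(description))       # a finite number of characters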

Turing Machines illustrate the basic workings of all computational automata: all such automata share the basic features of finite specifiability and discreteness. However, most other computational systems are more complex. Each kind of system can compute its own class of functions, and certain functions require machines with more resources. By ranking the functions according to the computational resources needed to compute them, we may determine a hierarchy of functions. This ordered list would also indicate the hierarchy of computing power of the machines capable of computing them. Turing Machines are more computationally powerful than most other sorts of systems. So if another system is as powerful as a Turing Machine, we will call it "Turing-equivalent." And if the system can compute functions that even a Turing Machine cannot compute, then we will call it "super-Turing-equivalent." (59-60)


Schonbein, Whit. "Cognition and the Power of Continuous Dynamical Systems." Minds and Machines 15 (2005): 57-71. Springer.


