On Wednesday, December 28, 2006, at the Eastern APA in NYC, we held our session on classical computation and hypercomputation. (For some background, see previous posts.) From my point of view, it went roughly as follows.
In my presentation, I argued that in discussions of the Physical Church-Turing Thesis (Physical CT), we need to distinguish between what I called a bold and a modest version. According to the bold version, which is popular in physics and philosophy of physics circles, everything that can be “physically done” is computable by Turing machines. I argued that this thesis (including its more precise formulations) is both too strong (it is falsified, e.g., by genuine random processes and by a liberal use of real numbers) and disconnected from the original notion of computation that led to work on computability theory and CT in the first place. The original notion was epistemological: the notion of which problems of a certain kind can be solved in a reliable way.
To maintain contact with the epistemological notion of computation, I argued that we need to formulate a modest version of Physical CT, according to which everything that can be “physically computed” can be computed by Turing machines. By “physically computed”, I mean a process that can be used by a human observer to solve problems defined over strings of digits. In other words, modest Physical CT is true if and only if it is impossible to build a genuine hypercomputer, i.e., a device that can be used by a human observer to compute arbitrary values of a function that cannot be computed by Turing machines. Since it is presently unknown whether genuine hypercomputers can be built (though it seems unlikely that they can), the truth value of modest Physical CT remains to be determined (though the thesis is quite plausible).
Following me, Oron Shagrir discussed accelerating Turing machines, i.e., Turing machines that execute each step in half the time of the previous step. In two units of time, these (notional) machines can go through infinitely many steps, thereby performing what is known in the literature as a supertask. This feature can be exploited to compute functions that are not computable by (ordinary) Turing machines. Oron argued that, strictly speaking (and contrary to what Jack Copeland had written), accelerating Turing machines do not compute functions that are not computable by ordinary Turing machines. What computes such functions is a different kind of machine, obtained by supplementing an accelerating Turing machine with a formal definition of the state the machine is in at the end of the second unit of time.
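The timing trick behind the supertask is just a geometric series. As a toy illustration (my sketch, not part of Oron's talk; the step-duration convention is an assumption for illustration), if step n takes 1/2ⁿ time units, then the partial sums stay below two time units no matter how many steps are taken:

```python
# Toy model of accelerating-Turing-machine timing.
# Assumption (mine, for illustration): step n takes 1/2**n time units,
# i.e. each step takes half as long as the previous one.
from fractions import Fraction

def elapsed_time(num_steps: int) -> Fraction:
    """Exact total time consumed after num_steps steps."""
    return sum(Fraction(1, 2 ** n) for n in range(num_steps))

# The partial sums 1, 3/2, 7/4, ... approach 2 but never reach it, so
# infinitely many steps fit inside two units of time (a supertask).
assert elapsed_time(1) == 1
assert elapsed_time(2) == Fraction(3, 2)
assert all(elapsed_time(n) < 2 for n in range(100))
```

Finishing infinitely many steps in finite time is exactly what generates the puzzle Oron addressed: nothing in the machine's own transition table determines its state at the two-unit mark, which is why a further formal stipulation is needed.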
Following Oron, Jack Copeland gave his comments. He said a lot of things that I agree with. I will only comment on two objections he made to my paper.
First, he argued that computability theory is about an ontic notion of computation, which may or may not occur in the physical world regardless of whether we, human observers, can access it. For example, and contrary to what I argued, even a genuine random process (if it exists) counts as a genuine hypercomputer. (Jack offered his own formulation of Physical CT, but I didn’t write it down.) Suffice it to say that I disagree. There is nothing wrong with an ontic notion of computation, which abstracts completely from issues of “epistemological embedding” (Jack’s term)—except that it’s practically useless. And computer science is about building machines to do things for us. It is this potential for use that makes computation most interesting.
Second, Jack argued that we should avoid the term “Physical Church-Turing thesis”, because what goes under that name has nothing to do with what Church and Turing were talking about. They were talking about what may be computed by human beings, and nothing more.
On this second point, Jack is in agreement with Robin Gandy, who was a student of Turing’s, and with Wilfried Sieg, a distinguished historian and philosopher of computation. To support his view, Jack quoted a well-known passage from Wittgenstein, according to which “Turing machines are humans who calculate”, and then proceeded to say that Turing made the same point when he said that humans who calculate are Turing machines (or something close).
Now, I do not have time to get into an extensive exegetical dispute with Jack and his allies. I have published a paper that bears on this, and I hope to write more someday. But I will make a simple observation about the evidence Jack gave. First, there is no reason to believe that Wittgenstein is a reliable interpreter of Turing. The two disagreed about the philosophy of mathematics, as shown by their exchanges recorded in Wittgenstein’s Lectures on the Foundations of Mathematics. Second, there is a significant difference between saying that Turing machines are humans who calculate (which leaves open whether all humans who calculate, or only some, are Turing machines) and saying that humans who calculate are Turing machines (which leaves open whether all Turing machines, or only some, are humans who calculate). In other words, Turing’s statement is consistent with there being physical mechanisms that are Turing machines.
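For what it's worth, the logical point can be put schematically (my rendering, not a formula anyone at the session used): let $H(x)$ mean “x is a human who calculates” and $T(x)$ mean “x is a Turing machine”.

```latex
% Wittgenstein's remark: Turing machines are humans who calculate.
\forall x\, (T(x) \rightarrow H(x))
% The claim attributed to Turing: humans who calculate are Turing machines.
\forall x\, (H(x) \rightarrow T(x))
% Neither entails the other, and the second is consistent with
% there being Turing machines that are not humans:
\exists x\, (T(x) \wedge \neg H(x))
```

Since the two universal claims are converses, quoting the first as support for the second conflates them.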
Fortunately for me, during the discussion Selmer Bringsjord, who is currently writing a paper on CT, took my side on the second point. He made the following observation. Computability theory textbooks contain discussions of CT, and typically these discussions do not restrict CT to what humans can compute; instead, they assume that CT covers both humans and physical mechanisms. Are we to conclude that computer scientists and computability theorists are generally confused about their subject matter? On Jack’s (and Gandy’s and Sieg’s) view, they are. Needless to say, I agree with Selmer that this is not plausible.