Saturday, January 21, 2006

Some General Purpose Philosophical Listservs

I was recently asked which general-purpose listservs in philosophy are worth subscribing to. I subscribe to the following three, which strike me as reasonably useful so far:

http://www.louisiana.edu/Academic/LiberalArts/PHIL/philosop.html
http://www.lsoft.com/scripts/wl.exe?SL1=PHILOS-L&H=LISTSERV.LIV.AC.UK
http://www.disputatio.com/esap-news/

Sunday, January 15, 2006

Why the Zombie Conceivability Argument Is Unsound

Perhaps the most popular and most widely discussed current objection to physicalism is the zombie conceivability argument, whose most famous proponent is David Chalmers. In a nutshell, the argument goes as follows: zombies are conceivable, if zombies are conceivable then zombies are possible, and if zombies are possible, then physicalism is false; therefore, physicalism is false. Replies to this argument by physicalists have focused on the first two steps: they deny either that zombies are conceivable or that conceivability entails possibility. In response to these replies, Chalmers has recently elaborated in great detail the notions of conceivability and possibility that he thinks are at stake, and argued forcefully and skillfully that in the relevant senses of conceivability and possibility, the zombie conceivability argument stands.
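
For reference, here is one standard way to regiment the argument (my own schematic rendering, with P standing for the conjunction of all physical truths and Q for a phenomenal truth such as "someone is conscious"):

(1) It is conceivable that P & not-Q (zombies are conceivable).
(2) If it is conceivable that P & not-Q, then it is metaphysically possible that P & not-Q.
(3) If it is metaphysically possible that P & not-Q, then physicalism is false.
(4) Therefore, physicalism is false.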

I am dissatisfied with the structure of this dialectic. Although I am sympathetic to both standard responses to the zombie conceivability argument, I think they are not decisive, because they are open to Chalmers’ sophisticated rebuttals. I think it might be more promising, and it would at least be useful, to scrutinize more carefully the last step in the argument. This step requires assumptions about how physicalism should be formulated as well as about which possible worlds are accessible (in the sense of possible world semantics) from the actual world. If we pay careful attention to the notion of accessibility between possible worlds and formulate physicalism in a way that doesn’t beg the question, I believe we can show that even granting the first two steps of the zombie conceivability argument, the falsity of physicalism doesn’t follow. Or if zombie-philes should insist that it does, then we can construct arguments fully analogous to the zombie conceivability argument to the effect that physicalism is true.
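
To see where accessibility enters, here is a schematic gloss (mine, and only a sketch of the line I will present): formulated without begging the question, physicalism is naturally read as a claim about the worlds accessible from the actual world @, for instance the claim that every world accessible from @ at which P holds is a world at which Q holds. A zombie world, i.e., a world at which P & not-Q holds, refutes that claim only if it is accessible from @ under the relevant accessibility relation. Steps (1) and (2) deliver at most that a zombie world exists somewhere in the space of metaphysically possible worlds; whether such a world is accessible from @ in the sense required by a non-question-begging formulation of physicalism is a further question.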

I am going to present an exploratory version of the above at Tucson VII – Toward a Science of Consciousness 2006. My talk is on the afternoon of Tuesday, April 4.

Monday, January 09, 2006

On Individuating LOT Symbols

Schneider, Susan (unpublished). "The Nature of Symbols in the Language of Thought."

Schneider addresses the important problem of how to individuate LOT symbols. There is no accepted or even fully worked out solution to this problem in the literature. She gives various interesting and original arguments to the effect that LOT symbols are individuated by “total” computational role. In her view, computational role is found by Ramsifying over narrow cognitive science laws. Finally, she gives some interesting responses to the objection that if symbols are individuated holistically by their total computational role, then symbols cannot be shared. One of her responses is that cognitive science also has broad intentional laws, which do not range over symbols but over broad contents, which are publicly shared. So at least those laws apply to all subjects even though symbols aren’t shared.

Although I am sympathetic to a lot of what she says, I have some concerns.

Concern 1: Her proposal requires a non-semantic notion of symbol, but her non-semantic notion of symbol doesn’t seem to be well grounded. (Many authors have argued that nothing can be a symbol in the relevant sense without being individuated at least in part by semantic properties.) At the beginning, she appeals to Haugeland’s account of computation in terms of automatic formal systems. Unfortunately, the only clear and rigorous explication of the notion of formal systems that we possess is in terms of computation, so Haugeland’s account is circular. I think referring to Haugeland’s work in this context is unhelpful. Later, she gives her own account in terms of Ramsification over narrow cognitive science laws. But this raises the question of how these narrow cognitive science laws are to be discovered, which becomes all the more pressing in light of her stated view, later in the paper, that there are two sets of cognitive science laws: the narrow ones and the broad ones (ranging over broad contents). If ordinary cognitive science laws range over broad contents, how are we to discover the narrow ones? By doing neuroscience? (At some point, Schneider briefly mentions the “neural code,” something that, in my understanding of these things, is not related to her issue.) Without at least a sketch of an account of how the narrow laws are to be found, I am unclear on how this proposal is supposed to work.

I think Concern 1 might be addressed by appealing to the non-semantic notion of computation that I have developed in some recent papers (forthcoming in Phil Studies and forthcoming in Australasian J. Phil).

Concern 2: Ramsification is popular among philosophers of mind, but it is only a formal maneuver. If this view is going to have real bite as philosophy of cognitive science, Ramsification should be fleshed out in terms of some individuative strategy that actually plays a role in science.
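
For readers unfamiliar with the maneuver, here is its bare form (a textbook-style sketch, not Schneider’s own formulation). Write the conjunction of the relevant narrow laws as a theory $T(s_1, \ldots, s_n; o_1, \ldots, o_m)$, where the $s_i$ are the symbol terms and the $o_i$ are the remaining terms. The Ramsey sentence replaces the symbol terms with existentially bound variables:

$\exists x_1 \ldots \exists x_n\, T(x_1, \ldots, x_n; o_1, \ldots, o_m)$

A symbol of type $s_i$ is then whatever occupies the $i$-th role. The maneuver is only as good as the laws $T$ that get plugged into it, which brings us back to Concern 1.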

I think Concern 2 might be addressed by appealing to functional explanation, or even better, mechanistic explanation. This is the way actual cognitive scientists go about individuating their explanantia. Schneider should be sympathetic to this move, since she appeals to functional explanation later in the paper. Notice that an appeal to mechanistic explanation is already part of my account of computational individuation, so that both Concerns 1 and 2 can be addressed by appealing to my account. The crucial observation, which is missing from her paper, is that symbols are components (or states of a component) of a computing mechanism. If you have a mechanistic explanation of a system, you thereby have access to individuation conditions for its components, including symbols (in the case of computing mechanisms).

Concern 3: Pending a resolution of Concerns 1 and 2, I would like to know more about what Schneider means by “total” computational role and especially, how it is possible to test hypotheses on whether something is a “total” computational role. If it includes all possible input conditions and internal states, it seems that total computational role can never be discovered. For how can we be sure that we have all the relevant data? Do we have to test the system under all relevant conditions? Is this even possible? Is it possible to know that we have succeeded?

I think Concern 3 might be addressed by appealing, once again, to functional explanation or better, mechanistic explanation. For as Schneider points out in various places in her own paper, mechanistic explanation gives you a way to individuate components and their activities (including, I say, symbols). Furthermore, in order to find a mechanistic explanation, you don’t need to study all possible computations. You can proceed piecemeal, component by component, operation by operation.

When you give a mechanistic explanation of a computing mechanism, you discover that total computational role supervenes on (what may be called) primitive computational role plus input and internal state conditions. So all you need to individuate a symbol is its primitive computational role, i.e., the way the symbol affects the computational architecture (components, their primitive computational operations, and their organization). So, pending further explication of what Schneider means by “total”, as far as I can tell you don’t need “total” computational role in order to individuate symbols. (Notice that in her paper Schneider already states individuation conditions similar to the ones I suggest, under T2 and T4, but she immediately shifts from those to “total” computational role.) I think individuation in terms of primitive computational role would generate a notion of symbol that can be shared between subjects, provided that subjects share their basic computational architecture.
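
To make the idea of primitive computational role a bit more concrete, here is a toy sketch in Python (entirely my own illustration, with made-up machinery; it is not meant to capture Schneider’s formalism or the details of my account). In a simple machine specified by a transition table, a symbol’s primitive role is just how each state of the machine responds to it, and any “total” computational episode is fixed by that table together with the input and initial state.

# Toy illustration only (hypothetical machine, hypothetical helper names).
# Transition table: (state, symbol) -> (new state, symbol written, head move)
delta = {
    ("q0", "0"): ("q0", "1", 1),
    ("q0", "1"): ("q1", "0", 1),
    ("q1", "0"): ("q1", "0", 1),
    ("q1", "1"): ("q0", "1", 1),
}

def primitive_role(symbol, table):
    # A symbol's primitive computational role: how each state responds to it.
    return frozenset((state, table[(state, s)]) for (state, s) in table if s == symbol)

def run(table, tape, state="q0"):
    # A "total" computational episode is fixed by the table plus input and initial state.
    tape, head = list(tape), 0
    while head < len(tape):
        state, tape[head], move = table[(state, tape[head])]
        head += move
    return "".join(tape), state

print(primitive_role("0", delta))  # the same role in any machine with this table
print(run(delta, "0110"))          # which total computations occur depends on the input

On this picture, two subjects would share a symbol type as long as their transition tables (their basic computational architecture) treat the symbol the same way, whatever total computations they happen to run.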

Thursday, January 05, 2006

Did I Commit the Church-Turing Fallacy?

Today I received my complimentary copy of The Philosophy of Science: An Encyclopedia, edited by Sahotra Sarkar and Jessica Pfeifer, Routledge, 2006. I wrote the entry on artificial intelligence. To my astonishment, the entry reads as follows:

If Turing’s thesis [i.e., the Church-Turing thesis] is correct, stored-program computers can perform any computation (until they run out of memory) and can reproduce mental processes (p. 27).
The italicized part (“and can reproduce mental processes”) is a perfect example of what Jack Copeland calls the Church-Turing fallacy, namely, the mistake of supposing that the computational theory of mind, or the view that mental processes are computational (and more specifically, that they are computable by Turing machines), follows from the Church-Turing thesis. Sadly, the Church-Turing fallacy is common among philosophers. Even more sadly, it is now firmly lodged in the entry on AI in Routledge’s The Philosophy of Science: An Encyclopedia. Worse still, my name is at the bottom of that entry!

The most upsetting part of the story for me is that the offending statement was not in the original article that I submitted to the editors. The original text, which I wrote, read:
If Turing’s thesis is correct, stored-program computers can perform any computation (until they run out of memory). If McCulloch and Pitts’s theory [to the effect that mental processes are computational] is also correct, stored-program computers can reproduce mental processes.
Obviously, this is very different. Somehow, after I submitted the entry, the antecedent of my second conditional got deleted and the rest of the sentence was merged with the previous sentence, turning two relatively uncontroversial statements into a fallacious one. Unfortunately, there was no proof-correction stage, and thus no opportunity for me to notice the mistake before publication.
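
To make the difference explicit, schematically (with CT for the Church-Turing thesis, MP for McCulloch and Pitts’s theory, C for “stored-program computers can perform any computation”, and M for “stored-program computers can reproduce mental processes”): what I wrote has the form $(CT \rightarrow C) \wedge (MP \rightarrow M)$; what was printed has the form $CT \rightarrow (C \wedge M)$. The printed version asserts that M follows from CT alone, which is exactly the Church-Turing fallacy.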

I actually have a forthcoming article in Synthese criticizing arguments for the computational theory of mind that appeal to the Church-Turing thesis. Of all people, I am the last (with the possible exception of Jack Copeland) who should get caught committing the Church-Turing fallacy. Alas.

Wednesday, January 04, 2006

Serious Metaphysics?

Bloomfield, P. (2005). "Let's Be Realistic About Serious Metaphysics." Synthese 144: 69-90.

Bloomfield argues that the only sense of possibility relevant to serious metaphysics (i.e., relevant to the metaphysics of the actual world) is how things may be given how the actual world is. (This notion of possibility-given-the-way-the-actual-world-is is supposed to be related to Chalmers' secondary intensions, or Jackson's C-intensions.)
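
(Rough gloss, for readers who want the jargon unpacked: on the two-dimensional picture, the primary intension of an expression is a function from worlds considered as actual to extensions, while the secondary intension is a function from worlds considered as counterfactual to extensions, with the actual world held fixed. For "water", the primary intension picks out roughly the watery stuff of each world; the secondary intension picks out H2O at every world.)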

Bloomfield maintains that there are no zombies at the actual world and that zombies are “actually impossible” (p. 78; this means zombies are impossible given the way the actual world is), but he doesn’t explain why he believes so.

Bloomfield says he is attacking the method or machinery employed by Chalmers and Jackson, not the way they employ it. He attacks the view that primary intensions are primary and secondary intensions are secondary, and argues that it's the other way around. But without a clear discussion of what being primary or secondary means, and what follows from it, it’s not clear what difference this makes.

Bloomfield accepts the possibility of alleged "synthetic a priori truths" but never discusses how you are supposed to discover what is possible given the way the world is. How do you discover these synthetic a priori truths, if not by the method offered by Chalmers and Jackson?

What Bloomfield really seems to dislike is the zombie conceivability argument. At bottom, his substantive point is the good old point that conceivability does not entail possibility. I agree, but it will take more than this to score points against Chalmers and Jackson’s sophisticated view.

Furthermore, conceivability arguments can be run without the distinction between primary and secondary intensions (as Kripke himself did; notice that Bloomfield occasionally cites Kripke with approval). In other words, the issue of the merits of Chalmers and Jackson’s two-dimensional semantics is largely orthogonal to the issue of the merits of conceivability arguments.

Do Determinables Exist?

Gillett, C. and B. Rives (2005). "The Non-Existence of Determinables: Or, a World of Absolute Determinates as Default Hypothesis." Nous 39(3): 483-504.

They argue that there are no determinables, only determinates, on grounds of ontological parsimony. In their opinion, positing determinables on top of determinates leads to "double counting" of causal powers and consequent causal overdetermination (which are unacceptable).

They discuss both dispositional theories of properties (properties are the causal powers they contribute to entities) and categorical theories (properties are the categorical or qualitative bases for the causal powers they contribute to entities). They argue that their argument applies to both kinds of theory of property.

They discuss and reject Shoemaker's "subset" view, according to which determinables are properties constituted by a subset of the causal powers that constitute their determinates. They argue that even the subset view leads to double counting of causal powers. (Shoemaker's view is a version of a dispositional theory, but I imagine that an analogous subset view could be formulated within the framework of a categorical theory of properties.)

Unfortunately, I was not persuaded and remain inclined towards the subset view. (I'd like to remain neutral between categorical and dispositional theories if possible; I will discuss the dispositional version for simplicity.) If we maintain that determinables are constituted by a subset of the causal powers that constitute determinates, it seems to me that causal powers are only counted once, not twice. If you consider all relevant causal powers, you are considering a determinate. If you consider only some of them, you are considering a determinable. Determinables exist because the causal powers that constitute them exist; it's just that there are other relevant causal powers beyond them. Hence, there is neither double-counting of causal powers nor causal overdetermination.
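
A toy illustration of how I am counting (my own example, not Gillett and Rives' or Shoemaker's): suppose the determinate scarlet is constituted by the causal powers $S = \{p_1, p_2, p_3\}$ and the determinable red by the subset $R = \{p_1, p_2\}$. The powers in play are just the members of $R \cup S = S$: three powers, not five. An effect produced via $p_1$ is produced by a single power that red and scarlet share, not by two competing causes.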

Or am I missing something?

Tuesday, January 03, 2006

How to Improve Your Paper by a Judicious Use of Faculty

During a recent conversation with Brit Brogaard, we noticed that some students may benefit from coaching on how to properly use faculty to their advantage when working on a paper. The rules must be adjusted for context. If you are writing a term paper, you need to work with the instructor(s) for that course; ask others to read your paper only if you have your instructor’s permission. If you are working with a thesis advisor, consult with your advisor before you send your work to others. If you are working on your own and you are at an advanced stage, feel free to ask different people for feedback. With that in mind, below are some tips that resulted from my conversation with Brit, with extra help from Taffy Ross. Mutatis mutandis, the tips apply to non-philosophy papers:

1. Before you write a paper, make an outline. State your topic and thesis as clearly as possible. An outline may be as short as a paragraph or a few bullet points.

2. Feel free to ask one faculty member for comments on your outline.

3. If you are trying to figure out which of many topics or argumentative lines to pursue, make several outlines and show them to a faculty member.

4. If you get stuck, explain your problem to a faculty member and ask for advice. You may show them an unfinished paper if that’s the only way you know to convey the problem.

5. Write an actual paper. Don’t ask faculty to comment on your notes or rambling, unstructured writing. How do you know you have a paper? At a minimum, it must begin with an introduction, state a thesis, give an argument, and offer a conclusion.

6. Treat your first draft as a final draft. Before you show your paper to anyone, edit it until you can’t stand it. Check spelling and grammar. Format the paper carefully. Check and double-check your sources and make sure you acknowledge them all. Write a complete bibliography. Make sure the quotes are accurate. Make sure your writing is clear and precise. Make sure you understand every term you use. Make sure your argument is sound (by your lights). In short, make it the best paper you can. It doesn’t have to be ready for publication, but it shouldn’t distract the reader with errors or omissions that you could have corrected by yourself. Once you’ve produced the best draft you are capable of, you may show it to one faculty member.

7. While you are waiting for feedback, sit back and relax, or more likely, work on something else. Do not make any major changes until you get comments (within a reasonable amount of time). Otherwise, your reader is wasting her time commenting on something that may no longer be part of your paper. If you still have major revisions to make while waiting for feedback, it proves that you asked for feedback too early (see above).

8. If your first reader doesn’t seem to get your paper at all, stop asking her for feedback and ask someone else instead.

9. While revising your paper, do not ignore any of the comments. It is frustrating to read a second draft and discover some of the same problems, because the writer has ignored comments on the first draft. Take all comments into account. If you don’t understand a comment, ask your reader to clarify it. If you disagree with a comment, discuss it with your reader or incorporate it in the paper and give it a good response.

10. Before you ask more faculty members (besides your first reader) to comment on your paper, wait until your first reader appears to be satisfied with the paper. That is, wait until your draft receives a grade (if it’s a term paper) or comments that fail to uncover serious flaws in the paper. Only at this point should you contact other faculty members and ask them to comment on your paper. Otherwise, everyone will be spending their time identifying the same problems, or worse, giving you conflicting suggestions.

Monday, January 02, 2006

Classical Computation and Hypercomputation at the 2005 Eastern APA

On Wednesday, December 28, 2005, at the Eastern APA in NYC, we held our session on classical computation and hypercomputation. (For some background, see previous posts.) From my point of view, it went roughly as follows.

In my presentation, I argued that in discussions of the Physical Church-Turing Thesis (Physical CT), we need to distinguish between what I called a bold and a modest version. According to the bold version, which is popular in physics and philosophy of physics circles, everything that can be “physically done” is computable by Turing machines. I argued that this thesis (including its more precise formulations) is both too strong (i.e., it is falsified by genuine random processes and by a liberal use of real numbers) and not related to the original notion of computation that led to work on computability theory and CT in the first place. The original notion was the epistemological notion of what problems of a certain kind can be solved in a reliable way.

To maintain contact with the epistemological notion of computation, I argued that we need to formulate a modest version of Physical CT, according to which everything that can be “physically computed” can be computed by Turing machines. By “physically computed”, I mean computed by a process that can be used by a human observer to solve problems defined over strings of digits. In other words, modest Physical CT is true if and only if it is impossible to build a genuine hypercomputer, i.e., a device that can be used by a human observer to compute arbitrary values of a function that cannot be computed by Turing machines. Since it is presently unknown whether genuine hypercomputers can be built (though it seems unlikely that they can), the truth value of modest Physical CT remains to be determined (though the thesis is quite plausible).
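
In compressed form (these are my glosses here, not the precise formulations from the talk): bold Physical CT says that for every physical process, the behavior of that process is computable by Turing machines; modest Physical CT says that for every function f defined over strings of digits, if some physical process can be used by a human observer to generate arbitrary values of f, then f is computable by Turing machines. On the modest reading, Physical CT stands or falls with the impossibility of genuine hypercomputers.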

Following me, Oron Shagrir discussed accelerating Turing machines, i.e., Turing machines that execute each step in half the time of the previous step. In two units of time, these (notional) machines can go through infinitely many steps, thereby performing what is known in the literature as a supertask. This feature can be exploited to compute functions that are not computable by (ordinary) Turing machines. Oron argued that strictly speaking (and contrary to what Jack Copeland had written), accelerating Turing machines do not compute functions that are not computable by ordinary Turing machines. What computes such functions is a different kind of machine, which is formed by adding to accelerating Turing machines a formal definition of how to generate the state of the machine when the machine completes the second unit of time.
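
The arithmetic behind the supertask is a geometric series: if the first step takes one unit of time and each subsequent step takes half the time of the previous one, the total time for infinitely many steps is $1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots = \sum_{n=0}^{\infty} 2^{-n} = 2$, so every step is completed within two units of time.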

Following Oron, Jack Copeland gave his comments. He said a lot of things that I agree with. I will only comment on two objections he made to my paper.

First, he argued that computability theory is about an ontic notion of computation, which may or may not occur in the physical world regardless of whether we, human observers, can access it. For example, and contrary to what I argued, even a genuine random process (if it exists) counts as a genuine hypercomputer. (Jack offered his own formulation of Physical CT, but I didn’t write it down.) Suffice it to say that I disagree. There is nothing wrong with an ontic notion of computation, which abstracts completely from issues of “epistemological embedding” (Jack’s term)—except that it’s practically useless. And computer science is about building machines to do things for us. It is this potential for use that makes computation most interesting.

Second, Jack argued that we should avoid the term “Physical Church-Turing thesis”, because what goes under that name has nothing to do with what Church and Turing were talking about. They were talking about what may be computed by human beings, and nothing more.

On this second point, Jack is in agreement with Robin Gandy, who was a student of Turing’s, and Wilfried Sieg, a distinguished historian and philosopher of computation. To support his view, Jack quoted a well-known passage by Wittgenstein, according to which “Turing machines are humans who calculate”, and then proceeded to say that Turing made the same point when he said that humans who calculate are Turing machines (or something close).

Now, I do not have time to get into an extensive exegetical dispute with Jack and his allies. I have published a paper that bears on this and I hope to write more someday. But I will make a simple observation about the evidence given by Jack. First, there is no reason to believe that Wittgenstein is a reliable interpreter of Turing’s. The two disagreed on the philosophy of mathematics, as shown by their dialogue recorded in Wittgenstein’s Lectures on the Foundations of Mathematics. Second, there is a significant difference between saying that Turing machines are humans who calculate (which leaves open whether all humans who calculate, or only some, are Turing machines) and saying that humans who calculate are Turing machines (which leaves open whether all Turing machines, or only some, are humans who calculate). In other words, Turing’s statement is consistent with there being physical mechanisms that are Turing machines.
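
In quantificational terms (my gloss): Wittgenstein’s remark has the form $\forall x\,(TM(x) \rightarrow HC(x))$, “every Turing machine is a human who calculates”, whereas the statement attributed to Turing has the form $\forall x\,(HC(x) \rightarrow TM(x))$, “every human who calculates is a Turing machine”. Only the former rules out Turing machines that are not human calculators; the latter leaves room for physical mechanisms that are Turing machines.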

Fortunately for me, during the discussion Selmer Bringsjord, who is currently writing a paper on CT, took my side on the second point. He made the following observation. In computability theory textbooks, there are discussions of CT. Typically, these discussions do not restrict CT to what humans can compute. Instead, they assume that CT covers both humans and physical mechanisms. Are we to maintain that computer scientists and computability theorists are generally confused about their subject matter? Given Jack’s (and Gandy and Sieg’s) view, they are. Needless to say, I agree with Selmer that this is not plausible.