Thursday, June 29, 2006

Mind, Metaphysics, and Philosophy of Science

The theme underlying the current NEH Seminar in Mind and Metaphysics is that there is a deficit of metaphysics in contemporary philosophy of mind and a deficit of ontological seriousness in contemporary metaphysics. According to John Heil, who is the seminar organizer and director, much of the talk of counterfactuals, possible worlds, supervenience, propositions, and other devices favoured by philosophers, is ungrounded. If we can get it right on some basic ontological issues, Heil maintains, many of the central problems in philosophy of mind will be automatically solved. Once the ontology is correct, the philosophy of mind will take care of itself, as it were. For more details on this, including Heil's philosophical methodology, ontology, and view of the mind, see his recent book, From An Ontological Point of View (OUP, 2003).

I was raised to think in a different way, namely, that there is a deficit of philosophy of science in both contemporary philosophy of mind and contemporary metaphysics, and that if you want to do philosophy of science well, you need to understand science well.

Interestingly, these two diagnoses are mutually consistent.

Any thoughts on this? Does philosophy of mind need an infusion of good philosophy of science, good metaphysics, neither, or both?

Tuesday, June 27, 2006

More Mind Reading Technology

Sunday, June 25, 2006

Kirk Takes Zombies Back

Robert Kirk, Zombies and Consciousness, Oxford University Press, 2005.

Kirk is famous for inventing phenomenal zombies--creatures physically indistinguishable from us but lacking consciousness--and for using their possibility to refute physicalism. (The underlying idea goes back to Descartes.)

Kirk published his original papers on zombies in 1974. In recent years, David Chalmers has formulated a version of Kirk's zombie conceivability argument and put it at the center of debates on consciousness.

There is now a huge literature that discusses the possibility of zombies. It is only fitting that Kirk weighs in with his own book.

Kirk is now convinced that the zombie idea is incoherent. According to Kirk's new book, zombies are not possible, and hence they don't refute physicalism, after all.

The Argument for Concept Splitting from Language

In our forthcoming paper, "Splitting Concepts," Sam Scott and I argue, among other things, that the notion of concept may need to be split into linguistic representations (responsible for cognition that involves language) and nonlinguistic representations (responsible for the rest of cognition). Roughly, the reason is that linguistic cognition appears to require representations with more expressive and inferential power than the rest of cognition. Another way of putting the point: creatures that can learn and master languages are much smarter than creatures that cannot. We need an explanation for this fact, and a reasonable explanation might involve concepts of radically different kinds.

In his comments on the version of the paper that we presented at last year's SPP, Dan Ryder suggested that the argument from language at most shows that syntactic linguistic representations are special, whereas semantic representations (i.e., concepts) may be left unaffected.

In the paper, we have a multi-pronged response to this worry.

First, in so far as syntactic representations are needed to explain linguistic cognition, they belong in the theory of concepts, broadly construed. If you will, the concepts in question are concepts of syntactic categories, rather than concepts of kinds and properties in the domain of discourse. Nevertheless, they are concepts in the same sense in which other representations are concepts, and the fact that they are usually not called concepts in the literature is only a terminological point.

Second, the exact relationship between semantic and "syntactic" representations is controversial. Depending on what it turns out to be, the argument might affect semantic representations too. (E.g., perhaps there is no sharp distinction between syntactic and semantic representations.)

Finally, even if semantic and syntactic representations are sharply distinct, it remains possible (though we don't argue for it) that semantic linguistic representations are different in kind from non-linguistic ones.

In a comment to a previous post, Dan Ryder expresses skepticism about our response to his worry. He writes:

"First: surely the psychologists' and linguists' use of a different term here indicates that they think syntactic processing involves a different kind of representation, i.e. some non-conceptual kind of representation - and isn't the paper supposed to be about the scientists' notion of concepts? And second, the theory of concepts is not "the theory of the representations that explain phenomena (1) to (6)." There's no such thing as *the* representations that explain (1) to (6). (1) to (6) will involve all sorts of perceptual representations, for instance, and most scientists doubt those belong to the same kind of representation that concepts do. (Note that many think that syntactic representations are more like perceptual representations than conceptual ones.)"

Dan's comments are relevant and helpful, but they do not affect the important point underlying our argument.

With respect to the terminological point, I agree that people typically use different terms to mean different things. The question is whether the difference is relevant for present purposes. Language is used to represent the domain of discourse, and in that respect, there is a useful distinction between semantic representations (which represent objects and properties in the domain of discourse) and syntactic representations (which do not). But syntactic representations still represent: they represent properties of linguistic structures (which are still aspects of the world, by the way). So in so far as "concept" means, roughly, representation of some aspect of the world, both semantic and syntactic representations are concepts. Also, "concepts" as psychologists use the term are representations postulated to explain certain cognitive capacities. So in so far as both semantic and syntactic representations are needed to explain the same capacity, they belong in the same psychological theory. Bottom line: given the way the term "concept" is used in the literature, there is one respect (here not very important) in which syntactic representations do not count as concepts, but there are other respects (here relevant) in which they do. And by the argument from language, linguistic representations (syntactic, "semantic," or both) are different in kind from nonlinguistic ones.

With respect to Dan's second point (contrasting perceptual and conceptual representations), I am skeptical of the traditional contrast between perceptual and conceptual representations. I think all representations are "perceptual", at least in the minimal sense that they originate with the brain's processing of perceptual information. And I think all representations are "conceptual," at least in the minimal sense that they discriminate between what falls under them and what doesn't. Perhaps the perceptual-conceptual dichotomy constitutes a continuum rather than a sharp divide. (BTW, I'm taking no stance with respect to the nativism-empiricism debate.) But this way of putting things is inadequate, because it uses the ambiguous term "concept". The whole point of our paper is that there is no single notion of concept: there are many. In one sense, concepts are linguistic representations. In another sense, they are representations underlying nonlinguistic cognition. (And there may be other ways that concepts split.)

Thursday, June 22, 2006

Simulation Theory

Alvin Goldman, Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading, OUP, 2006.

How do we understand other minds? I was introduced to this problem by a psychologist, Susan Johnson. In her course on Theory of Mind, the main debate seemed to be between those who think we have an innate theory of mind and those who think we learn the theory from experience.

Later, I found out that there was another debate, between those who think we understand other minds via a theory--the theory theorists (who include most psychologists)--and those who think we understand other minds by simulating them within our own mind--the simulation theorists. At first, I thought there was little to the theory vs. simulation debate. After all, the psychologists didn't seem too concerned with it. And besides, what exactly is the difference between understanding other minds via theories and understanding them via simulations?

Then I met Robert Gordon, the founder of simulation theory, and I read a draft of Goldman's new book, in which he defends a hybrid theory-simulation theory, with emphasis on simulation. It turns out that matters are more complicated than I thought. The debate is interesting and has far reaching consequences for both psychology and philosophy of mind.

Since I don't have time to write about it myself, if you are interested in this, you'll have to read Goldman's book. Even if you don't entirely agree with him, you'll be impressed by the wide range of evidence and considerations that he musters.

Tuesday, June 20, 2006

Do Determinables Exist? (2)

In a previous post, I expressed scepticism about a recent argument by Gillett and Rives to the effect that determinable properties don't exist: only determinate properties do. Yesterday, we discussed Gillett and Rives' paper in the NEH Summer Seminar on Mind and Metaphysics. Curiously, John Heil (who probably doesn't read my blog) expressed the same criticism that I had made in my post (though unlike me, Heil has little sympathy for Shoemaker's "subset view").

I'll take this as an opportunity to look at a reply kindly sent to me by Brad Rives:
"Thanks for your comments on the paper. I'm not sure that I understand your response to the parsimony argument. You say: "Determinables exist because the causal powers that constitute them exist; it's just that there are other relevant causal powers beyond them. Hence, there is neither double-counting of causal powers nor causal overdetermination."
I don't see how this follows. Suppose determinables are constituted by powers that are subsets of those that constitute their determinates. It's true that on this view particulars will have the powers that individuate determinables, but the point of the simplicity argument is that the determinables won't be contributing any causal powers to particulars that aren't contributed by some or other determinate. We can thus account for ALL the powers of particulars simply by attributing determinates to them, whereas this isn't true of determinables. Assuming we should only posit those properties needed to account for the causal powers of particulars, the argument concludes that we shouldn't posit determinables. If you suppose that both determinate and determinables are instantiated, it's hard to see how there won't be overdetermination of powers. Since some of the powers that individuate a determinate also individuate the determinable, those powers will be contributed by two distinct properties, which just is overdetermination. Convinced?"


The worry about overdetermination arises only if a property is something over and above the powers that it "contributes" and, most importantly, the relation between a determinable property and its determinates is not analogous to the part-whole relation that holds between the powers they "contribute".

If all there is to properties is powers and the powers of determinables are a subset of the powers of determinates, then there is no double counting and no overdetermination. But even if properties are something more than the powers they "contribute," there is still no overdetermination provided that determinables stand in a part-whole relation to their determinates.

Workshop on Computation

The workshop on the Origins and Nature of Computation is over. It was an amazing experience: many of the best computability theorists, computer scientists, philosophers of computation, and historians of computation, all in discussion together.

One of the presenters, Stewart Shapiro, has a new book on Vagueness in Context (OUP, 2006), which looks very interesting, especially for philosophers interested in concepts.

Another presenter, Saul Kripke, gave a provocative talk arguing that the best argument for the Church-Turing thesis is based on the idea that computation is a form of valid mathematical reasoning, plus the principle that all forms of valid mathematical reasoning can be formalised in first order logic, the completeness of first order logic, and the fact that first order logic is recursive. Something to think about.

I wish I had time to write more on the workshop but I don't. I invite everyone to look up the presenters and their published works.

Friday, June 16, 2006

True Bill Gates Story

As told by Martin Davis:

A woman heading a research group on theoretical computer science at Microsoft meets Bill Gates for the first time. She tells him how wonderful it is for Bill Gates to head a company that supports a group--her research group--that has little chance of generating any application for a hundred years or so.

Bill Gates looks at her, looks at the man who introduced her to him, and says: "What is she talking about?"

Monday, June 12, 2006

Randomness and Computation

In my Eastern APA talk last year, I argued that a genuinely random physical process should not be counted as a computation (in the interesting sense of the term). Jack Copeland disagreed.

So here in Jerusalem, I took the opportunity to ask two "great men" for their opinion. I'm happy to say that both Michael Rabin and Martin Davis appear to share my opinion. Rabin told me that a random process is not a computation because it's not repeatable, and repeatability is a feature of computation. Davis, after resisting my question for a while (on the grounds that any actual physical process is finite and hence it's not clear in which sense it would deserve to be called random), said he wouldn't call a random process a computation.
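Rabin's repeatability criterion can be illustrated with a toy example (my sketch, not anything Rabin said): a computation maps the same input to the same output on every run, whereas a genuinely random process need not.

```python
import random

# A toy illustration of the repeatability criterion: a computation
# yields the same output on the same input every time; a random
# process (here a pseudorandom stand-in) need not.

def compute(x):
    # Deterministic procedure: repeatable by definition.
    return x * x + 1

def random_process(x):
    # Stand-in for a genuinely random physical process; its output
    # does not depend on the input in any repeatable way.
    return random.random()

# Two runs of the deterministic procedure on the same input agree;
# two runs of the random process almost surely differ.
print(compute(7) == compute(7))  # True
```

Of course, a pseudorandom generator is itself a deterministic computation, which is precisely why a toy like this can only gesture at the distinction Rabin has in mind.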

Dinner with John McCarthy

Here are some things I found out last night from John McCarthy:

When he famously created Lisp (which became the standard programming language in AI), he had gotten the idea of list processing from Allen Newell and Herbert Simon at the Dartmouth conference on AI in 1956.

When he "stole" the lambda notation from Alonzo Church and used it in creating Lisp (giving credit to Church, of course), he didn't know that the lambda calculus was already a universal formalism for computation, because he had bought Church's book but didn't read it all the way through. Had he known that the lambda calculus is a universal computing formalism, he might have tried to create a language based entirely on the lambda calculus, as people did 20 years later. But that, he says, would not have been as good a language as Lisp.
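As a side illustration of what it means for the lambda calculus to be universal (my sketch, not McCarthy's), Church numerals show how data and arithmetic can be built out of functions alone, with Python lambdas standing in for lambda-calculus terms:

```python
# Church encodings: numbers as higher-order functions.
# The numeral n is the function that applies f to x exactly n times.

zero = lambda f: lambda x: x                       # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # one more application of f
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n applications

def to_int(n):
    # Decode a Church numeral by counting how many times it applies f.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5
```

Languages "based entirely on the lambda calculus" in this sense did appear later (pure functional languages), which is presumably what McCarthy had in mind.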

When he and Marvin Minsky decided to start a research project on AI at MIT, he intentionally avoided seeking the opinions of (then influential MIT professors) Warren McCulloch and Norbert Wiener, because he thought they would have strong views on how he and Minsky should proceed and would try to influence their research.

The Origins and Nature of Computation

I'm in Jerusalem at the workshop on The Origins and Nature of Computation, which started today. Many of the most prominent historians and philosophers of computation are here (e.g., Kripke, Copeland, Sieg, Shagrir, Stewart Shapiro), and so are some of the founding fathers of computer science (e.g., Martin Davis, John McCarthy, Michael Rabin) as well as many prominent computer scientists. The workshop was organized years ago, before I got my Ph.D., hence before I was in any position to be invited. But due to the Intifada, the workshop had to be postponed. I am honored and humbled to have been invited to be here.

I hope I'll have some time to post about it.

Friday, June 09, 2006

Resources for Students

My trusted assistant, John Gabriel, has created a list of online resources on how to study, write papers, apply to graduate school, publish, and get a job in philosophy. Some students may find it useful. Some of the links are to previous posts on this blog, but there is much else besides. The list is also part of my permanent website.

How to Write a Philosophy Paper

Useful web tutorials on writing philosophy papers.

Guidelines on Writing a Philosophy Paper by Jim Pryor
A Brief Guide to Writing Philosophy Papers by Richard Field
Tips on Writing a Philosophy Paper by Douglas W. Portmore
Writing a Philosophy Paper by Peter Horban
How to Write a Philosophy Paper by Jeff McLaughlin

How to Study and Keep a Reading Notebook

Each person studies differently. But there are some worthwhile strategies that most successful students use.

How to Study Philosophy from Queen’s University Belfast
How to Study by William J. Rapaport
Keeping a Reading Notebook by William J. Rapaport
Questions to Consider When Making Reading Notebook Entries from Northern Illinois University

How to Use Faculty Feedback

Recommendations on using faculty to your advantage when working on a paper.

How to Improve Your Paper by Judicious Use of Faculty

How to Apply to Graduate School

Information on whether, how, and where to apply to graduate programs in philosophy.

Applying to Graduate Schools from the Philosophical Gourmet Report
Should I Apply to Graduate School? from the University of Alberta
The Overall Ranking of Graduate Programs in Philosophy in the English-Speaking World from the Philosophical Gourmet Report

How to Publish Your Work

Some tips on publishing your work in philosophy journals.

Getting Published as a Graduate Student
On Avoiding Rejection by Journals by Nancy D. Simco
An Informal Ranking of Journals that Publish in Philosophy of Mind
Philosophy Journals: Which Ones are Responsible, Which Ones Not? by Brian Leiter

How to Get a Job in Philosophy

Advice for philosophy job seekers: General information and observations on applying for philosophy jobs.

Getting a Job in Philosophy and Getting a Job in the USA from the Australian National University
On the Philosophy Job Market
More on the Philosophy Job Market
Advice for Academic Job Seekers from Leiter Reports

Referee Humor

I just saw this mockery of philosophy refereeing by Chase Wrenn. Funny.

Thursday, June 01, 2006

A New History of Cognitive Science

Margaret Boden, Mind as Machine: A History of Cognitive Science, Oxford: Oxford University Press, due July 15, 2006.

In my opinion, Boden is one of the best philosophers of AI and cognitive science. (I say this because she is probably less recognized and cited, at least in the U.S., than she deserves.)

For some time, Boden has been working on a monumental (two volumes, 1,600 pages!) history of cognitive science, which, I'm happy to notice, is about to come out.

Boden understands the history of cognitive science better than most. For example, she is one of the few people who has written that the classical computational theory of mind and connectionist versions of the computational theory have a joint origin in Warren McCulloch and Walter Pitts's classic 1943 paper, and are conceptually closer than many participants in the classicism-connectionism debate seem to realize. (See Boden, M. (1991). "Horses of a Different Color?" In Philosophy and Connectionist Theory, ed. by W. Ramsey, S. P. Stich and D. E. Rumelhart. Hillsdale, LEA: 3-19. For more details on McCulloch and Pitts's theory and its historical and conceptual importance, see my recent paper in Synthese on the subject.)

The price is steep ($225), so some of you won't be able to reserve your personal copy on Amazon. But you should at least consider asking your university library to buy it. I, for one, can't wait to see it.

Another Robot

A short report on a robot designed to experiment with its environment and learn "like a human infant," based on neural network technology and testing neuroscience models. (Link courtesy of my student, Adam Hartke.)