This article analyzes issues around the Turing Test (of machine intelligence). It ends with the Chinese Room (see http://www.iep.utm.edu/c/chineser.htm or http://en.wikipedia.org/wiki/Chinese_room for example) but talks about Ned Block's "Blockhead" argument in good detail before that (http://en.wikipedia.org/wiki/Blockhead).
What I find interesting about the Blockhead argument is that it seems to parallel Chomsky's quote on computers beating humans at chess very nicely. If conversation is reduced to "math" and the computer then emits responses that are plausibly human in a sort of rote or "uncomposed" fashion, then this is just a math problem. And computers are good at math. We don't say they are intelligent because of it.
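To make that intuition concrete, here is a minimal sketch (my own illustration, in Python; the table contents and function names are hypothetical, not from the article) of a Blockhead-style responder: every conversation prefix it can handle is pre-stored, and the "reply" is a pure lookup with no composition or reasoning.

```python
# A Blockhead-style responder: canned replies keyed by the conversation
# so far. The real thought experiment imagines an astronomically large
# tree covering every sensible exchange; this toy version has three entries.
RESPONSE_TREE = {
    ("hello",): "Hi there, how are you?",
    ("hello", "what is your favorite color?"): "Blue, I think. Yours?",
    ("hello", "do you ever get bored?"): "Constantly. Why do you ask?",
}

def blockhead_reply(history):
    """Return a humanly plausible reply by rote lookup alone."""
    return RESPONSE_TREE.get(tuple(history), "Hmm, tell me more.")

if __name__ == "__main__":
    print(blockhead_reply(["hello"]))
    print(blockhead_reply(["hello", "what is your favorite color?"]))
```

The point of the sketch is that the output can look perfectly conversational even though nothing resembling thinking happens inside; it is lookup all the way down.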
The article draws an eerie comparison between the Turing Test and Descartes' claim that no machine could be made to act like a human in the importantly human ways: thinking and talking.
The article covers common objections to the Turing Test: the Theological objection, the Lovelace objection, the Mathematical objection, and so on.
It seems to me that the idea of "passing" the Turing Test can be answered in two ways: a whitebox way and a blackbox way. In the whitebox case we know how the computer does it (e.g., brute force, or a Blockhead-style logic tree). In the blackbox case we don't know how it does it; we are simply convinced that we are talking to a human.
It seems that the Turing Test analysis wants to say: okay, suppose you can blackbox your way through the test, but that is not good enough. Now we must analyze your blackbox success in whitebox terms, by looking inside to see how it was done. If the whitebox analysis concludes that the methods of your program are the same as other methods we've already decided "aren't intelligent," then your blackbox success is meaningless.
I am not sure that is a reasonable way to proceed. This is the logic of (at least my minimal understanding of) Ned Block's Blockhead computer.
I am not sure the Turing Test is saying: not only will it fool you seventy percent of the time, but its mechanism will impress you one hundred percent of the time and will not be of the kind that you have decided beforehand isn't intelligent. I think this is the "moving the goalposts" problem: every time something passes a test, that test is declared unimportant to intelligence, and the goalposts are moved somewhere that hasn't been reached yet.
The Chinese Room argument is a different kettle of fish; however, it too seems to want to go "within" the computer and argue against it, even if it passes the Turing Test.