Kuro5hin.org: technology and culture, from the trenches

Is the Brain Equivalent to a Turing Machine?

By ZeuS572 in Technology
Mon Mar 17, 2003 at 02:00:56 PM EST
Tags: Science

From NewScientist.com:

"The world's first brain prosthesis - an artificial hippocampus - is about to be tested in California. Unlike devices like cochlear implants, which merely stimulate brain activity, this silicon chip implant will perform the same processes as the damaged part of the brain it is replacing.
The prosthesis will first be tested on tissue from rats' brains, and then on live animals. If all goes well, it will then be tested as a way to help people who have suffered brain damage due to stroke, epilepsy or Alzheimer's disease."



This brings up the key question that this article will focus on - Is the Brain a Turing Complete Machine?

Description of a Turing Machine

Alan Turing was a mathematician and logician in the 20th century. He is often considered the pioneer of computer science as we know it today. He helped the British Government crack German codes during World War II. His world-changing contribution to society was the Turing machine.

For a history of Alan Turing, the man, see: http://www.turing.org.uk/turing/index.html

Introduced by Alan Turing in 1936, a Turing machine is a very simple kind of computer whose operations are limited to reading and writing symbols on a tape, or moving along the tape to the left or right. This tape is divided into squares, any square of which may contain a symbol from a finite alphabet, with the restriction that there can be only finitely many non-blank squares on the tape. The Turing machine can only read or write on one of these squares at once--the square located directly below its "read/write" head.

In addition to a finite alphabet, the Turing machine has a finite set of states. The machine is guided by a set of instructions of the form: given the current state and the current symbol, switch to a new state, write a new symbol, and move the head either left or right. The state table defines these instructions.
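
To make the state-table idea concrete, here is a toy simulator in Python (a rough sketch of my own, not taken from any of the linked material; the example machine is the classic two-state "busy beaver", chosen purely for illustration):

    # Minimal Turing machine simulator: a tape, a head, a state, and a state table.
    def run(table, state, halt, max_steps=1000):
        tape, head = {}, 0                      # unwritten squares read as blank '0'
        for _ in range(max_steps):
            if state == halt:
                break
            symbol = tape.get(head, '0')
            state, tape[head], move = table[(state, symbol)]
            head += 1 if move == 'R' else -1
        return tape

    # State table: (state, symbol) -> (new state, new symbol, head move)
    table = {('A', '0'): ('B', '1', 'R'), ('A', '1'): ('B', '1', 'L'),
             ('B', '0'): ('A', '1', 'L'), ('B', '1'): ('H', '1', 'R')}
    print(run(table, 'A', 'H'))                 # halts with four 1s written on the tape

Everything the machine "knows" is in that little table; the rest is bookkeeping.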

Turing machines are important because they represent what computers can and cannot do. Any deterministic algorithm can be mapped to a Turing machine algorithm. If an algorithm is described as "Turing Complete," then it meets the requirements of a Turing Machine.

The Implications

If our brain or a self-contained part of the brain could be mapped to a Turing machine, what would that signify? There are some significant developments that could come out of such a discovery.

We could start with the creation of Artificial Intelligence (artificial only in the sense that it's not in a living human; it would otherwise be a copy of intelligence from a real brain). If we could map the operations of the brain to a Turing machine, then we could replicate this behavior in a machine. We would have a non-organic human mind, living in a computer. Not only would it be fascinating to have such a mind, but it could serve several useful purposes. For example, brains could be sustained and utilized with fewer resources than are necessary for traditional humans to function. We could have three brains operating for the price of one human. Secondly, we could modify certain functions and see how the overall model of the human brain was affected. The discoveries coming from this research could lead to medical advances that aid real humans. For example, we could replicate an Alzheimer's brain, keep modifying it until it no longer suffered from the ailment, and then replicate those steps in humans actually suffering from the disease. We could use these models to find enhancements that could be made to the human brain (beyond simply curing diseases). And finally, we could find out what a human could be like with more processing power--we could speed up a human brain and find out how much more it can process, and whether it changes its personality and/or inherent nature.

Besides the discoveries in computation, we could also learn to modify the state tables of humans. Perhaps we could implant memories in humans (a la the movie "Total Recall"). Perhaps every child would be implanted with the knowledge that current adults have, and they would start off with this base knowledge at an early age.

Clearly, the possibilities are endless (within the limits of human imagination!).

The Counter Argument - Non-Computational States

A showstopper argument often given is that the brain has states that can't be measured finitely, or that follow patterns that cannot be described mathematically. If this were the case, we wouldn't be able to map the brain to a Turing machine, and thus wouldn't be able to map it to a computer in the way that we currently create algorithms and programs.

The existence within mathematics of true statements that cannot be proved by a given formal system was first demonstrated by Kurt Gödel in his incompleteness theorems, written in 'response' to Bertrand Russell and Alfred North Whitehead's Principia Mathematica.

The well-known author Douglas Hofstadter writes in his beautiful book Gödel, Escher, Bach: An Eternal Golden Braid that these brain patterns arise from self-referencing loops (what he calls "strange loops"). One example he gives of a self-referencing statement that creates a loop is to evaluate whether the following MEANINGFUL statement (it could be an axiom) is true or false-- "This Statement is False." The statement simply makes an assertion; but because of its self-referencing nature, it creates a state which is neither true nor false (or possibly both true and false). If it were a true statement, it would be false, since it proclaims itself to be false (and what it proclaims is true); if it is false, then it must be true, since it tells the truth about itself.

Hofstadter believes that consciousness arises from such self-referencing loops.

Also, consider this--If the brain modifies itself, then we could possibly never map it to a Turing machine, because the instruction table would keep changing. The self-modification aspect would be hard to replicate.

What does the reported discovery promise?

Though this development in itself is not enough to say whether the brain is or isn't a Turing machine, it does bring us one step closer to mapping the brain to a Turing machine (albeit a very small step).

Though they couldn't understand how it worked, they could copy it (i.e. mimic the state table and the instruction set). It's important to keep this in perspective: they haven't yet tested it fully, but they are closer to this than anyone has been before. That's what makes this quite exciting. The road to replicating AI is much longer: we would have to finish successfully replicating the hippocampus (which is the easiest part), and then move on to the more complicated parts. Presumably, if we could find a method to replicate without understanding, we should be able to use that method on any part of the brain.
"No one understands how the hippocampus encodes information. So the team simply copied its behaviour. Slices of rat hippocampus were stimulated with electrical signals, millions of times over, until they could be sure which electrical input produces a corresponding output. Putting the information from various slices together gave the team a mathematical model of the entire hippocampus."

Also, of interest is the following fact: The brain processes information in parallel, whereas a Turing machine is ultimately serial. In order to mimic a parallel machine in a Turing machine, you would have to have an exponentially larger number of states. And yet they did it with computer technology available today!

What about consciousness? The researchers haven't yet gotten that far. The experiment currently focuses on the hippocampus. But they will try:
"The researchers developing the brain prosthesis see it as a test case. `If you can't do it with the hippocampus you can't do it with anything,' says team leader Theodore Berger of the University of Southern California in Los Angeles. The hippocampus is the most ordered and structured part of the brain, and one of the most studied. Importantly, it is also relatively easy to test its function."

If they can do it to the hippocampus without understanding its functions, who knows what will happen with deeper levels!

Part 2: Ethics

What does this mean with regard to the value of human life? If we can be easily replicated, copied, or replaced, what will be left of our inherent humanness?

Would you support behavior modification?
Would you support implanted memories? (a la "Total Recall")

One other important ethical point is that of the "life" of the machine that was created (via replication). Even if the machine is not living in the traditional sense, it still is aware of itself and has the functions of a human. Would it be ethical to then modify this behavior? Could the machine feel pain as we do? If we create weird states for it, who experiences these states?

And if you argue about protecting this machine, then what about animal research today? Do you support that?

Some More Reading

About Turing Machines:
http://www.ams.org/new-in-math/cover/turing.html
http://www-csli.stanford.edu/hp/Turing1.html
http://grail.cba.csuohio.edu/~somos/bb.html
The Myth of the Turing Machine
An Alternative View of CS:Quantum Computing

AI- Is the Brain a Turing Machine?
A Google Query
Godel, Escher, Bach: An Eternal Golden Braid
The Singularity Institute
Creating Friendly AI

Is the Brain Equivalent to a Turing Machine? | 104 comments (77 topical, 27 editorial, 0 hidden)
Aren't you really asking..... (4.50 / 6) (#2)
by Blarney on Sat Mar 15, 2003 at 02:41:25 AM EST

If the brain relied entirely on cells which performed certain operations in a manner entirely dependent upon sensory input and signals from other cells, it would be simulable by a Turing Machine. The Newtonian deterministic billiard-ball universe is Turing Complete.

The only way that the brain could not be Turing complete would be if it were a chaotic system, influenced by even the smallest perturbations such as quantum mechanics would randomly produce. In other words, there would have to be randomness feeding our thought processes. But can we really call quantum indeterministic influences random? Perhaps they are - or perhaps they're simply part of a bigger system of which our observable Universe is only a small part. So if the brain is not Turing complete, we are guided by subatomic signals that cannot be characterized by science - maybe this is what believers would call a soul.

Now, just because the Newtonian deterministic universe is Turing complete doesn't mean you can get a computer big enough to actually simulate it.....

Anandamide (3.66 / 3) (#7)
by artsygeek on Sat Mar 15, 2003 at 04:27:17 AM EST

I'm just wondering if there's an electronic equivalent.

Anandamide essentially acts as a filter, blocking the memorization of "extraneous" information. It keeps us from remembering every little detail. To quote the co-discoverer of anandamide, Raphael Mechoulam: "Would you want to remember every face you saw on the subway?"

So what happens if (2.00 / 1) (#16)
by Ward57 on Sat Mar 15, 2003 at 02:48:38 PM EST

one day, we can replace the entire brain with silicon without changing the personality of the human being involved? I'm sure I've read some science fiction on exactly this subject. *They* claimed the result was immortality.

-1: Bored with this topic (1.00 / 3) (#23)
by srn on Sun Mar 16, 2003 at 04:47:19 AM EST

There was good discussion on the last attempt to get an article up. Enough is enough.

Some scattered comments (4.90 / 11) (#27)
by jacob on Sun Mar 16, 2003 at 08:58:52 AM EST

First, self-modifying Turing machines have the same computational power as regular Turing machines. (Just consider what would happen to complexity theory if this weren't true!)

Second, your implications section is pure pie in the sky. People have been working on writing programs to simulate cognitive functions for 60 years now, and the results are depressingly bad -- the upshot is that we've got no idea how even very small subparts of the brain work. This has nothing to do with Turing-completeness: even if the brain computes functions a Turing machine can't, there are still other functions the brain computes that a Turing machine can, and we don't have those ones down yet; and the belief that there exists a Turing machine that somehow performs cognitive functions hasn't made things any easier.

Another point to be made is that Turing-machines are good for reasoning about decidability of decision problems (questions with yes or no answers) but not as good for reasoning about other kinds of problems. Brains don't get all their input at once, for example, and have the ability to choose what inputs they receive (you can turn your head, smell, close your eyes, touch things, and so on) and these capabilities don't necessarily have good representations in Turing-machine world. So if you want to meaningfully equate brains and Turing-machines, you need to describe a way of taking brain problems and posing them as decision problems.


--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

BrainFuck - a Turing-complete language (4.57 / 7) (#28)
by bdesham on Sun Mar 16, 2003 at 02:19:42 PM EST

A good example of a Turing-complete programming language is BrainFuck. Although pretty much any other language has a much larger set of features, BF shows how you have to do things on a "pure" Turing Machine-- that is, one with only the language's eight bare-bones commands ([, ], <, >, ., ,, +, and -).

For example, adding two values together -- an operation usually done with a single "+" -- is accomplished with the command [<+>-]<. Multiplication? Take a look at >[-]>[-]<< <[>[>+>+<<-] >[<+>-] <-] >>>[<<<+>>>-]<<< . Heck, even the "Hello World" program is ungodly long. The prime number checker, then, can give you brain problems if you try to figure it out.

Sure, BF ain't pretty, but it'll give you a new appreciation of high-level programming languages and how much they simplify the process of programming as compared to a simple Turing machine (which could, theoretically, do anything that, say, C could do).
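
For the curious, a workable interpreter for the whole language fits in a screenful of Python (a rough sketch of my own, assuming a fixed 30,000-cell tape, byte-sized cells, and no error handling):

    # Minimal Brainfuck interpreter: 30,000 byte cells, a data pointer, and eight commands.
    def bf(code, input_data=""):
        tape, ptr, ip, inp, out = [0] * 30000, 0, 0, 0, []
        jumps, stack = {}, []                     # pre-match brackets for [ and ] jumps
        for i, c in enumerate(code):
            if c == '[':
                stack.append(i)
            elif c == ']':
                j = stack.pop()
                jumps[i], jumps[j] = j, i
        while ip < len(code):
            c = code[ip]
            if c == '>':   ptr += 1
            elif c == '<': ptr -= 1
            elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
            elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
            elif c == '.': out.append(chr(tape[ptr]))
            elif c == ',':
                tape[ptr] = ord(input_data[inp]) if inp < len(input_data) else 0
                inp += 1
            elif c == '[' and tape[ptr] == 0: ip = jumps[ip]   # skip the loop body
            elif c == ']' and tape[ptr] != 0: ip = jumps[ip]   # jump back to the matching [
            ip += 1
        return ''.join(out)

    # The addition idiom from above: 2 + 3 computed as [<+>-], leaving 5 in the first cell.
    print(ord(bf("++>+++[<+>-]<.")))              # 5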

--
"Scattered showers my ass." -- Noah
Oh (4.00 / 4) (#30)
by Big Sexxy Joe on Sun Mar 16, 2003 at 06:22:37 PM EST

Is that why I have an infinite amount of tape coming out of both of my ears?

I'm like Jesus, only better.
Democracy Now! - your daily, uncensored, corporate-free grassroots news hour
A computer is not a Turing machine (3.30 / 13) (#35)
by acronos on Mon Mar 17, 2003 at 12:40:44 AM EST

Because so many people get this confused, I thought it important to make this distinction.

Computers can do several things a Turing machine cannot.

The first and most significant difference is that a computer can use interrupts.  Interrupts allow a computer to stop a currently running task and start a different one based on a trigger such as a timer or key press.

This means a computer can recover from the "halting problem" while a Turing machine cannot.  Use of a timer interrupt can determine when an algorithm has been running too long and stop or alter that algorithm.  An example of this in practice is that newer versions of Windows can survive the crashing of a single program as long as the crashing program is not running in the core levels of the OS.  Even low level crashes could be stopped if the OS were designed properly, which hasn't happened yet.
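
As a rough illustration of the timer-interrupt idea (not a way to decide halting, just a way to bound how long we wait), something like this works on Unix with Python's signal module:

    # Sketch (Unix-only): a timer interrupt aborts a computation that runs too long.
    import signal

    def too_long(signum, frame):
        raise TimeoutError("computation exceeded its time budget")

    signal.signal(signal.SIGALRM, too_long)   # install the interrupt handler
    signal.alarm(2)                           # ask the OS to interrupt us in 2 seconds
    try:
        while True:                           # stand-in for a task we cannot prove will halt
            pass
    except TimeoutError:
        print("gave up and moved on")
    finally:
        signal.alarm(0)                       # cancel any pending alarm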

The second difference is in the design of inputs.  A traditional Turing machine is really just an algorithm that you put in motion.  After it starts, there are no additional inputs.  This means that the Turing machine as Turing modeled it would not be able to interact with its environment.  However, today's computers are not limited in this way.  They can have thousands of simultaneous inputs from networking to cameras to keyboards all changing the internal behavior of the computer.

The Universal Turing machine was intended to prove that extraordinarily complicated systems can be made from very simple designs.  It has been distorted by people with little imagination into saying that, since a Turing machine is simple and has problems, and a computer is essentially a Turing machine, a computer must be simple and have the same problems.  This does not follow.  We are not even close to finding the real limitations of our current computing hardware, much less our future hardware.  It is currently human complexity limitations and processor speed that are limiting computers, not their fundamental hardware design. The biggest problem is the very human limitation on how much complexity can be brought into a problem before everyone's minds are blown.  The second problem programmers run into is speed.  The speed component is what forces programmers to have finesse and greatly increases complexity, feeding back into the first problem.  The jury really is still out on whether we will achieve human-level AI.  Those who say differently on either side are selling snake oil.

However, to answer your question, the brain is definitely NOT a Turing machine, but my gut feeling is that it could probably be emulated on something similar to one.


Welcome to my sandbox (4.77 / 9) (#43)
by iGrrrl on Mon Mar 17, 2003 at 09:12:49 AM EST

I'd have to say that the brain is not any kind of Turing machine. It's too plastic, stochastic, and dependent upon metabolic states.
If the brain modifies itself, then we could possibly never map it to a Turing machine, because the instruction table would keep changing.
Guess what? Portions of the brain constantly modify themselves. Modifications come in short- and long-term changes in synaptic strength and in local changes in the "wiring diagram."

This is, I think, one of the biggest hurdles for the project. Researchers use the hippocampus for most of the synaptic studies on cellular models of learning and memory. The choice is made in part due to the regularity of the architecture, and in part due to the fact that the hippocampus is necessary for translating short-term memory into long-term memory. No one knows how this happens, but one standard underlying hypothesis is that long- and short-term synaptic plasticity are crucial for this function.

Each pyramidal cell in the hippocampus receives input from many different other neurons, both stimulatory and inhibitory. Specific synapses can be strengthened or inhibited. On top of that, the chance of neurotransmitter release in response to any given stimulus is well below 1. The network is governed by stochastic processes that can be modified by changing the probability of transmitter release. Given that activity changes the gene expression profile of a given neuron, one hypothesis states that each cell in the network may have an entirely unique profile of transmitter receptors and ion channels -- a profile that is plastic given activity. I've written more about this elsewhere.
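
To give a flavour of what that stochastic, activity-dependent behaviour looks like, here is a toy sketch in Python (the release probability and the update rule below are invented for illustration, not taken from the literature):

    # Toy stochastic synapse: transmitter release happens with probability p < 1,
    # and p itself drifts with recent activity (a crude stand-in for plasticity).
    import random

    class Synapse:
        def __init__(self, p_release=0.3):
            self.p = p_release

        def stimulate(self):
            released = random.random() < self.p
            # Successful release nudges the synapse stronger, failure nudges it weaker.
            self.p = min(0.95, self.p + 0.01) if released else max(0.05, self.p - 0.005)
            return released

    syn = Synapse()
    hits = sum(syn.stimulate() for _ in range(1000))
    print(hits, "releases out of 1000 stimuli; final release probability", round(syn.p, 3))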

In sum, I think they're trying to do something very difficult, I salute the attempt, and I'll be stunned if they succeed.

I don't think it's impossible. People have done some remarkable work stimulating the first layer of processing in the olfactory system (sense of smell). In fact, they've done it well enough to make an artificial dog's nose. (Disclaimer: I'm talking about my husband.) However, I think that encoding discrete stimuli, even through the combinatorial means thought to prevail in olfaction, is easier than the kinds of networks in the hippocampus.

But I could be wrong.

The researcher mentioned in the New Scientist article, Dr. Theodore Berger, published something very interesting about their preliminary work. Here's the abstract:

A new type of biosensor, based on hippocampal slices cultured on multielectrode arrays, and using nonlinear systems analysis for the detection and classification of agents interfering with cognitive function is described. A new method for calculating first and second order kernel was applied for impulse input-spike output datasets and results are presented to show the reliability of the estimations of this parameter. We further decomposed second order kernels as a sum of nine exponentially decaying Laguerre base functions. The data indicate that the method also reliably estimates these nine parameters. Thus, the state of the system can now be described with a set of ten parameters (first order kernel plus nine coefficients of Laguerre base functions) that can be used for detection and classification purposes.
[Gholmieh G. Soussou W. Courellis S. Marmarelis V. Berger T. Baudry M. A biosensor for detecting changes in cognitive processing based on nonlinear systems analysis. Biosensors & Bioelectronics. 16(7-8):491-501, 2001 Sep.]

This paper is important to the so-called hippocampal prosthesis because they claim they are basing the design on input-output parameters measured in slices of hippocampus, and the best data would come from slices grown on microchips. However, in the paper they used only field potentials (rather than single cell measures), albeit with arrays of electrodes that could give spatio-temporal information. Given what I said above about the cell-specificity of synaptic plasticity, this muddies the data. The goal of the published experiments, however, was not to gather data in advance of a prosthesis. From the paper:

Our goal is to quantify accurately the nonlinear characteristics of neuronal activity over time and online for two general purposes: (1) to identify biologically-based nonlinear dynamics for incorporation into biomimetic systems designed for pattern recognition and (2) to reliably detect systematic changes caused by exposure to chemical-biological agents.
In other words, they were trying to make a tissue-based biosensor. The model would detect changes due to outside influence. The data from such an experiment are invaluable for the creation of a silicon-based bioimitator. My skepticism arises out of a sense that the mathematical model (based on this data and reported in the paper) has the limitation of not including self-modification. The analysis focused on finding changes based on outside influences (chemicals). Strictly speaking, one could call changes in input patterns from the natural hippocampus to the artificial one an outside influence. However, that doesn't account for local processing.

All that said, I didn't think much of the write-up above. As much as the popular press have tried to make of this coming experiment (which hasn't even succeeded in vitro, let alone in vivo), the article above stretches even further into the realm of arm waving.

--
You cannot have a reasonable conversation with someone who regards other people as toys to be played with. localroger
remove apostrophe for email.

I should know (3.00 / 5) (#45)
by 8 out of 10 doctors on Mon Mar 17, 2003 at 11:28:25 AM EST

As someone who studies brains for a living and Turing machines for a hobby, I'm glad k5 is talking about this. The piece is generally well supported/cited (even considering the challenger posts) and raises interesting questions without trying to answer them definitively.

In short, I like it.

Turing machine, or Chinese room II (3.00 / 1) (#48)
by ChaosD on Mon Mar 17, 2003 at 12:33:17 PM EST

I still think that any discussion about Turing machines and brain function should include a reference to John Searle's Chinese Room thought experiment.
-----------------------------
There are no stupid questions
Othersitish (4.80 / 5) (#64)
by jforan on Mon Mar 17, 2003 at 04:00:42 PM EST

"If an algorithm is described as 'Turing Complete,' then it meets the requirements of a Turing Machine."

Actually, only systems can be Turing Complete.  In algorithms theory, the term completeness kind of implies "if and only if" (iff); a Turing Complete machine can implement all "functionality" implementable by a Turing Machine, and vice versa.

The definition of a Turing Machine, however, has nothing to do with how quickly algorithms can process various inputs (except that, given certain inputs, a Turing Machine must produce the result (halt) in a finite amount of time).  Many algorithmically fast designs of various computation systems are Turing Complete, including various parallel Turing Machine designs.  This should be mentioned alongside the parallel algorithm argument you made:

"Also, of interest is the following fact: The brain processes information in parallel, whereas a Turing machine is ultimately serial."

To be Turing Complete, this doesn't really matter, unless the level of parallelism is defined by the size of the input, which would probably not be the case with the brain.

Additionally, the algorithms used to implement various functionality on a Turing Machine could alter their instruction set by first creating an instruction set separate from the "left right write read" instruction set that is intrinsically available to the Turing Machine.  It doesn't matter that "the self-modification aspect would be hard to replicate", but rather that it can exist on a Turing Machine.  This is computation >theory<, not practice, after all.

My point in general is that the actual implementation of the mapping is not relevant to the concept of Turing Completeness; only the  provability of the existence of that mapping.

I guess I wouldn't be so picky, except that
"the key question that this article will focus on [is '] Is the Brain a Turing Complete Machine? [']".

Jeff

I hops to be barley workin'.

What! (3.50 / 2) (#65)
by miah on Mon Mar 17, 2003 at 04:08:46 PM EST

But is the brain resistant to halting problems? ;)

Religion is not the opiate of the masses. It is the biker grade crystal meth of the masses.
SLAVEWAGE
Rats (3.50 / 2) (#73)
by bartok on Mon Mar 17, 2003 at 08:02:33 PM EST

The prosthesis will first be tested on tissue from rats' brains, and then on live animals. Last time I checked, rats were still considered animals.

Why Turing machine? (4.33 / 3) (#85)
by olethros on Tue Mar 18, 2003 at 05:52:17 AM EST

Why does it have to be a Turing machine? And why model parallelism with a single Turing machine and not with a number of TMs in parallel?

Also, there are other types of TMs that don't suffer the same limitations as the TM itself, such as the GTM. And as someone already mentioned, computers are not Turing Machines.

The biggest problem with this prosthetic is not associated with whether the brain is a 'TM' or not. It is that they model a limited interface (only taking into account electrical impulses - not taking into account neuromodulators) - and that the modelling, however fine the simulation, might be inaccurate because of lack of plasticity.

If it works it will show that lack of plasticity is not detrimental to the short-term function of the brain. However I doubt that it will work since neuromodulators do have a strong effect on brain function... including the function of the hippocampus.

-- Homepage| Music
I miss my rubber keyboard.

consciousness (3.00 / 1) (#98)
by manmanman on Wed Mar 19, 2003 at 11:35:42 AM EST

Here is a very interesting article about what consciousness could be.


Computer scientist, Montpellier, south of France.
Penrose (5.00 / 1) (#99)
by RandomAction on Thu Mar 20, 2003 at 08:35:09 AM EST

Roger Penrose has a take on this: for him, the brain's apparent ability to see the answer to an incomputable problem is central to the difference between a Turing machine and an organic mind. He gives the example of aperiodic tiling: a Turing machine will never halt when determining whether a set of aperiodic tiles will cover an infinite plane, yet a human can instantly see that they can. From this he determines there must be some non-computable physical law that the brain leverages to achieve this. He theorises that this ability, and in fact consciousness, stem from quantum-level events taking place in brain cells.

Lecture on Turing and Penrose, and their ideas

Personally I think it might be possible to create a neural net complex enough to simulate this ability, through recognition of these problems and learning an appropriate response.

Some thoughts on AI (none / 0) (#100)
by freality on Thu Mar 20, 2003 at 04:12:13 PM EST

I'm trying to make one. I've got some thoughts at an SF project page. Freality Machine Intelligence.

Allen Newell's book Unified Theories of Cognition is a good one to look into for anyone serious about this.

Where does nat. lang. fall on the Chomsky hierarchy (none / 0) (#102)
by lukme on Fri Mar 21, 2003 at 09:53:09 PM EST

Where does natural language fall on the Chomsky hierarchy?

My guess is that natural language is at least recursively enumerable. We can still communicate and know what the other person means, even though the language is not grammatically correct.


-----------------------------------
It's awfully hard to fly with eagles when you're a turkey.
The brain (none / 0) (#104)
by frijolito on Wed Jun 18, 2003 at 07:46:30 PM EST

...will someday be hacked into. Mark my words.
