Monday, June 06, 2005

Insane on the Mainframe

I am not sure whether this story represents a case of computer science helping to advance neuroscience or of neuroscience helping to advance computer science. Either way (or both), it sounds like a nifty, if long-term, project. What gets simulated may not be very human, though I would propose that human intelligence is not the greatest final blueprint, and that using the neocortex as a jumping-off point rather than an end in itself might be the way to go. What will really be interesting is when neocortex-inspired artificial intelligences are paired with other forms of AI that are nearly as powerful but are currently limited because a human intelligence has to set the problem up for them. Genetic algorithms/evolutionary design systems and the creativity machine that Stephen Thaler is working on are a couple of examples of tools which a neocortical AI might make use of to enhance its abilities.

By the time the brain simulation project is done (ten years, roughly), the massive supercomputers needed to run the model will be small and cheap enough for at least a medium-sized business to own one (Google and cable companies will be able to buy whole tribes of them). Even if that kind of hardware never makes it to the desktop market for some reason, a small group of neighbors could chip in and get one to act as a monitor for security cameras, a lawyer/accountant/lobbyist for the neighborhood's interests, and a screener that blocks telemarketers while being polite to the residents' extended families (or vice versa).

I must reiterate at this point that artificial intelligences do not need to be perfect human replacements. They do not need to fool anyone into thinking they are human on a Turing test in order to change everything about life in the twenty-first century. They only need to be intelligent problem solvers and creative thinkers, and to have the ability to learn. It seems to me that, even without a model of the human neocortex, it is easier to get machines to do this than to wait for these skills to emerge spontaneously in the human population. This may be the best argument yet for working to automate human-level intelligence. We need teachers and tutors who can actually learn how to teach. Children deserve better than having two or three good teachers (tops) in their public school careers.

2 Comments:

At Tue Jun 07, 11:56:00 AM 2005, Blogger alice said...

"They only need to be intelligent problem solvers and creative thinkers, and to have the ability to learn. It seems to me that, even without a model of the human neocortex, it is easier to get machines to do this than to wait for these skills to emerge spontaneously in the human population."

Surely you jest, Apesnake. First, let's look at the word "only" in the sentence above. You cannot seriously think that the human brain's capabilities, which have evolved over millions of years and have reached a sophistication that allows such things as language, should be described with "only" as a modifier.
Yeah, there are some silly people in this world (present company excluded), but even the dumbest among us are a million times more creative than any machine that could be designed to be creative.

We grew up in this environment. We are adapted to it. We see things and feel things, put data together in strange and wonderful and sometimes tragic ways. No machine could be programmed to really learn. There are too many variables. They have tried and only been able to achieve a very rudimentary set of problem-solving skills. There's just way too much involved in the learning process to be able to simulate.

And then you'd have to define "learn".

 
At Wed Jun 08, 05:13:00 PM 2005, Blogger Apesnake said...

Perhaps I should have put that sentence first and then said:

"I must reiterate at this point that artificial intelligences do not need to be perfect human replacements. They do not need to fool anyone into thinking they are human on a Turing test in order to change everything about life in the twenty-first century."

The point I was trying to make was that in order for computers to be intelligent they do not need to be just like humans.

"You cannot seriously think that the human brain's capabilities, which have evolved over millions of years and have reached a sophistication that allows such things as language, should be described with "only" as a modifier."

First, millions of years of evolution (far fewer units if you think in generations) can be simulated in a short time if we are talking about software that uses genetic algorithms. Second, we have the end product of millions of years of evolution to peek at. We do not need to reinvent the brain; we just need to copy nature's homework assignment and put our name on it.
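To make the point about compressing generations concrete, here is a minimal, hypothetical sketch of a genetic algorithm in Python. The problem, parameters, and function names are all my own invention for illustration: it evolves a population of bit strings toward a trivial fitness goal ("all ones"), and runs a hundred simulated generations in well under a second.

```python
import random

random.seed(0)

def fitness(genome):
    # Toy "OneMax" fitness: count the 1 bits in the genome
    return sum(genome)

def evolve(length=20, pop_size=50, generations=100, mutation_rate=0.02):
    # Start from a random population of bit strings
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)      # single-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < mutation_rate else g
                     for g in child]               # per-bit mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

Each loop iteration is one "generation": selection, crossover, and mutation, the same operators evolution uses, just without the million-year wait between rounds.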

As for the claim that "even the dumbest among us are a million times more creative than any machine which could be designed to be creative," I am not so sure.

If you look at human civilization, all the advancements in art, literature, music, science, civil society, technology, etc. are the products of maybe a few hundred people, thousands at most. Of the billions of people throughout history and prehistory, most go through their whole lives never having written a poem, invented a new type of toothbrush, or written a piece of music (Stephen Thaler's device has done those). They hunt game, lift boxes, drive chariots, and then they go home and drink, dance, eat and sleep. Computer programs are currently designing new products like boat engines and code for software. They are able to write code that is more reliable than what human programmers can produce, yet more complex than human programmers can understand.

We seem to elevate the qualities of creativity and intelligence because they are things we feel only humans can do (until recent times that was true). We feel that it must require something uniquely human but that does not seem to be the case.

"No machine could be programmed to really learn. There are too many variables. They have tried and only been able to achieve a very rudimentary set of problem-solving skills. There's just way too much involved in the learning process to be able to simulate."

One could make the same argument about the human brain: there are too many variables for it to deal with also. It has only been in recent years, with the rapid progress made in neuroscience, that we have begun to understand how the brain deals with the problem; how it converts the massive streams of information and noise that bombard our senses into understanding. While it is a fantastic process, it is a process that can be understood and, I firmly believe, replicated in a different physical substrate.

Just about all of the past efforts in A.I. fall into two categories: raw logical programming of every response to every input, and simple neural networks. Neither of these approaches even comes close to describing what the brain does or how memory and perception are related. Until now, neurology and A.I. have been nearly completely separate fields. The failure of neural nets brought about the AI winter, when funds were hard to come by and morale was low. This seems to be changing: not only is A.I. a rapidly growing industry, it is also a fast-moving field of research.
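As a concrete example of how those "simple neural networks" fell short, here is a hedged sketch (the function names and parameters are my own, not from any particular project) of a single-layer perceptron in Python. It learns the linearly separable OR function easily, but no setting of its weights can represent XOR; that kind of limitation is exactly what deflated the early hype.

```python
# A single-layer perceptron with the classic Rosenblatt learning rule.
# It can learn linearly separable functions like OR, but is provably
# unable to represent XOR, no matter how long it trains.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

or_data  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(or_data)   # converges: OR is linearly separable
```

Training the same machine on `xor_data` always leaves at least one input misclassified, since no single line can separate XOR's classes; getting past that wall required architectures richer than anything these early nets offered.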

http://www.computerworld.com/printthis/2005/0,4814,99691,00.html

People often use the word "mere" (or imply it) to describe simulations of systems, because we know that a "mere simulation" of the weather is not going to knock down power lines. But when we simulate something, we are trying to get an answer that matches what we hope the real system would produce. If a simulation of weather were 98% accurate over three days, we would feel that it is a powerful simulation. If we could simulate the basics of a brain with that degree of accuracy, and if it could constantly get feedback on how it was doing and improve its performance, would it not be considered intelligent?

"We grew up in this environment. We are adapted to it. We see things and feel things, put data together in strange and wonderful and sometimes tragic ways."

This is true. Many computer scientists believe that A.I. systems will require a degree of "infant time," during which an architecture based on the brain learns to speak, and possibly (one or more) bodies with which to learn about the world by poking around. The thing is that if the intelligence is in software, or at least on self-programmable hardware whose settings can be copied, you can clone the intellect without needing to go through the training process again.

I certainly understand skepticism of the A.I. field, since it has been strangled by its own hype on several occasions, but I am firmly convinced that the mind is the result of an interaction between the brain and the environment, and that this relationship can be duplicated to a high degree. The process of reverse engineering is making a lot of progress, and until I see some major roadblock I will continue to see A.I. as a wholly attainable goal.

And the Thnickaman agrees with me. He even told me to "Shut up, kid!" which proves he is on my side with this. He only tells people he likes to shut up.

 




 

