The military gives the Turing Test a go.

We have talked about the Turing test before; now it appears the US Army wants to give it a shot in World of Warcraft. Along with several other advanced technologies, like regenerating body parts, erasing bad memories, and electronic telepathy, they are attempting to develop virtual soldiers that look and behave like real humans.

To test these virtual humans they are going to deploy them in World of Warcraft. The virtual soldiers should be able to convince humans they are real, going as far as emulating emotion and using local slang correctly while responding to questions and communicating effectively with other players.

It appears the Army is attempting to tackle some very complex AI and nanotechnology problems. If one of their AI soldiers does manage to fit in properly in World of Warcraft, how far off is it from passing a full Turing test? Does it fully understand its situation and relay tactical information back, or does it just manage to fit in without any higher awareness?

They may be biting off more than they can chew; however, the Army and DoD have been responsible for several massive technological advances. Anyone ever hear of this thing called the internet?

Desktop Super Computers and Singularity Friday

What is the Singularity? Ever hear of Moore’s Law?

If you are reading this blog you probably have. Moore's Law is an informal law about technological advancement. In a nutshell it states that every 18 months or so the power of computers doubles. Computer power has been following this trend for 40 years and it is set to continue following it for another 40. The idea of the singularity is that if technology does indeed continue to double at this rate, then in another 40 or 50 years the sheer power of the computing systems available will so far surpass mankind's abilities that the formation of an artificial mega-intelligence will be impossible to avoid, ushering in a new age of evolution. Of course, this new age could go the way of Terminator, or we could essentially all become godlike through unimaginable advances in AI, computing power and nanotechnology. A lot of pieces need to fall into place for it to happen, but many people are starting to think it is inevitable.
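As a quick sanity check on that claim, here is the arithmetic behind it. This is a toy calculation, not a forecast: it just shows how fast doubling every 18 months compounds.

```python
# Back-of-envelope: total growth if computing power doubles
# every 18 months, per the informal statement of Moore's Law.
def growth_factor(years, doubling_months=18):
    """Overall multiplier after `years` of steady doubling."""
    return 2 ** (years * 12 / doubling_months)

print(round(growth_factor(40)))    # the 40 years behind us
print(f"{growth_factor(50):.3g}")  # another 50 years out
```

Forty years of 18-month doublings is a factor of over a hundred million, which is why another 40 or 50 years of the same trend leads to such startling conclusions.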

Don't agree? Intel thinks the Singularity is a real possibility. Don't think computers will ever really get that powerful? How about a supercomputer for your desktop by Cray? While Cray hasn't released performance specs for this system, it is bound to be more powerful than their first supercomputer; in fact, your common desktop today is roughly two hundred times faster than the first Cray. While this machine still cannot model the entire human brain, that doesn't mean no one is going to try. IBM's Blue Brain project plans to model the human brain in its entirety within the next ten years. Is that enough to form a mega-intelligence and bring about the singularity? Probably not, but it is getting closer.

The 2008 Singularity Summit is happening this Friday. Some of the world's greatest minds in AI, nanotechnology, innovation, and the Singularity will be there. Who knows what interesting ideas will be shared at the conference this year. Typically videos of the conference's talks are made available afterwards for anyone who is interested.

We can't know for certain what the future holds; like I said, a lot of pieces need to fall into place for the singularity to happen. Will nanotechnology advance the fastest and allow us to transform humans into fully cybernetic entities, creating mega-intelligent computer systems out of our own consciousness? Will computers advance to the point where we create true AI? Such an AI would have access to our vast stores of knowledge, would be able to instantly learn and integrate new knowledge, and could then develop new theories based on that knowledge. It would be able to optimize itself and duplicate itself, creating more and more AIs. These new AIs could advance nanotechnology to the point where we could join them in their electronic virtual playground, if we so chose. Or maybe neither will happen, due to economic and political stresses. We just don't know.

I for one welcome our future mega-intelligent overlords.

I used to grow crystals as a child

When I was quite young my mother gave me a grow-your-own-crystal kit. It came with a bag of alum and instructions. I first had to grow seed crystals in a supersaturated solution of alum and water. Then I placed the best seed crystal at the bottom of a jar and covered it in another saturated solution of alum. After several weeks I had a fairly nice big crystal, formed from my seed crystal. It didn't do anything except look pretty.

Scientists have really stepped that up a notch with these self-assembled organic circuits. To form these complex structures they create a silicon dioxide substrate with gold electrodes using conventional techniques. They then submerge the substrate in a solution containing the organic semiconductor, and the molecules arrange themselves on the substrate into a densely packed single molecular layer. Prior work has produced faster circuits, but this method is extremely easy to deploy.

This type of technology will allow complex electronics to be embedded in items where it was previously not possible. It opens up the possibility of structurally flexible electronics that are cheap and easy to assemble. You may have a computer directly embedded into things like coffee cups, cereal boxes, newspapers, or have portable display devices that can roll up or fold up without complex engineering requirements.

Very cool future possibilities.

More useful junk

In a paper published today in PLoS Genetics, the research supporting the value of so-called junk DNA gained more ground. As the article states, biologists have known about junk DNA for many years, but it was felt to be mostly extraneous data in the genetic code.

If you have read the research behind SDNEAT, you know that scientists are starting to change their perception of junk DNA. Segmental duplications seem to be critical in the evolution of species; they allow for high levels of genetic variation and mutation with a smaller chance of disabling the original genome altogether.

This new study suggests that DNA 'retrotransposons' are important to human evolution. One specific set of retrotransposons is called Alu elements:

“Alu elements are a major source of new exons. Because Alu is a primate-specific retrotransposon, creation of new exons from Alu may contribute to unique traits of primates”

Perhaps, as an extension to SDNEAT, the algorithm that merges a segmental duplication into the genome being mutated could be extended to allow transposition of the identified segment within the genome. These higher-order mutations could be exceptionally valuable when dealing with extremely complex solution spaces and genomes.
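To make that concrete, here is a toy sketch of what such operators might look like on a linear genome. This is my own illustration of the two mutation types, not the actual SDNEAT implementation.

```python
import random

def duplicate_segment(genome, rng=random):
    """Copy a random segment and insert the copy right after the
    original -- a crude analogue of a segmental duplication."""
    i = rng.randrange(len(genome))
    j = rng.randrange(i, len(genome)) + 1   # segment is genome[i:j]
    segment = genome[i:j]
    return genome[:j] + segment + genome[j:]

def transpose_segment(genome, rng=random):
    """Cut a random segment out and splice it back in somewhere
    else -- the higher-order 'transposition' mutation."""
    i = rng.randrange(len(genome))
    j = rng.randrange(i, len(genome)) + 1
    segment, rest = genome[i:j], genome[:i] + genome[j:]
    k = rng.randrange(len(rest) + 1)        # new insertion point
    return rest[:k] + segment + rest[k:]

genome = list("ABCDEFG")
longer = duplicate_segment(genome)    # same genes, one segment doubled
shuffled = transpose_segment(genome)  # same genes, one segment moved
```

Duplication grows the genome without losing the working original, while transposition rearranges it without changing its gene content, which is exactly why these mutations are less likely to be fatal than random point changes.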

Autonomous Helicopters

These automatons are certainly not adrift, but they are definitely airborne and capable! Computer scientists at Stanford have developed an autonomous helicopter that can learn from a human expert pilot to perform complex manoeuvres better than the original expert!

The learning system does not just copy the controls used to perform the manoeuvres; it watches several built-in sensors reporting the state of the environment around the helicopter and, through several iterations, develops an algorithm that can handle situations that were not part of the training while still completing the manoeuvre. These adapted agents can even keep more precise control over the aircraft than the original pilot could.
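The Stanford system learns the trajectory the pilot intended to fly from several imperfect demonstrations (their published method also aligns the demonstrations in time and learns a dynamics model, which I won't attempt here). The core intuition can be sketched in a few lines; everything below, from the function names to the toy 'manoeuvre', is illustrative only.

```python
import random

def ideal(t):
    # the manoeuvre the expert *intends* to fly (a simple arc here)
    return 10 * t * (1 - t)

def noisy_demo(rng, n=50, noise=0.5):
    # each human demonstration deviates a little from that intent
    return [ideal(i / n) + rng.gauss(0, noise) for i in range(n)]

def learn_target(demos):
    # averaging many demonstrations cancels their independent errors,
    # recovering a trajectory cleaner than any single demonstration
    return [sum(vals) / len(vals) for vals in zip(*demos)]

rng = random.Random(1)
demos = [noisy_demo(rng) for _ in range(20)]
target = learn_target(demos)
```

The learned target tracks the intended arc more closely than any one demonstration does, which is one way an autonomous pilot can end up flying a manoeuvre better than the human who taught it.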

“For five minutes, the chopper, on its own, ran through a dizzying series of stunts beyond the capabilities of a full-scale piloted helicopter and other autonomous remote control helicopters. The artificial-intelligence helicopter performed a smorgasbord of difficult maneuvers: traveling flips, rolls, loops with pirouettes, stall-turns with pirouettes, a knife-edge, an Immelmann, a slapper, an inverted tail slide and a hurricane, described as a “fast backward funnel.”

The pièce de résistance may have been the “tic toc,” in which the helicopter, while pointed straight up, hovers with a side-to-side motion as if it were the pendulum of an upside down clock.”

I know I can't do any of that, and I certainly couldn't learn it quickly. Could the learning system that Ng and his team have created be adapted to driving cars and flying planes? Possibly, depending on whether it makes assumptions about its environment. Flying around in a big open space is much easier than driving quickly through busy city streets. That doesn't mean it can't be done. The only downside to this research is that it could be used to pilot autonomous military planes to deploy weapons, completing the conversion of war into a video game.

Hopefully we see this deployed for peaceful use instead.

The Learning Algorithm of the Brain

A group of researchers at NYU have been given funding to "discover the learning algorithm of the brain".

This will be some exciting work. Many scientists believe the easiest path to general artificial intelligence is through duplicating the human brain; that the sum of the parts is the soul, so to speak. These researchers will be focusing on the visual system and how it manages to identify all the important bits in a photograph or any real-world scene. They will then apply what they learn from these experiments to see if the same methodology works on similar brain structures.
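At the lowest level, 'finding the important bits' of a scene starts with something like edge detection, which cells in the early visual cortex are known to perform. Here is a crude sketch of that idea; it is my own illustration, not the NYU group's method.

```python
def edge_strength(img):
    """Sum of horizontal and vertical intensity differences at each
    pixel -- a crude stand-in for edge-detecting visual neurons."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # left-right change
            gy = img[y + 1][x] - img[y - 1][x]  # up-down change
            out[y][x] = abs(gx) + abs(gy)
    return out

# a tiny image: dark on the left, bright on the right
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
edges = edge_strength(img)
```

Flat regions produce zero response while the boundary between dark and bright lights up, which is roughly the "important bit" of this tiny scene.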

I am not certain, but this research may be related to the recent full simulation of the human visual cortex on Roadrunner, currently the most powerful computer in the world. That simulation, a system called PetaVision, modelled all the neurons that compose the human optic system and visual cortex, and it could be run in real time.

Slap some ears on it, a nose, and a neocortex and we have a beautiful baby AI.

Happy Thanksgiving, Mr. Turing

Well, yesterday was Thanksgiving; my wife and I went to our friends' house and cooked a turkey. Quite the procedure: I researched several sources, including YouTube and my mother, to figure out the best way to get the job done. The turkey was fabulous. The human being's ability to take in information from several sources, assimilate it, process it and use it to understand and reproduce something is remarkable.

Computers got a bit closer to that on the same day. The Turing test I wrote about in my last post happened yesterday, and quite a few of the systems did quite well. The program Elbot actually managed to fool twenty-five percent of the judges into thinking it was a human. That is no small feat, as even the tiniest confusion or mistake can make a human aware it is not talking to one of its own kind.

I wonder if, in another 25 years, an AI-based robot of mine will ask me how to cook a turkey. I will explain the process to it, and it will proceed to make that turkey for me. Who knows…

Turing test next weekend.

Next Sunday there will be a fun little challenge happening at the University of Reading. Several computer programs will be competing to pass the 'Turing test'.

To explain it simply, the Turing test is an experiment to test a computer's intelligence by having it attempt to fool a human into believing it is human as well. A human judge converses simultaneously with a computer program and with a real human, and if the judge cannot tell which conversation is with the human and which is with the program, the program has passed the Turing test.

Some people will definitely argue that the program doesn't understand what it is saying; it is simply following rules to respond to the questions posed to it, and so does not represent intelligence. Well, don't you as a human really just do the same thing, but with a considerably more complex and dynamic ruleset?
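That ruleset idea is easy to demonstrate. The classic chatbot approach, dating back to ELIZA, really is just pattern-matching rules. The snippet below is a toy example of my own, far smaller than anything the competition entries would use.

```python
import re

# A few ELIZA-style rules: regex pattern -> response template.
RULES = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\byou\b", "Let's talk about you, not me."),
]

def respond(text):
    """Return the response of the first rule whose pattern matches."""
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Tell me more."  # fallback when no rule applies

print(respond("I am worried about the Turing test"))
print(respond("What do you think?"))
```

Even this tiny rulebook produces responses that feel vaguely conversational, which is exactly why the "it's just rules" objection cuts both ways.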

If the example conversation in this article is representative of all the programs competing, then they have a long way to go before they fool a human.

If there is an algorithm for intelligence…

Then we could run it in about 50 atoms' worth of space. That is assuming we could build the smallest possible state machine in that amount of space and then actually wire it up to some tiny interface. The smallest possible universal state machine was proven to exist a couple of days ago by Alex Smith of Birmingham, UK.

This really does have some significant impact. While I don't think we would run the algorithm for intelligence on this particular state machine, we could. We could in fact run any program at all on this state machine and have it input and output any possible string of information. You can think of it as the smallest possible independent microprocessor. This could be a significant step in the advancement of massively parallel sensor networks.
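For intuition, a machine of this kind needs remarkably little: a rule table, a tape, and a head position. The simulator below is a generic sketch of my own (not Smith's proven 2-state, 3-colour machine, whose rule table I won't reproduce here), running a trivial one-state machine as a demo.

```python
def run_tm(rules, tape, state, steps=1000):
    """Run a tiny Turing machine. `rules` maps (state, symbol) to
    (new_state, new_symbol, move); unwritten cells read as blank '_'."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(steps):
        symbol = cells.get(pos, "_")
        if (state, symbol) not in rules:
            break  # halt: no rule applies
        state, cells[pos], move = rules[(state, symbol)]
        pos += move
    return [cells[i] for i in sorted(cells)]

# A one-state machine that walks right, flipping every bit it sees.
flip = {("A", 0): ("A", 1, +1), ("A", 1): ("A", 0, +1)}
result = run_tm(flip, [1, 0, 1, 1], "A")  # -> [0, 1, 0, 0]
```

Everything interesting lives in the rule table; what Smith proved is that a table with only two states and three symbols is already enough to run, in principle, any program at all.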

Think of this: you construct a piece of e-paper made up of these tiny little state machines. You connect each one in all eight directions to its neighbours using carbon nanotube circuitry, and connect the top layer to a set of output machines, something like the pixels of an LCD. Now put a few of these sheets together into a magazine, add some somewhat more complex circuitry in the spine along with a power source (probably an external layer of solar energy molecules), and you have yourself an extremely powerful parallel computer that looks something like a book but is capable of far more: completely dynamic content, all controlled through a massively distributed network of basic microprocessor state machines.

I can't take credit for this idea; it comes from "The Diamond Age", a very good novel about nanotechnology, but what is fantastic in that book is a step closer to reality thanks to this proof and several recent advances in nanotechnology. Another common theme from the same book is how ubiquitous these massively parallel systems could be. They could consist of countless billions of tiny sensor nodes distributed through the air. You could breathe them in without destroying significant portions of the network because they are so small (about the size of dust), yet the processing power in each is universal and the power of the entire system is extraordinary. Each node could perform complex sensing tasks and transmit its information through the network back to home base. These truly would be automatons adrift!

Of course the algorithms to do that efficiently don’t really exist yet, but they are being worked on.
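To give a flavour of what those algorithms might look like, here is a toy gossip-style sketch on a small grid: each node repeatedly averages with its neighbours, and purely local rules drive the whole network to a shared value. This is my own illustration, not any particular published protocol.

```python
def neighbours(i, j, h, w):
    """The up-to-eight surrounding cells, as in the e-paper grid."""
    return [(i + di, j + dj)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di or dj) and 0 <= i + di < h and 0 <= j + dj < w]

def gossip_step(grid):
    """Each node replaces its value with the mean of itself and its
    neighbours; repeating this drives every node toward a shared
    value -- global agreement emerging from purely local rules."""
    h, w = len(grid), len(grid[0])
    new = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nbrs = neighbours(i, j, h, w)
            total = grid[i][j] + sum(grid[a][b] for a, b in nbrs)
            new[i][j] = total / (1 + len(nbrs))
    return new

grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 25.0            # one 'sensor' node detects something
for _ in range(50):
    grid = gossip_step(grid)
```

After enough steps every node carries roughly the same value, so reading any single node tells you something about the whole network; no node ever needed more than its eight neighbours to get there.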

I know it seems pretty wild, but that is just one example. When you distribute the processing of a program across millions of tiny universal state machines you can drop the processing time down to a much smaller value. This would require a completely new direction in programming, but it is entirely possible. The original posting of this was found on Slashdot; thanks, guys!

Simbad: a quick introduction

Simbad is the 3D robot simulator I am using for my autonomous agent research. This is just a brief look at the Simbad interface and how you can interact with the simulation environment. We won't even peek at the really cool features, like its ease of use or its potential for evolutionary artificial neural network research!

The Simbad user interface.

If you click on the image you will see a larger copy of the picture. The large main window is the world view: the visualization of the 3D world your simulated robots traverse. It is roughly 20 meters by 20 meters, and the basic agents have a radius of 0.5 meters. Underneath the world window is the control window, which provides controls for the simulated environment. You can adjust the speed of the simulation, pause it, reset it, stop it and step through it. You can also snap your viewing angle to preset angles. If you want to adjust the view further, you can rotate and move the world image by left-click dragging and right-click dragging respectively.

The final windows are the watch windows. These provide a view into your agents' current states: you can see which sensors are firing, an agent's location, whether it has collided, and other details as well. Simbad also provides several different sensors for your robot. The most notable is a camera sensor (not shown) which actually renders a 2D view of the 3D world; your agent can perform edge finding on it and use the visual data to navigate.
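Simbad agents themselves are written in Java against Simbad's own classes; rather than guess at that API here, the toy Python sketch below just illustrates the shape of a reactive control loop in a 20 m x 20 m world like the one described above. All names and behaviour are my own invention.

```python
import math, random

WORLD = 20.0    # the world is roughly 20 m x 20 m
RADIUS = 0.5    # basic agents have a 0.5 m radius

class WanderingAgent:
    """Toy reactive agent: drive forward, turn away when a wall is
    near. Illustrative only -- this is not the Simbad API."""

    def __init__(self, rng):
        self.rng = rng
        self.x = self.y = WORLD / 2   # start in the middle
        self.heading = 0.0

    def near_wall(self):
        margin = 2 * RADIUS
        return (min(self.x, self.y) < margin
                or max(self.x, self.y) > WORLD - margin)

    def step(self, speed=0.1):
        # sense, then act: the basic loop a simulator calls each tick
        if self.near_wall():
            # crude avoidance: turn roughly 90 degrees, plus noise
            self.heading += math.pi / 2 + self.rng.uniform(-0.5, 0.5)
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)

agent = WanderingAgent(random.Random(42))
for _ in range(2000):
    agent.step()
```

In Simbad the simulator drives this sense-then-act loop for you each tick; swapping the hand-written avoidance rule for a neural network is exactly where the evolutionary research comes in.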

I will provide a more detailed look into Simbad in the future on this blog.