Evolution of the Neuroevolutionary Solver documentation

There was a major update to the Neuroevolutionary Solver page over the weekend. There are now sections to help you download the software, install the required Java JDK and Java 3D packages, and get the NS working on your home system. Working with the NS is much easier through the Eclipse IDE, and detailed instructions are available for that method. Command line instructions for Windows and Linux are also available. Click through to the NS page here or click the Neuroevolutionary Solver tab at the top of this page.

Keep watching for more updates over the next few days.

The Learning Algorithm of the Brain

A group of researchers at NYU has been given funding to "discover the learning algorithm of the brain".

This will be some exciting work. Many scientists believe the most direct route to general artificial intelligence is to duplicate the human brain; that the sum of the parts is the soul, so to speak. These researchers will focus on the visual system and how it manages to identify all the important bits in a photograph or any real-world scene. They will then apply what they learn from these experiments to see if the same methodology works on similar brain structures.

I am not certain, but this research may be related to the recent full simulation of the human visual cortex on Roadrunner, currently the most powerful computer in the world. That simulation, a system called PetaVision, modelled all the neurons that compose the human optic system and visual cortex, and it could be run in real time.

Slap some ears on it, a nose, and a neocortex and we have a beautiful baby AI.

Happy Thanksgiving, Mr. Turing

Well, yesterday was Thanksgiving. My wife and I went to our friends' house and cooked a turkey. Quite the procedure; I researched several sources, including YouTube and my mother, to figure out the best way to get the job done. The turkey was fabulous. The human being's ability to take in information from several sources, assimilate it, process it and use it to understand and reproduce something is remarkable.

Computers got a bit closer to that on the same day. The Turing test I wrote about in my last post happened yesterday, and quite a few of the systems did quite well. The program Elbot actually managed to fool twenty-five percent of the judges into thinking it was a human. That is no small feat, as even the tiniest confusion or mistake can make a human aware it is not talking to one of its own kind.

I wonder if, in another 25 years, an AI-based robot of mine will ask me how to cook a turkey. I will explain the process to it, and it will proceed to make that turkey for me. Who knows…

Turing test next weekend.

Next Sunday there will be a fun little challenge happening at the University of Reading. Several computer programs will be competing to pass the 'Turing test'.

To explain it simply, the Turing test is an experiment to test a computer's intelligence by having it attempt to fool a human into believing it is human as well. A human judge faces off simultaneously against a computer program and a human pretending to be that same program. If the judge cannot tell which conversation is with the human and which is with the program, the program has passed the Turing test.

Some people will definitely argue that the program doesn't understand what it is saying; it is simply following rules to respond to the questions posed to it, and that does not represent intelligence. Well, don't you as a human really just do the same thing, but with a considerably more complex and dynamic ruleset?

If the example conversation in this article is representative of all the programs competing, then they have a long way to go before they fool a human.

Is this thing on?

Well, it has been about a year since I posted on Automatons Adrift. There have been some significant changes in my life. I completed my master's degree with a clear pass, a very rare feat I am told; typically there are at least minor revisions. I left my job at UNBC and moved to the University of Alberta to be the System Administrator for the Faculty of Science. This of course means I moved to the wonderful city of Edmonton.

The most surprising thing is that all these changes happened in the last couple of months. Now that the research for my thesis is complete I have a lot less time pressure, and I can devote more energy to Automatons Adrift. I have updated the website along with its hosting service. I have added some new content, including pages on the SDNEAT and NEAT algorithms, which were at the center of my research. I have also started a page for the Neuroevolutionary Solver; it will outline how to use the system and modify it to perform further experiments.

There is a lot of material to be posted as we move forward. I hope you enjoy the new Automatons Adrift!

If there is an algorithm for intelligence…

Then we could run it in about 50 atoms' worth of space. That is assuming we could build the smallest state machine possible in that amount of space and then actually wire it up to some tiny interface. A couple of days ago, Alex Smith of Birmingham, UK proved that the smallest possible universal state machine, a two-state, three-symbol Turing machine proposed by Stephen Wolfram, is in fact universal.

This really does have some significant impact. While I don't think we would run the algorithm for intelligence on this particular state machine, we could. We could in fact run any program at all on it and have it input and output any possible string of information. You can think of it as the smallest possible independent microprocessor. This could be a significant step in the advancement of massively parallel sensor networks.
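To make "universal state machine" concrete, here is a minimal sketch of what such a machine does: read the symbol under a head, write a symbol, move, change state, and repeat. The machine Smith proved universal has two states and three symbols; the rule table below is a made-up placeholder of the same dimensions, not Smith's actual rules.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal 2-state, 3-symbol machine stepper over an unbounded tape.
// NOTE: the rule table is a placeholder for illustration only,
// not the actual 2,3 machine Alex Smith proved universal.
public class TinyMachine {
    // index = state * 3 + symbol; value = {newSymbol, move (-1/+1), newState}
    static final int[][] RULES = {
        /* state 0, symbol 0 */ {1, +1, 0},
        /* state 0, symbol 1 */ {2, -1, 0},
        /* state 0, symbol 2 */ {1, -1, 1},
        /* state 1, symbol 0 */ {2, -1, 0},
        /* state 1, symbol 1 */ {2, +1, 1},
        /* state 1, symbol 2 */ {0, +1, 0},
    };

    public static void main(String[] args) {
        Map<Integer, Integer> tape = new HashMap<>(); // sparse, unbounded tape
        int head = 0, state = 0;
        for (int step = 0; step < 20; step++) {
            int symbol = tape.getOrDefault(head, 0);
            int[] rule = RULES[state * 3 + symbol];
            tape.put(head, rule[0]); // write
            head += rule[1];         // move
            state = rule[2];         // change state
            System.out.printf("step %2d: state=%d head=%d%n", step, state, head);
        }
    }
}
```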

Think of this: you construct a piece of e-paper made up of these tiny little state machines. You connect each one to its neighbours in all eight directions using carbon nanotube circuitry, and you connect the top layer to a set of output machines, something like the pixels in an LCD. Now bind a few of these sheets together into a magazine, put some more complex circuitry into the spine of the book along with a power source (probably an external layer of solar energy molecules), and you have yourself an extremely powerful parallel computer that looks something like a book but is capable of far more. Completely dynamic content, all controlled through a massively distributed network of basic microprocessor state machines.

I can't take credit for this idea; it is from "The Diamond Age", a very good novel about nanotechnology. What is fantastic in that book is a step closer to reality thanks to this proof and several recent advances in nanotechnology. Another common theme from the same book is how ubiquitous these massively parallel systems could be. They could consist of countless billions of tiny sensor nodes distributed through the air. You could breathe them in without destroying significant portions of the network because they are so small (about the size of dust), yet the processing power in each is universal and the power of the entire system is extraordinary. Each of the nodes could perform complex sensing tasks and transmit its information through the network back to its home base. These truly would be automatons adrift!

Of course the algorithms to do that efficiently don’t really exist yet, but they are being worked on.

I know it seems pretty wild, but that is just one example. When you distribute the processing of a program across millions of tiny universal state machines, you can drop the processing time down to a much smaller value. This would require a completely new direction in programming, but it is entirely possible. The original posting of this was found on Slashdot, thanks guys!
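To picture the wiring pattern I described above, here is a toy sketch: a grid of identical little machines, each updating from its eight neighbours, like a simple cellular automaton. The update rule here is arbitrary; only the eight-way neighbour connectivity is the point.

```java
// A toy version of the e-paper idea: a grid of identical state machines,
// each reading its eight neighbours each step (a simple cellular automaton).
public class EPaperGrid {
    static final int N = 16;

    public static void main(String[] args) {
        int[][] grid = new int[N][N];
        grid[N / 2][N / 2] = 1; // seed one active cell

        for (int step = 0; step < 5; step++) {
            int[][] next = new int[N][N];
            for (int r = 0; r < N; r++) {
                for (int c = 0; c < N; c++) {
                    int sum = 0;
                    for (int dr = -1; dr <= 1; dr++)      // all eight neighbours
                        for (int dc = -1; dc <= 1; dc++) {
                            if (dr == 0 && dc == 0) continue;
                            int rr = (r + dr + N) % N, cc = (c + dc + N) % N; // wrap edges
                            sum += grid[rr][cc];
                        }
                    // arbitrary rule: a cell turns "on" if any neighbour is on
                    next[r][c] = (sum > 0 || grid[r][c] == 1) ? 1 : 0;
                }
            }
            grid = next;
        }
        for (int[] row : grid) {          // print the final pattern as "pixels"
            StringBuilder sb = new StringBuilder();
            for (int v : row) sb.append(v == 1 ? '#' : '.');
            System.out.println(sb);
        }
    }
}
```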

Whew… Blogging Breakdown

What happens when you work a full-time job, get lots of overtime at work and have a major deadline in your thesis work that requires you to code your butt off? You get a breakdown in the number of cool blog posts you get to put up.

I am working out some loose ends in my integration of NEAT into PicoEvo and Simbad. I am at the point where I have to integrate the genetic operators and the evolution epoch into the algorithm, and putting it together the right way is tricky. I suspect I will wind up just slamming it together so it works; then I can pull it apart and put it back together the right way after I meet my deadline!
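For the curious, the evolution epoch I am trying to slot in has a familiar skeleton: evaluate, select, cross over, mutate, repeat. Here is a stripped-down sketch of that loop over plain weight vectors; real NEAT layers speciation and structural mutation on top of this, and none of the names below come from PicoEvo.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Bare skeleton of a generational epoch: evaluate, select, cross over, mutate.
public class EpochSkeleton {
    static final Random RNG = new Random(42);

    public static void main(String[] args) {
        List<double[]> pop = new ArrayList<>();
        for (int i = 0; i < 20; i++) pop.add(RNG.doubles(8, -1, 1).toArray());

        for (int gen = 0; gen < 50; gen++) {
            pop.sort(Comparator.comparingDouble(EpochSkeleton::fitness).reversed());
            List<double[]> next = new ArrayList<>(pop.subList(0, 5)); // elitism
            while (next.size() < pop.size()) {
                double[] a = pop.get(RNG.nextInt(10)); // breed from the top 10
                double[] b = pop.get(RNG.nextInt(10));
                next.add(mutate(crossover(a, b)));
            }
            pop = next;
        }
        pop.sort(Comparator.comparingDouble(EpochSkeleton::fitness).reversed());
        System.out.println("best fitness: " + fitness(pop.get(0)));
    }

    // toy fitness: prefer weights near 0.5 (a stand-in for a real task)
    static double fitness(double[] g) {
        double f = 0;
        for (double w : g) f -= (w - 0.5) * (w - 0.5);
        return f;
    }

    static double[] crossover(double[] a, double[] b) {
        double[] c = a.clone();
        for (int i = 0; i < c.length; i++) if (RNG.nextBoolean()) c[i] = b[i];
        return c;
    }

    static double[] mutate(double[] g) {
        double[] c = g.clone();
        c[RNG.nextInt(c.length)] += RNG.nextGaussian() * 0.1; // jitter one weight
        return c;
    }
}
```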

Neurotic Agents

I got this from Slashdot a couple days ago and I wanted to share it.

When you are playing a real-time strategy game (or any video game for that matter), the artificial intelligence you are playing against is usually a form of rule-based system. The AI is given large amounts of game information, has a complex set of rules it follows, and really would kick your butt every time if the game makers didn't dumb it down. Some recent research into emotional AI in game playing shows that a neurotic personality does best at playing a real-time strategy game; it even beats the AI that is tuned to be difficult for humans.

I wonder if an evolutionary agent could learn the emotions of this AI? Perhaps it could evolve an efficient neural network structure for neurotic game play. Could we separate the neurosis from the game rules in the NN? Some interesting questions; the article is from New Scientist (which is an awesome magazine).

Turkey Weekend

I hope everyone had a nice Thanksgiving weekend. I enjoyed two dinners, one on Sunday and one on Monday. I have many leftovers.

I also spent a lot of time working on my implementation of NEAT in Picoevo. Integrating NEAT into an evolutionary system like Picoevo is a very intricate process. Picoevo wasn't really designed to handle an algorithm like NEAT, though it is quite capable of it. I think there is more than one way to implement it, and I am following the approach that I think works right now. In the future I may revise the design to bring it more in line with the design of Picoevo.

One of the more interesting tasks of this project is deciding where each portion of NEAT belongs in the Picoevo environment. Picoevo makes heavy use of inheritance to stay flexible, and it has been designed to work with almost any type of genome; one just has to decide how to extend each element to support what is required. So in my NEAT implementation, each gene in the genome is an Element, each genome is an Individual composed of multiple types of Elements, and each Population holds multiple NEAT genomes. Crossover, speciation and innovation are controlled at the Population level, mutating genomes by adding links and nodes is controlled at the Individual level, and mutation of weights is controlled at the Element level.

Picoevo was designed well and is indeed flexible, but it wasn't really designed with the idea that each Individual or genome might have more than one type of gene or Element. So I had to be creative when extending the Individual class in Picoevo. I think the solution will work out fine. When I post the Neuroevolutionary Solver for public consumption I will talk about some of these design ideas more.
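Here is a rough sketch of that mapping in code. The base classes below (Element, Individual, Population) are simplified stand-ins named after the Picoevo concepts in this post; Picoevo's real signatures differ, so treat this as structure only.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Simplified stand-ins for Picoevo-style base classes (assumed, not actual).
abstract class Element { abstract void mutate(Random rng); }
abstract class Individual { abstract void mutateStructure(Random rng); }
abstract class Population { abstract void epoch(Random rng); }

// Each NEAT link gene is an Element; weight mutation lives at this level.
class LinkGene extends Element {
    int from, to, innovation;
    double weight;
    @Override void mutate(Random rng) { weight += rng.nextGaussian() * 0.1; }
}

// Node genes are a second Element type within the same genome.
class NodeGene extends Element {
    int id;
    @Override void mutate(Random rng) { /* node genes carry no weights */ }
}

// A NEAT genome is an Individual holding multiple Element types;
// add-link / add-node mutation lives at this level.
class NeatGenome extends Individual {
    List<NodeGene> nodes = new ArrayList<>();
    List<LinkGene> links = new ArrayList<>();
    @Override void mutateStructure(Random rng) {
        // add a new link, or split an existing link with a new node (omitted)
    }
}

// Crossover, speciation and innovation numbering live at the Population level.
class NeatPopulation extends Population {
    List<NeatGenome> genomes = new ArrayList<>();
    int nextInnovation = 0; // shared counter for new structural innovations
    @Override void epoch(Random rng) {
        // speciate, select and cross over here, then delegate mutation downward
        for (NeatGenome g : genomes) {
            g.mutateStructure(rng);
            for (LinkGene l : g.links) l.mutate(rng);
        }
    }
}
```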

Simbad: a quick introduction

Simbad is the 3D robot simulator I am using for my autonomous agent research. This is just a brief look at the Simbad interface and how you can interact with the simulation environment. We won't even peek at the really cool features, like its ease of use or its potential for Evolutionary Artificial Neural Network research!

The Simbad user interface.

If you click on the image you will see a larger copy of the picture. The large main window is the world view: the visualization of the 3D world your simulated robots traverse. It is roughly 20 meters x 20 meters, and the basic agents have a radius of 0.5 meters. Underneath the world window is the control window, which provides controls for the simulated environment. You can adjust the speed of the simulation, pause it, reset it, stop it and step through it. You can also snap your viewing angle to preset angles. If you want to adjust the view further, you can rotate and move the world image by left-click dragging and right-click dragging respectively.

The final windows are the watch windows. These provide a view into your agents' current states: you can see which sensors are firing, an agent's location, whether it has collided, and other options as well. Simbad also provides several different sensors for your robot. The most notable is a camera sensor (not shown) which actually renders a 2D view of the 3D world; your agent can perform edge finding on it and use the visual data to navigate.
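To show how little code it takes to drive an agent, here is a minimal sketch based on my reading of Simbad's documented Agent API (Agent, EnvironmentDescription, RobotFactory, RangeSensorBelt); double-check the method names against the Simbad javadoc before relying on them.

```java
import javax.vecmath.Vector3d;
import simbad.gui.Simbad;
import simbad.sim.Agent;
import simbad.sim.EnvironmentDescription;
import simbad.sim.RangeSensorBelt;
import simbad.sim.RobotFactory;

// A minimal Simbad agent: cruise forward, turn when a sonar fires.
public class Wanderer extends Agent {
    RangeSensorBelt sonars;

    public Wanderer(Vector3d position, String name) {
        super(position, name);
        sonars = RobotFactory.addSonarBeltSensor(this); // belt of range sensors
    }

    public void initBehavior() { }

    // called once per simulation step
    public void performBehavior() {
        if (sonars.oneHasHit()) {
            setTranslationalVelocity(0);        // obstacle near: stop and turn
            setRotationalVelocity(Math.PI / 4);
        } else {
            setTranslationalVelocity(0.5);      // cruise forward
            setRotationalVelocity(0);
        }
    }

    // the world description: one wanderer in an otherwise empty arena
    static class Env extends EnvironmentDescription {
        Env() {
            add(new Wanderer(new Vector3d(0, 0, 0), "wanderer"));
        }
    }

    public static void main(String[] args) {
        new Simbad(new Env(), false); // false = run with the GUI, not headless
    }
}
```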

I will provide a more detailed look at Simbad in the future on this blog.