If there is an algorithm for intelligence…

Then we could run it in about 50 atoms' worth of space. That is assuming that we could build the smallest state machine possible in that amount of space and then actually wire it up to some tiny interface. A few days ago, Alex Smith of Birmingham, UK proved that a 2-state, 3-symbol Turing machine proposed by Stephen Wolfram is universal, making it the smallest possible universal state machine.

This really does have some significant impact. While I don’t think we would run the algorithm for intelligence on this particular state machine, we could. We could in fact run any program at all on this state machine and have it input and output any possible string of information. You can think of this as the smallest possible independent microprocessor. This could be a significant step in the advancement of massively parallel sensor networks.
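
To make that concrete, a machine this small is trivial to simulate. Below is a minimal sketch in Java of a two-state, three-symbol Turing machine stepper. The rule table is a placeholder for illustration only; the actual transition table Smith proved universal is published by Wolfram Research.

    // A minimal 2-state, 3-symbol Turing machine stepper. The rule table is
    // illustrative only, NOT the actual table Smith proved universal.
    import java.util.HashMap;
    import java.util.Map;

    public class TinyMachine {
        // RULE[state][symbol] = {nextState, symbolToWrite, move (-1 left, +1 right)}
        static final int[][][] RULE = {
            { {1, 1, +1}, {0, 2, -1}, {0, 1, -1} }, // state 0 reading 0, 1, 2
            { {0, 2, -1}, {1, 2, +1}, {0, 0, +1} }, // state 1 reading 0, 1, 2
        };

        public static void main(String[] args) {
            Map<Integer, Integer> tape = new HashMap<>(); // sparse tape, blank = 0
            int head = 0, state = 0;
            for (int step = 0; step < 50; step++) { // the 2,3 machine has no halt state
                int symbol = tape.getOrDefault(head, 0);
                int[] r = RULE[state][symbol];
                state = r[0];
                tape.put(head, r[1]);
                head += r[2];
            }
            System.out.println("Cells written after 50 steps: " + tape.size());
        }
    }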

Think of this: you construct a piece of e-paper made up of these tiny little state machines. You connect each one to its neighbours in all eight directions using carbon nanotube circuitry, and you connect the top layer to a set of output machines, something like the pixels in an LCD. Now bind a few of these sheets together into a magazine, put some more complex circuitry into the spine of the book along with a power source (probably an external layer of solar energy molecules), and you have yourself an extremely powerful parallel computer that looks something like a book but is capable of far more: completely dynamic content, all controlled through a massively distributed network of basic microprocessor state machines.

I can't take credit for this idea; it is from "The Diamond Age", a very good novel about nanotechnology. What is fantastic in that book is a step closer to reality thanks to this proof and several recent advances in nanotechnology.

Another common theme from the same book is how ubiquitous these massively parallel systems could be. They could consist of countless billions of tiny sensor nodes distributed through the air. Because they are so small (about the size of dust), you could breathe them in without destroying a significant portion of the network, yet the processing power in each is universal and the power of the entire system is extraordinary. Each node could perform complex sensing tasks and transmit its readings through the network back to its home base. These truly would be automatons adrift!
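
To give a flavour of that eight-way wiring, here is a small Java sketch of one synchronous update pass over such a grid, where every cell reads its eight neighbours. The update rule is a trivial stand-in; in the e-paper idea each cell would run a universal machine instead.

    // Illustrative only: one synchronous update of a grid in which every
    // cell reads its eight neighbours, as in the e-paper idea above. The
    // rule is a trivial stand-in (majority vote of the neighbourhood).
    public class CellGrid {
        static int[][] step(int[][] grid) {
            int h = grid.length, w = grid[0].length;
            int[][] next = new int[h][w];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    int sum = 0;
                    for (int dy = -1; dy <= 1; dy++) {
                        for (int dx = -1; dx <= 1; dx++) {
                            if (dx == 0 && dy == 0) continue; // skip the cell itself
                            sum += grid[(y + dy + h) % h][(x + dx + w) % w]; // wrap edges
                        }
                    }
                    next[y][x] = (sum >= 4) ? 1 : 0; // stand-in rule
                }
            }
            return next;
        }
    }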

Of course the algorithms to do that efficiently don’t really exist yet, but they are being worked on.

I know it seems pretty wild, but that is just one example. When you distribute a program's processing across millions of tiny universal state machines, you can cut the processing time down to a much smaller value. This would require a completely new direction in programming, but it is entirely possible. I originally found this story on Slashdot, thanks guys!

Whew… Blogging Breakdown

What happens when you work a full-time job, rack up lots of overtime, and have a major deadline in your thesis work that requires you to code your butt off? You get a breakdown in the number of cool blog posts you get to put up.

I am working out some loose ends in my integration of NEAT into PicoEvo and Simbad. I am at the point where I have to integrate the genetic operators and the evolution epoch into the algorithm, and putting it together the right way is tricky. I suspect I will wind up just slamming it together so it works; then I can pull it apart and put it back together the right way after I meet my deadline!

Neurotic Agents

I got this from Slashdot a couple days ago and I wanted to share it.

When you are playing a real-time strategy game (or any video game, for that matter), the artificial intelligence you are playing against is usually some form of rule-based system. The AI is given large amounts of game information, follows a complex set of rules, and really would kick your butt every time if the game makers didn't dumb it down. Some recent research into emotional AI in game playing shows that a neurotic personality does best at playing a real-time strategy game; it even beats the AI that is tuned to be difficult for humans.

I wonder if an evolutionary agent could learn the emotions of this AI? Perhaps it could evolve an efficient neural network structure for neurotic game play. Could we separate the neurosis from the game rules in the NN? Some interesting questions. The article is from New Scientist (which is an awesome magazine).

Turkey Weekend

I hope everyone had a nice Thanksgiving weekend. I enjoyed two dinners, one on Sunday and one on Monday. I have many leftovers.

I also spent a lot of time working on my implementation of NEAT in Picoevo. Integrating NEAT into an evolutionary system like Picoevo is a very intricate process. Picoevo wasn't really designed to handle an algorithm like NEAT, though it is quite capable of it. I think there is more than one way to implement it, and I am following the approach that I think works right now. In the future I may revise the design to bring it more in line with the design of Picoevo.

One of the more interesting tasks of this project is deciding where each portion of NEAT belongs in the Picoevo environment. Picoevo uses inheritance heavily to stay flexible, and it has been designed to work with almost any type of Genome; one just has to decide how to extend each element to support what is required. So in my NEAT implementation, each gene in the genome is implemented as an Element, each genome is an Individual composed of multiple types of Elements, and each Population holds multiple NEAT genomes. Crossover, speciation and innovation are controlled at the Population level, mutating genomes by adding links and nodes is controlled at the Individual level, and mutation of weights is controlled at the Element level, as the sketch below illustrates.
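
A rough sketch of that layering in code is below. The base class names (Element, Individual, Population) mirror the Picoevo hierarchy described above, but every method name and NEAT subclass here is hypothetical, not Picoevo's actual API.

    // Hypothetical sketch of the NEAT-on-Picoevo layering described above;
    // the method names and NEAT subclasses are illustrative, not the real API.
    abstract class Element { abstract void mutate(); }
    abstract class Individual { abstract void mutateStructure(); }
    abstract class Population { abstract void epoch(); }

    class NEATLinkGene extends Element {
        double weight;
        int innovation; // innovation number, assigned at the Population level
        void mutate() { weight += (Math.random() - 0.5) * 0.1; } // weight mutation lives here
    }

    class NEATGenome extends Individual {
        java.util.List<Element> genes = new java.util.ArrayList<>();
        void mutateStructure() { /* add-link and add-node mutations live here */ }
    }

    class NEATPopulation extends Population {
        java.util.List<NEATGenome> genomes = new java.util.ArrayList<>();
        void epoch() { /* crossover, speciation and innovation tracking live here */ }
    }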

Picoevo was designed well and is indeed flexible, but it wasn't really designed with the idea that each Individual (genome) might contain more than one type of gene (Element). So I had to get creative when extending the Individual class in Picoevo. I think the solution will work out fine. When I post the Neuroevolutionary Solver for public consumption I will talk about some of these design ideas more.

Simbad: a quick introduction

Simbad is the 3D robot simulator I am using for my autonomous agent research. This is just a brief look at the Simbad interface and how you can interact with the simulation environment. We won't even peek at the really cool features, like its ease of use or its potential for evolutionary artificial neural network research!

The Simbad user interface.

If you click on the image you will see a larger copy of the picture. The large main window is the world view: the visualization of the 3D world your simulated robots traverse. It is roughly 20 meters by 20 meters, and the basic agents have a radius of 0.5 meters. Underneath the world window is the control window. This interface provides controls for the simulated environment: you can adjust the speed of the simulation, pause it, reset it, stop it, and step through it. You can also snap your viewing angle to preset angles. If you want to adjust the view further, you can rotate and move the world image by left-click dragging and right-click dragging respectively.

The final windows are the watch windows. These provide a view into your agents' current states: you can see which sensors are firing, an agent's location, whether it has collided, and other details as well. Simbad also provides several different sensors for your robot. The most notable is a camera sensor (not shown), which actually renders a 2D view of the 3D world; your agent can perform edge finding on it and use the visual data to navigate.
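
For the curious, driving an agent in Simbad follows roughly the pattern below, adapted from the style of Simbad's bundled demos (check the distribution itself for the exact signatures): you subclass Agent, override performBehavior(), and hand an environment description to the GUI.

    // Roughly the pattern from Simbad's bundled demos: an environment
    // containing one agent that drives forward and turns at random.
    import javax.vecmath.Vector3d;
    import simbad.gui.Simbad;
    import simbad.sim.Agent;
    import simbad.sim.EnvironmentDescription;

    public class MyEnv extends EnvironmentDescription {
        public MyEnv() {
            add(new MyRobot(new Vector3d(0, 0, 0), "robot 1"));
        }

        static class MyRobot extends Agent {
            MyRobot(Vector3d position, String name) { super(position, name); }

            // Called once per simulation step.
            public void performBehavior() {
                setTranslationalVelocity(0.5);  // metres per second
                if (getCounter() % 100 == 0)    // pick a new heading now and then
                    setRotationalVelocity(Math.PI / 2 * (0.5 - Math.random()));
            }
        }

        public static void main(String[] args) {
            new Simbad(new MyEnv(), false); // false = run with the 3D display
        }
    }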

I will provide a more detailed look at Simbad on this blog in the future.

The Founding Problem

When I really started working on my research several months ago, I began bringing environments together, testing various technologies, and trying to find the right combination of tools for my work. I could have written my own tools completely, but that would have taken far more time, and there's no need to reinvent the wheel. The first tool I went looking for was a virtual robot simulator. There are many tools out there and I won't go over them all, but I eventually settled on Simbad (I have included the link to the right). It is a nicely coded 3D robot simulator, easily extensible and written in Java, a language I am comfortable with. So this worked for me.

I won't talk about all the other tools I am using; I am going to leave that for a history page. What I will do is talk about the very first problem I ran into with the Simbad simulator. This problem made me so angry that I spent a week on it. I tried variations in my code, researched what others had done, and read through multiple texts on 3D graphics programming that I purchased as a teenager. This problem is directly responsible for me wanting to blog my work.

I needed to make dumb agents (robots) that can turn towards their goal and walk toward it in a straight line.

Sounds easy, right? It wasn't so easy for me. These dumb agents have to behave in a believable way, almost as if they were humans walking in a hallway or a busy room. They are required to simulate a training environment for my learning agents. This task seems simple for you and me, but for a dumb robot it is quite difficult.

I first had to give the agents some ability to know where a goal is in their environment. That is simple enough: give them a goal point in the world and the means to check their own location in the world. This is the kind of thing we can do with GPS today, so it is fine if a wanna-be-human robot can do it. As a human, how can you tell that you are pointing towards your goal? Well, you can look at your compass and it will tell you if the direction is incorrect. That boils down to plotting a straight line from your point to the goal and checking whether you are pointing parallel to that line. So now you know whether or not you are going in the right direction; which way do you turn if you aren't? You turn the shortest amount needed to become parallel with the proper direction. But how do you know which way to turn? And even if your direction is parallel to the line between you and your goal, how do you know you aren't heading directly away from it? We relate our directions to north, but the dumb agents don't really have that luxury.

To solve these problems you have to calculate the distance to the goal. That way, if your distance to the goal is increasing, you know you are going the wrong way and should turn towards the goal. Which way to turn, though? Did this even work? No, it didn't. The distance calculation comes in handy, but it isn't really a good judge of when to turn or which way to turn. Initially I just told the robot to turn right, and if it detected it was getting further away from the goal, to turn left. This is great until you are heading directly away from the goal. Then you hop back and forth between left and right turns and never achieve anything. Humans may do this, if they are drunk. Not what I need for my test environment.

So I started calculating the angle between the line from my current position to the goal and the line from my projected next position to the goal. When these angles dropped below 0.1 radians, I knew I was heading in the right direction and could just keep going that way. But I still didn't know which way was the shortest turn, and I really couldn't get the agent to turn completely towards its goal. It would walk in a big circle around its goal or meander towards it; the inaccuracy of calculating lines just wasn't working. So I broke out the linear algebra texts and started reading. If you calculate the dot product, you can tell whether your angle to another point is acute or obtuse. That proved to be handy; I would only run into problems at the 90, 180 and 270 degree marks, but if you test for all cases of the dot product and wrap some other logic around it, you can force the agents to turn completely towards their goal. Now we are getting somewhere, but they still always turn right.
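
In code, the dot product test from the paragraph above comes down to a couple of lines. This is just a sketch with illustrative names, not my actual solver code:

    // Sketch of the dot product test described above. (hx, hy) is the
    // vector the agent is heading along; (gx, gy) points from the agent
    // to the goal. dot > 0 means the angle between them is acute (goal
    // ahead); dot < 0 means obtuse (goal behind); dot == 0 is exactly 90.
    final class GoalTest {
        static double dot(double hx, double hy, double gx, double gy) {
            return hx * gx + hy * gy;
        }

        // Angle between the two vectors, from the normalized dot product.
        static double angleBetween(double hx, double hy, double gx, double gy) {
            double cos = dot(hx, hy, gx, gy) / (Math.hypot(hx, hy) * Math.hypot(gx, gy));
            return Math.acos(Math.max(-1.0, Math.min(1.0, cos))); // clamp for rounding error
        }
    }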

Well, after much googling I found a very handy tidbit on a basic 2D graphics website (http://www.geocities.com/SiliconValley/2151/math2d.html), and I want to share it with you, because if anyone out there is feeling this problem like I did, this will make you very happy.

clockwise = ((currentPosition.x - lastPosition.x) * (goal.y - lastPosition.y))
          - ((currentPosition.y - lastPosition.y) * (goal.x - lastPosition.x))

What this tells you is whether the path formed by the last position, the current position, and the goal winds in a clockwise or counter-clockwise order. So no matter where you are in the X,Y plane, if this calculation is < 0 you want to turn counter-clockwise (since the winding of the lines is clockwise), and vice versa if it is > 0.

Implement these components in the right order, accounting for the 90 degree and 180 degree marks, and you will have a nice little robot that turns the shortest (and most human) way towards the goal every time and is very efficient about getting back on track to the goal, even after being bumped out of the way.
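
Put together, the decision logic ends up looking roughly like the sketch below, using the sign convention from the formula above (again illustrative, not my exact solver code):

    // The cross product picks the turn direction, and the angle test decides
    // when the agent is aligned enough (within 0.1 radians, as in the text)
    // to just drive straight.
    final class TurnDecision {
        // 2D cross product of (last -> current) and (last -> goal).
        // Per the convention above: < 0 means turn counter-clockwise,
        // > 0 means turn clockwise, 0 means the three points are collinear
        // (pointing straight at the goal, or dead away from it).
        static int turnDirection(double lastX, double lastY,
                                 double curX, double curY,
                                 double goalX, double goalY) {
            double cross = (curX - lastX) * (goalY - lastY)
                         - (curY - lastY) * (goalX - lastX);
            if (cross < 0) return -1; // counter-clockwise
            if (cross > 0) return +1; // clockwise
            return 0;
        }

        // True when the heading (hx, hy) is within 0.1 radians of the
        // goal vector (gx, gy).
        static boolean aligned(double hx, double hy, double gx, double gy) {
            double cos = (hx * gx + hy * gy) / (Math.hypot(hx, hy) * Math.hypot(gx, gy));
            return Math.acos(Math.max(-1.0, Math.min(1.0, cos))) < 0.1;
        }
    }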

That took me forever to get working, and it was just a simple little problem. It really made me want to talk about what I did to solve the problem and some of the thought process. Will it solve your problem? I don’t know, but I hope you enjoyed reading about my pain.

This process has just begun…

I don't really know what I will be writing here. I decided I would start to blog about my research and the various nifty problems I encounter every time I try something new. Maybe I will just talk about that; maybe I will talk about other things. If you are reading this and I haven't told you to come here, then you probably don't even have a clue who I am.

I will hopefully put up a page about myself in the next few days and a page about my research shortly after that; then we can start talking about the fun things.