Herself's Artificial Intelligence

Humans, meet your replacements.

Archive for the ‘topics in artificial intelligence’ Category

Braitenberg Vehicles


Synthetic psychology is a field in which biological behavior is synthesized rather than analyzed. The father of the field, Valentino Braitenberg ( home page ), did some interesting work with toy vehicles in his 1984 book "Vehicles: Experiments in Synthetic Psychology".

The Braitenberg vehicles were simple toy cars with light sensors in place of headlights. In some cars the wire from each sensor ran to the rear wheel directly behind it; in others the wires crossed to the opposite rear wheel. The sensors were aimed slightly outward, and the more light a sensor received, the faster the wheel wired to it would turn.

Each vehicle exhibits realistic behavior when placed on a plane with several randomly placed light sources. A vehicle with straight wiring placed near a light source rushes toward the light, veering away as it gets closer; as it gets farther from the light sources it slows down. The reason is that the wheel wired to the sensor receiving the most light spins fastest, turning the car away from the light source.

The vehicle with the crossed wires turns and drives toward the brightest light in its field of view. The closer it gets, the faster it goes, eventually running into the light.

Pretty interesting behavior from a very simple machine. But what if we add a neurode to each wire, and instead of a plain wire use one that inhibits signals? A neurode fires only if it receives a signal above a certain threshold; here the threshold is zero. Now our cars send a signal to the wheels when there is no light, and send nothing when there is light. The straight-wired vehicle moves toward a light and slows as it approaches, coming to rest facing the light. The crossed-wire vehicle now avoids light sources, speeding off to the darkest corner it can find.
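To make the wiring concrete, here is a minimal Python sketch of one update step. It is mine, not from the linked ObjC simulations, and it assumes a single light source, an inverse-square sensor response, and a simple differential drive; crossed=True selects the crossed wiring and inhibit=True swaps the plain wires for inhibitory ones.

```python
import math

def intensity(sx, sy, lx, ly):
    """Light intensity at a sensor, falling off with squared distance."""
    return 1.0 / (1.0 + (sx - lx) ** 2 + (sy - ly) ** 2)

def step(x, y, heading, lx, ly, crossed=False, inhibit=False, dt=0.1):
    """Advance the vehicle one time step and return its new pose."""
    # The two sensors sit at the front corners, aimed slightly outward.
    sl = intensity(x + math.cos(heading + 0.3), y + math.sin(heading + 0.3), lx, ly)
    sr = intensity(x + math.cos(heading - 0.3), y + math.sin(heading - 0.3), lx, ly)
    if crossed:
        sl, sr = sr, sl                # each sensor drives the opposite wheel
    if inhibit:
        sl, sr = 1.0 - sl, 1.0 - sr    # inhibitory wire: more light, slower wheel
    left_wheel, right_wheel = sl, sr
    heading += (right_wheel - left_wheel) * dt   # wheel-speed difference turns the car
    speed = (left_wheel + right_wheel) / 2.0
    return (x + speed * math.cos(heading) * dt,
            y + speed * math.sin(heading) * dt,
            heading)

# Straight wiring veers away from the light; crossed wiring runs into it.
x, y, h = 5.0, 2.0, math.pi
for _ in range(500):
    x, y, h = step(x, y, h, 0.0, 0.0, crossed=True)
```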

Using thresholds ( see logic version ) you begin to see group behavior: one set of vehicles herds and traps another set.

So what does all this have to do with current artificial intelligence? Some of our best techniques today came from early work that was set aside for years, just as some of our best mathematical algorithms come from very old mathematics. It is also a reminder ( to you and to me ) that simple rules can create very complex behavior in game characters and artificial life forms.

Four-neurode versions of these vehicles have been built, and they exhibit more complex, non-linear behavior. "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." (Edsger Dijkstra) At what point does the behavior become realistic enough to be considered a life form?

Source Code:
ObjC simulation of Braitenberg Vehicles (#2 fear and aggression)
ObjC simulation of Braitenberg Vehicles (#3 love, explore and complex)
ObjC simulation of Braitenberg Vehicles (#4 values and tastes)
ObjC simulation of Braitenberg Vehicles (#5 Logic)

More information:
Braitenberg Vehicles (Java and Lisp simulators here)

Papers:
Swarm modelling. The use of Swarm Intelligence to generate architectural form. (pdf)

Braitenberg vehicles: fear and aggression

Written by Linda MacPhee-Cobb

June 5th, 2014 at 5:00 am

Lecture notes on Network Information Theory


If you are interested in network information theory you might want to check out this pdf of combined lecture notes from several graduate classes.

Network information theory deals with the fundamental limits on information flow in networks and optimal coding techniques and protocols that achieve these limits. It extends Shannon’s point-to-point information theory and the Ford–Fulkerson max-flow min-cut theorem to networks with multiple sources and destinations, broadcasting, interference, relaying, distributed compression and computing. Although a complete theory is yet to be developed, several beautiful results and techniques have been developed over the past forty years with potential applications in wireless communication, the Internet, and other networked systems.
This set of lecture notes, which is a much expanded version of lecture notes used in graduate courses over the past eight years at Stanford, UCSD, CUHK, UC Berkeley, and EPFL, aims to provide a broad coverage of key results, techniques, and open problems in network information theory. The lectures are organized in a “top-down” manner into four parts: background, single-hop networks, multi-hop networks, and extensions. The organization attempts to balance the introduction of new techniques and new models. Extensions (if any) to many users and large networks are discussed throughout. The lecture notes provide a unified, simplified, and formalized treatment of achievability using a few basic lemmas. The proofs in the lecture notes use elementary tools and techniques, which should make them accessible to graduate students in EE, CS, Statistics, and related fields as well as to researchers and practitioners in industry.

Network Information Theory Notes

pdf only

More Information:
First improvement to max flow algorithm in 10 years announced by MIT

Written by Linda MacPhee-Cobb

January 21st, 2010 at 9:13 am

Sparse Distributed Memory


Sparse distributed memory was introduced by Pentti Kanerva in 1988 as a model of human long-term memory. The main idea is that distances between concepts in our brains can be represented as distances between points in a high-dimensional space. Because randomly chosen points in such a space tend to lie far apart, concepts are well separated, and an input only has to land closer to one stored concept than to any other for a match to be made.

For example, if I give each letter of the alphabet its own dimension and map words by letter position, then 'aplle' is closer to 'apple' (one letter off) than to 'ample' (two letters off), and I can guess the correct word for the mistyped one. Likewise, a tall, spotted, four-legged creature is closer to a giraffe than to a short-legged spotted leopard.

Storing data this way means you need far fewer stored examples to match against, letting you hold more information in a much smaller memory footprint.

All input in sparse distributed memory is represented in binary form. The algorithm works by calculating the Hamming distance between the input and the examples already stored in memory.
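As a toy illustration of that matching step, here is a minimal Python sketch of my own; a real sparse distributed memory spreads each pattern across many hard storage locations, which this skips entirely. It simply returns the stored bit pattern with the smallest Hamming distance to the query.

```python
def hamming(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def nearest(query, memory):
    """Return the stored pattern closest to the query in Hamming distance."""
    return min(memory, key=lambda stored: hamming(query, stored))

memory = ["0110100", "1110001", "0001011"]
print(nearest("0110101", memory))   # "0110100" wins, at distance 1
```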

Books:
Reinforcement Learning: An Introduction (pdf download)

More information:
Sparse Distributed Memory
Kevin Kelly ‘Out of Control’ Chapter 2
Sparse Distributed Memory and Related Models Chapter 3 (pdf)

Papers:
Kanerva’s Sparse Distributed Memory: An associative memory algorithm well suited to the Connection Machine (pdf) (excellent starting point)
Kanerva’s Sparse Distributed Memory, Multiple Hamming Thresholds (pdf)

Written by Linda MacPhee-Cobb

May 8th, 2008 at 5:00 am

Backward induction


All games and all competitions can be represented by trees. Each node represents a place where a decision is made, and each edge represents a decision that can be made from that node. One of the simplest games we all know is tic-tac-toe. The game tree for tic-tac-toe has a root node with 9 edges, one for each of the 9 positions player one can play. Each node at the second level has 8 edges, one for each of the remaining 8 positions open to player two. If every game ran the full nine moves, the tree would contain 9! = 362,880 paths from start to finish; in practice there are fewer, since play stops as soon as someone wins. So while tic-tac-toe is within the realm of games a computer can fully solve while playing, most games are not. One method used in artificially intelligent games to handle this problem is backward induction.
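If you want to check those counts yourself, here is a small Python sketch (mine, not from any of the linked sources) that walks the full game tree. All 9! move orderings number 362,880; ending each game at the first completed line leaves 255,168 distinct games.

```python
import math

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return the player holding a completed line, or None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board, player):
    """Count completed games reachable from this position."""
    if winner(board) is not None or all(v is not None for v in board):
        return 1                      # a win or a full (drawn) board
    total = 0
    for i in range(9):
        if board[i] is None:
            board[i] = player
            total += count_games(board, 1 - player)
            board[i] = None
    return total

print(math.factorial(9))              # 362880 move orderings
print(count_games([None] * 9, 0))     # 255168 games that stop at a win
```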

For example: you get a job transfer to Mars for ten years, and there are two jobs on Mars open to your spouse. One pays $60/year, the other $100/year. You know that at the end of the ten years you will both be shipped back to Earth.

A lottery is held every year for the job openings: half the offers are for the $60/year job, half for the $100/year job. Once your spouse commits to a job, it is theirs until the trip home. At what point should your spouse stop holding out for the $100/year job and take the $60/year job?

In year 10 the possibilities are $100 for the good job, $60 for the bad job, or $0 for holding out, so the $60 job should be accepted if that is what the lottery offers.

In year 9 the possibilities are $200 for the $100/year job and $120 for the $60/year job. If the spouse holds out, there is a 50% chance of either job in year 10, worth 50% × ( $100 + $60 ) = $80. We then have $200/$120/$80: take the bad job.

In year 8 we have possible earnings of $300/$180, while holding out is worth 50% × ( $200 + $120 ) = $160. The bad job is a better choice than holding out.

In year 7 we have possible earnings of $400/$240, while holding out is worth 50% × ( $300 + $180 ) = $240. Taking the bad job and holding out are now worth exactly the same.

In year 6 and earlier it is better to hold out and hope to do better in the next year's job lottery; in year 6, for example, holding out is worth 50% × ( $400 + $240 ) = $320 against $300 for the bad job. From year 8 on it is better to take whichever job comes up.
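The whole table can be computed in a few lines. Here is a minimal Python sketch (mine, not from the post) that works backward from year 10; it reproduces the numbers above, including the exact tie in year 7.

```python
# hold is the expected pay of a spouse still jobless after year y's
# lottery; every offer is 50/50 for the good or the bad job.
YEARS, GOOD, BAD = 10, 100, 60

hold = 0.0                                  # nothing left after year 10
for year in range(YEARS, 0, -1):
    remaining = YEARS - year + 1
    take_good = GOOD * remaining            # always worth accepting
    take_bad = BAD * remaining
    choice = "take either job" if take_bad >= hold else "hold out for $100"
    print(year, take_good, take_bad, round(hold), choice)
    # expected value of still being jobless going into year y's lottery
    hold = 0.5 * take_good + 0.5 * max(take_bad, hold)

# Years 8-10 say "take either job", year 7 is an exact tie ($240 either
# way), and years 1-6 say "hold out", matching the hand calculation.
```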

Now there are some big assumptions here. One is that everyone has all the information needed to make the decision. Another is that your spouse won't go stir-crazy sitting on Mars with nothing to do. A further assumption, which can lead to trouble, is that backward induction assumes the players do not collude with each other.

In games you would use the final payoffs for each player to do your backward induction. In business, backward induction is commonly used to guess what the competition will do in response to your moves in the market. While backward induction has been very successful in AI and in politics, it won't solve every game-tree problem, because we don't always have all the information, and there's more to life than a paycheck.

Papers:
Is good algorithm for computer players also good for human players?

Written by Linda MacPhee-Cobb

January 28th, 2008 at 5:00 am

Cellular Automata


In Wolfram’s book “A New Kind of Science” he studies cellular automata. What is also interesting is the approach he takes: rather than starting from something we know and trying to figure out its rules, he tries different rules and sees what they create. While the book was badly received when it came out, others have recently found great success using this approach.

Wolfram was hardly the first to use cellular automata to mimic real life. Martin Gerhardt and Heike Schuster created ‘HodgePodge’, a cellular automaton that mimics a chemical reaction, and explained it as a discrete form of a differential equation. Since we can describe most natural phenomena with differential equations, that may explain why we have not done more with cellular automata: we simply haven't needed them. We may yet find uses for them in things we cannot easily describe with differential equations.

For example, cellular automata have found success in artificial intelligence in solving ecohydraulics modeling problems. They are also being used in pattern recognition, image processing, fluid mechanics, and bioinformatics.

Cellular automata are simple rules applied over and over. While one would expect simple, repetitive patterns to emerge, non-repetitive and complex patterns also appear in some cases. The two examples below use only two colors and simple rules; three colors lead to many more complex designs.

Nested: if either neighbor, but not both, in the previous row is black, color this cell black; otherwise color it white.

Irregular: if the cell and its right neighbor in the previous row are both white, the cell takes the same color as its left neighbor in the previous row; otherwise it takes the opposite of its left neighbor's color.
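Both rules are one-dimensional, two-color cellular automata, and a few lines of Python reproduce them. This is my sketch; in Wolfram's numbering the nested rule is rule 90 and the irregular rule is rule 30.

```python
# 1 is a black cell, 0 a white one.

def step(row, rule):
    """Apply an elementary CA rule to one row, with white boundaries."""
    padded = [0] + row + [0]
    return [(rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

def run(rule, width=31, generations=15):
    row = [0] * width
    row[width // 2] = 1                       # start from one black cell
    for _ in range(generations):
        print("".join("#" if cell else "." for cell in row))
        row = step(row, rule)

run(90)   # nested: black iff exactly one neighbor was black
run(30)   # irregular: the rule described above
```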

Source code:
Java automata examples

More information:
A New Kind of Science, Talk at US by Stephen Wolfram
CelLab, Cellular Automata Laboratory
A New Kind of Science (book is available online and free)
Some of the recent controversy about Wolfram’s book
Fractal Geometry
Cellular Automata Links
An Introduction to Lindenmayer Systems (related subject)
The Primordial Soup Kitchen
One dimensional cellular automata Java applet
A weakly universal cellular automaton in the hyperbolic 3D space with three states

Written by Linda MacPhee-Cobb

January 18th, 2008 at 5:00 am