Last updated: 2004-02-20
The subsumption architecture
Introduced in: 0.3
Developed by: Treefinger
Latest iteration in: 0.3
Latest iteration by: Treefinger
The subsumption architecture was introduced in "A Robust Layered Control System for a Mobile Robot" by Rodney Brooks in 1985. It describes an architecture based on small behaviours and the connections between them. Connections transfer data between behaviours, and each behaviour decides what data to send out. At the start of the chain there are sensors that sense the world, and at the end of the chain there are actuators that make the robot act. Taken out of their context, the individual behaviours can be very simple; Brooks's idea was that by connecting several simple behaviours, a more complex behaviour would emerge. The architecture was foremost intended for mobile robots that are situated in the world and embodied, which means "real robots".
Haphazard doesn't implement the subsumption architecture for embodied robots, but is quite content to keep to simulated robots in a simulated world. For more information on the subsumption architecture, please read the links to Brooks's work posted above.
When this documentation was written, Haphazard implemented two small behaviours for testing: Avoid and MoveAction.
Implemented data types
Data types are small wrappers that implement the merge method. The data they carry can be singular or composite.
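The shape of such a wrapper might look like the following sketch. The class name VectorData and the component-wise merge rule are illustrative assumptions, not Haphazard's actual code:

```python
# Hypothetical sketch of a data type wrapper; Haphazard's real
# classes and merge rules may differ.

class VectorData:
    """A singular data type wrapping a 3D vector, with a merge method."""

    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def merge(self, other):
        # One possible merge rule: component-wise sum of the two vectors.
        return VectorData(self.x + other.x,
                          self.y + other.y,
                          self.z + other.z)
```

A composite data type would follow the same pattern, delegating merge to each of its parts.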
A small net was constructed to test this first version of the subsumption architecture. It uses a small sensor that scans the environment around the agent up to a certain range and sends out a mathematical 3D vector giving the relative location of the closest object.
The vector makes its way into the Avoid behaviour through a connection, where it is inverted and sent out again.
The vector then reaches the MoveAction behaviour, which issues a move in the direction of the vector, making the agent run away from anything that comes near.
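The little test net described above could be sketched as follows. All names here (avoid, move_action, the move command tuple) are illustrative assumptions, not Haphazard's actual API:

```python
# Illustrative sketch of the sensor -> Avoid -> MoveAction chain.

def avoid(vector):
    """Invert the sensed vector so it points away from the closest object."""
    return tuple(-c for c in vector)

def move_action(vector):
    """Issue a move command in the direction of the given vector."""
    return ("move", vector)

# The sensor reports the closest object one unit ahead on the x axis...
sensed = (1.0, 0.0, 0.0)
# ...so the agent is told to move one unit backwards, away from it.
command = move_action(avoid(sensed))
```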
A simple way to extend this is to add the Wander behaviour and merge its output with the output of Avoid, thus generating an agent that wanders around randomly, avoiding objects as it goes along.
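One plausible way to merge the two outputs is to simply add the direction vectors together, so that Avoid dominates when an object is close and Wander supplies random drift otherwise. This is a sketch under assumed names; the actual merge rule may differ:

```python
import random

def wander():
    """Produce a small random direction vector (assumed behaviour)."""
    return tuple(random.uniform(-1.0, 1.0) for _ in range(3))

def merge(a, b):
    # Assumed merge rule: component-wise sum of the two direction vectors.
    return tuple(x + y for x, y in zip(a, b))

# Avoid is pushing the agent away from a nearby object on the x axis;
# Wander perturbs that direction a little.
avoid_out = (-1.0, 0.0, 0.0)
direction = merge(avoid_out, wander())
```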
The test agent we are planning to implement will use the subsumption architecture shown in the figure. All the leftmost behaviours will be fed sensor data from the environment. The sensor data tells the agent what the nearest object is and where it is located. The behaviours Avoid, MoveAction and Wander are already implemented and function well. The next layer will be able to pick up and put down things the agent sees in the environment. The eatable sensor is a kind of nose that tells the agent whether the thing it is holding in its hands is eatable or not. Put-down puts anything that the agent is holding in its hands back on the ground.
The layer after that is the eat and feelings layer. Feelings let the agent experience states like hunger. If the agent is hungry, it wanders around looking for things to pick up, and if they are eatable it eats them. The eatenSensor tells the environment and the feelings behaviour that the agent has consumed the eatable thing.
The memory and pathplan layer lets the agent remember which things are eatable and where to find them. Memory collects information from the environment's input data and from the eatable sensor. When the agent feels hunger, the memory behaviour checks whether it knows about any food. If it does, it gives the coordinates of the food to pathplan, which constructs a vector in the direction of the food.
The memory will be limited, and the items in memory will be prioritized in a FIFO (first in, first out) manner. However, if an item has been used to find food, it will be moved to the first place in the queue.
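A bounded memory with this FIFO-plus-promotion policy could be sketched like so. The class and method names are assumptions for illustration, and "first place in the queue" is interpreted here as highest priority, i.e. forgotten last:

```python
from collections import OrderedDict

class FoodMemory:
    """Bounded memory of food locations (hypothetical sketch).

    The oldest entries are forgotten first, but an entry that has been
    used to find food is promoted and is forgotten last.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()  # thing -> coordinates

    def remember(self, thing, coords):
        self.items[thing] = coords
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # forget the oldest entry

    def recall(self, thing):
        """Return the coordinates of a known thing and promote it."""
        if thing not in self.items:
            return None
        # Used to find food: treat as most recently added, so it is
        # the last entry to be forgotten.
        self.items.move_to_end(thing, last=True)
        return self.items[thing]
```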
If the agent does not know about any food, the other layers take over: it wanders around randomly, picking up things and eating them if they are eatable.