Theory and documentation


Last updated: 2004-02-20

The subsumption architecture

    Status: Alpha
    Introduced in: 0.3
    Refactorings: 0
    Developed by: Treefinger
    Latest iteration in: 0.3
    Latest iteration by: Treefinger


The subsumption architecture was introduced by Rodney Brooks in "A robust layered control system for a Mobile Robot" (1985). It describes an architecture built from small behaviours and the connections between them: connections transfer data between behaviours, and each behaviour decides what data to send out. At the start of the chain there are sensors that sense the world, and at the end of the chain there are actuators that make the robot do things. Taken out of their context, the behaviours can be very simple; Brooks's idea was that by connecting several simple behaviours, a more complex behaviour will emerge. The architecture was foremost intended for mobile robots that are situated in the world and embodied, which means "real robots".

Haphazard implementation

Haphazard doesn't implement the subsumption architecture for embodied robots; it is quite content to keep to simulated robots in a simulated world. For more information on the subsumption architecture, please follow the links to Brooks's work posted above.

  • Behaviour
    An abstract class that helps a programmer to define a behavioural module.
  • Connection
    A class that defines a connection. A connection can be inhibited, suppressed or merged with/by connections coming from higher layers. Only one specific data type can be transferred over a given connection.
  • DataType
    A data type can be transported by connections and serves as the input and output definition for behaviours. The reason to define a specific data type for this is that every data type needs to implement the special merge operation.

Implemented behaviours

When this documentation was written, Haphazard implemented three small behaviours for testing:

  • Avoid
    This behaviour takes a Vector3D as input, transforms it to point in the opposite direction and scales it so that the shorter the input vector, the longer the output vector.
  • Wander
    Wander generates a new heading every x milliseconds.
  • MoveAction
    This behaviour is a terminator for a subsumption net: it transforms an input vector into a move command and executes it. If null is received as input, the move command is terminated.
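The Avoid transform described above can be written as a short function. This is a sketch under stated assumptions: the text only says the output points the opposite way and grows as the input shrinks, so the `max_range` cutoff and the exact scaling formula are invented here for illustration.

```python
import math

def avoid(v, max_range=10.0):
    """Hypothetical Avoid transform: point away from the nearest object,
    with magnitude growing as the object gets closer (shorter input
    vector -> longer output vector)."""
    x, y, z = v
    length = math.sqrt(x * x + y * y + z * z)
    if length == 0 or length >= max_range:
        return (0.0, 0.0, 0.0)          # nothing (relevant) to avoid
    scale = (max_range - length) / length
    return (-x * scale, -y * scale, -z * scale)
```

With this scaling, an object one unit away produces a push of nine units, while one five units away produces a push of only five.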

Implemented data types

Data types are small wrappers that implement the merge method. The data can be singular or composite.

  • Vector3D
    This data type is simply a wrapped mathematical 3D vector. The merge adds the two vectors together and scales the sum to the average length of the two input vectors.
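The merge rule above can be spelled out as a sketch. The `Vector3D` class here is a stand-in, not the project's actual implementation; only the merge rule (add, then rescale to the average input length) comes from the text.

```python
import math

class Vector3D:
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z

    def length(self):
        return math.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2)

    def merge(self, other):
        """Add the two vectors, then rescale the sum to the average
        of the two input lengths."""
        sx, sy, sz = self.x + other.x, self.y + other.y, self.z + other.z
        sum_len = math.sqrt(sx * sx + sy * sy + sz * sz)
        if sum_len == 0:
            return Vector3D(0.0, 0.0, 0.0)   # opposing inputs cancel out
        k = (self.length() + other.length()) / 2.0 / sum_len
        return Vector3D(sx * k, sy * k, sz * k)
```

Merging two unit vectors therefore yields a unit vector pointing in the averaged direction, rather than one of double length.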


A small net to test this first version of the subsumption architecture was constructed. It uses a small sensor that scans the environment around the agent up to a certain range and then sends out a mathematical 3D vector defining the relative location of the closest object.

The vector makes its way into the Avoid behaviour through a connection, where it is inverted and sent out again.

The vector now reaches the MoveAction behaviour, which issues a move in the direction of the vector, making the agent run away from anything that comes near.

A simple way to extend this is to add the Wander behaviour and merge its output with the output of Avoid, thus producing an agent that wanders around randomly, avoiding objects as it goes along.
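One tick of that extended test net might look like the following sketch. The helper functions are simplified stand-ins for the real behaviours (the avoid scaling and the wander step size are assumptions), but the data flow matches the text: sensor vector into Avoid, merged with Wander, and handed on towards MoveAction.

```python
import math
import random

def avoid(v, max_range=10.0):
    # Invert the sensed vector; closer objects give a stronger push away.
    x, y, z = v
    d = math.sqrt(x * x + y * y + z * z)
    if d == 0 or d >= max_range:
        return (0.0, 0.0, 0.0)
    s = (max_range - d) / d
    return (-x * s, -y * s, -z * s)

def wander():
    # A fresh random heading: a unit vector in the horizontal plane.
    a = random.uniform(0, 2 * math.pi)
    return (math.cos(a), math.sin(a), 0.0)

def merge(a, b):
    # Add the vectors, then rescale the sum to the average input length.
    s = tuple(p + q for p, q in zip(a, b))
    sum_len = math.sqrt(sum(c * c for c in s))
    if sum_len == 0:
        return (0.0, 0.0, 0.0)
    la = math.sqrt(sum(c * c for c in a))
    lb = math.sqrt(sum(c * c for c in b))
    k = (la + lb) / 2.0 / sum_len
    return tuple(c * k for c in s)

# One tick: sensor reading -> Avoid, merged with Wander -> MoveAction.
nearest_object = (2.0, 0.0, 0.0)          # relative position from the sensor
heading = merge(avoid(nearest_object), wander())
# MoveAction would now issue a move along `heading`.
```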


The test agent we are planning to implement will use the subsumption architecture design shown in the figure. All the leftmost behaviours are fed sensor data from the environment. The sensor data tells the agent what the nearest object is and where it is located. The behaviours Avoid, MoveAction and Wander are already implemented and function well. The next layer will be able to pick up and put down things the agent sees in the environment. The eatable sensor is a kind of nose that tells the agent whether the thing it is holding in its hands is eatable or not. Put-down places anything the agent is holding in its hands back on the ground.

The layer after that is the eat and feelings layer. Feelings let the agent experience feelings such as hunger. If the agent is hungry it wanders around looking for things to pick up, and if they are eatable it eats them. The eatenSensor tells the environment and feelings that the agent has consumed the eatable thing.

The memory and pathplan layer lets the agent remember which things are eatable and where to find them. Memory collects information from the environment input data and from the eatable sensor. When the agent feels hunger, the memory behaviour checks whether it knows about any food anywhere. If it does, it gives the coordinates of the food to pathplan, which constructs a vector pointing towards the food.

The memory will be limited, and the items in it will be prioritized in a FIFO (first in, first out) manner. However, if an item has been used to find food, it will be moved to the first place in the queue.
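That bounded, FIFO-with-promotion memory could be sketched like this. The class name, capacity and method names are assumptions for illustration; only the eviction and promote-on-use rules come from the text.

```python
from collections import OrderedDict

class FoodMemory:
    """Bounded memory of items and their positions. When full, the
    oldest entry is dropped (FIFO); an item that was used to find
    food is promoted to the first place in the queue."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.items = OrderedDict()   # front = highest priority, back = oldest

    def remember(self, name, position):
        if name not in self.items and len(self.items) >= self.capacity:
            self.items.popitem(last=True)             # evict the oldest item
        self.items[name] = position
        self.items.move_to_end(name, last=False)      # newest goes to the front

    def used_to_find_food(self, name):
        self.items.move_to_end(name, last=False)      # promote to first place
```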

If the agent does not know about any food, it falls back on the other layers, wandering around randomly, picking up things and eating them if they are eatable.