In the previous chapter we saw how to build simulations in which very simple automata interacted on a grid so that patterns of behaviour at the global scale emerged. In this chapter we explore how one might develop automata for social simulation which are somewhat more complex in their internal processing and consequently in their behaviour. Such automata are conventionally called agents, and there is now a growing literature on how they can be designed, built and used.
While there is no generally agreed definition of what an ‘agent’ is, the term is usually used to describe self-contained programs that can control their own actions based on their perceptions of their operating environment (Huhns and Singh 1998). Agent programming is rapidly becoming important outside the field of social simulation. For example, agents have been built to watch for information as it becomes available over the Internet, informing the user when they find relevant sources (Maes 1994). Such an agent is instructed about the topics thought to be interesting and then continuously monitors known sources for items fitting this profile. Other agents have been built to help with electronic network management and business workflow, and to guide people in using software more effectively (the agent monitors keystrokes and mouse movements and suggests faster ways of carrying out tasks).
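The working definition above — a self-contained program that controls its own actions based on its perceptions of its environment — can be sketched as a simple perceive-decide-act cycle. The following is a minimal illustration, not a definitive architecture; the environment, the percept names and the condition-action rules are all hypothetical, standing in for whatever domain (an information source, a grid cell, a network) a real agent would inhabit.

```python
class ReactiveAgent:
    """A minimal agent: maps percepts to actions via condition-action rules."""

    def __init__(self, rules):
        # rules: a dict from a percept to the action the agent should take
        # (a stand-in for whatever decision procedure a real agent uses)
        self.rules = rules

    def perceive(self, environment):
        # Read the part of the environment the agent can observe
        # (here, hypothetically, a single "signal" entry).
        return environment["signal"]

    def act(self, environment):
        # One cycle: perceive, match a rule, return the chosen action,
        # falling back to a default action when no rule applies.
        percept = self.perceive(environment)
        return self.rules.get(percept, "wait")


# Usage: a (hypothetical) information-monitoring agent of the kind
# described above, which notifies its user when a relevant item appears.
agent = ReactiveAgent({"new_item": "notify_user",
                       "no_item": "keep_monitoring"})
print(agent.act({"signal": "new_item"}))  # notify_user
print(agent.act({"signal": "no_item"}))   # keep_monitoring
```

Even this trivial sketch displays the defining property: the agent's behaviour is selected by the agent itself, from its own perception of the environment, rather than being dictated step by step from outside. The more complex agents discussed later in this chapter elaborate each stage of this cycle.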
The aim of agent design is to create programs that interact ‘intelligently’ with their environment. Agent software has been much influenced by work in artificial intelligence (AI), especially a subfield of AI called distributed artificial intelligence (DAI) (Bond and Gasser 1988; Chaib-draa et al. 1992).