AI Magazine

Logics for Multiagent Systems

Logic can be a powerful tool for reasoning about multiagent systems. First of all, logics provide a language in which to specify properties--properties of an agent, of other agents, and of the environment. Ideally, such a language then also provides a means to implement an agent or a multiagent system, either by somehow executing the specification, or by transforming the specification into some computational form. Second, given that such properties are expressed as logical formulas that form part of some inference system, they can be used to deduce other properties. Such reasoning can be part of an individual agent's capabilities, but it can also be done by a system designer or the potential user of the agents.

Third, logics provide a formal semantics in which the sentences from the language are assigned a precise meaning: if one manages to come up with a semantics that closely models the system under consideration, one then can verify properties either of a particular system (model checking) or of a number of similar systems at the same time (theorem proving). This, in a nutshell, sums up the three main characteristics of any logic (language, deduction, semantics), as well as the three main roles logics play in system development (specification, execution, and verification).
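To make that distinction concrete, here is a minimal sketch in Python of what model checking amounts to for plain propositional logic, contrasted with a brute-force validity check standing in for theorem proving. The formula encoding and atom names are illustrative assumptions, not taken from the article.

```python
from itertools import product

# Formulas are nested tuples; a model of propositional logic is just the
# set of atoms that are true. (Encoding and atom names are illustrative.)

def holds(formula, model):
    """Model checking: does this particular model satisfy the formula?"""
    op = formula[0]
    if op == "atom":
        return formula[1] in model
    if op == "not":
        return not holds(formula[1], model)
    if op == "and":
        return holds(formula[1], model) and holds(formula[2], model)
    if op == "implies":
        return (not holds(formula[1], model)) or holds(formula[2], model)
    raise ValueError(f"unknown operator: {op}")

def valid(formula, atoms):
    """Brute-force stand-in for theorem proving: true in every model over these atoms."""
    return all(holds(formula, {a for a, v in zip(atoms, bits) if v})
               for bits in product([False, True], repeat=len(atoms)))

callf3, up1 = ("atom", "callf_3"), ("atom", "up_1")
spec = ("implies", callf3, up1)
print(holds(spec, {"callf_3", "up_1"}))                  # True in this one model
print(valid(spec, ["callf_3", "up_1"]))                  # False: fails in some model
print(valid(("implies", callf3, callf3), ["callf_3"]))   # True: a tautology
```

The division of labor is the same in realistic settings, only with far richer models and properties: model checking asks whether one given system satisfies a formula, theorem proving whether the formula holds across a whole class of systems.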

We typically strive for logics that strike a useful balance between expressiveness on the one hand and tractability on the other: what kinds of properties are interesting for the scenarios of interest, and how can they be "naturally" and concisely expressed? And how complex are the resulting formalisms, that is, how costly is it to use them for verification and reasoning?

In multiagent research, this complexity often depends on a number of issues. Let us illustrate this with a simple example, say the modeling of a set of lifts. If there is only one agent (lift) involved, the kind of things we would like to represent to model the agent's sensing, planning, and acting could probably be done in a simple propositional logic, using atoms like pos(i, n) (currently lift i is positioned at floor n), callf(n) (there is a call from floor n), callt(n) (there is a call within the lift to go to destination n), at(i, n) (lift i is currently at floor n), op(i) (the door of lift i is open), and up(i) (the lift is currently moving up). However, taking the agent perspective seriously, one quickly realizes that we need more: there might be a difference between what is actually the case and what the agent believes is the case, and also between what the agent believes to hold and what it would like to be true (otherwise there would be no reason to act!). So we would like to be able to say things like ¬callf(n) ∧ B_i callf(n) ∧ D_i(callf(n) → up(i)) (although there is no call from floor n, the agent believes there is one, and desires that in that case it moves up).
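As a rough sketch of how such a formula can be given a precise meaning, one common reading (a possible-worlds semantics, not spelled out in this excerpt) takes B_i φ to mean that φ holds in every world agent i considers possible, and D_i φ that φ holds in every world i would like to bring about. The worlds, valuations, and accessibility relations below are made-up toy data, not taken from the article.

```python
# Toy Kripke-style model for one agent i: a valuation per world and,
# per modality, the set of worlds accessible from each world.
WORLDS = {
    "w0": set(),                    # the actual world: no call, lift i idle
    "w1": {"callf_n", "up_i"},      # a world i considers possible / desirable
    "w2": {"callf_n", "up_i"},
}
BELIEF = {"i": {"w0": {"w1", "w2"}}}    # worlds i believes possible at w0
DESIRE = {"i": {"w0": {"w1", "w2"}}}    # worlds i would like to bring about at w0

def sat(formula, world):
    op = formula[0]
    if op == "atom":
        return formula[1] in WORLDS[world]
    if op == "not":
        return not sat(formula[1], world)
    if op == "and":
        return all(sat(part, world) for part in formula[1:])
    if op == "implies":
        return (not sat(formula[1], world)) or sat(formula[2], world)
    if op == "B":                       # B_a phi: phi holds in all belief-accessible worlds
        _, agent, phi = formula
        return all(sat(phi, w) for w in BELIEF[agent].get(world, set()))
    if op == "D":                       # D_a phi: phi holds in all desired worlds
        _, agent, phi = formula
        return all(sat(phi, w) for w in DESIRE[agent].get(world, set()))
    raise ValueError(f"unknown operator: {op}")

# ¬callf(n) ∧ B_i callf(n) ∧ D_i(callf(n) → up(i))
phi = ("and",
       ("not", ("atom", "callf_n")),
       ("B", "i", ("atom", "callf_n")),
       ("D", "i", ("implies", ("atom", "callf_n"), ("atom", "up_i"))))
print(sat(phi, "w0"))   # True at the actual world of this toy model
```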

Obviously, things get more interesting when several agents enter the scene. Our agent i now needs not only a model of the environment but also a model of j's mental state, and the latter may in turn involve a model of i's mental state. We can then express properties like B_i B_j callf(n) → ¬up(i) (if i believes that j believes there is a call from floor n, i will not go up). Higher-order information enters the picture, and there is no a priori limit to the degree of nesting that one might consider (this is for instance important in reasoning about games: see the discussion on common knowledge for establishing a Nash equilibrium [Aumann and Brandenburger 1995]). In our simple lift scenario, if for some reason lift i would like not to go up to floor n, we might have B_i callf(n) ∧ B_j callf(n) ∧ B_i B_j callf(n) ∧ D_i B_j ¬B_i callf(n) (although both lifts believe there is a call from floor n, and i believes that j believes it, i desires that j believe that i is not aware of this call). …
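Nesting requires nothing new under such a possible-worlds reading: evaluating B_i B_j φ simply follows i's accessibility relation and then j's. The following compact, self-contained sketch with two agents (again with made-up worlds and relations, not from the article) checks the property B_i B_j callf(n) → ¬up(i) mentioned above.

```python
# Toy two-agent model: per-agent belief accessibility, one valuation per world.
VAL = {
    "w0": {"callf_n"},              # the actual world
    "w1": {"callf_n"},              # a world i considers possible
    "w2": {"callf_n"},              # a world j considers possible from w1
}
BEL = {
    "i": {"w0": {"w1"}},
    "j": {"w1": {"w2"}},
}

def sat(formula, world):
    op = formula[0]
    if op == "atom":
        return formula[1] in VAL[world]
    if op == "not":
        return not sat(formula[1], world)
    if op == "implies":
        return (not sat(formula[1], world)) or sat(formula[2], world)
    if op == "B":                   # B_a phi: phi in all worlds a considers possible
        _, agent, phi = formula
        return all(sat(phi, w) for w in BEL[agent].get(world, set()))
    raise ValueError(f"unknown operator: {op}")

# B_i B_j callf(n) → ¬up(i): if i believes that j believes there is a call
# from floor n, then lift i does not go up.
phi = ("implies",
       ("B", "i", ("B", "j", ("atom", "callf_n"))),
       ("not", ("atom", "up_i")))
print(sat(phi, "w0"))   # True: the nested belief holds and up_i is false at w0
```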
