Segments
agent Temporal Evolution
Total Segments: 118
Documents: 7
Avg Words/Segment: 430
Avg Characters: 2887
Document Segments: 118
agent, n.¹ & adj. (2024): 30 segments
#None
paragraph
NOUN
#None
paragraph
sports agent, n. 1943–
A person who represents a professional athlete in…
sleeper agent, n. 1945–
= sleeper, n. I.2d.
bioagent, n. 1950–
A harmful or disease-producing microorganism, biopesticide, biotoxin, etc., esp. one used in
warfare or for the purposes of terrorism.
G-agent, n. 1953–
Any of a group of four organophosphorus nerve agents originally developed by German scientists
during the Second World War, characterized by being…
uncoupling agent, n. 1956–
= uncoupler, n.
stripping agent, n. 1958–
nerve agent, n. 1960–
A substance that alters the functioning of the nervous system, typically inhibiting
neurotransmission; esp. one used as a weapon, a nerve gas.
Agent Orange, n. 1966–
A defoliant and herbicide used by the United States during the Vietnam War to remove forest
cover and destroy crops. Cf. agent, n.¹ & adj.compounds…
penetration agent, n. 1966–
A spy sent to penetrate an enemy organization.
treble agent, n. 1967–
A spy who works for three countries, his or her superiors in each being informed of his or her
service to the other, but usually with actual…
triple agent, n. 1968–
= treble agent, n.
managing agent, n. 1969–
A person responsible for administering or managing an activity (esp. a sale) on behalf of another;
(Insurance) a manager of an underwriting syndicate…
masking agent, n. 1977–
A chemical compound which conceals the presence of a substance within the body; (Sport) a…
#None
paragraph
A substance, such as yeast or baking powder, which is used in dough or batter to make it rise
during (and sometimes before) baking.
travel agent, n. 1885–
A person who owns or works for a travel agency; (also) a travel agency.
polling agent, n. 1887–
An of… station on the day of an election.
special agent, n. 1893–
A person who conducts investigations on behalf of the government; (now) spec. (U.S.) a person
who conducts criminal investigations and has arrest…
transport-agent, n. 1897–
alkylating agent, n. 1900–
A substance that brings about alkylation; (Pharmacology) any of a class of cytotoxic
immunosuppressant drugs which alkylate DNA and are used in…
addition agent, n. 1909–
(In electrodeposition) a substance which is added to an electrolyte, typically in small quantities,
in order to modify the quality of the deposit…
site agent, n. 1910–
a. An agent authorized to inspect, survey, and purchase land for development (rare); b. (in the
construction industry) a person responsible for…
marketing agent, n. 1915–
harassing agent, n. 1919–
A non-lethal chemical which is deployed in the form of a gas or aerosol and used to incapacitate
an enemy or disperse a crowd; = harassing gas, n.
contrast agent, n. 1924–
A substance introduced into a part of the body to enhance the quality of a radiographic image by
increasing the contrast of internal structures with…
binding agent, n. 1933–
A substance that assists cohesion (cf. bind, v. III.10).
stock-agent, n. 1933–
double agent, n. 1935–
A spy who works on behalf of mutually hostile countries, usually with an actual allegiance only to one.
release agent, n. 1938–
A substance which is applied to a surface in order to prevent adhesion to it.
#None
paragraph
tourist agent, n. 1884–
raising agent, n. 1885–
#None
paragraph
booking agent, n. 1849–
a. A person who or a business which arranges transport or travel for goods or passengers, or sells
tickets in advance for concerts, plays, or other… Frequently derogatory in early use, denoting
agents for railway or shipping companies who issued tickets or passes which were greatly
overpriced or invalid; cf. booker, n. 3a.
passenger agent, n. 1852–
baggage-agent, n. 1858–
employment agent, n. 1859–
An individual acting as a professional intermediary between applicants for work and employers.
claim-agent, n. 1860–
matrimonial agent, n. 1860–
personation agent, n. 1864–
An of…
advance agent, n. 1865–
An agent who is sent on ahead of a main party (cf. advance man, n.); also…
transfer agent, n. 1869–
information agent, n. 1871–
mission-agent, n. 1871–
lecture agent, n. 1873–
publicity agent, n. 1877–
agent word, n. 1879–
A word that indicates agency or active force; esp. a word that denotes the doer of an action; =
agent noun, n.
personating agent, n. 1879–
= personation agent, n.
rental agent, n. 1880–
bittering agent, n. 1883–
#None
paragraph
business agent, n. 1831–
customs agent, n. 1838–
= customs of…
mine agent, n. 1839–
agentive, adj. & n. 1840–
Of or relating to an agent or agency (see agent, n.¹ A.1c); indicating or having the semantic role of
an agent.
railroad agent, n. 1840–
road agent, n. 1840–
†a. An agent or driver for a stagecoach company (obsolete); b. a robber who steals from travellers
or holds up vehicles on the road (now historical).
rogue agent, n. 1840–
station agent, n. 1840–
a. Chie… works for (a particular branch of) an intelligence…
A substance used to clarify a liquid; spec. (a) a substance used to remove organic compounds
from a liquid, esp. beer or wine, to improve the clarity…
freight agent, n. 1843–
shipping-agent, n. 1843–
A licensed agent who transacts a ship's business for the owner.
goods agent, n. 1844–
intelligence agent, n. 1844–
patent agent, n. 1845–
land-agent, n. 1846–
A steward or manager of landed property; also, an agent for the sale of land, an estate agent.
change agent, n. 1847–
A person who initiates social or political change within a group or institution.
pay agent, n. 1847–
An of… responsible for advising the U.S. president on rates of…
bureau agent, n. 1848–
An agent or of…
#None
paragraph
agentless, adj. 1831–
Lacking an agent (in various senses); without an agent.
#None
paragraph
An agent (now typically a professional one) who acts on behalf of an author in dealing with
publishers and others involved in promoting his or her…
theatrical agent, n. 1797–
An agent whose business is to act as an intermediary between actors looking for work and those
seeking to employ them.
commission agent, n. 1798–
†a. = commission broker, n. (a) (obsolete); b. an agent who conducts business or trade for another
party on the principle of commission (commission…
book agent, n. 1810–
A person who promotes the sale of books; (now) spec. a literary agent (cf. agent, n.¹ A.2e).
forwarding agent, n. 1810–
A person or business that organizes the shipment or transportation of goods.
newsagent, n. 1811–
A dealer in newspapers and periodicals, esp. the owner of a shop where these are sold; (now also)
the shop itself, usually also selling tobacco…
police agent, n. 1813–
ship-agent, n. 1813
A shipping agent.
oxidizing agent, n. 1814–
A substance that brings about oxidation and in the process is itself reduced.
press agent, n. 1814–
A person employed to organize advertising and publicity in the press on behalf of an organization
or person.
reducing agent, n. 1816–
A substance that brings about chemical reduction and in the process is itself oxidized; cf.
oxidizing agent, n.
parliamentary agent, n. 1819–
A person professionally employed to take charge of the interests of a party concerned in or
affected by any private legislation.
counter-agent, n. 1821–
A counteracting agent or force; a counteractant.
#None
paragraph
An agent employed (by the landlord or owner) in letting or selling a house, collecting rents, etc.; (now esp.) an estate agent.
literary agent, n. 1794–
#None
paragraph
crown agent, n. 1753–
An agent for the Crown; spec. (usually with capital initials) (a) in Scotland, a law of… charge of criminal proceedings, acting under…
agentess, n. 1757–
A female agent.
navy agent, n. 1765–
A person or paymaster or purser in the U.S. navy (obsolete).
Indian agent, n. 1766–
An of… people; (in Canada) the chief government…
prize agent, n. 1766–
An agent appointed for the sale of prizes taken in maritime war.
An agent (now esp. an intelligence agent) who works away from a central of…
advertising agent, n. 1775–
purchasing agent, n. 1777–
coal agent, n. 1778–
federal agent, n. 1781–
A representative of the U.S. federal government, (now) esp. a federal law-enforcement of…
newspaper agent, n. 1781–
agent noun, n. 1782–
A noun (in English typically one ending in -er or -or) denoting someone or something that
performs the action of a verb, as worker, accelerator, etc.
estate agent, n. 1787–
A person or company involved in the business or profession of arranging the sale, purchase, or
rental of buildings and land for clients. Also (also…
revenue agent, n. 1787–
recruiting agent, n. 1792–
house agent, n.
#None
paragraph
agentship, n. 1608–
The position, role, or function of an agent (in various senses); agency. Also: an instance of this.
#None
paragraph
Frequency of agent, n.¹ & adj., 2017–2023
* Occurrences per million words in written English
Modern frequency series are derived from a corpus of 20 billion words, covering the period from 2017 to the
present. The corpus is mainly compiled from online news sources, and covers all major varieties of World
English.
Compounds & derived words
nihilagent, n. 1579–80
A person who does nothing.
agentry, n. 1590–
The of… agency; the process or fact of being an agent…
vice-agent, n. 1597–
#None
paragraph
Frequency
agent is one of the 1,000 most common words in modern written English. It is similar in frequency to
words like agree, distribution, kill, military, and sell.
It typically occurs about 100 times per million words in modern written English.
agent is in frequency band 7, which contains words occurring between 100 and 1,000 times per million
words in modern written English. More about OED's frequency bands
Frequency data is computed programmatically, and should be regarded as an estimate.
Frequency of agent, n.¹ & adj., 1750–2010
* Occurrences per million words in written English
Historical frequency series are derived from Google Books Ngrams (version 2), a data set based on the Google
Books corpus of several million books printed in English between 1500 and 2010.
The overall frequency for a given word is calculated by summing frequencies for the main form of the word,
any plural or in…
For sets of homographs (distinct entries that share the same word-form, e.g. mole, n.¹, mole, n.², mole, n.³,
etc.), we have estimated the frequency of each homograph entry as a fraction of the total Ngrams frequency
for the word-form. This may result in inaccuracies.
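The calculation the note describes is straightforward arithmetic. The following Python sketch is illustrative only (the counts, corpus size, and homograph fractions are invented; it is not the OED's actual pipeline): sum the Ngrams counts for a word's forms, convert to occurrences per million words, then apportion the total across homograph entries by an estimated fraction.

```python
# Illustrative only: invented numbers, not OED data or OED code. The sketch
# mirrors the described method: sum counts for a word's forms, convert to
# occurrences per million words, then split the total across homographs.

ngram_counts = {"mole": 120_000, "moles": 45_000}   # hypothetical counts
corpus_words = 2_000_000_000                        # hypothetical corpus size

total = sum(ngram_counts.values())
per_million = total / corpus_words * 1_000_000
print(f"{per_million:.2f} occurrences per million words")

# Homograph entries sharing the word-form each get an estimated fraction.
fractions = {"mole, n.1": 0.6, "mole, n.2": 0.3, "mole, n.3": 0.1}
for entry, frac in fractions.items():
    print(entry, round(per_million * frac, 2))
```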
#None
paragraph
1600s agentt
#None
paragraph
1535 The fynall necessytie also, and the cause agent [Latin causam agentem] or e…
W. Marshall, translation of Marsilius of Padua, Defence of Peace i. viii. f. 67v
1575 The ayre being more thin and liquide then the water, and more vnable to resist, is sooner and more easily a…
translation of L. Daneau, Dialogue Witches iii. sig. E.vii
1615 Hughe Mill and Elinor his wife the parties agentes in this cause and William delve defendent.
in B. Cusack, Everyday English 1500–1700 (1998) 24
1620 What a hot fellow Sol (whom all Agent Causes follow).
J. Melton, Astrologaster 13
1704 The proper o…
J. Norris, Essay Ideal World vol. II. vii. 350
1856 Agent or patient, singly or one of a crowd.
T. De Quincey, Confessions Eng. Opium-eater (revised edition) in Selections Grave & Gay vol. V. 83
1949 The Philosopher is speaking in that passage not of the agent cause but of the formal cause.
M. C. Fitzpatrick, translation of St. Thomas Aquinas, On Spiritual Creatures i. 25
2009 The [Philippine] people have transmogri… agent force of revolution.
N. X. M. Tadiar, Things fall Away vii. 290
Pronunciation
BRITISH ENGLISH U.S. ENGLISH
/ˈeɪdʒ(ə)nt/ /ˈeɪdʒ(ə)nt/
AY-juhnt AY-juhnt
Pronunciation keys
Forms
Variant forms
late Middle English– agent
#None
paragraph
Acting, exerting power (sometimes contrasted with patient adj. A.2a). 1535–
† party agent noun Obsolete Law the person or party bringing a suit.
#None
paragraph
Sometimes overlapping with sense A.1b.
1579 The gallowes is no agent or doer in those good thinges.
W. Fulke, Heskins Parleament Repealed in D. Heskins Ouerthrowne 621
1593 Not a nayle in it [sc. the Crosse] but is a necessary Agent in the Worlds redemption.
T. Nashe, Christs Teares 21/1
a1616 Here is her hand, the agent of her heart.
W. Shakespeare, Two Gentlemen of Verona (1623) i. iii. 46
1654 God doth often good works by ill agents.
J. Bramhall, Just Vindication of Church of England iii. 43
1793 War, which is the agent which must in general be employed upon these occasions,
presents..an uncertain court of judicature.
B. Vaughan, Letters Concert of Princes p. iii
1842 Nature..Thro' many agents making strong, Matures the individual form.
Lord Tennyson, Love thou thy Land in Poems (new edition) vol. I. 225
1878 Whatever thus furnishes us with the… that is, something which acts for us and assists us.
W. S. Jevons, Political Economy 26
1920 Money is the agent through which good purposes are made e…
Intellect vol. 12 233/2
2002 [In Marlowe's physiology] the arteries..carry the vital spirit..which is the agent by which the soul e…
Y. Takahashi in S. W. Wells, Shakespeare Surv. 181
4. Chemistry. A substance that brings about a chemical or physical effect or causes a chemical reaction. In later use chie… effect or reaction. Cf. reagent n. 2. 1624–
alkylating, oxidizing, reducing, wetting agent, etc.: see the…
1624 The vinegre..is the onely Agent [French l'vnique agent; Latin solum medium aptum] in the whole
World for this Art, that can resolue and reincrudate, or make raw againe the Mettallicke
Bodies.
‘E. Orandus’, translation of N. Flamel, Expos. Hieroglyphicall Figures St. Innocent's Church-yard 159
1671 The agent in the change wrought by Petri… saxeous odour, or invisible ferment.
#None
paragraph
mis-agent, n. 1625
non-agent, n. 1632–
smock-agent, n. 1632–
agent, v. 1637–
transitive. To act as agent in (some business or process); to conduct or carry out as agent. Also: to
act as an agent for (a person or project).
agenting, n. 1646–
The business or process of acting as an agent (in various senses); the profession of an agent.
foreign agent, n. 1646–
A person who represents or acts on behalf of one country while located in another; (in later use
spec.) a person who works secretly to obtain…
free agent, n. 1649–
a. A person able to act freely, as by the exercise of free will, or because of the absence of
restriction, constraint, or responsibilities; b. Sport…
reagent, n. 1656–
Chemistry. A substance used in testing for other substances, or for reacting with them in a
particular way; (more widely) any substance used in…
agent general, n. 1659–
spec. (sometimes with capital initials). Formerly: the representative of a British colony in London
(now historical). Later: the representative of an…
under-agent, n. 1677–
A sub-agent.
subagent, n. 1683–
A subordinate agent; (U.S. Law) an agent authorized to transact business or otherwise act on
behalf of another.
chemical agent, n. 1728–
A chemical substance producing a speci… (now often) spec. a substance used to incapacitate…
inter-agent, n. 1728–
An intermediate agent; a go-between, intermediary.
commercial agent, n. 1737–
A person or organization authorized to act on another's behalf in matters relating to commerce or trade; spec. (U.S.) an of…
travelling agent, n. 1737–
A travelling salesperson; a representative who travels on behalf of a company.
#None
paragraph
3. The means by which something is done; the material cause or instrument through which an effect is produced (often implying a rational employer or contriver). 1579–
#None
paragraph
1825 Mr. Schemer, the agent, had no situation for our hero upon his books, but Proteus
heard..that Mr. Make-a-bill..was in great want of a person at his theatre.
P. Egan, Life of Actor vi. 220
1892 By an early hour of the numbered evening I might have been observed..dining with my
agent.
R. L. Stevenson & L. Osbourne, Wrecker vi. 95
1917 The name on the door was Abe Riesbitter, Vaudeville Agent, and from the other side of
the door came the sound of many voices.
P. G. Wodehouse, Man with Two Left Feet 34
1946 Mr Watt, my agent, and Mr Faber, my publisher, have Daimlers and country cottages.
P. Larkin, Letter 28 July in Selected Letters (1992) 120
1970 Her agent..was nonplussed. ‘Look, baby,’ he gently chided, ‘we're walking away with one
million..dollars a picture.’
T. Southern, Blue Movie i. viii. 64
1983 Agents, often to justify their percentage when all they really do for a big star is make a
phone call, are geniuses when it comes to new things to ask for.
W. Goldman, Adventures in Screen Trade 18
2003 [She] was no longer the timid, inexperienced ingénue..protected by her agent.
C. Fi…
2.f. U.S. A stagecoach robber; = road agent n. Now historical. 1876–
1876 The driver… San Andreas.
Weekly Calaveras Chronicle (Mokelumne Hill, California) 29 July 3/1
1880 We reached it before long, and concluded that the ‘agents’, or robbers, had an excellent
eye for position.
A. A. Hayes, New Colorado (1881) xi. 154
1904 Nex' time I drives stage some of these yere agents massacrees me from behind a bush.
S. E. White, Blazed Trail Storiesii. iii. 155
1970 The agents developed a system of marking departing stagecoaches that were carrying
treasure so that confederates would know which ones to stop.
H. S. Drago, Great Range Wars xviii. 207
#None
paragraph
1571 Faieth is produced and brought foorth by the grace of God, as chiefe agent and worker
thereof.
W. Fulke, Confut. Popishe Libelle (new edition) f. 108v
1592 I stepped back againe into the garden,..leauing them still agents of these vnkind villanies.
R. Greene, Philomela sig. F4v
1645 The distans.
A. Ross, Philosophicall Touch-stone 35
1666 Whether or no the Shape can by Physical Agents be alter'd.., yet mentally both..can be done.
R. Boyle, Origine of Formes & Qualities 9
1699 When the Samians invaded Zancle, a..great Agent in that a…
R. Bentley, Dissertation upon Epistles of Phalaris (new edition) 155
1719 I was still to be the wilful Agent of all my own Miseries.
D. Defoe, Life Robinson Crusoe 43
1722 Nor can I think, that any body has such an idea of chance, as to make it an agent or really
existing and acting cause of any thing.
W. Wollaston, Religion of Nature v. 60
1848 Successful production..depends more on the qualities of the human agents, than on the
circumstances in which they work.
J. S. Mill, Principles of Political Economy vol. I. i. vii. 123
1875 The Rhizopods were important agents in the accumulation of beds of limestone.
J. W. Dawson, Life's Dawn on Earth vi. 134
1904 The glacier will be e…
Journal of Geology (Chicago) vol. 12 574
1963 The key idea of man as the agent for the whole future of evolution.
J. S. Huxley, Human Crisis 19
2010 At Cambridge..I had no theories about theatre as an agent of social or political change.
S. Fry, Fry Chronicles 94
1.c. Grammar. The doer of an action, typically expressed as the subject of an active verb or in a by-phrase with a passive verb. c1620–
Cf. agent noun n.
c1620 The active verb adheres to the person of the agent; As, Christ hath conquered hel and
#None
paragraph
agent
NOUN1 & ADJECTIVE
Etymology
Summary
Of multiple origins. Partly a borrowing from French. Partly a borrowing from Latin.
Etymons: French agent; Latin agent-, agēns, agere.
< (i) Middle French agent (French agent) (noun) person acting on behalf of another, representative,
emissary (1332 in an isolated attestation, subsequently (apparently after Italian) from 1578), person who
or thing which acts upon someone or something (c1370, originally and frequently in philosophical
contexts), substance that brings about a chemical effect or causes a chemical reaction (1612 (in the
passage translated in quot. 1624 at sense A.4) or earlier; rare before early 19th cent.), person who
intrigues (1640), (adjective) that acts, that exerts power (1337; c1450 in grammar; second half of the 15th
cent. in cause agent (compare quot. 1535 at sense B)),
and its etymon (ii) classical Latin agent-, agēns acting, active, (masculine noun) pleader, advocate, in post-classical Latin also representative, of… church (6th cent.), (neuter noun) (in philosophy) instrumentality, cause (from 8th cent. in British sources; also in continental sources), uses as adjective and noun of present participle of agere to act, do (see act v.).
With sense A.1a and corresponding adjectival use compare earlier patient n. and patient adj.
Notes
Parallels in other European languages.
Compare Catalan agent, adjective and noun (14th cent.), Spanish agente (late 14th cent. as noun, early 15th cent. as adjective), Portuguese agente, adjective and noun (15th cent.), Italian agente (a1294 as adjective, a1328 as noun). Compare also Dutch agent (noun) of…, (masculine noun) representative, emissary (1546), spy (18th cent., now the usual sense), Agens (neuter noun) person who or thing which acts upon someone or something (1598).
#None
paragraph
matters for an actor, performer, writer, etc.
In earliest use: a theatrical agent. literary, press, publicity, sports agent, etc.: see the…
#None
paragraph
1764 Re…
C. Wiseman, Compl. English Grammar 155
1771 An active verb..necessarily supposes an agent, and an object acted upon; as..I praise John.
D. Fenning, New Gram. English Tongue 32
1845 It often becomes necessary to state the object of a verb active, or the agent of a verb
passive. Hence arises the necessity for..the accusative and the ablative.
Encyclopædia Metropolitana (1847) vol. I. 33/1
1953 With an intransitive verb the subject is as much a patient as an agent. I walk is as much ‘I
cause my walking’ as ‘I experience my walking’.
W. J. Entwhistle, Aspects of Language vi. 179
2007 Truck driver is an acceptable (and existing) compound..but child-driver is not
acceptable..since child is the agent of the verb.
N. Tsujimura, Introd. Japanese Linguistics iv. vii. 166
grammar
1.d. Parapsychology. In telepathy: the person who originates an impression (opposed to the percipient who receives it). 1883–
1883 In Thought-transference..both parties (whom, for convenience' sake, we will call the Agent
and the Percipient) are supposed to be in a normal state.
Proceedings of Society for Psychical Research 1882–3vol. 1 119
1886 We call the owner of the impressing mind the agent, and the owner of the impressed mind
the percipient.
E. Gurney et al., Phantasms of Livingvol. I. 6
1961 Spontaneous cases [of telepathy] do occasionally occur in which no such connection
between apparent agent and apparent percipient can be traced.
W. H. Salter, Zoar xi. 149
1990 Analytical attention..has shifted down the years from agent (sender) to percipient (receiver).
L. Picknett, Encycl. Paranormal 218/1
parapsychology
2. A person acting on behalf of another.
2.a. 1523– A person who acts as a substitute for another; one who undertakes negotiations or transactions on behalf of a superior, employer, or principal; a deputy, steward,
#None
paragraph
1652 John and Peter (1 The Agent.) travelled together to (2 The Verb.) Rome.
F. Lodowyck, Ground-work New Perfect Language 15
#None
paragraph
cause of some process or change. Frequently with for, in, of.
Sometimes di…
#None
paragraph
1.a. A person who or thing which acts upon someone or something; one who or that which exerts power; the doer of an action. Sometimes contrasted with the patient (instrument, etc.) undergoing the action. Cf. actor n. 3a. a1500–
Earliest in Alchemy: a force capable of acting upon matter, an active principle. Now chie… sociological contexts.
a1500 The fyrst [kind of combining] is callyd by phylosophers dyptatyve be-twyxte ye agent & ye
(1471) pacyent.
G. Ripley, Compend of Alchemy (Ashmole MS.) l. 718
a1555 The forgeuenes of oure sinnes..is onely gods worke & we nothing els but patientes & not
agentes.
J. Bradford, Godlie Medit. Lordes Prayer (1562) sig. Q.ii
1614 For he maketh foure originals, whereof three are agents, and the last passiue and
materiall.
W. Raleigh, History of World i. i. i. §6. 6
1646 Nor are we to be meer instruments moved by the will of those in authority..but are morall
Agents.
S. Bolton, Arraignment of Errour 295
1788 He that is not free is not an agent, but a patient.
J. Wesley, Serm. Several Occasions vol. V. 177
1809 Agent and Patient, when the same person is the doer of a thing, and the party to whom
done: as where a woman endows herself of the best part of her husband's possessions.
T. E. Tomlins, Jacob's Law-dictionary
1870 In conformity with this view, the distinction between agent and patient, between
something which acts and some other thing which is acted upon, is formally abolished.
F. C. Bowen, Logic xii. 401
1909 We are..conversant with the fact in human a… is an intelligent agent.
Popular Science Monthly April 379
1989 It is silly to berate the hurricane for irresponsibility... It..cannot be a true agent; it cannot
author or own an action.
C. T. Sistare, Responsibility & Criminal Liability ii. iv. 15
2010 It is only an exercise of power if the agent gets the subject to do something whether or not
the subject wants to do it.
J. R. Searle, Making Social World vii. 152
5 segments
#None
paragraph
AGENT, Black's Law Dictionary (12th ed. 2024)
- vice-commercial agent (1800) Hist. In the consular service of the United States, a consular officer who was substituted
temporarily to fill the place of a commercial agent who was absent or had been relieved from duty.
#None
paragraph
- showing agent (1901) A real-estate broker's representative who markets property to a prospective purchaser. • A showing
agent may be characterized as a subagent of the listing broker, as an agent who represents the purchaser, or as an intermediary
who owes an agent's duties to neither seller nor buyer. — Also termed selling agent. Cf. listing agent.
- soliciting agent (1855) 1. Insurance. An agent with authority relating to the solicitation or submission of applications to an
insurance company but usu. without authority to bind the insurer, as by accepting the applications on behalf of the company.
2. An agent who solicits orders for goods or services for a principal. 3. A managing agent of a corporation for purposes of
service of process.
- special agent (17c) 1. An agent employed to conduct a particular transaction or to perform a specified act. Cf. general agent
(1). 2. Insurance. An agent whose powers are usu. confined to soliciting applications for insurance, taking initial premiums,
and delivering policies when issued. — Also termed (in sense 2) local agent; solicitor.
- specially accredited agent (1888) An agent that the principal has specially invited a third party to deal with, in an implication
that the third party will be notified if the agent's authority is altered or revoked.
- statutory agent (1844) An agent designated by law to receive litigation documents and other legal notices for a nonresident
corporation. • In most states, the secretary of state is the statutory agent for such corporations. Cf. agency by operation of law
(1) under agency (1).
- stock-transfer agent (1873) See transfer agent.
- subagent (18c) 1. A person to whom an agent has delegated the performance of an act for the principal; a person designated by
an agent to perform some duty relating to the agency. • If the principal consents to a primary agent's employment of a subagent,
the subagent owes fiduciary duties to the principal, and the principal is liable for the subagent's acts. — Also termed subservant.
Cf. primary agent; subordinate agent.
“By delegation … the agent is permitted to use agents of his own in performing the function he is employed to perform for
his principal, delegating to them the discretion which normally he would be expected to exercise personally. These agents are
known as subagents to indicate that they are the agent's agents and not the agents of the principal. Normally (though of course
not necessarily) they are paid by the agent. The agent is liable to the principal for any injury done him by the misbehavior of
the agent's subagents.” Floyd R. Mechem, Outlines of the Law of Agency § 79, at 51 (Philip Mechem ed., 4th ed. 1952).
2. See buyer's broker under broker.
- subordinate agent (17c) An agent who acts subject to the direction of a superior agent. • Subordinate and superior agents are
coagents of a common principal. See superior agent. Cf. subagent (1).
- successor agent (1934) An agent who is appointed by a principal to act in a primary agent's stead if the primary agent is
unable or unwilling to perform.
- superior agent (17c) 1. An agent on whom a principal confers the right to direct a subordinate agent. See subordinate agent.
2. See high-managerial agent (1).
- transfer agent (1850) An organization (such as a bank or trust company) that handles transfers of shares for a publicly held
corporation by issuing new certificates and overseeing the cancellation of old ones and that usu. also maintains the record of
shareholders for the corporation and mails dividend checks. • Generally, a transfer agent ensures that certificates submitted for
transfer are properly indorsed and that the transfer right is appropriately documented. — Also termed stock-transfer agent.
- trustee-agent A trustee who is subject to the control of the settlor or one or more beneficiaries of a trust. See trustee (1).
- undercover agent (1930) 1. An agent who does not disclose his or her role as an agent. 2. A police officer who gathers
evidence of criminal activity without disclosing his or her identity to the suspect.
- underwriting agent (1905) Insurance. 1. An agent who acts on behalf of an insurance company to provide insurance to a
customer. — Also termed policywriting agent. 2. An agent who acts for an individual Lloyd's underwriter and manages the
underwriting syndicate of which the underwriter is a member. — Also termed managing agent. See lloyd's underwriters. 3. An
agent who acts for an individual Lloyd's underwriter in all respects except for managing the underwriting syndicate. — Also
termed (in sense 3) member's agent. See lloyd's underwriters.
- undisclosed agent (1863) An agent who deals with a third party who has no knowledge that the agent is acting on a principal's
behalf. Cf. undisclosed principal under principal (1).
- universal agent (18c) An agent authorized to perform all acts that the principal could personally perform.
#None
paragraph
- land agent See land agent.
- listing agent (1927) The real-estate broker's representative who obtains a listing agreement with the owner. Cf. selling agent;
showing agent.
- local agent (1804) 1. An agent appointed to act as another's (esp. a company's) representative and to transact business within
a specified district. 2. See special agent (2).
- managing agent (1812) 1. A person with general power involving the exercise of judgment and discretion, as opposed to an
ordinary agent who acts under the direction and control of the principal. — Also termed business agent. 2. See underwriting
agent (2).
- managing general agent (1954) Insurance. A wholesale insurance intermediary who is vested with underwriting authority
from an insurer. • Managing general agents allow small insurers to purchase underwriting expertise. They typically become
involved in policies that require specialized expertise, as with those for professional liability. — Abbr. MGA.
- member's agent See underwriting agent (3).
- mercantile agent (18c) An agent employed to sell goods or merchandise on behalf of the principal. — Also termed commercial
agent.
- nonservant agent (1920) An agent who agrees to act on the principal's behalf but is not subject to the principal's control over
how the task is performed. • A principal is not liable for the physical torts of a nonservant agent. See independent contractor. Cf.
independent agent; servant.
- ostensible agent See apparent agent.
- patent agent (1859) A specialized legal professional — not necessarily a lawyer — who has fulfilled the U.S. Patent and
Trademark Office requirements as a representative and is registered to prepare and prosecute patent applications before the
PTO. • To be registered to practice before the PTO, a candidate must establish mastery of the relevant technology (by holding
a specified technical degree or equivalent training) in order to advise and assist patent applicants. The candidate must also pass
a written examination (the “Patent Bar”) that tests knowledge of patent law and PTO procedure. — Often shortened to agent.
— Also termed registered patent agent; patent solicitor. Cf. patent attorney.
- policywriting agent See underwriting agent (1).
- primary agent (18c) An agent who is directly authorized by a principal. • A primary agent generally may hire a subagent to
perform all or part of the agency. Cf. subagent (1).
- private agent (17c) An agent acting for an individual in that person's private affairs.
- process agent (1886) A person authorized to accept service of process on behalf of another. See registered agent.
- procuring agent (1954) Someone who obtains drugs on behalf of another person and delivers the drugs to that person. • In
criminal-defense theory, the procuring agent does not sell, barter, exchange, or make a gift of the drugs to the other person
because the drugs already belong to that person, who merely employs the agent to pick up and deliver them.
- public agent (17c) A person appointed to act for the public in matters relating to governmental administration or public
business.
- real-estate agent (1844) An agent who represents a buyer or seller (or both, with proper disclosures) in the sale or lease of
real property. • A real-estate agent can be either a broker (whose principal is a buyer or seller) or a salesperson (whose principal
is a broker). — Also termed estate agent. Cf. realtor; real-estate broker under broker.
- record agent See insurance agent.
- registered agent (1809) A person authorized to accept service of process for another person, esp. a foreign corporation, in a
particular jurisdiction. — Also termed resident agent. See process agent.
- registered patent agent See patent agent.
- resident agent See registered agent.
- secret agent See secret agent.
- self-appointed agent (18c) 1. Someone who is not authorized to act on behalf of another person or entity but who behaves
as if such authority has been granted. 2. An agent appointed directly by a principal who also has a statutory agent. 3. A plaintiff
in a class-action lawsuit who purports to represent the entire class.
- selling agent (1839) 1. The real-estate broker's representative who sells the property, as opposed to the agent who lists the
property for sale. 2. See showing agent. Cf. listing agent.
- settlement agent (1952) See closing agent.
#None
paragraph
- commission agent (1812) An agent whose remuneration is based at least in part on commissions, or percentages of actual
sales. • Commission agents typically work as middlemen between sellers and buyers. — Also termed commercial agent.
- common agent (17c) An agent who acts on behalf of more than one principal in a transaction. Cf. coagent.
- corporate agent (1819) An agent authorized to act on behalf of a corporation; broadly, all employees and officers who have
the power to bind the corporation.
- county agent See juvenile officer under officer (1).
- del credere agent (del kred-ə-ray or kray-də-ray) (1822) An agent who guarantees the solvency of the third party with whom
the agent makes a contract for the principal. • A del credere agent receives possession of the principal's goods for purposes
of sale and guarantees that anyone to whom the agent sells the goods on credit will pay promptly for them. For this guaranty,
the agent receives a higher commission for sales. The promise of such an agent is almost universally held not to be within the
statute of frauds. — Also termed del credere factor.
- diplomatic agent (18c) A national representative in one of four categories: (1) ambassadors, (2) envoys and ministers
plenipotentiary, (3) ministers resident accredited to the sovereign, or (4) chargés d'affaires accredited to the minister of foreign
affairs.
- double agent (1935) 1. A spy who finds out an enemy's secrets for his or her principal but who also gives secrets to the
enemy. 2. See dual agent (2).
- dual agent (1881) 1. See coagent. 2. An agent who represents both parties in a single transaction, esp. a buyer and a seller.
— Also termed (in sense 2) double agent.
- emigrant agent (1874) One engaged in the business of hiring laborers for work outside the country or state.
- enrolled agent See enrolled agent.
- escrow agent See escrow agent.
- estate agent See real-estate agent.
- fiscal agent (18c) A bank or other financial institution that collects and disburses money and services as a depository of private
and public funds on another's behalf.
- foreign agent (1938) Someone who registers with the federal government as a lobbyist representing the interests of a foreign
country or corporation.
- forwarding agent (1837) 1. freight forwarder. 2. A freight-forwarder who assembles less-than-carload shipments (small
shipments) into carload shipments, thus taking advantage of lower freight rates.
- general agent (17c) 1. An agent authorized to transact all the principal's business of a particular kind or in a particular place.
• Among the common types of general agents are factors, brokers, and partners. Cf. special agent (1). 2. Insurance. An agent
with the general power of making insurance contracts on behalf of an insurer.
- government agent (1805) 1. An employee or representative of a governmental body. 2. A law-enforcement official, such as
a police officer or an FBI agent. 3. An informant, esp. an inmate, used by law enforcement to obtain incriminating statements
from another inmate.
- gratuitous agent (1822) An agent who acts without a right to compensation.
- high-managerial agent (1957) 1. An agent of a corporation or other business who has authority to formulate corporate policy
or supervise employees. — Also termed superior agent. 2. See superior agent (1).
- implied agent See apparent agent.
- independent agent (17c) An agent who exercises personal judgment and is subject to the principal only for the results of the
work performed. Cf. nonservant agent.
- innocent agent (1805) Criminal law. A person whose action on behalf of a principal is unlawful but does not merit prosecution
because the agent had no knowledge of the principal's illegal purpose; a person who lacks the mens rea for an offense but who
is tricked or coerced by the principal into committing a crime. • Although the agent's conduct was unlawful, the agent might
not be prosecuted if the agent had no knowledge of the principal's illegal purpose. The principal is legally accountable for the
innocent agent's actions. See Model Penal Code § 2.06(2)(a).
- insurance agent (1866) Someone authorized by an insurer to sell its policies; specif., an insurer's representative who solicits or
procures insurance business, including the continuance, renewal, and revival of policies. — Also termed producer; (in property
insurance) recording agent; record agent.
- jural agent See jural agent.
#None
paragraph
Black's Law Dictionary (12th ed. 2024), agent
AGENT
Bryan A. Garner, Editor in Chief
agent (15c) 1. Something that produces an effect <an intervening agent>. See cause (1); electronic agent. 2. Someone who is
authorized to act for or in place of another; a representative <a professional athlete's agent>. — Also termed commissionaire.
See agency. Cf. principal, n.(1); employee.
“Generally speaking, anyone can be an agent who is in fact capable of performing the functions involved. The agent normally
binds not himself but his principal by the contracts he makes; it is therefore not essential that he be legally capable to contract
(although his duties and liabilities to his principal might be affected by his status). Thus an infant or a lunatic may be an agent,
though doubtless the court would disregard either's attempt to act if he were so young or so hopelessly devoid of reason as to
be completely incapable of grasping the function he was attempting to perform.” Floyd R. Mechem, Outlines of the Law of
Agency 8–9 (Philip Mechem ed., 4th ed. 1952).
“The etymology of the word agent or agency tells us much. The words are derived from the Latin verb, ago, agere; the noun
agens, agentis. The word agent denotes one who acts, a doer, force or power that accomplishes things.” Harold Gill Reuschlein
& William A. Gregory, The Law of Agency and Partnership § 1, at 2–3 (2d ed. 1990).
- agent not recognized (2002) Patents. A patent applicant's appointed agent who is not registered to practice before the U.S.
Patent and Trademark Office. • A power of attorney appointing an unregistered agent is void. See patent agent.
- agent of necessity (1857) An agent that the law empowers to act for the benefit of another in an emergency. — Also termed
agent by necessity.
- apparent agent (1823) Someone who reasonably appears to have authority to act for another, regardless of whether actual
authority has been conferred. — Also termed ostensible agent; implied agent.
- associate agent (1993) Patents. An agent who is registered to practice before the U.S. Patent and Trademark Office, has been
appointed by a primary agent, and is authorized to prosecute a patent application through the filing of a power of attorney. • An
associate agent is often used by outside counsel to assist in-house counsel. See patent agent.
- bail-enforcement agent See bounty hunter.
- bargaining agent (1935) A labor union in its capacity of representing employees in collective bargaining.
- broker-agent See broker.
- business agent See business agent.
- case agent See case agent.
- clearing agent (1937) Securities. A person or company acting as an intermediary in a securities transaction or providing
facilities for comparing data regarding securities transactions. • The term includes a custodian of securities in connection with
the central handling of securities. Securities Exchange Act § 3(a)(23)(A) (15 USCA § 78c(a)(23)(A)). — Also termed clearing
agency.
- closing agent (1922) An agent who represents the purchaser or buyer in the negotiation and closing of a real-property
transaction by handling financial calculations and transfers of documents. — Also termed settlement agent. See also settlement
attorney under attorney.
- coagent (16c) Someone who shares with another agent the authority to act for the principal. • A coagent may be appointed by
the principal or by another agent who has been authorized to make the appointment. — Also termed dual agent. Cf. common
agent.
- commercial agent (18c) 1. broker. 2. A consular officer responsible for the commercial interests of his or her country at a
foreign port. 3. See mercantile agent. 4. See commission agent.
27 segments
#None
paragraph
Chapter 2 Intelligent Agents
Figure 2.2 A vacuum-cleaner world with just two locations, A and B. Each location can be clean or dirty, and the agent can move left or right and can clean the square that it occupies. Different versions of the vacuum world allow for different rules about what the agent can perceive, whether its actions always succeed, and so on.
Percept sequence                                   Action
[A, Clean]                                         Right
[A, Dirty]                                         Suck
[B, Clean]                                         Left
[B, Dirty]                                         Suck
[A, Clean], [A, Clean]                             Right
[A, Clean], [A, Dirty]                             Suck
…                                                  …
[A, Clean], [A, Clean], [A, Clean]                 Right
[A, Clean], [A, Clean], [A, Dirty]                 Suck
…                                                  …

Figure 2.3 Partial tabulation of a simple agent function for the vacuum-cleaner world shown in Figure 2.2. The agent cleans the current square if it is dirty; otherwise it moves to the other square. Note that the table is of unbounded size unless there is a restriction on the length of possible percept sequences.
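The mapping in Figure 2.3 is small enough to express directly in code. Below is a minimal Python sketch (not from the text; the function and table names are illustrative) of a condition-based agent and a table-driven agent that reproduce the tabulated behavior.

```python
# A minimal sketch of an agent implementing the Figure 2.3 behavior:
# clean the current square if it is dirty, otherwise move to the other square.
# Function and variable names are illustrative, not taken from the text.

def reflex_vacuum_agent(percept):
    """Map the latest percept (location, status) to an action."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# A table-driven variant keyed on the full percept sequence, as in Figure 2.3.
# The table is unbounded in principle; only a few rows are shown here.
PARTIAL_TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

def table_driven_agent(percept_sequence):
    """Look up the action for an entire percept sequence."""
    return PARTIAL_TABLE.get(tuple(percept_sequence))

if __name__ == "__main__":
    print(reflex_vacuum_agent(("A", "Dirty")))    # Suck
    print(table_driven_agent([("A", "Clean")]))   # Right
```

The table-driven variant makes the caption's point concrete: every additional percept lengthens the lookup key, so the table can never be complete.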
Before closing this section, we should emphasize that the notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization that divides the world into agents and non-agents. One could view a hand-held calculator as an agent that chooses the action of displaying “4” when given the percept sequence “2 + 2 =,” but such an analysis would hardly aid our understanding of the calculator. In a sense, all areas of engineering can be seen as designing artifacts that interact with the world; AI operates at (what the authors consider to be) the most interesting end of the spectrum, where the artifacts have significant computational resources and the task environment requires nontrivial decision making.
#None
paragraph
As noted in Chapter 1, the development of utility theory as a basis for rational behavior goes back hundreds of years. In AI, early research eschewed utilities in favor of goals, with some exceptions (Feldman and Sproull, 1977). The resurgence of interest in probabilistic methods in the 1980s led to the acceptance of maximization of expected utility as the most general framework for decision making (Horvitz et al., 1988). The text by Pearl (1988) was the first in AI to cover probability and utility theory in depth; its exposition of practical methods for reasoning and decision making under uncertainty was probably the single biggest factor in the rapid shift towards utility-based agents in the 1990s (see Chapter 16). The formalization of reinforcement learning within a decision-theoretic framework also contributed to this shift (Sutton, 1988). Somewhat remarkably, almost all AI research until very recently has assumed that the performance measure can be exactly and correctly specified in the form of a utility function or reward function (Hadfield-Menell et al., 2017a; Russell, 2019).

The general design for learning agents portrayed in Figure 2.15 is classic in the machine learning literature (Buchanan et al., 1978; Mitchell, 1997). Examples of the design, as embodied in programs, go back at least as far as Arthur Samuel's (1959, 1967) learning program for playing checkers. Learning agents are discussed in depth in Chapters 19–22.

Some early papers on agent-based approaches are collected by Huhns and Singh (1998) and Wooldridge and Rao (1999). Texts on multiagent systems provide a good introduction to many aspects of agent design (Weiss, 2000a; Wooldridge, 2009). Several conference series devoted to agents began in the 1990s, including the International Workshop on Agent Theories, Architectures, and Languages (ATAL), the International Conference on Autonomous Agents (AGENTS), and the International Conference on Multi-Agent Systems (ICMAS). In 2002, these three merged to form the International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS). From 2000 to 2012 there were annual workshops on Agent-Oriented Software Engineering (AOSE). The journal Autonomous Agents and Multi-Agent Systems was founded in 1998. Finally, Dung Beetle Ecology (Hanski and Cambefort, 1991) provides a wealth of interesting information on the behavior of dung beetles. YouTube has inspiring video recordings of their activities.
#None
paragraph
concept of a controller in control theory is identical to that of an agent in AI. Perhaps surprisingly, AI has concentrated for most of its history on isolated components of agents—question-answering systems, theorem-provers, vision systems, and so on—rather than on whole agents. The discussion of agents in the text by Genesereth and Nilsson (1987) was an influential exception. The whole-agent view is now widely accepted and is a central theme in recent texts (Padgham and Winikoff, 2004; Jones, 2007; Poole and Mackworth, 2017).

Chapter 1 traced the roots of the concept of rationality in philosophy and economics. In AI, the concept was of peripheral interest until the mid-1980s, when it began to suffuse many discussions about the proper technical foundations of the field. A paper by Jon Doyle (1983) predicted that rational agent design would come to be seen as the core mission of AI, while other popular topics would spin off to form new disciplines.

Careful attention to the properties of the environment and their consequences for rational agent design is most apparent in the control theory tradition—for example, classical control systems (Dorf and Bishop, 2004; Kirk, 2004) handle fully observable, deterministic environments; stochastic optimal control (Kumar and Varaiya, 1986; Bertsekas and Shreve, 2007) handles partially observable, stochastic environments; and hybrid control (Henzinger and Sastry, 1998; Cassandras and Lygeros, 2006) deals with environments containing both discrete and continuous elements. The distinction between fully and partially observable environments is also central in the dynamic programming literature developed in the field of operations research (Puterman, 1994), which we discuss in Chapter 17.

Although simple reflex agents were central to behaviorist psychology (see Chapter 1), most AI researchers view them as too simple to provide much leverage. (Rosenschein (1985) and Brooks (1986) questioned this assumption; see Chapter 26.) A great deal of work has gone into finding efficient algorithms for keeping track of complex environments (Bar-Shalom et al., 2001; Choset et al., 2005; Simon, 2006), most of it in the probabilistic setting.

Goal-based agents are presupposed in everything from Aristotle's view of practical reasoning to McCarthy's early papers on logical AI. Shakey the Robot (Fikes and Nilsson, 1971; Nilsson, 1984) was the first robotic embodiment of a logical, goal-based agent. A full logical analysis of goal-based agents appeared in Genesereth and Nilsson (1987), and a goal-based programming methodology called agent-oriented programming was developed by Shoham (1993). The agent-based approach is now extremely popular in software engineering (Ciancarini and Wooldridge, 2001). It has also infiltrated the area of operating systems, where autonomic computing refers to computer systems and networks that monitor and control themselves with a perceive–act loop and machine learning methods (Kephart and Chess, 2003). Noting that a collection of agent programs designed to work well together in a true multiagent environment necessarily exhibits modularity—the programs share no internal state and communicate with each other only through the environment—it is common within the field of multiagent systems to design the agent program of a single agent as a collection of autonomous sub-agents. In some cases, one can even prove that the resulting system gives the same optimal solutions as a monolithic design.

The goal-based view of agents also dominates the cognitive psychology tradition in the area of problem solving, beginning with the enormously influential Human Problem Solving (Newell and Simon, 1972) and running through all of Newell's later work (Newell, 1990). Goals, further analyzed as desires (general) and intentions (currently pursued), are central to the influential theory of agents developed by Michael Bratman (1987).
#None
paragraph
if the representation of a concept is spread over many memory locations, and each memory location is employed as part of the representation of multiple different concepts, we call that a distributed representation. Distributed representations are more robust against noise and information loss. With a localist representation, the mapping from concept to memory location is arbitrary, and if a transmission error garbles a few bits, we might confuse Truck with the unrelated concept Truce. But with a distributed representation, you can think of each concept representing a point in multidimensional space, and if you garble a few bits you move to a nearby point in that space, which will have similar meaning.
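As an illustration of that robustness claim, here is a small hedged Python sketch (the bit patterns are invented, not from the text): each concept is stored as a multi-bit code, and a garbled code is decoded to the nearest stored concept, so flipping one bit usually still recovers Truck rather than jumping to the unrelated Truce.

```python
# Illustrative sketch, not from the text: with multi-bit concept codes,
# nearest-neighbor decoding tolerates a few flipped bits, whereas flipping
# one bit of a localist index would land on an unrelated concept.
import random

concepts = {
    "Truck": (1, 1, 0, 1, 0, 1, 1, 0),   # hypothetical codes
    "Truce": (0, 0, 1, 0, 1, 0, 0, 1),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def nearest_concept(vector):
    return min(concepts, key=lambda name: hamming(concepts[name], vector))

random.seed(0)
code = list(concepts["Truck"])
code[random.randrange(len(code))] ^= 1   # garble one bit in transmission
print(nearest_concept(tuple(code)))      # still decodes to "Truck"
```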
Summary

This chapter has been something of a whirlwind tour of AI, which we have conceived of as the science of agent design. The major points to recall are as follows:
• An agent is something that perceives and acts in an environment. The agent function for an agent specifies the action taken by the agent in response to any percept sequence.
• The performance measure evaluates the behavior of the agent in an environment. A rational agent acts so as to maximize the expected value of the performance measure, given the percept sequence it has seen so far.
• A task environment specification includes the performance measure, the external environment, the actuators, and the sensors. In designing an agent, the first step must always be to specify the task environment as fully as possible.
• Task environments vary along several significant dimensions. They can be fully or partially observable, single-agent or multiagent, deterministic or nondeterministic, episodic or sequential, static or dynamic, discrete or continuous, and known or unknown.
• In cases where the performance measure is unknown or hard to specify correctly, there is a significant risk of the agent optimizing the wrong objective. In such cases the agent design should reflect uncertainty about the true objective.
• The agent program implements the agent function. There exists a variety of basic agent program designs reflecting the kind of information made explicit and used in the decision process. The designs vary in efficiency, compactness, and flexibility. The appropriate design of the agent program depends on the nature of the environment.
• Simple reflex agents respond directly to percepts, whereas model-based reflex agents maintain internal state to track aspects of the world that are not evident in the current percept (see the sketch after this list). Goal-based agents act to achieve their goals, and utility-based agents try to maximize their own expected “happiness.”
• All agents can improve their performance through learning.
Bibliographical and Historical Notes
The central role of action in intelligence—the notion of practical reasoning—goes back at least as far as Aristotle's Nicomachean Ethics. Practical reasoning was also the subject of McCarthy's influential paper "Programs with Common Sense" (1958). The fields of robotics and control theory are, by their very nature, concerned principally with physical agents. The
Section 2.4 The Structure of Agents 59
In an atomic representation each state of the world is indivisible—it has no internal structure. Consider the task of finding a driving route from one end of a country to the other via some sequence of cities (we address this problem in Figure 3.1 on page 64). For the purposes of solving this problem, it may suffice to reduce the state of the world to just the name of the city we are in—a single atom of knowledge, a "black box" whose only discernible property is that of being identical to or different from another black box. The standard algorithms underlying search and game-playing (Chapters 3–5), hidden Markov models (Chapter 14), and Markov decision processes (Chapter 17) all work with atomic representations.

A factored representation splits up each state into a fixed set of variables or attributes, each of which can have a value. Consider a higher-fidelity description for the same driving problem, where we need to be concerned with more than just atomic location in one city or another; we might need to pay attention to how much gas is in the tank, our current GPS coordinates, whether or not the oil warning light is working, how much money we have for tolls, what station is on the radio, and so on. While two different atomic states have nothing in common—they are just different black boxes—two different factored states can share some attributes (such as being at some particular GPS location) and not others (such as having lots of gas or having no gas); this makes it much easier to work out how to turn one state into another. Many important areas of AI are based on factored representations, including constraint satisfaction algorithms (Chapter 6), propositional logic (Chapter 7), planning (Chapter 11), Bayesian networks (Chapters 12–16), and various machine learning algorithms.

For many purposes, we need to understand the world as having things in it that are related to each other, not just variables with values. For example, we might notice that a large truck ahead of us is reversing into the driveway of a dairy farm, but a loose cow is blocking the truck's path. A factored representation is unlikely to be pre-equipped with the attribute TruckAheadBackingIntoDairyFarmDrivewayBlockedByLooseCow with value true or false. Instead, we would need a structured representation, in which objects such as cows and trucks and their various and varying relationships can be described explicitly (see Figure 2.16(c)). Structured representations underlie relational databases and first-order logic (Chapters 8, 9, and 10), first-order probability models (Chapter 15), and much of natural language understanding (Chapters 23 and 24). In fact, much of what humans express in natural language concerns objects and their relationships.
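As an entirely illustrative sketch of these three levels (our own, not the book's code), here is the same driving-problem state written down atomically, factored into attributes, and structured as objects and relations; all names and values are assumptions made up for the example.

# (a) Atomic: the state is an indivisible "black box" -- here, just a city name.
atomic_state = "Bucharest"

# (b) Factored: a fixed set of attributes, each with a value.
factored_state = {
    "city": "Bucharest",
    "gas_level": 0.4,              # fraction of a full tank
    "gps": (44.43, 26.10),
    "oil_light_working": True,
    "toll_money": 12.50,
}

# (c) Structured: objects plus explicit relationships between them.
structured_state = {
    "objects": {"truck1": "Truck", "cow1": "Cow", "driveway1": "Driveway"},
    "relations": [
        ("BackingInto", "truck1", "driveway1"),
        ("Blocks", "cow1", "truck1"),
    ],
}

# Two factored states can share some attributes and differ on others, which
# makes it easy to see how to turn one state into another.
other = dict(factored_state, gas_level=0.1)
changed = {k for k in factored_state if factored_state[k] != other[k]}
print(changed)   # {'gas_level'}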
As we mentioned earlier, the axis along which atomic, factored, and structured representations lie is the axis of increasing expressiveness. Roughly speaking, a more expressive representation can capture, at least as concisely, everything a less expressive one can capture, plus some more. Often, the more expressive language is much more concise; for example, the rules of chess can be written in a page or two of a structured-representation language such as first-order logic but require thousands of pages when written in a factored-representation language such as propositional logic and around 10^{38} pages when written in an atomic language such as that of finite-state automata. On the other hand, reasoning and learning become more complex as the expressive power of the representation increases. To gain the benefits of expressive representations while avoiding their drawbacks, intelligent systems for the real world may need to operate at all points along the axis simultaneously.
Another axis for representation involves the mapping of concepts to locations in physical memory, whether in a computer or in a brain. If there is a one-to-one mapping between concepts and memory locations, we call that a localist representation. On the other hand,
58 Chapter 2 Intelligent Agents
More generally, human choices can provide information about human preferences. For example, suppose the taxi does not know that people generally don't like loud noises, and settles on the idea of blowing its horn continuously as a way of ensuring that pedestrians know it's coming. The consequent human behavior—covering ears, using bad language, and possibly cutting the wires to the horn—would provide evidence to the agent with which to update its utility function. This issue is discussed further in Chapter 22.

In summary, agents have a variety of components, and those components can be represented in many ways within the agent program, so there appears to be great variety among learning methods. There is, however, a single unifying theme. Learning in intelligent agents can be summarized as a process of modification of each component of the agent to bring the components into closer agreement with the available feedback information, thereby improving the overall performance of the agent.
2.4.7 How the components of agent programs work
We have described agent programs (in very high-level terms) as consisting of various components, whose function it is to answer questions such as: "What is the world like now?" "What action should I do now?" "What do my actions do?" The next question for a student of AI is, "How on Earth do these components work?" It takes about a thousand pages to begin to answer that question properly, but here we want to draw the reader's attention to some basic distinctions among the various ways that the components can represent the environment that the agent inhabits.

Roughly speaking, we can place the representations along an axis of increasing complexity and expressive power—atomic, factored, and structured. To illustrate these ideas, it helps to consider a particular agent component, such as the one that deals with "What my actions do." This component describes the changes that might occur in the environment as the result of taking an action, and Figure 2.16 provides schematic depictions of how those transitions might be represented.
[Figure 2.16 diagram omitted]
Figure 2.16 Three ways to represent states and the transitions between them. (a) Atomic representation: a state (such as B or C) is a black box with no internal structure; (b) Factored representation: a state consists of a vector of attribute values; values can be Boolean, real-valued, or one of a fixed set of symbols. (c) Structured representation: a state includes objects, each of which may have attributes of its own as well as relationships to other objects.
Section 2.4 The Structure of Agents 57
to be the entire agent: it takes in percepts and decides on actions. The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.

The design of the learning element depends very much on the design of the performance element. When trying to design an agent that learns a certain capability, the first question is not "How am I going to get it to learn this?" but "What kind of performance element will my agent use to do this once it has learned how?" Given a design for the performance element, learning mechanisms can be constructed to improve every part of the agent.

The critic tells the learning element how well the agent is doing with respect to a fixed performance standard. The critic is necessary because the percepts themselves provide no indication of the agent's success. For example, a chess program could receive a percept indicating that it has checkmated its opponent, but it needs a performance standard to know that this is a good thing; the percept itself does not say so. It is important that the performance standard be fixed. Conceptually, one should think of it as being outside the agent altogether because the agent must not modify it to fit its own behavior.

The last component of the learning agent is the problem generator. It is responsible for suggesting actions that will lead to new and informative experiences. If the performance element had its way, it would keep doing the actions that are best, given what it knows, but if the agent is willing to explore a little and do some perhaps suboptimal actions in the short run, it might discover much better actions for the long run. The problem generator's job is to suggest these exploratory actions. This is what scientists do when they carry out experiments. Galileo did not think that dropping rocks from the top of a tower in Pisa was valuable in itself. He was not trying to break the rocks or to modify the brains of unfortunate pedestrians. His aim was to modify his own brain by identifying a better theory of the motion of objects.

The learning element can make changes to any of the "knowledge" components shown in the agent diagrams (Figures 2.9, 2.11, 2.13, and 2.14). The simplest cases involve learning directly from the percept sequence. Observation of pairs of successive states of the environment can allow the agent to learn "What my actions do" and "How the world evolves" in response to its actions. For example, if the automated taxi exerts a certain braking pressure when driving on a wet road, then it will soon find out how much deceleration is actually achieved, and whether it skids off the road. The problem generator might identify certain parts of the model that are in need of improvement and suggest experiments, such as trying out the brakes on different road surfaces under different conditions.

Improving the model components of a model-based agent so that they conform better with reality is almost always a good idea, regardless of the external performance standard. (In some cases, it is better from a computational point of view to have a simple but slightly inaccurate model rather than a perfect but fiendishly complex model.) Information from the external standard is needed when trying to learn a reflex component or a utility function.

For example, suppose the taxi-driving agent receives no tips from passengers who have been thoroughly shaken up during the trip. The external performance standard must inform the agent that the loss of tips is a negative contribution to its overall performance; then the agent might be able to learn that violent maneuvers do not contribute to its own utility. In a sense, the performance standard distinguishes part of the incoming percept as a reward (or penalty) that provides direct feedback on the quality of the agent's behavior. Hard-wired performance standards such as pain and hunger in animals can be understood in this way.
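The four-component organization of Figure 2.15 can be sketched in code. The class below is our own toy rendering with made-up vacuum-world details, not an implementation from the book or its repository.

import random

class LearningAgent:
    """Toy skeleton of the learning-agent organization in Figure 2.15."""

    def __init__(self):
        self.rules = {"Dirty": "Suck", "Clean": "Right"}   # performance element (condition-action rules)
        self.explore_prob = 0.1                            # problem generator's appetite for experiments

    def critic(self, percept):
        # Fixed performance standard: encountering dirt and cleaning it is good.
        return 1 if percept == "Dirty" else 0

    def learning_element(self, percept, feedback):
        # Placeholder: a real learner would revise self.rules here using the feedback.
        pass

    def performance_element(self, percept):
        # Selects external actions, as the whole agent program did before.
        return self.rules.get(percept, "NoOp")

    def problem_generator(self, action):
        # Occasionally try something new to gain informative experience.
        if random.random() < self.explore_prob:
            return random.choice(["Left", "Right", "Suck"])
        return action

    def program(self, percept):
        self.learning_element(percept, self.critic(percept))
        return self.problem_generator(self.performance_element(percept))

agent = LearningAgent()
print(agent.program("Dirty"))   # usually 'Suck', occasionally an exploratory action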
56 Chapter 2 Intelligent Agents
[Figure 2.15 diagram omitted: performance standard, critic, learning element, performance element, problem generator, sensors, and actuators]
Figure 2.15 A general learning agent. The "performance element" box represents what we have previously considered to be the whole agent program. Now, the "learning element" box gets to modify that program to improve its performance.
unachievable in practice because of computational complexity, as we noted in Chapter 1. We also note that not all utility-based agents are model-based; we will see in Chapters 22 and 26 that a model-free agent can learn what action is best in a particular situation without ever learning exactly how that action changes the environment.

Finally, all of this assumes that the designer can specify the utility function correctly; Chapters 17, 18, and 22 consider the issue of unknown utility functions in more depth.
2.4.6 Learning agents
We have described agent programs with various methods for selecting actions. We have not, so far, explained how the agent programs come into being. In his famous early paper, Turing (1950) considers the idea of actually programming his intelligent machines by hand. He estimates how much work this might take and concludes, "Some more expeditious method seems desirable." The method he proposes is to build learning machines and then to teach them. In many areas of AI, this is now the preferred method for creating state-of-the-art systems. Any type of agent (model-based, goal-based, utility-based, etc.) can be built as a learning agent (or not).

Learning has another advantage, as we noted earlier: it allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow. In this section, we briefly introduce the main ideas of learning agents. Throughout the book, we comment on opportunities and methods for learning in particular kinds of agents. Chapters 19–22 go into much more depth on the learning algorithms themselves.

A learning agent can be divided into four conceptual components, as shown in Figure 2.15. The most important distinction is between the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions. The performance element is what we have previously considered
48 Chapter 2 Intelligent Agents
function TABLE-DRIVEN-AGENT(percept) returns an action
  persistent: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept sequences, initially fully specified

  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action

Figure 2.7 The TABLE-DRIVEN-AGENT program is invoked for each new percept and returns an action each time. It retains the complete percept sequence in memory.
2.4.1 Agent programs
The agent programs that we design in this book all have the same skeleton: they take the current percept as input from the sensors and return an action to the actuators.5 Notice the difference between the agent program, which takes the current percept as input, and the agent function, which may depend on the entire percept history. The agent program has no choice but to take just the current percept as input because nothing more is available from the environment; if the agent's actions need to depend on the entire percept sequence, the agent will have to remember the percepts.

We describe the agent programs in the simple pseudocode language that is defined in Appendix B. (The online code repository contains implementations in real programming languages.) For example, Figure 2.7 shows a rather trivial agent program that keeps track of the percept sequence and then uses it to index into a table of actions to decide what to do. The table—an example of which is given for the vacuum world in Figure 2.3—represents explicitly the agent function that the agent program embodies. To build a rational agent in this way, we as designers must construct a table that contains the appropriate action for every possible percept sequence.
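For concreteness, here is a rough Python rendering of the TABLE-DRIVEN-AGENT pseudocode of Figure 2.7; it is a sketch in the spirit of the online repository rather than its actual code, and the vacuum-world table fragment is illustrative.

def make_table_driven_agent(table):
    percepts = []                         # persistent: the full percept sequence

    def program(percept):
        percepts.append(percept)          # append percept to the end of percepts
        return table.get(tuple(percepts)) # action <- LOOKUP(percepts, table)

    return program

# Fragment of a lookup table indexed by percept sequences (cf. Figure 2.3).
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))   # 'Right'
print(agent(("B", "Dirty")))   # 'Suck'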
It is instructive to consider why the table-driven approach to agent construction is doomed to failure. Let P be the set of possible percepts and let T be the lifetime of the agent (the total number of percepts it will receive). The lookup table will contain ∑_{t=1}^{T} |P|^t entries. Consider the automated taxi: the visual input from a single camera (eight cameras is typical) comes in at the rate of roughly 70 megabytes per second (30 frames per second, 1080×720 pixels with 24 bits of color information). This gives a lookup table with over 10^{600,000,000,000} entries for an hour's driving. Even the lookup table for chess—a tiny, well-behaved fragment of the real world—has (it turns out) at least 10^{150} entries. In comparison, the number of atoms in the observable universe is less than 10^{80}. The daunting size of these tables means that (a) no physical agent in this universe will have the space to store the table; (b) the designer would not have time to create the table; and (c) no agent could ever learn all the right table entries from its experience.
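A quick check of the formula (our own back-of-the-envelope choice of |P| = 4 percepts, i.e., two locations times {Clean, Dirty}, and a lifetime of T = 1000 steps, matching the rationality example in Section 2.2) shows that even the toy vacuum world is already out of reach:

# Table size sum_{t=1}^{T} |P|^t for a toy vacuum world.
P, T = 4, 1000
entries = sum(P ** t for t in range(1, T + 1))
print(len(str(entries)))   # 603 -- the entry count has 603 decimal digits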
Despite all this, TABLE-DRIVEN-AGENT does do what we want, assuming the table is filled in correctly: it implements the desired agent function.

5 There are other choices for the agent program skeleton; for example, we could have the agent programs be coroutines that run asynchronously with the environment. Each such coroutine has an input and output port and consists of a loop that reads the input port for percepts and writes actions to the output port.
CHAPTER 2
INTELLIGENT AGENTS
In which we discuss the nature of agents, perfect or otherwise, the diversity of environments, and the resulting menagerie of agent types.

Chapter 1 identified the concept of rational agents as central to our approach to artificial intelligence. In this chapter, we make this notion more concrete. We will see that the concept of rationality can be applied to a wide variety of agents operating in any imaginable environment. Our plan in this book is to use this concept to develop a small set of design principles for building successful agents—systems that can reasonably be called intelligent.

We begin by examining agents, environments, and the coupling between them. The observation that some agents behave better than others leads naturally to the idea of a rational agent—one that behaves as well as possible. How well an agent can behave depends on the nature of the environment; some environments are more difficult than others. We give a crude categorization of environments and show how properties of an environment influence the design of suitable agents for that environment. We describe a number of basic "skeleton" agent designs, which we flesh out in the rest of the book.
2.1 Agents and Environments
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. This simple idea is illustrated in Figure 2.1. A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and so on for actuators. A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators. A software agent receives file contents, network packets, and human input (keyboard/mouse/touchscreen/voice) as sensory inputs and acts on the environment by writing files, sending network packets, and displaying information or generating sounds. The environment could be everything—the entire universe! In practice it is just that part of the universe whose state we care about when designing this agent—the part that affects what the agent perceives and that is affected by the agent's actions.

We use the term percept to refer to the content an agent's sensors are perceiving. An agent's percept sequence is the complete history of everything the agent has ever perceived. In general, an agent's choice of action at any given instant can depend on its built-in knowledge and on the entire percept sequence observed to date, but not on anything it hasn't perceived. By specifying the agent's choice of action for every possible percept sequence, we have said more or less everything there is to say about the agent. Mathematically speaking, we say that an agent's behavior is described by the agent function that maps any given percept sequence to an action.
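In symbols (our notation, not the text's), writing \mathcal{P} for the set of possible percepts and \mathcal{A} for the set of actions, the agent function is a mapping from percept sequences to actions:

f : \mathcal{P}^{*} \to \mathcal{A}, \qquad a_t = f(p_1, p_2, \ldots, p_t)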
Section 2.4 The Structure of Agents 55
[Figure 2.14 diagram omitted]
Figure 2.14 A model-based, utility-based agent. It uses a model of the world, along with a utility function that measures its preferences among states of the world. Then it chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of the outcome.
aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed against the importance of the goals.

Partial observability and nondeterminism are ubiquitous in the real world, and so, therefore, is decision making under uncertainty. Technically speaking, a rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes—that is, the utility the agent expects to derive, on average, given the probabilities and utilities of each outcome. (Appendix A defines expectation more precisely.) In Chapter 16, we show that any rational agent must behave as if it possesses a utility function whose expected value it tries to maximize. An agent that possesses an explicit utility function can make rational decisions with a general-purpose algorithm that does not depend on the specific utility function being maximized. In this way, the "global" definition of rationality—designating as rational those agent functions that have the highest performance—is turned into a "local" constraint on rational-agent designs that can be expressed in a simple program.
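Written out (our summary of the form developed in Chapter 16, with simplified notation), the expected utility of an action a and the resulting choice rule are:

EU(a) = \sum_{s'} P(s' \mid a)\, U(s'), \qquad a^{*} = \arg\max_{a} EU(a)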
The utility-based agent structure appears in Figure 2.14. Utility-based agent programs appear in Chapters 16 and 17, where we design decision-making agents that must handle the uncertainty inherent in nondeterministic or partially observable environments. Decision making in multiagent environments is also studied in the framework of utility theory, as explained in Chapter 18.

At this point, the reader may be wondering, "Is it that simple? We just build agents that maximize expected utility, and we're done?" It's true that such agents would be intelligent, but it's not simple. A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning. The results of this research fill many of the chapters of this book. Choosing the utility-maximizing course of action is also a difficult task, requiring ingenious algorithms that fill several more chapters. Even with these algorithms, perfect rationality is usually
54 Chapter 2 Intelligent Agents
[Figure 2.13 diagram omitted]
Figure 2.13 A model-based, goal-based agent. It keeps track of the world state as well as a set of goals it is trying to achieve, and chooses an action that will (eventually) lead to the achievement of its goals.
simply by specifying that destination as the goal. The reflex agent's rules for when to turn and when to go straight will work only for a single destination; they must all be replaced to go somewhere new.
2.4.5 Utility-based agents
Goals alone are not enough to generate high-quality behavior in most environments. For example, many action sequences will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. Goals just provide a crude binary distinction between "happy" and "unhappy" states. A more general performance measure should allow a comparison of different world states according to exactly how happy they would make the agent. Because "happy" does not sound very scientific, economists and computer scientists use the term utility instead.7

We have already seen that a performance measure assigns a score to any given sequence of environment states, so it can easily distinguish between more and less desirable ways of getting to the taxi's destination. An agent's utility function is essentially an internalization of the performance measure. Provided that the internal utility function and the external performance measure are in agreement, an agent that chooses actions to maximize its utility will be rational according to the external performance measure.

Let us emphasize again that this is not the only way to be rational—we have already seen a rational agent program for the vacuum world (Figure 2.8) that has no idea what its utility function is—but, like goal-based agents, a utility-based agent has many advantages in terms of flexibility and learning. Furthermore, in two kinds of cases, goals are inadequate but a utility-based agent can still make rational decisions. First, when there are conflicting goals, only some of which can be achieved (for example, speed and safety), the utility function specifies the appropriate tradeoff. Second, when there are several goals that the agent can

7 The word "utility" here refers to "the quality of being useful," not to the electric company or waterworks.
Section 2.4 The Structure of Agents 53
function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              transition model, a description of how the next state depends on
                  the current state and action
              sensor model, a description of how the current world state is reflected
                  in the agent's percepts
              rules, a set of condition–action rules
              action, the most recent action, initially none

  state ← UPDATE-STATE(state, action, percept, transition model, sensor model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

Figure 2.12 A model-based reflex agent. It keeps track of the current state of the world, using an internal model. It then chooses an action in the same way as the reflex agent.
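As a rough Python rendering of Figure 2.12 (a sketch with made-up braking details, not code from the book or its repository; the update function here folds the transition and sensor models into one callable):

def make_model_based_reflex_agent(update_state, rules, initial_state=None):
    memory = {"state": initial_state, "action": None}      # persistent variables

    def program(percept):
        # state <- UPDATE-STATE(state, action, percept, transition model, sensor model)
        memory["state"] = update_state(memory["state"], memory["action"], percept)
        # rule <- RULE-MATCH(state, rules); action <- rule.ACTION
        memory["action"] = next(action for condition, action in rules
                                if condition(memory["state"]))
        return memory["action"]

    return program

# Toy braking example: remember the previous camera frame so the agent can
# tell whether the brake lights of the car in front just came on.
def update_state(old, last_action, percept):
    return {"previous_frame": (old or {}).get("current_frame"),
            "current_frame": percept}

rules = [
    (lambda s: s["current_frame"] == "red_lights_on"
               and s["previous_frame"] != "red_lights_on", "InitiateBraking"),
    (lambda s: True, "KeepDriving"),                        # default rule
]

agent = make_model_based_reflex_agent(update_state, rules)
print(agent("red_lights_off"))   # 'KeepDriving'
print(agent("red_lights_on"))    # 'InitiateBraking'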
may not be able to see around the large truck that has stopped in front of it and can only guess about what may be causing the hold-up. Thus, uncertainty about the current state may be unavoidable, but the agent still has to make a decision.
2.4.4 Goal-based agents
Knowing something about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable—for example, being at a particular destination. The agent program can combine this with the model (the same information as was used in the model-based reflex agent) to choose actions that achieve the goal. Figure 2.13 shows the goal-based agent's structure.

Sometimes goal-based action selection is straightforward—for example, when goal satisfaction results immediately from a single action. Sometimes it will be more tricky—for example, when the agent has to consider long sequences of twists and turns in order to find a way to achieve the goal. Search (Chapters 3 to 5) and planning (Chapter 11) are the subfields of AI devoted to finding action sequences that achieve the agent's goals.

Notice that decision making of this kind is fundamentally different from the condition–action rules described earlier, in that it involves consideration of the future—both "What will happen if I do such-and-such?" and "Will that make me happy?" In the reflex agent designs, this information is not explicitly represented, because the built-in rules map directly from percepts to actions. The reflex agent brakes when it sees brake lights, period. It has no idea why. A goal-based agent brakes when it sees brake lights because that's the only action that it predicts will achieve its goal of not hitting other cars.
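A minimal sketch (our own, deliberately simplified) of this forward-looking style of action selection: the agent runs each candidate action through its model and keeps one whose predicted outcome satisfies the goal. All names and numbers are illustrative.

def goal_based_choice(state, actions, predict, goal_test):
    """Return an action predicted to achieve the goal, else None."""
    for action in actions:
        if goal_test(predict(state, action)):
            return action
    return None

# Toy braking scenario: the goal is not to hit the car in front.
def predict(state, action):
    gap = state["gap_m"] - (0 if action == "Brake" else 5)   # crude transition model
    return {"gap_m": gap}

goal_test = lambda s: s["gap_m"] > 0                         # no collision
print(goal_based_choice({"gap_m": 3}, ["Continue", "Brake"], predict, goal_test))
# 'Brake' -- continuing is predicted to close the 3 m gap and violate the goal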
Although the goal-based agent appears less efficient, it is more flexible because the knowledge that supports its decisions is represented explicitly and can be modified. For example, a goal-based agent's behavior can easily be changed to go to a different destination,
52 Chapter 2 Intelligent Agents
[Figure 2.11 diagram omitted]
Figure 2.11 A model-based reflex agent.
Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program in some form. First, we need some information about how the world changes over time, which can be divided roughly into two parts: the effects of the agent's actions and how the world evolves independently of the agent. For example, when the agent turns the steering wheel clockwise, the car turns to the right, and when it's raining the car's cameras can get wet. This knowledge about "how the world works"—whether implemented in simple Boolean circuits or in complete scientific theories—is called a transition model of the world.

Second, we need some information about how the state of the world is reflected in the agent's percepts. For example, when the car in front initiates braking, one or more illuminated red regions appear in the forward-facing camera image, and, when the camera gets wet, droplet-shaped objects appear in the image partially obscuring the road. This kind of knowledge is called a sensor model.

Together, the transition model and sensor model allow an agent to keep track of the state of the world—to the extent possible given the limitations of the agent's sensors. An agent that uses such models is called a model-based agent.

Figure 2.11 gives the structure of the model-based reflex agent with internal state, showing how the current percept is combined with the old internal state to generate the updated description of the current state, based on the agent's model of how the world works. The agent program is shown in Figure 2.12. The interesting part is the function UPDATE-STATE, which is responsible for creating the new internal state description. The details of how models and states are represented vary widely depending on the type of environment and the particular technology used in the agent design.

Regardless of the kind of representation used, it is seldom possible for the agent to determine the current state of a partially observable environment exactly. Instead, the box labeled "what the world is like now" (Figure 2.11) represents the agent's "best guess" (or sometimes best guesses, if the agent entertains multiple possibilities). For example, an automated taxi
Section 2.4 The Structure of Agents 51
function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition–action rules

  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

Figure 2.10 A simple reflex agent. It acts according to a rule whose condition matches the current state, as defined by the percept.
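A rough Python rendering of Figure 2.10, instantiated for the vacuum world (a sketch with illustrative rules, not code from the book's repository):

def make_simple_reflex_agent(interpret_input, rules):
    def program(percept):
        state = interpret_input(percept)               # state <- INTERPRET-INPUT(percept)
        # rule <- RULE-MATCH(state, rules); action <- rule.ACTION
        return next(action for condition, action in rules if condition(state))
    return program

interpret_input = lambda percept: percept              # percept is already (location, status)
rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]

agent = make_simple_reflex_agent(interpret_input, rules)
print(agent(("A", "Dirty")))   # 'Suck'
print(agent(("A", "Clean")))   # 'Right'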
Even a little bit of unobservability can cause serious trouble. For example, the braking rule given earlier assumes that the condition car-in-front-is-braking can be determined from the current percept—a single frame of video. This works if the car in front has a centrally mounted (and hence uniquely identifiable) brake light. Unfortunately, older models have different configurations of taillights, brake lights, and turn-signal lights, and it is not always possible to tell from a single image whether the car is braking or simply has its taillights on. A simple reflex agent driving behind such a car would either brake continuously and unnecessarily, or, worse, never brake at all.

We can see a similar problem arising in the vacuum world. Suppose that a simple reflex vacuum agent is deprived of its location sensor and has only a dirt sensor. Such an agent has just two possible percepts: [Dirty] and [Clean]. It can Suck in response to [Dirty]; what should it do in response to [Clean]? Moving Left fails (forever) if it happens to start in square A, and moving Right fails (forever) if it happens to start in square B. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments.
Escape from infinite loops is possible if the agent can randomize its actions. For example, if the vacuum agent perceives [Clean], it might flip a coin to choose between Right and Left. It is easy to show that the agent will reach the other square in an average of two steps. Then, if that square is dirty, the agent will clean it and the task will be complete. Hence, a randomized simple reflex agent might outperform a deterministic simple reflex agent.
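To spell out the "average of two steps" claim (our derivation, not given in the text): each flip sends the agent to the other square with probability 1/2, so the number of steps N needed is geometrically distributed, and

\mathbb{E}[N] = \sum_{k=1}^{\infty} k \left(\tfrac{1}{2}\right)^{k} = 2 .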
We mentioned in Section 2.3 that randomized behavior of the right kind can be rational in some multiagent environments. In single-agent environments, randomization is usually not rational. It is a useful trick that helps a simple reflex agent in some situations, but in most cases we can do much better with more sophisticated deterministic agents.
2.4.3 Model-based reflex agents
The most effective way to handle partial observability is for the agent to keep track of the part of the world it can't see now. That is, the agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. For the braking problem, the internal state is not too extensive—just the previous frame from the camera, allowing the agent to detect when two red lights at the edge of the vehicle go on or off simultaneously. For other driving tasks such as changing lanes, the agent needs to keep track of where the other cars are if it can't see them all at once. And for any driving to be possible at all, the agent needs to keep track of where its keys are.
Section 2.2 Good Behavior: The Concept of Rationality 39
2.2 Good Behavior: The Concept of Rationality
A rational agent is one that does the right thing. Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing?
2.2.1 Performance measures
Moral philosophy has developed several different notions of the "right thing," but AI has generally stuck to one notion called consequentialism: we evaluate an agent's behavior by its consequences. When an agent is plunked down in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well. This notion of desirability is captured by a performance measure that evaluates any given sequence of environment states.

Humans have desires and preferences of their own, so the notion of rationality as applied to humans has to do with their success in choosing actions that produce sequences of environment states that are desirable from their point of view. Machines, on the other hand, do not have desires and preferences of their own; the performance measure is, initially at least, in the mind of the designer of the machine, or in the mind of the users the machine is designed for. We will see that some agent designs have an explicit representation of (a version of) the performance measure, while in other designs the performance measure is entirely implicit—the agent may do the right thing, but it doesn't know why.
Recalling Norbert Wiener's warning to ensure that "the purpose put into the machine is the purpose which we really desire" (page 33), notice that it can be quite hard to formulate a performance measure correctly. Consider, for example, the vacuum-cleaner agent from the preceding section. We might propose to measure performance by the amount of dirt cleaned up in a single eight-hour shift. With a rational agent, of course, what you ask for is what you get. A rational agent can maximize this performance measure by cleaning up the dirt, then dumping it all on the floor, then cleaning it up again, and so on. A more suitable performance measure would reward the agent for having a clean floor. For example, one point could be awarded for each clean square at each time step (perhaps with a penalty for electricity consumed and noise generated). As a general rule, it is better to design performance measures according to what one actually wants to be achieved in the environment, rather than according to how one thinks the agent should behave.
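To make the contrast concrete, here is a small sketch (our own) that scores a recorded history of vacuum-world states under the two measures just discussed; the "clean, dump, clean again" history scores well on the dirt-cleaned measure but only modestly on the clean-square-per-time-step measure.

def dirt_cleaned(history):
    """Count cleaning events -- the measure a dirt-dumping agent can exploit."""
    return sum(1 for prev, cur in zip(history, history[1:])
               for sq in cur if cur[sq] and not prev[sq])

def clean_square_points(history):
    """One point per clean square per time step -- rewards keeping a clean floor."""
    return sum(sum(state.values()) for state in history)

# Each state maps a square name to True (clean) or False (dirty).
history = [{"A": False, "B": True}, {"A": True, "B": True},
           {"A": False, "B": True}, {"A": True, "B": True}]
print(dirt_cleaned(history), clean_square_points(history))   # 2 6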
Even when the obvious pitfalls are avoided, some knotty problems remain. For example, the notion of "clean floor" in the preceding paragraph is based on average cleanliness over time. Yet the same average cleanliness can be achieved by two different agents, one of which does a mediocre job all the time while the other cleans energetically but takes long breaks. Which is preferable might seem to be a fine point of janitorial science, but in fact it is a deep philosophical question with far-reaching implications. Which is better—a reckless life of highs and lows, or a safe but humdrum existence? Which is better—an economy where everyone lives in moderate poverty, or one in which some live in plenty while others are very poor? We leave these questions as an exercise for the diligent reader.

For most of the book, we will assume that the performance measure can be specified correctly. For the reasons given above, however, we must accept the possibility that we might put the wrong purpose into the machine—precisely the King Midas problem described on
40 Chapter 2 Intelligent Agents
page 33. Moreover, when designing one piece of software, copies of which will belong to different users, we cannot anticipate the exact preferences of each individual user. Thus, we may need to build agents that reflect initial uncertainty about the true performance measure and learn more about it as time goes by; such agents are described in Chapters 16, 18, and 22.
2.2.2 Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent's prior knowledge of the environment.
• The actions that the agent can perform.
• The agent's percept sequence to date.
This leads to a definition of a rational agent:
For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the other square if not; this is the agent function tabulated in Figure 2.3. Is this a rational agent? That depends! First, we need to say what the performance measure is, what is known about the environment, and what sensors and actuators the agent has. Let us assume the following:
• The performance measure awards one point for each clean square at each time step, over a "lifetime" of 1000 time steps.
• The "geography" of the environment is known a priori (Figure 2.2) but the dirt distribution and the initial location of the agent are not. Clean squares stay clean and sucking cleans the current square. The Right and Left actions move the agent one square except when this would take the agent outside the environment, in which case the agent remains where it is.
• The only available actions are Right, Left, and Suck.
• The agent correctly perceives its location and whether that location contains dirt.
Under these circumstances the agent is indeed rational; its expected performance is at least as good as any other agent's.
One can see easily that the same agent would be irrational under different circumstances. For example, once all the dirt is cleaned up, the agent will oscillate needlessly back and forth; if the performance measure includes a penalty of one point for each movement, the agent will fare poorly. A better agent for this case would do nothing once it is sure that all the squares are clean. If clean squares can become dirty again, the agent should occasionally check and re-clean them if needed. If the geography of the environment is unknown, the agent will need to explore it. Exercise 2.VACR asks you to design agents for these cases.
2.2.3 Omniscience, learning, and autonomy
We need to be careful to distinguish between rationality and omniscience. An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality. Consider the following example: I am walking along the Champs Élysées one day and I see an old friend across the street. There is no traffic nearby and I'm
Section 2.2 Good Behavior: The Concept of Rationality 41
not otherwise engaged, so, being rational, I start to cross the street. Meanwhile, at 33,000 feet, a cargo door falls off a passing airliner,3 and before I make it to the other side of the street I am flattened. Was I irrational to cross the street? It is unlikely that my obituary would read "Idiot attempts to cross street."

This example shows that rationality is not the same as perfection. Rationality maximizes expected performance, while perfection maximizes actual performance. Retreating from a requirement of perfection is not just a question of being fair to agents. The point is that if we expect an agent to do what turns out after the fact to be the best action, it will be impossible to design an agent to fulfill this specification—unless we improve the performance of crystal balls or time machines.
Our definition of rationality does not require omniscience, then, because the rational choice depends only on the percept sequence to date. We must also ensure that we haven't inadvertently allowed the agent to engage in decidedly underintelligent activities. For example, if an agent does not look both ways before crossing a busy road, then its percept sequence will not tell it that there is a large truck approaching at high speed. Does our definition of rationality say that it's now OK to cross the road? Far from it!

First, it would not be rational to cross the road given this uninformative percept sequence: the risk of accident from crossing without looking is too great. Second, a rational agent should choose the "looking" action before stepping into the street, because looking helps maximize the expected performance. Doing actions in order to modify future percepts—sometimes called information gathering—is an important part of rationality and is covered in depth in Chapter 16. A second example of information gathering is provided by the exploration that must be undertaken by a vacuum-cleaning agent in an initially unknown environment.
Our definition requires a rational agent not only to gather information but also to learn as much as possible from what it perceives. The agent's initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented. There are extreme cases in which the environment is completely known a priori and completely predictable. In such cases, the agent need not perceive or learn; it simply acts correctly.

Of course, such agents are fragile. Consider the lowly dung beetle. After digging its nest and laying its eggs, it fetches a ball of dung from a nearby heap to plug the entrance. If the ball of dung is removed from its grasp en route, the beetle continues its task and pantomimes plugging the nest with the nonexistent dung ball, never noticing that it is missing. Evolution has built an assumption into the beetle's behavior, and when it is violated, unsuccessful behavior results.
Slightly more intelligent is the sphex wasp. The female sphex will dig a burrow, go out and sting a caterpillar and drag it to the burrow, enter the burrow again to check all is well, drag the caterpillar inside, and lay its eggs. The caterpillar serves as a food source when the eggs hatch. So far so good, but if an entomologist moves the caterpillar a few inches away while the sphex is doing the check, it will revert to the "drag the caterpillar" step of its plan and will continue the plan without modification, re-checking the burrow, even after dozens of caterpillar-moving interventions. The sphex is unable to learn that its innate plan is failing, and thus will not change it.

3 See N. Henderson, "New door latches urged for Boeing 747 jumbo jets," Washington Post, August 24, 1989.
42 Chapter 2 Intelligent Agents
To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts and learning processes, we say that the agent lacks autonomy. A rational agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge. For example, a vacuum-cleaning agent that learns to predict where and when additional dirt will appear will do better than one that does not.

As a practical matter, one seldom requires complete autonomy from the start: when the agent has had little or no experience, it would have to act randomly unless the designer gave some assistance. Just as evolution provides animals with enough built-in reflexes to survive long enough to learn for themselves, it would be reasonable to provide an artificial intelligent agent with some initial knowledge as well as an ability to learn. After sufficient experience of its environment, the behavior of a rational agent can become effectively independent of its prior knowledge. Hence, the incorporation of learning allows one to design a single rational agent that will succeed in a vast variety of environments.
2.3 The Nature of Environments
Now that we have a definition of rationality, we are almost ready to think about building rational agents. First, however, we must think about task environments, which are essentially the "problems" to which rational agents are the "solutions." We begin by showing how to specify a task environment, illustrating the process with a number of examples. We then show that task environments come in a variety of flavors. The nature of the task environment directly affects the appropriate design for the agent program.
2.3.1 Specifying the task environment
In our discussion of the rationality of the simple vacuum-cleaner agent, we had to specify the performance measure, the environment, and the agent's actuators and sensors. We group all these under the heading of the task environment. For the acronymically minded, we call this the PEAS (Performance, Environment, Actuators, Sensors) description. In designing an agent, the first step must always be to specify the task environment as fully as possible.

The vacuum world was a simple example; let us consider a more complex problem: an automated taxi driver. Figure 2.4 summarizes the PEAS description for the taxi's task environment. We discuss each element in more detail in the following paragraphs.

First, what is the performance measure to which we would like our automated driver to aspire? Desirable qualities include getting to the correct destination; minimizing fuel consumption and wear and tear; minimizing the trip time or cost; minimizing violations of traffic laws and disturbances to other drivers; maximizing safety and passenger comfort; maximizing profits. Obviously, some of these goals conflict, so tradeoffs will be required.

Next, what is the driving environment that the taxi will face? Any taxi driver must deal with a variety of roads, ranging from rural lanes and urban alleys to 12-lane freeways. The roads contain other traffic, pedestrians, stray animals, road works, police cars, puddles, and potholes. The taxi must also interact with potential and actual passengers. There are also some optional choices. The taxi might need to operate in Southern California, where snow is seldom a problem, or in Alaska, where it seldom is not. It could always be driving on the right, or we might want it to be flexible enough to drive on the left when in Britain or Japan. Obviously, the more restricted the environment, the easier the design problem.
Section 2.3 The Nature of Environments 43
Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip; maximize profits; minimize impact on other road users
Environment: Roads, other traffic, police, pedestrians, customers, weather
Actuators: Steering, accelerator, brake, signal, horn, display, speech
Sensors: Cameras, radar, speedometer, GPS, engine sensors, accelerometer, microphones, touchscreen

Figure 2.4 PEAS description of the task environment for an automated taxi driver.
The actuators for an automated taxi include those available to a human driver: control over the engine through the accelerator and control over steering and braking. In addition, it will need output to a display screen or voice synthesizer to talk back to the passengers, and perhaps some way to communicate with other vehicles, politely or otherwise.

The basic sensors for the taxi will include one or more video cameras so that it can see, as well as lidar and ultrasound sensors to detect distances to other cars and obstacles. To avoid speeding tickets, the taxi should have a speedometer, and to control the vehicle properly, especially on curves, it should have an accelerometer. To determine the mechanical state of the vehicle, it will need the usual array of engine, fuel, and electrical system sensors. Like many human drivers, it might want to access GPS signals so that it doesn't get lost. Finally, it will need touchscreen or voice input for the passenger to request a destination.
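One convenient way to write such a specification down in code (our own sketch, populated with the taxi entries from Figure 2.4):

from dataclasses import dataclass, field

@dataclass
class PEAS:
    performance_measure: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip",
                         "maximize profits", "minimize impact on other road users"],
    environment=["roads", "other traffic", "police", "pedestrians",
                 "customers", "weather"],
    actuators=["steering", "accelerator", "brake", "signal", "horn",
               "display", "speech"],
    sensors=["cameras", "radar", "speedometer", "GPS", "engine sensors",
             "accelerometer", "microphones", "touchscreen"],
)
print(taxi.actuators)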
In Figure 2.5, we have sketched the basic PEAS elements for a number of additional agent types. Further examples appear in Exercise 2.PEAS. The examples include physical as well as virtual environments. Note that virtual task environments can be just as complex as the "real" world: for example, a software agent (or software robot or softbot) that trades on auction and reselling Web sites deals with millions of other users and billions of objects, many with real images.
2.3.2 Properties of task environments
The range of task environments that might arise in AI is obviously vast. We can, however, identify a fairly small number of dimensions along which task environments can be categorized. These dimensions determine, to a large extent, the appropriate agent design and the applicability of each of the principal families of techniques for agent implementation. First we list the dimensions, then we analyze several task environments to illustrate the ideas. The definitions here are informal; later chapters provide more precise statements and examples of each kind of environment.

Fully observable vs. partially observable: If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the
44 Chapter 2 Intelligent Agents
Agent Type: Medical diagnosis system
Performance Measure: Healthy patient, reduced costs
Environment: Patient, hospital, staff
Actuators: Display of questions, tests, diagnoses, treatments
Sensors: Touchscreen/voice entry of symptoms and findings

Agent Type: Satellite image analysis system
Performance Measure: Correct categorization of objects, terrain
Environment: Orbiting satellite, downlink, weather
Actuators: Display of scene categorization
Sensors: High-resolution digital camera

Agent Type: Part-picking robot
Performance Measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts; bins
Actuators: Jointed arm and hand
Sensors: Camera, tactile and joint angle sensors

Agent Type: Refinery controller
Performance Measure: Purity, yield, safety
Environment: Refinery, raw materials, operators
Actuators: Valves, pumps, heaters, stirrers, displays
Sensors: Temperature, pressure, flow, chemical sensors

Agent Type: Interactive English tutor
Performance Measure: Student's score on test
Environment: Set of students, testing agency
Actuators: Display of exercises, feedback, speech
Sensors: Keyboard entry, voice

Figure 2.5 Examples of agent types and their PEAS descriptions.
performance measure. Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data—for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking. If the agent has no sensors at all then the environment is unobservable. One might think that in such cases the agent's plight is hopeless, but, as we discuss in Chapter 4, the agent's goals may still be achievable, sometimes with certainty.

Single-agent vs. multiagent: The distinction between single-agent and multiagent environments may seem simple enough. For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment. However, there are some subtle issues. First, we have described how an entity may be viewed as an agent, but we have not explained which entities must be viewed as agents. Does an agent A (the taxi driver for example) have to treat an object B (another vehicle) as an agent, or can it be treated merely as an object behaving according to the laws of physics, analogous to waves at the beach or leaves blowing in the wind? The key distinction is whether B's behavior is best described as maximizing a performance measure whose value depends on agent A's behavior.
Section 2.3 The Nature of Environments 45
For example, in chess, the opponent entity B is trying to maximize its performance measure, which, by the rules of chess, minimizes agent A's performance measure. Thus, chess is a competitive multiagent environment. On the other hand, in the taxi-driving environment, avoiding collisions maximizes the performance measure of all agents, so it is a partially cooperative multiagent environment. It is also partially competitive because, for example, only one car can occupy a parking space.

The agent-design problems in multiagent environments are often quite different from those in single-agent environments; for example, communication often emerges as a rational behavior in multiagent environments; in some competitive environments, randomized behavior is rational because it avoids the pitfalls of predictability.

Deterministic vs. nondeterministic. If the next state of the environment is completely determined by the current state and the action executed by the agent(s), then we say the environment is deterministic; otherwise, it is nondeterministic. In principle, an agent need not worry about uncertainty in a fully observable, deterministic environment. If the environment is partially observable, however, then it could appear to be nondeterministic.

Most real situations are so complex that it is impossible to keep track of all the unobserved aspects; for practical purposes, they must be treated as nondeterministic. Taxi driving is clearly nondeterministic in this sense, because one can never predict the behavior of traffic exactly; moreover, one's tires may blow out unexpectedly and one's engine may seize up without warning. The vacuum world as we described it is deterministic, but variations can include nondeterministic elements such as randomly appearing dirt and an unreliable suction mechanism (Exercise 2.VFIN).

One final note: the word stochastic is used by some as a synonym for "nondeterministic," but we make a distinction between the two terms; we say that a model of the environment is stochastic if it explicitly deals with probabilities (e.g., "there's a 25% chance of rain tomorrow") and "nondeterministic" if the possibilities are listed without being quantified (e.g., "there's a chance of rain tomorrow").

Episodic vs. sequential: In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. Crucially, the next episode does not depend on the actions taken in previous episodes. Many classification tasks are episodic. For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions; moreover, the current decision doesn't affect whether the next part is defective. In sequential environments, on the other hand, the current decision could affect all future decisions.4 Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences. Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.

Static vs. dynamic: If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. Dynamic environments, on the other hand, are continuously asking the agent what it wants to do; if it hasn't decided yet,

4 The word "sequential" is also used in computer science as the antonym of "parallel." The two meanings are largely unrelated.
46 Chapter 2 Intelligent Agents
that counts as deciding to do nothing. If the environment itself does not change with the passage of time but the agent's performance score does, then we say the environment is semidynamic. Taxi driving is clearly dynamic: the other cars and the taxi itself keep moving while the driving algorithm dithers about what to do next. Chess, when played with a clock, is semidynamic. Crossword puzzles are static.
Discrete vs. continuous: The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent. For example, the chess environment has a finite number of distinct states (excluding the clock). Chess also has a discrete set of percepts and actions. Taxi driving is a continuous-state and continuous-time problem: the speed and location of the taxi and of the other vehicles sweep through a range of continuous values and do so smoothly over time. Taxi-driving actions are also continuous (steering angles, etc.). Input from digital cameras is discrete, strictly speaking, but is typically treated as representing continuously varying intensities and locations.
Known vs. unknown: Strictly speaking, this distinction refers not to the environment itself but to the agent's (or designer's) state of knowledge about the "laws of physics" of the environment. In a known environment, the outcomes (or outcome probabilities if the environment is nondeterministic) for all actions are given. Obviously, if the environment is unknown, the agent will have to learn how it works in order to make good decisions.
The distinction between known and unknown environments is not the same as the one between fully and partially observable environments. It is quite possible for a known environment to be partially observable—for example, in solitaire card games, I know the rules but am still unable to see the cards that have not yet been turned over. Conversely, an unknown environment can be fully observable—in a new video game, the screen may show the entire game state but I still don't know what the buttons do until I try them.
As noted on page 39, the performance measure itself may be unknown, either because the designer is not sure how to write it down correctly or because the ultimate user—whose preferences matter—is not known. For example, a taxi driver usually won't know whether a new passenger prefers a leisurely or speedy journey, a cautious or aggressive driving style. A virtual personal assistant starts out knowing nothing about the personal preferences of its new owner. In such cases, the agent may learn more about the performance measure based on further interactions with the designer or user. This, in turn, suggests that the task environment is necessarily viewed as a multiagent environment.
The hardest case is partially observable, multiagent, nondeterministic, sequential, dynamic, continuous, and unknown. Taxi driving is hard in all these senses, except that the driver's environment is mostly known. Driving a rented car in a new country with unfamiliar geography, different traffic laws, and nervous passengers is a lot more exciting.
Figure 2.6 lists the properties of a number of familiar environments. Note that the properties are not always cut and dried. For example, we have listed the medical-diagnosis task as single-agent because the disease process in a patient is not profitably modeled as an agent; but a medical-diagnosis system might also have to deal with recalcitrant patients and skeptical staff, so the environment could have a multiagent aspect. Furthermore, medical diagnosis is episodic if one conceives of the task as selecting a diagnosis given a list of symptoms; the problem is sequential if the task can include proposing a series of tests, evaluating progress over the course of treatment, handling multiple patients, and so on.
#None
paragraph
[Figure: an agent and its environment. Percepts flow from the environment through the agent's sensors; actions flow from the agent's actuators back to the environment.]
Figure 2.1 Agents interact with environments through sensors and actuators.
We can imagine tabulating the agent function that describes any given agent; for most agents, this would be a very large table—infinite, in fact, unless we place a bound on the length of percept sequences we want to consider. Given an agent to experiment with, we can, in principle, construct this table by trying out all possible percept sequences and recording which actions the agent does in response.1 The table is, of course, an external characterization of the agent. Internally, the agent function for an artificial agent will be implemented by an agent program. It is important to keep these two ideas distinct. The agent function is an abstract mathematical description; the agent program is a concrete implementation, running within some physical system.
To illustrate these ideas, we use a simple example—the vacuum-cleaner world, which consists of a robotic vacuum-cleaning agent in a world consisting of squares that can be either dirty or clean. Figure 2.2 shows a configuration with just two squares, A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. The agent starts in square A. The available actions are to move to the right, move to the left, suck up the dirt, or do nothing.2 One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Figure 2.3 and an agent program that implements it appears in Figure 2.8 on page 49.
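As a concrete illustration, such a partial tabulation can be written as a lookup table keyed by whole percept sequences. The sketch below is ours in Python, not the book's repository code; the entries mirror the rule just stated (suck if the current square is dirty, otherwise move to the other square).

```python
# Partial tabulation of the vacuum-world agent function: keys are percept
# sequences (each percept is a (location, status) pair), values are actions.
partial_table = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
    (('B', 'Dirty'),): 'Suck',
    (('A', 'Clean'), ('A', 'Clean')): 'Right',
    (('A', 'Clean'), ('A', 'Dirty')): 'Suck',
    # ... the table never ends unless we bound the length of percept sequences
}

def table_driven_agent(percept_sequence):
    """Return the action the tabulated agent function assigns to the whole sequence."""
    return partial_table[tuple(percept_sequence)]

print(table_driven_agent([('A', 'Dirty')]))   # -> 'Suck'
```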
Looking at Figure 2.3, we see that various vacuum-world agents can be defined simply by filling in the right-hand column in various ways. The obvious question, then, is this: What is the right way to fill out the table? In other words, what makes an agent good or bad, intelligent or stupid? We answer these questions in the next section.
1 If the agent uses some randomization to choose its actions, then we would have to try each sequence many times to identify the probability of each action. One might imagine that acting randomly is rather silly, but we show later in this chapter that it can be very intelligent.
2 In a real robot, it would be unlikely to have actions like "move right" and "move left." Instead the actions would be "spin wheels forward" and "spin wheels backward." We have chosen the actions to be easier to follow on the page, not for ease of implementation in an actual robot.
#None
paragraph
[Figure: sensors feed percepts into a description of "what the world is like now"; the condition–action rules determine "what action I should do now," which is passed to the actuators acting on the environment.]
Figure 2.9 Schematic diagram of a simple reflex agent. We use rectangles to denote the current internal state of the agent's decision process, and ovals to represent the background information used in the process.
visual input to establish the condition we call "The car in front is braking." Then, this triggers some established connection in the agent program to the action "initiate braking." We call such a connection a condition–action rule,6 written as
if car-in-front-is-braking then initiate-braking.
Humans also have many such connections, some of which are learned responses (as for driving) and some of which are innate reflexes (such as blinking when something approaches the eye). In the course of the book, we show several different ways in which such connections can be learned and implemented.
The program in Figure 2.8 is specific to one particular vacuum environment. A more general and flexible approach is first to build a general-purpose interpreter for condition–action rules and then to create rule sets for specific task environments. Figure 2.9 gives the structure of this general program in schematic form, showing how the condition–action rules allow the agent to make the connection from percept to action. Do not worry if this seems trivial; it gets more interesting shortly.
An agent program for Figure 2.9 is shown in Figure 2.10. The INTERPRET-INPUT function generates an abstracted description of the current state from the percept, and the RULE-MATCH function returns the first rule in the set of rules that matches the given state description. Note that the description in terms of "rules" and "matching" is purely conceptual; as noted above, actual implementations can be as simple as a collection of logic gates implementing a Boolean circuit. Alternatively, a "neural" circuit can be used, where the logic gates are replaced by the nonlinear units of artificial neural networks (see Chapter 21).
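The following Python sketch mirrors the structure just described: an interpreter abstracts the percept into a state description, and the first matching condition–action rule supplies the action. The rule set and helper names are ours, chosen for illustration; this is not code from the book's repository.

```python
def simple_reflex_agent_program(rules, interpret_input):
    """Build an agent program from a rule set and an input interpreter."""
    def program(percept):
        state = interpret_input(percept)       # "what the world is like now"
        for condition, action in rules:        # rule matching: first rule whose condition holds
            if condition(state):
                return action
        return 'NoOp'                          # no rule matched
    return program

# Illustrative rule set for the braking example in the text.
braking_rules = [
    (lambda state: state.get('car_in_front_is_braking', False), 'initiate-braking'),
]

program = simple_reflex_agent_program(braking_rules, interpret_input=lambda p: p)
print(program({'car_in_front_is_braking': True}))   # -> 'initiate-braking'
```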
Simple reflex agents have the admirable property of being simple, but they are of limited intelligence. The agent in Figure 2.10 will work only if the correct decision can be made on the basis of just the current percept—that is, only if the environment is fully observable.
6 Also called situation–action rules, productions, or if–then rules.
#None
paragraph
function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
Figure 2.8 The agent program for a simple reflex agent in the two-location vacuum environment. This program implements the agent function tabulated in Figure 2.3.
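The same program can be rendered directly in Python; this is a sketch rather than the repository's implementation.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for the two-location vacuum world (locations 'A' and 'B')."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    elif location == 'B':
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))   # -> 'Suck'
print(reflex_vacuum_agent(('B', 'Clean')))   # -> 'Left'
```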
The key challenge for AI is to find out how to write programs that, to the extent possible, produce rational behavior from a smallish program rather than from a vast table.
We have many examples showing that this can be done successfully in other areas: for example, the huge tables of square roots used by engineers and schoolchildren prior to the 1970s have now been replaced by a five-line program for Newton's method running on electronic calculators. The question is, can AI do for general intelligent behavior what Newton did for square roots? We believe the answer is yes.
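For reference, a program of roughly that size, written here in Python rather than calculator firmware, might look like the following sketch (the stopping tolerance is an arbitrary choice of ours).

```python
def newton_sqrt(x, tolerance=1e-10):
    """Approximate the square root of a non-negative x by Newton's method."""
    guess = x / 2.0 or 1.0                     # any positive starting guess works
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0      # average the guess with x / guess
    return guess

print(newton_sqrt(2.0))   # -> 1.41421356...
```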
In the remainder of this section, we outline four basic kinds of agent programs that embody the principles underlying almost all intelligent systems:
• Simple reflex agents;
• Model-based reflex agents;
• Goal-based agents; and
• Utility-based agents.
Each kind of agent program combines particular components in particular ways to generate actions. Section 2.4.6 explains in general terms how to convert all these agents into learning agents that can improve the performance of their components so as to generate better actions. Finally, Section 2.4.7 describes the variety of ways in which the components themselves can be represented within the agent. This variety provides a major organizing principle for the field and for the book itself.
2.4.2 Simple reflex agents
The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history. For example, the vacuum agent whose agent function is tabulated in Figure 2.3 is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt. An agent program for this agent is shown in Figure 2.8.
Notice that the vacuum agent program is very small indeed compared to the corresponding table. The most obvious reduction comes from ignoring the percept history, which cuts down the number of relevant percept sequences from 4^T to just 4. A further, small reduction comes from the fact that when the current square is dirty, the action does not depend on the location. Although we have written the agent program using if-then-else statements, it is simple enough that it can also be implemented as a Boolean circuit.
Simple reflex behaviors occur even in more complex environments. Imagine yourself as the driver of the automated taxi. If the car in front brakes and its brake lights come on, then you should notice this and initiate braking. In other words, some processing is done on the
#None
paragraph
Task Environment      Observable   Agents   Deterministic   Episodic     Static    Discrete
Crossword puzzle      Fully        Single   Deterministic   Sequential   Static    Discrete
Chess with a clock    Fully        Multi    Deterministic   Sequential   Semi      Discrete
Poker                 Partially    Multi    Stochastic      Sequential   Static    Discrete
Backgammon            Fully        Multi    Stochastic      Sequential   Static    Discrete
Taxi driving          Partially    Multi    Stochastic      Sequential   Dynamic   Continuous
Medical diagnosis     Partially    Single   Stochastic      Sequential   Dynamic   Continuous
Image analysis        Fully        Single   Deterministic   Episodic     Semi      Continuous
Part-picking robot    Partially    Single   Stochastic      Episodic     Dynamic   Continuous
Refinery controller   Partially    Single   Stochastic      Sequential   Dynamic   Continuous
English tutor         Partially    Multi    Stochastic      Sequential   Dynamic   Discrete
Figure 2.6 Examples of task environments and their characteristics.
We have not included a "known/unknown" column because, as explained earlier, this is not strictly a property of the environment. For some environments, such as chess and poker, it is quite easy to supply the agent with full knowledge of the rules, but it is nonetheless interesting to consider how an agent might learn to play these games without such knowledge.
The code repository associated with this book (aima.cs.berkeley.edu) includes multiple environment implementations, together with a general-purpose environment simulator for evaluating an agent's performance. Experiments are often carried out not for a single environment but for many environments drawn from an environment class. For example, to evaluate a taxi driver in simulated traffic, we would want to run many simulations with different traffic, lighting, and weather conditions. We are then interested in the agent's average performance over the environment class.
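A sketch of that experimental setup is below; the environment interface (make_environment, env.run) is an assumption made for illustration, not the repository's actual API.

```python
import random

def average_performance(agent_factory, make_environment, n_trials=100, seed=0):
    """Average an agent's performance score over environments drawn from a class."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        env = make_environment(rng)     # e.g., random traffic, lighting, and weather
        agent = agent_factory()         # fresh agent for each trial
        total += env.run(agent)         # assumed to return the performance score
    return total / n_trials
```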
2.4 The Structure of Agents
So far we have talked about agents by describing behavior—the action that is performed after any given sequence of percepts. Now we must bite the bullet and talk about how the insides work. The job of AI is to design an agent program that implements the agent function—the mapping from percepts to actions. We assume this program will run on some sort of computing device with physical sensors and actuators—we call this the agent architecture:
agent = architecture + program.
Obviously, the program we choose has to be one that is appropriate for the architecture. If the program is going to recommend actions like Walk, the architecture had better have legs. The architecture might be just an ordinary PC, or it might be a robotic car with several onboard computers, cameras, and other sensors. In general, the architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated. Most of this book is about designing agent programs, although Chapters 25 and 26 deal directly with the sensors and actuators.
4 segments
#None
paragraph
- stock-transfer agent. (1873) See transfer agent.
- subagent. (18c) 1. A person to whom an agent has delegated the performance of an act for the principal; a person designated
by an agent to perform some duty relating to the agency. • If the principal consents to a primary agent's employment of a
subagent, the subagent owes fiduciary duties to the principal, and the principal is liable for the subagent's acts. — Also termed
subservant. Cf. primary agent; subordinate agent.
“By delegation … the agent is permitted to use agents of his own in performing the function he is employed to perform for
his principal, delegating to them the discretion which normally he would be expected to exercise personally. These agents are
known as subagents to indicate that they are the agent's agents and not the agents of the principal. Normally (though of course
not necessarily) they are paid by the agent. The agent is liable to the principal for any injury done him by the misbehavior of
the agent's subagents.” Floyd R. Mechem, Outlines of the Law of Agency § 79, at 51 (Philip Mechem ed., 4th ed. 1952).
2. See buyer's broker under broker.
- subordinate agent. (17c) An agent who acts subject to the direction of a superior agent. • Subordinate and superior agents
are co-agents of a common principal. See superior agent. Cf. subagent (1).
- successor agent. (1934) An agent who is appointed by a principal to act in a primary agent's stead if the primary agent is
unable or unwilling to perform.
- superior agent. (17c) 1. An agent on whom a principal confers the right to direct a subordinate agent. See subordinate agent.
2. See high-managerial agent (1).
- transfer agent. (1850) An organization (such as a bank or trust company) that handles transfers of shares for a publicly held
corporation by issuing new certificates and overseeing the cancellation of old ones and that usu. also maintains the record of
shareholders for the corporation and mails dividend checks. • Generally, a transfer agent ensures that certificates submitted for
transfer are properly indorsed and that the transfer right is appropriately documented. — Also termed stock-transfer agent.
- trustee-agent. A trustee who is subject to the control of the settlor or one or more beneficiaries of a trust. See trustee (1).
- undercover agent. (1930) 1. An agent who does not disclose his or her role as an agent. 2. A police officer who gathers
evidence of criminal activity without disclosing his or her identity to the suspect.
- undisclosed agent. (1863) An agent who deals with a third party who has no knowledge that the agent is acting on a principal's
behalf. Cf. undisclosed principal under principal (1).
- universal agent. (18c) An agent authorized to perform all acts that the principal could personally perform.
- vice-commercial agent. (1800) Hist. In the consular service of the United States, a consular officer who was substituted
temporarily to fill the place of a commercial agent who was absent or had been relieved from duty.
#None
paragraph
Black's Law Dictionary (11th ed. 2019), agent
AGENT
Bryan A. Garner, Editor in Chief
agent (15c) 1. Something that produces an effect <an intervening agent>. See cause (1); electronic agent. 2. Someone who is
authorized to act for or in place of another; a representative <a professional athlete's agent>. — Also termed commissionaire.
See agency. Cf. principal, n. (1); employee.
“Generally speaking, anyone can be an agent who is in fact capable of performing the functions involved. The agent normally
binds not himself but his principal by the contracts he makes; it is therefore not essential that he be legally capable to contract
(although his duties and liabilities to his principal might be affected by his status). Thus an infant or a lunatic may be an agent,
though doubtless the court would disregard either's attempt to act if he were so young or so hopelessly devoid of reason as to
be completely incapable of grasping the function he was attempting to perform.” Floyd R. Mechem, Outlines of the Law of
Agency 8–9 (Philip Mechem ed., 4th ed. 1952).
“The etymology of the word agent or agency tells us much. The words are derived from the Latin verb, ago, agere; the noun
agens, agentis. The word agent denotes one who acts, a doer, force or power that accomplishes things.” Harold Gill Reuschlein
& William A. Gregory, The Law of Agency and Partnership § 1, at 2–3 (2d ed. 1990).
- agent not recognized. Patents. A patent applicant's appointed agent who is not registered to practice before the U.S. Patent
and Trademark Office. • A power of attorney appointing an unregistered agent is void. See patent agent.
- agent of necessity. (1857) An agent that the law empowers to act for the benefit of another in an emergency. — Also termed
agent by necessity.
- apparent agent. (1823) Someone who reasonably appears to have authority to act for another, regardless of whether actual
authority has been conferred. — Also termed ostensible agent; implied agent.
- associate agent. Patents. An agent who is registered to practice before the U.S. Patent and Trademark Office, has been
appointed by a primary agent, and is authorized to prosecute a patent application through the filing of a power of attorney. • An
associate agent is often used by outside counsel to assist in-house counsel. See patent agent.
- bail-enforcement agent. See bounty hunter.
- bargaining agent. (1935) A labor union in its capacity of representing employees in collective bargaining.
- broker-agent. See broker.
- business agent. See business agent.
- case agent. See case agent.
- clearing agent. (1937) Securities. A person or company acting as an intermediary in a securities transaction or providing
facilities for comparing data regarding securities transactions. • The term includes a custodian of securities in connection with
the central handling of securities. Securities Exchange Act § 3(a)(23)(A) (15 USCA § 78c(a)(23)(A)). — Also termed clearing
agency.
- closing agent. (1922) An agent who represents the purchaser or buyer in the negotiation and closing of a real-property
transaction by handling financial calculations and transfers of documents. — Also termed settlement agent. See also settlement
attorney under attorney.
- co-agent. (16c) Someone who shares with another agent the authority to act for the principal. — Also termed dual agent.
Cf. common agent.
- commercial agent. (18c) 1. broker. 2. A consular officer responsible for the commercial interests of his or her country at a
foreign port. 3. See mercantile agent. 4. See commission agent.
- commission agent. (1812) An agent whose remuneration is based at least in part on commissions, or percentages of actual
sales. • Commission agents typically work as middlemen between sellers and buyers. — Also termed commercial agent.
#None
paragraph
- common agent. (17c) An agent who acts on behalf of more than one principal in a transaction. Cf. co-agent.
- corporate agent. (1819) An agent authorized to act on behalf of a corporation; broadly, all employees and officers who have
the power to bind the corporation.
- county agent. See juvenile officer under officer (1).
- del credere agent (del kred-ə-ray or kray-də-ray) (1822) An agent who guarantees the solvency of the third party with whom
the agent makes a contract for the principal. • A del credere agent receives possession of the principal's goods for purposes
of sale and guarantees that anyone to whom the agent sells the goods on credit will pay promptly for them. For this guaranty,
the agent receives a higher commission for sales. The promise of such an agent is almost universally held not to be within the
statute of frauds. — Also termed del credere factor.
- diplomatic agent. (18c) A national representative in one of four categories: (1) ambassadors, (2) envoys and ministers
plenipotentiary, (3) ministers resident accredited to the sovereign, or (4) chargés d'affaires accredited to the minister of foreign
affairs.
- double agent. (1935) 1. A spy who finds out an enemy's secrets for his or her principal but who also gives secrets to the
enemy. 2. See dual agent (2).
- dual agent. (1881) 1. See co-agent. 2. An agent who represents both parties in a single transaction, esp. a buyer and a seller.
— Also termed (in sense 2) double agent.
- emigrant agent. (1874) One engaged in the business of hiring laborers for work outside the country or state.
- enrolled agent. See enrolled agent.
- escrow agent. See escrow agent.
- estate agent. See real-estate agent.
- fiscal agent. (18c) A bank or other financial institution that collects and disburses money and services as a depository of
private and public funds on another's behalf.
- foreign agent. (1938) Someone who registers with the federal government as a lobbyist representing the interests of a foreign
country or corporation.
- forwarding agent. (1837) 1. freight forwarder. 2. A freight-forwarder who assembles less-than-carload shipments (small
shipments) into carload shipments, thus taking advantage of lower freight rates.
- general agent. (17c) An agent authorized to transact all the principal's business of a particular kind or in a particular place. •
Among the common types of general agents are factors, brokers, and partners. Cf. special agent.
- government agent. (1805) 1. An employee or representative of a governmental body. 2. A law-enforcement official, such as
a police officer or an FBI agent. 3. An informant, esp. an inmate, used by law enforcement to obtain incriminating statements
from another inmate.
- gratuitous agent. (1822) An agent who acts without a right to compensation.
- high-managerial agent. (1957) 1. An agent of a corporation or other business who has authority to formulate corporate policy
or supervise employees. — Also termed superior agent. 2. See superior agent (1).
- implied agent. See apparent agent.
- independent agent. (17c) An agent who exercises personal judgment and is subject to the principal only for the results of
the work performed. Cf. nonservant agent.
- innocent agent. (1805) Criminal law. A person whose action on behalf of a principal is unlawful but does not merit prosecution
because the agent had no knowledge of the principal's illegal purpose; a person who lacks the mens rea for an offense but who
is tricked or coerced by the principal into committing a crime. • Although the agent's conduct was unlawful, the agent might
not be prosecuted if the agent had no knowledge of the principal's illegal purpose. The principal is legally accountable for the
innocent agent's actions. See Model Penal Code § 2.06(2)(a).
- insurance agent. See insurance agent.
- jural agent. See jural agent.
- land agent. See land agent.
- listing agent. (1927) The real-estate broker's representative who obtains a listing agreement with the owner. Cf. selling agent;
showing agent.
- local agent. (1804) 1. An agent appointed to act as another's (esp. a company's) representative and to transact business within
a specified district. 2. See special agent.
#None
paragraph
- managing agent. (1812) A person with general power involving the exercise of judgment and discretion, as opposed to an
ordinary agent who acts under the direction and control of the principal. — Also termed business agent.
- mercantile agent. (18c) An agent employed to sell goods or merchandise on behalf of the principal. — Also termed commercial
agent.
- nonservant agent. (1920) An agent who agrees to act on the principal's behalf but is not subject to the principal's control
over how the task is performed. • A principal is not liable for the physical torts of a nonservant agent. See independent contractor.
Cf. independent agent; servant.
- ostensible agent. See apparent agent.
- patent agent. (1859) A specialized legal professional — not necessarily a lawyer — who has fulfilled the U.S. Patent and
Trademark Office requirements as a representative and is registered to prepare and prosecute patent applications before the
PTO. • To be registered to practice before the PTO, a candidate must establish mastery of the relevant technology (by holding
a specified technical degree or equivalent training) in order to advise and assist patent applicants. The candidate must also pass
a written examination (the “Patent Bar”) that tests knowledge of patent law and PTO procedure. — Often shortened to agent.
— Also termed registered patent agent; patent solicitor. Cf. patent attorney.
- primary agent. (18c) An agent who is directly authorized by a principal. • A primary agent generally may hire a subagent
to perform all or part of the agency. Cf. subagent (1).
- private agent. (17c) An agent acting for an individual in that person's private affairs.
- process agent. (1886) A person authorized to accept service of process on behalf of another. See registered agent.
- procuring agent. (1954) Someone who obtains drugs on behalf of another person and delivers the drugs to that person. •
In criminal-defense theory, the procuring agent does not sell, barter, exchange, or make a gift of the drugs to the other person
because the drugs already belong to that person, who merely employs the agent to pick up and deliver them.
- public agent. (17c) A person appointed to act for the public in matters relating to governmental administration or public
business.
- real-estate agent. (1844) An agent who represents a buyer or seller (or both, with proper disclosures) in the sale or lease of
real property. • A real-estate agent can be either a broker (whose principal is a buyer or seller) or a salesperson (whose principal
is a broker). — Also termed estate agent. Cf. realtor.
- record agent. See insurance agent.
- registered agent. (1809) A person authorized to accept service of process for another person, esp. a foreign corporation, in
a particular jurisdiction. — Also termed resident agent. See process agent.
- registered patent agent. See patent agent.
- resident agent. See registered agent.
- secret agent. See secret agent.
- selling agent. (1839) 1. The real-estate broker's representative who sells the property, as opposed to the agent who lists the
property for sale. 2. See showing agent. Cf. listing agent.
- settlement agent. (1952) See closing agent.
- showing agent. (1901) A real-estate broker's representative who markets property to a prospective purchaser. • A showing
agent may be characterized as a subagent of the listing broker, as an agent who represents the purchaser, or as an intermediary
who owes an agent's duties to neither seller nor buyer. — Also termed selling agent. Cf. listing agent.
- soliciting agent. (1855) 1. Insurance. An agent with authority relating to the solicitation or submission of applications to an
insurance company but usu. without authority to bind the insurer, as by accepting the applications on behalf of the company.
2. An agent who solicits orders for goods or services for a principal. 3. A managing agent of a corporation for purposes of
service of process.
- special agent. (17c) 1. An agent employed to conduct a particular transaction or to perform a specified act. Cf. general agent.
2. See insurance agent.
- specially accredited agent. (1888) An agent that the principal has specially invited a third party to deal with, in an implication
that the third party will be notified if the agent's authority is altered or revoked.
- statutory agent. (1844) An agent designated by law to receive litigation documents and other legal notices for a nonresident
corporation. • In most states, the secretary of state is the statutory agent for such corporations. Cf. agency by operation of law
(1) under agency (1).
38 segments
#None
paragraph
would apply its inference rules wherever possible, in order to generate the deductive closure of its base beliefs under its deduction rules. We model deductive closure in a function close:
close((Δ, ρ)) = {φ : Δ ⊢_ρ φ}
where Δ ⊢_ρ φ means that φ can be proved from Δ using only the rules in ρ. A belief logic can then be defined, with the semantics of a modal belief connective [i], where i is an agent, given in terms of the deduction structure d_i modelling i's belief system: [i]φ iff φ ∈ close(d_i).
Konolige went on to examine the properties of the deduction model at some length, and
developed a variety of proof methods for his logics, including resolution and tableau systems
(Geissler & Konolige, 1986). The deduction model is undoubtedly simple; however, as a direct
model of the belief systems of AI agents, it has much to commend it.
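A minimal computational sketch of this idea (ours, not Konolige's formalism verbatim): the agent's belief set is the closure of a base set Δ under a possibly incomplete rule set ρ. Here sentences are simple Python values and a single rule, modus ponens over ('->', p, q) triples, stands in for ρ.

```python
def close(base, rules):
    """Return the closure of `base` under `rules`.

    `rules` is a list of functions mapping the current belief set to newly
    derivable sentences; the loop terminates only if the rules can generate
    finitely many sentences from the base.
    """
    beliefs = set(base)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new = rule(beliefs) - beliefs
            if new:
                beliefs |= new
                changed = True
    return beliefs

def modus_ponens(beliefs):
    """From p and ('->', p, q), derive q."""
    return {b[2] for b in beliefs
            if isinstance(b, tuple) and len(b) == 3 and b[0] == '->' and b[1] in beliefs}

base = {'raining', ('->', 'raining', 'streets_wet')}
print(close(base, [modus_ponens]))   # includes 'streets_wet'
```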
2.4.3 Meta-languages and syntactic modalities
A meta-language is one in which it is possible to represent the properties of another language. A
first-order meta-language is a first-order logic, with the standard predicates, quantifier, terms, and
so on, whose domain contains formulae of some other language, called the object language. Using a
meta-language, it is possible to represent a relationship between a meta-language term denoting an
agent, and an object language term denoting some formula. For example, the meta-language
formula Bel(Janine,[Father(Zeus, Cronos)]) might be used to represent the example (1) that we
saw earlier. The quote marks, [ ... ], are used to indicate that their contents are a meta-language
term denoting the corresponding object-language formula.
Unfortunately, meta-language formalisms have their own package of problems, not the least of
which is that they tend to fall prey to inconsistency (Montague, 1963; Thomason, 1980). However,
there have been some fairly successful meta-language formalisms, including those by Konolige
(1982), Haas (1986), Morgenstern (1987), and Davies (1993). Some results on retrieving consistency appeared in the late 1980s (Perlis, 1985, 1988; des Rivieres & Levesque, 1986; Turner, 1990).
2.5 Pro-attitudes: goals and desires
An obvious approach to developing a logic of goals or desires is to adapt possible worlds semantics; see, e.g., Cohen and Levesque (1990a), Wooldridge (1994). In this view, each goal
accessible world represents one way the world might be if the agent's goals were realised. However,
this approach falls prey to the side effect problem, in that it predicts that agents have a goal of the
logical consequences of their goals (cf. the logical omniscience problem, discussed above). This is
not a desirable property: one might have a goal of going to the dentist, with the necessary
consequence of suffering pain, without having a goal of suffering pain. The problem is discussed (in
the context of intentions), in Bratman ( 1990). The basic possible worlds model has been adapted by
some researchers in an attempt to overcome this problem (Wainer, 1994). Other, related semantics
for goals have been proposed (Doyle et al., 1991; Kiss & Reichgelt, 1992; Rao & Georgeff, 1991b).
2.6 Theories of agency
All of the formalisms considered so far have focused on just one aspect of agency. However, it is to
be expected that a realistic agent theory will be represented in a logical framework that combines
these various components. Additionally, we expect an agent logic to be capable of representing the
dynamic aspects of agency. A complete agent theory, expressed in a logic with these properties,
must define how the attributes of agency are related. For example, it will need to show how an
agent's information and pro-attitudes are related; how an agent's cognitive state changes over time;
how the environment affects an agent's cognitive state; and how an agent's information and pro-attitudes lead it to perform actions. Giving a good account of these relationships is the most
significant problem faced by agent theorists.
#None
paragraph
theories as specifications, and agent logics as specification languages, is that the problems and
issues we then face are familiar from the discipline of software engineering: How useful or
expressive is the specification language? How concise are agent specifications? How does one
refine or otherwise transform a specification into an implementation? However, the view of agent
theories as specifications is not shared by all researchers. Some intend their agent theories to be
used as knowledge representation formalisms, which raises the difficult problem of algorithms to
reason with such theories. Still others intend their work to formalise a concept of interest in
cognitive science or philosophy (this is, of course, what Hintikka intended in his early work on logics of knowledge and belief). What is clear is that it is important to be precise about the role one
expects an agent theory to play.
2.9 Further reading
For a recent discussion on the role of logic and agency, which lays out in more detail some
contrasting views on the subject, see Israel (1993, pp. 17-24). For a detailed discussion of
intentionality and the intentional stance, see Dennett (1978, 1987). A number of papers on AI
treatments of agency may be found in Allen et al. (1990). For an introduction to modal logic, see Chellas (1980); a slightly older, though more wide-ranging, introduction may be found in Hughes
and Cresswell (1968). As for the use of modal logics to model knowledge and belief, see Halpern
and Moses (1992), which includes complexity results and proof procedures. Related work on
modelling knowledge has been done by the distributed systems community, who give the worlds in
possible worlds semantics a precise interpretation; for an introduction and further references, see
Halpern (1987) and Fagin et al. (1992). Overviews of formalisms for modelling belief and
knowledge may be found in Halpern (1986), Konolige (1986a), Reichgelt (1989a) and Wooldridge
(1992). A variant of the possible worlds framework, called the recursive modelling method, is
described in Gmytrasiewicz and Durfee (1993); a deep theory of belief may be found in Mack
(1994). Situation semantics, developed in the early 1980s and recently the subject of renewed
interest, represent a fundamentally new approach to modelling the world and cognitive systems
(Barwise & Perry, 1983; Devlin, 1991). However, situation semantics are not (yet) in the
mainstream of (D)AI, and it is not obvious what impact the paradigm will ultimately have.
Logics which integrate time with mental states are discussed in Kraus and Lehmann (1988),
Halpern and Vardi (1989) and Wooldridge and Fisher (1994); the last of these presents a tableau
based proof method for a temporal belief logic. Two other important references for temporal
aspects are Shoham (1988, 1989). Thomas has developed some logics for representing agent
theories as part of her framework for agent programming languages; see Thomas et al. (1991) and
Thomas (1993) and section 4. For an introduction to temporal logics and related topics, see
Goldblatt (1987) and Emerson (1990). A non-formal discussion of intention may be found in
Bratman (1987), or more briefly (Bratman, 1990). Further work on modelling intention may be
found in Grosz and Sidner (1990), Sadek (1992), Goldman and Lang (1991), Konolige and Pollack
(1993), Bell (1995) and Dongha (1995). Related work, focusing less on single-agent attitudes, and
more on social aspects, is Levesque et al. (1990), Jennings (1993a), Wooldridge (1994) and
Wooldridge and Jennings (1994).
Finally, although we have not discussed formalisms for reasoning about action here, we
suggested above that an agent logic would need to incorporate some mechanism for representing
agent's actions. Our reason for avoiding the topic is simply that the field is so big, it deserves a
whole review in its own right. Good starting points for AI treatments of action are Allen (1984), and Allen et al. (1990, 1991). Other treatments of action in agent logics are based on formalisms borrowed from mainstream computer science, notably dynamic logic (originally developed to reason about computer programs) (Harel, 1984). The logic of "seeing to it that" has been discussed in
the formal philosophy literature, but has yet to impact on (D)AI (Belnap & Perloff, 1988; Perloff,
1991; Belnap, 1991; Segerberg, 1989).
#None
paragraph
interchange format (KIF). KQML provides the agent designer with a standard syntax for messages, and a number of performatives that define the force of a message. Example performatives include tell, perform, and reply; the inspiration for these message types comes largely from speech act theory. KIF provides a syntax for message content; KIF is essentially the first-order predicate calculus, recast in a LISP-like syntax.
2.8 Discussion
Formalisms for reasoning about agents have come a long way since Hintikka's pioneering work on
logics of knowledge and belief (Hintikka, 1962). Within AI, perhaps the main emphasis of
subsequent work has been on attempting to develop formalisms that capture the relationship
between the various elements that comprise an agent's cognitive state; the paradigm example of
this work is the well-known theory of intention developed by Cohen and Levesque (1990a).
Despite the very real progress that has been made, there still remain many fairly fundamental
problems and issues still outstanding.
On a technical level, we can identify a number of issues that remain open. First, the problems
associated with possible worlds semantics (notably, logical omniscience) cannot be regarded as
solved. As we observed above, possible worlds remain the semantics of choice for many
researchers, and yet they do not in general represent a realistic model of agents with limited
resources-and of course all real agents are resource-bounded. One solution is to ground possible
worlds semantics, giving them a precise interpretation in terms of the world. This was the approach
taken in Rosenschein and Kaelbling's situated automata paradigm, and can be very successful.
However, it is not clear how such a grounding could be given to pro-attitudes such as desires or
intentions (although some attempts have been made (Singh, 1990a; Wooldridge, 1992; Werner,
1990)). There is obviously much work remaining to be done on formalisms for knowledge and
belief, in particular in the area of modelling resource bounded reasoners.
With respect to logics that combine different attitudes, perhaps the most important problems
still outstanding relate to intention. In particular, the relationship between intention and action has
not been formally represented in a satisfactory way. The problem seems to be that having an
intention to act makes it more likely that an agent will act, but does not generally guarantee it.
While it seems straightforward to build systems that appear to have intentions (Wooldridge, 1995),
it seems much harder to capture this relationship formally. Other problems that have not yet really
been addressed in the literature include the management of multiple, possibly conflicting
intentions, and the formation, scheduling, and reconsideration of intentions.
The question of exactly which combination of attitudes is required to characterise an agent is
also the subject of some debate. As we observed above, a currently popular approach is to use a
combination of beliefs, desires, and intentions (hence BDI architectures (Rao and Georgeff,
1991b)). However, there are alternatives: Shoham, for example, suggests that the notion of choice
is more fundamental (Shoham, 1990). Comparatively little work has yet been done on formally
comparing the suitability of these various combinations. One might draw a parallel with the use of
temporal logics in mainstream computer science, where the expressiveness of specification
languages is by now a well-understood research area (Emerson & Halpern, 1986). Perhaps the
obvious requirement for the short term is experimentation with real agent specifications, in order
to gain a better understanding of the relative merits of different formalisms.
More generally, the kinds of logics used in agent theory tend to be rather elaborate, typically
containing many modalities which interact with each other in subtle ways. Very little work has yet
been carried out on the theory underlying such logics (perhaps the only notable exception is
Catach, 1988). Until the general principles and limitations of such multi-modal logics become
understood, we might expect that progress with using such logics will be slow. One area in which
work is likely to be done in the near future is theorem proving techniques for multi-modal logics.
Finally, there is often some confusion about the role played by a theory of agency. The view we
take is that such theories represent specifications for agents. The advantage of treating agent
#None
paragraph
were used: beliefs and goals. Further attitudes, such as intention, were defined in terms of these. In
related work, Rao and Georgeff have developed a logical framework for agent theory based on
three primitive modalities: beliefs, desires and intentions (Rao & Georgeff, 1991a,b, 1993). Their
formalism is based on a branching model of time (cf. Emerson & Halpern, 1986), in which belief-,
desire- and intention-accessible worlds are themselves branching time structures.
They are particularly concerned with the notion of realism-the question of how an agent's
beliefs about the future affect its desires and intentions. In other work, they also consider the
potential for adding (social) plans to their formalism (Rao & Georgeff, 1992b; Kinny et al., 1992).
2.6.4 Singh
A quite different approach to modelling agents was taken by Singh, who has developed an
interesting family of logics for representing intentions, beliefs, knowledge, know-how, and
communication in a branching-time framework (Singh, 1990, 1991a,b; Singh & Asher, 1991); these
articles are collected and expanded in Singh (1994). Singh's formalism is extremely rich, and
considerable effort has been devoted to establishing its properties. However, its complexity
prevents a detailed discussion here.
2.6.5 Werner
In an extensive sequence of papers, Werner has laid the foundations of a general model of agency,
which draws upon work in economics, game theory, situated automata theory, situation semantics,
and philosophy (Werner, 1988, 1989, 1990, 1991). At the time of writing, however, the properties
of this model have not been investigated in depth.
2.6.6 Wooldridge-modelling multi-agent systems
For his 1992 doctoral thesis, Wooldridge developed a family of logics for representing the
properties of multi-agent systems (Wooldridge, 1992; Wooldridge & Fisher, 1992). Unlike the
approaches cited above, Wooldridge's aim was not to develop a general framework for agent
theory. Rather, he hoped to construct formalisms that might be used in the specification and
verification of realistic multi-agent systems. To this end, he developed a simple, and in some sense
general, model of multi-agent systems, and showed how the histories traced out in the execution of
such a system could be used as the semantic foundation for a family of both linear and branching
time temporal belief logics. He then gave examples of how these logics could be used in the
specification and verification of protocols for cooperative action.
2.7 Communication
Formalisms for representing communication in agent theory have tended to be based on speech act
theory, as originated by Austin (1962), and further developed by Searle (1969) and others (Cohen
& Perrault, 1979; Cohen & Levesque, 1990a). Briefly, the key axiom of speech act theory is that
communicative utterances are actions, in just the sense that physical actions are. They are
performed by a speaker with the intention of bringing about a desired change in the world:
typically, the speaker intends to bring about some particular mental state in a listener. Speech acts
may fail in the same way that physical actions may fail: a listener generally has control over her
mental state, and cannot be guaranteed to react in the way that the speaker intends. Much work in
speech act theory has been devoted to classifying the various different types of speech acts. Perhaps
the two most widely recognised categories of speech acts are representatives (of which informing is
the paradigm example), and directives (of which requesting is the paradigm example).
Although not directly based on work in speech acts (and arguably more to do with architectures
than theories), we shall here mention work on agent communication languages (Genesereth &
Ketchpel, 1994). The best known work on agent communication languages is that by the ARPA
knowledge sharing effort (Patil et al., 1992). This work has been largely devoted to developing two
related languages: the knowledge query and manipulation language (KQML) and the knowledge
#None
paragraph
An all-embracing agent theory is some time off, and yet significant steps have been taken towards
it. In the following subsections, we briefly review some of this work.
2.6.1 Moore-knowledge and action
Moore was in many ways a pioneer of the use of logics for capturing aspects of agency (Moore,
1990). His main concern was the study of knowledge pre-conditions for actions-the question of
what an agent needs to know in order to be able to perform some action. He formalised a model of
ability in a logic containing a modality for knowledge, and a dynamic logic-like apparatus for
modelling action (cf. Harel, 1984). This formalism allowed for the possibility of an agent having
incomplete information about how to achieve some goal, and performing actions in order to find
out how to achieve it. Critiques of the formalism (and attempts to improve on it) may be found in
Morgenstern (1987) and Lesperance (1989).
2.6.2 Cohen and Levesque-intention
One of the best-known and most influential contributions to the area of agent theory is due to
Cohen and Levesque (1990a). Their formalism was originally used to develop a theory of intention
(as in "I intend to ... "), which the authors required as a pre-requisite for a theory of speech acts
(Cohen & Levesque, 1990b). However, the logic has subsequently proved to be so useful for
reasoning about agents that it has been used in an analysis of conflict and cooperation in multi-agent dialogue (Galliers, 1988a,b), as well as several studies in the theoretical foundations of cooperative problem solving (Levesque et al., 1990; Jennings, 1992; Castelfranchi, 1990; Castelfranchi et al., 1992). Here, we shall review its use in developing a theory of intention.
Following Bratman (1990), Cohen and Levesque identify seven properties that must be satisfied
by a reasonable theory of intention:
1. Intentions pose problems for agents, who need to determine ways of achieving them.
2. Intentions provide a "filter" for adopting other intentions, which must not conflict.
3. Agents track the success of their intentions, and are inclined to try again if their attempts fail.
4. Agents believe their intentions are possible.
5. Agents do not believe they will not bring about their intentions.
6. Under certain circumstances, agents believe they will bring about their intentions.
7. Agents need not intend all the expected side effects of their intentions.
Given these criteria, Cohen and Levesque adopt a two-tiered approach to the problem of
formalising intention. First, they construct a logic of rational agency, "being careful to sort out the
relationships among the basic modal operators" (Cohen & Levesque, 1990a, p. 221). Over this
framework, they introduce a number of derived constructs, which constitute a "partial theory of
rational action" (Cohen & Levesque, 1990a, p. 221); intention is one of these constructs.
The first major derived construct is the persistent goal. An agent has a persistent goal of φ iff:
1. It has a goal that φ eventually becomes true, and believes that φ is not currently true.
2. Before it drops the goal φ, one of the following conditions must hold: (i) the agent believes φ has been satisfied; or (ii) the agent believes φ will never be satisfied.
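Rendered as a formula, the two conditions above can be paraphrased roughly as follows; this is our reading in generic modal notation, not Cohen and Levesque's own definition.

```latex
% P-GOAL_i: persistent goal of agent i, paraphrasing conditions 1 and 2 above.
% GOAL_i / BEL_i are i's goal and belief modalities; \Diamond\varphi reads "eventually \varphi".
\mathrm{P\text{-}GOAL}_i\,\varphi \;\equiv\;
    \mathrm{GOAL}_i(\Diamond\varphi)
    \;\wedge\; \mathrm{BEL}_i(\neg\varphi)
    \;\wedge\; \bigl(\mathrm{GOAL}_i(\Diamond\varphi)\ \text{is retained until}\
                \mathrm{BEL}_i(\varphi) \vee \mathrm{BEL}_i(\Box\neg\varphi)\bigr)
```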
It is a small step from persistent goals to a first definition of intention, as in "intending to act": an agent intends to do action a iff it has a persistent goal to have brought about a state wherein it believed it was about to do a, and then did a. Cohen and Levesque go on to show how such a definition meets many of Bratman's criteria for a theory of intention (outlined above). A critique of
Cohen and Levesque's theory of intention may be found in Singh (1992).
2.6.3 Rao and Georgeff-belief, desire, intention architectures
As we observed earlier, there is no clear consensus in either the AI or philosophy communities about precisely which combination of information and pro-attitudes are best suited to characterising rational agents. In the work of Cohen and Levesque, described above, just two basic attitudes
#None
paragraph
from belief: it seems reasonable that one could believe something that is false, but one would
hesitate to say that one could know something false. Knowledge is thus often defined as true belief;
i knows φ if i believes φ and φ is true. So defined, knowledge satisfies T. Axiom 4 is called the
positive introspection axiom. Introspection is the process of examining one's own beliefs, and is
discussed in detail in (Konolige, 1986a, Chapter 5). The positive introspection axiom says that an
agent is aware of what it knows. Similarly, axiom 5 is the negative introspection axiom, which says
that an agent is aware of what it doesn't know. Positive and negative introspection together imply
an agent has perfect knowledge about what it does and doesn't know (cf. (Konolige, 1986a,
Equation (5.11), p. 79)). Whether or not the two types of introspection are appropriate properties
for knowledge/belief is the subject of some debate. However, it is generally accepted that positive
introspection is a less demanding property than negative introspection, and is thus a more
reasonable property for resource bounded reasoners.
Given the comments above, the axioms KTD45 are often chosen as a logic of (idealised)
knowledge, and KD45 as a logic of (idealised) belief.
2.4 Alternatives to the possible worlds model
As a result of the difficulties with logical omniscience, many researchers have attempted to develop
alternative formalisms for representing belief. Some of these are attempts to adapt the basic
possible worlds model; others represent significant departures from it. In the subsections that
follow, we examine some of these attempts.
2.4.1 Levesque-belief and awareness
In a 1984 paper, Levesque proposed a solution to the logical omniscience problem that involves
making a distinction between explicit and implicit belief (Levesque, 1984). Crudely, the idea is that
an agent has a relatively small set of explicit beliefs, and a very much larger (infinite) set of implicit
beliefs, which includes the logical consequences of the explicit beliefs. To formalise this idea,
Levesque developed a logic with two operators; one each for implicit and explicit belief. The
semantics of the explicit belief operator were given in terms of a weakened possible worlds
semantics, by borrowing some ideas from situation semantics (Barwise & Perry, 1983; Devlin,
1991). The semantics of the implicit belief operator were given in terms of a standard possible
worlds approach. A number of objections have been raised to Levesque's model (Reichgelt, 1989b, p. 135): first, it does not allow quantification, a drawback that has been rectified by Lakemeyer (1991); second, it does not seem to allow for nested beliefs; third, the notion of a situation, which
underlies Levesque's logic is, if anything, more mysterious than the notion of a world in possible
worlds; and fourth, under certain circumstances, Levesque's proposal still makes unrealistic
predictions about agent's reasoning capabilities.
In an effort to recover from this last negative result, Fagin and Halpern have developed a "logic
of general awareness" based on a similar idea to Levesque's but with a very much simpler semantics
(Fagin & Hapern, 1985). However, this proposal has itself been criticised by some (Konolige,
1986b).
2.4.2 Konolige-the deduction model
A more radical approach to modelling resource bounded believers was proposed by Konolige
(Konolige, 1986a). His deduction model of belief is, in essence, a direct attempt to model the
"beliefs" of :.ymbolic Al systems. Konolige observed that a typical knowledge-based system has
two key components: a database of symbolically represented "beliefs" (which may take the form of
rules. frames, semantic nets, or, more generally, formulae in some logical language), and some
logically incomplete inference mechanism. Konolige modelled such systems in terms of deduction
structures. A deduction structure is a pair d = (Δ, ρ), where Δ is a base set of formulae in some logical language, and ρ is a set of inference rules (which may be logically incomplete), representing
the agent's reasoning mechanism. To simplify the formalism, Konolige assumed that an agent
#None
paragraph
complete axiomatisation of normal modal logic. Similarly, the second property will appear as a rule
of inference in any axiomatisation of normal modal logic; it is generally called the necessitation rule.
These two properties turn out to be the most problematic features of normal modal logics when
they are used as logics of knowledge/belief (this point will be examined later).
The most intriguing properties of normal modal logics follow from the properties of the
accessibility relation, R, in models. To illustrate these properties, consider the following axiom
schema: □φ ⇒ φ. It turns out that this axiom is characteristic of the class of models with a reflexive
accessibility relation. (By characteristic, we mean that it is true in all and only those models in the
class.) There are a host of axioms which correspond to certain properties of R: the study of the way
that properties of R correspond to axioms is called correspondence theory. For our present
purposes, we identify just four axioms: the axiom called T (which corresponds to a reflexive
accessibility relation); D (serial accessibility relation); 4 (transitive accessibility relation); and 5
(euclidean accessibility relation):
    T  □φ ⇒ φ        D  □φ ⇒ ◊φ
    4  □φ ⇒ □□φ      5  ◊φ ⇒ □◊φ
The results of correspondence theory make it straightforward to derive completeness results for a
range of simple normal modal logics. These results provide a useful point of comparison for normal
modal logics, and account in a large part for the popularity of this style of semantics.
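As a small illustration of these correspondences (a sketch under the assumption of finite, explicitly enumerated frames; not taken from the paper), the frame conditions matching T, D, 4 and 5 can be checked directly on an accessibility relation:

```python
# Frame properties of a finite accessibility relation R over a set of worlds W.
# Each corresponds to an axiom schema: reflexive~T, serial~D, transitive~4, euclidean~5.
def is_reflexive(W, R):
    return all((w, w) in R for w in W)

def is_serial(W, R):
    return all(any((w, v) in R for v in W) for w in W)

def is_transitive(W, R):
    return all((u, x) in R
               for (u, v) in R for (v2, x) in R if v == v2)

def is_euclidean(W, R):
    # If w sees u and w sees v, then u must see v.
    return all((u, v) in R
               for (w, u) in R for (w2, v) in R if w == w2)

W = {1, 2, 3}
R = {(1, 1), (2, 2), (3, 3), (1, 2)}            # reflexive, hence also serial
print(is_reflexive(W, R), is_serial(W, R))      # True True  (validates T and D)
print(is_transitive(W, R), is_euclidean(W, R))  # True False (validates 4, not 5)
```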
To use the logic developed above as an epistemic logic, the formula □φ is read as: "it is known
that φ". The worlds in the model are interpreted as epistemic alternatives, the accessibility relation
defines what the alternatives are from any given world.
The logic defined above deals with the knowledge of a single agent. To deal with multi-agent
knowledge, one adds to a model structure an indexed set of accessibility relations, one for each
agent. The language is then extended by replacing the single modal operator "□" by an indexed set
of unary modal operators {K_i}, where i ∈ {1, ..., n}. The formula K_i φ is read: "i knows that φ".
Each operator K_i is given exactly the same properties as "□".
The next step is to consider how well normal modal logic serves as a logic of knowledge/belief.
Consider first the necessitation rule and axiom K, since any normal modal system is committed to
these. The necessitation rule tells us that an agent knows all valid formulae. Amongst other things,
this means an agent knows all propositional tautologies. Since there is an infinite number of these,
an agent will have an infinite number of items of knowledge: immediately, one is faced with a
counter-intuitive property of the knowledge operator. Now consider the axiom K, which says that
an agent's knowledge is closed under implication. Together with the necessitation rule, this axiom
implies that an agent's knowledge is closed under logical consequence: an agent believes all the
logical consequences of its beliefs. This also seems counter-intuitive. For example, suppose, like
every good logician, our agent knows Peano's axioms. Now Fermat's last theorem follows from
Peano's axioms, but it took the combined efforts of some of the best minds over the past century to
prove it. Yet if our agent's beliefs are closed under logical consequence, then our agent must know
it. So consequential closure, implied by necessitation and the K axiom, seems an overstrong
property for resource bounded reasoners.
These two problems, that of knowing all valid formulae and that of knowledge/belief being
closed under logical consequence, together constitute the famous logical omniscience problem. It
has been widely argued that this problem makes the possible worlds model unsuitable for
representing resource bounded believers, and any real system is resource bounded.
2.3.1 Axioms for knowledge and belief
We now consider the appropriateness of the axioms D, T, 4, and 5 for logics of knowledge/
belief. The axiom D says that an agent's beliefs are non-contradictory; it can be re-written as:
K_i φ ⇒ ¬K_i ¬φ, which is read: "if i knows φ, then i doesn't know ¬φ". This axiom seems a
reasonable property of knowledge/belief. The axiom T is often called the knowledge axiom, since it
says that what is known is true. It is usually accepted as the axiom that distinguishes knowledge
M. WOOLDRIDGE AND NICHOLAS JENNINGS 122
developed by Kripke (1963).3 Hintikka's insight was to see that an agent's beliefs could be
characterised as a set of possible worlds, in the following way. Consider an agent playing a card
game such as poker.4 In this game, the more one knows about the cards possessed by one's
opponents, the better one is able to play. And yet complete knowledge of an opponent's cards is
generally impossible (if one excludes cheating). The ability to play poker well thus depends, at least
in part, on the ability to deduce what cards are held by an opponent, given the limited information
available. Now suppose our agent possessed the ace of spades. Assuming the agent's sensory
equipment was functioning normally, it would be rational of her to believe that she possessed this
card. Now suppose she were to try to deduce what cards were held by her opponents. This could be
done by first calculating all the various different ways that the cards in the pack could possibly have
been distributed among the various players. (This is not being proposed as an actual card playing
strategy, but for illustration!) For argument's sake, suppose that each possible configuration is
described on a separate piece of paper. Once the process is complete, our agent can then begin to
systematically eliminate from this large pile of paper all those configurations which are not possible,
given what she knows. For example, any configuration in which she did not possess the ace of spades
could be rejected immediately as impossible. Call each piece of paper remaining after this process a
world. Each world represents one state of affairs considered possible, given what she knows.
Hintikka coined the term epistemic alternatives to describe the worlds possible given one's beliefs.
Something true in all our agent's epistemic alternatives could be said to be believed by the agent.
For example, it will be true in all our agent's epistemic alternatives that she has the ace of spades.
On a first reading, this seems a peculiarly roundabout way of characterising belief, but it has two
advantages. First, it remains neutral on the subject of the cognitive structure of agents. It certainly
doesn't posit any internalised collection of possible worlds. It is just a convenient way of
characterising belief. Second, the mathematical theory associated with the formalisation of
possible worlds is extremely appealing (see below).
The next step is to show how possible worlds may be incorporated into the semantic framework
of a logic. Epistemic logics are usually formulated as normal modal logics using the semantics
developed by Kripke (1963). Before moving on to explicitly epistemic logics, we consider a simple
normal modal logic. This logic is essentially classical propositional logic, extended by the addition
of two operators: "□" (necessarily) and "◊" (possibly). Let Prop = {p, q, ...} be a countable set
of atomic propositions. Then the syntax of the logic is defined by the following rules: (i) if p ∈ Prop
then p is a formula; (ii) if φ, ψ are formulae, then so are ¬φ and φ ∨ ψ; and (iii) if φ is a formula
then so are □φ and ◊φ. The operators "¬" (not) and "∨" (or) have their standard meanings. The
remaining connectives of classical propositional logic can be defined as abbreviations in the usual
way. The formula □φ is read: "necessarily φ" and the formula ◊φ is read: "possibly φ". The
semantics of the modal connectives are given by introducing an accessibility relation into models for
the language. This relation defines what worlds are considered accessible from every other world.
The formula □φ is then true if φ is true in every world accessible from the current world; ◊φ is true
if φ is true in at least one world accessible from the current world. The two modal operators are
duals of each other, in the sense that the universal and existential quantifiers of first-order logic are
duals:

    □φ ≡ ¬◊¬φ.
It would thus have been possible to take either one as primitive, and introduce the other as a
derived operator. The two basic properties of this logic are as follows. First, the following axiom
schema is valid: □(φ ⇒ ψ) ⇒ (□φ ⇒ □ψ). This axiom is called K, in honour of Kripke. The second
property is as follows: if φ is valid, then □φ is valid. Now, since K is valid, it will be a theorem of any
3 In Hintikka's original work, he used a technique based on "model sets", which is equivalent to Kripke's
formalism, though less elegant. See Hughes and Cresswell (1968, pp. 351-352) for a comparison and
discussion of the two techniques.
4 This example was adapted from Halpern (1987).
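The truth conditions for "□" and "◊" are easy to state operationally. The following sketch (an illustrative rendering, not code from the paper; the tuple encoding of formulae and the variable names are assumptions) evaluates them over a finite Kripke model:

```python
# A finite Kripke model: worlds W, accessibility relation R (a set of pairs),
# and a valuation V mapping each world to the set of atoms true there.
def true_at(model, w, f):
    W, R, V = model
    tag = f[0]
    if tag == 'atom':
        return f[1] in V[w]
    if tag == 'not':
        return not true_at(model, w, f[1])
    if tag == 'or':
        return true_at(model, w, f[1]) or true_at(model, w, f[2])
    if tag == 'box':     # true in every world accessible from w
        return all(true_at(model, v, f[1]) for v in W if (w, v) in R)
    if tag == 'dia':     # true in at least one world accessible from w
        return any(true_at(model, v, f[1]) for v in W if (w, v) in R)
    raise ValueError(tag)

W = {'w1', 'w2', 'w3'}
R = {('w1', 'w2'), ('w1', 'w3')}
V = {'w1': {'p'}, 'w2': {'p'}, 'w3': {'p', 'q'}}
model = (W, R, V)

p, q = ('atom', 'p'), ('atom', 'q')
print(true_at(model, 'w1', ('box', p)))   # True: p holds in both accessible worlds
print(true_at(model, 'w1', ('box', q)))   # False: q fails at w2
print(true_at(model, 'w1', ('dia', q)))   # True: q holds at w3
```

Note that at a world with no accessible worlds, □φ comes out vacuously true and ◊φ false; this is precisely why the D axiom corresponds to a serial accessibility relation.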
Intelligent agents: theory and practice 121
logic fail here? The problem is that the intentional notions, such as belief and desire, are
referentially opaque, in that they set up opaque contexts, in which the standard substitution rules of
first-order logic do not apply. In classical (propositional or first-order) logic, the denotation, or
semantic value, of an expression is dependent solely on the denotations of its sub-expressions. For
example, the denotation of the propositional logic formula p ∧ q is a function of the truth-values of
p and q. The operators of classical logic are thus said to be truth functional. In contrast, intentional
notions such as belief are not truth functional. It is surely not the case that the truth value of the
sentence:
Janine believes p (5)
is dependent solely on the truth value of p.2 So substituting equivalents into opaque contexts is not
going to preserve meaning. This is what is meant by referential opacity. Clearly, classical logics are
not suitable in their standard form for reasoning about intentional notions: alternative formalisms
are required.
The number of basic techniques used for alternative formalisms is quite small. Recall, from the
discussion above, that there are two problems to be addressed in developing a logical formalism for
intentional notions: a syntactic one, and a semantic one. It follows that any formalism can be
characterised in terms of two independent attributes: its language of formulation, and semantic
model (Konolige, 1986a, p. 83).
There are two fundamental approaches to the syntactic problem. The first is to use a modal
language, which contains non-truth-functional modal operators, which are applied to formulae. An
alternative approach involves the use of a meta-language: a many-sorted first-order language
containing terms that denote formulae of some other object-language. Intentional notions can be
represented using a meta-language predicate, and given whatever axiomatisation is deemed
appropriate. Both of these approaches have their advantages and disadvantages, and will be
discussed in the sequel.
As with the syntactic problem, there are two basic approaches to the semantic problem. The
first, best-known, and probably most widely used approach is to adopt a possible worlds semantics,
where an agent's beliefs, knowledge, goals, and so on, are characterised as a set of so-called
possible worlds, with an accessibility relation holding between them. Possible worlds semantics
have an associated correspondence theory which makes them an attractive mathematical tool to
work with (Chellas, 1980). However, they also have many associated difficulties, notably the well
known logical omniscience problem, which implies that agents are perfect reasoners (we discuss
this problem in more detail below). A number of variations on the possible-worlds theme have
been proposed, in an attempt to retain the correspondence theory, but without logical omniscience.
The commonest alternative to the possible worlds model for belief is to use a sentential, or
interpreted symbolic structures, approach. In this scheme, beliefs are viewed as symbolic formulae
explicitly represented in a data structure associated with an agent. An agent then believes φ if φ is
present in its belief data structure. Despite its simplicity, the sentential model works well under
certain circumstances (Konolige, 1986a).
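A minimal sketch of the sentential view (our own, not code from Konolige or any published system; the formula encoding is an assumption) makes the contrast with possible worlds explicit: belief is bare membership in a finite structure, with no implicit closure:

```python
# Sentential view of belief: an agent's beliefs are exactly the formulae
# stored in its belief data structure, with no implicit logical closure.
class SententialAgent:
    def __init__(self, formulae):
        self.belief_base = set(formulae)

    def believes(self, f):
        return f in self.belief_base

agent = SententialAgent({'p', ('implies', 'p', 'q')})
print(agent.believes('p'))   # True: stored explicitly
print(agent.believes('q'))   # False: not stored, even though it follows logically
```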
In the subsections that follow, we discuss various approaches in some more detail. We begin
with a close look at the basic possible worlds model for logics of knowledge (epistemic logics) and
logics of belief (doxastic logics).
2.3 Possible worlds semantics
The possible worlds model for logics of knowledge and belief was originally proposed by Hintikka
(1962), and is now most commonly formulated in a normal modal logic using the techniques
2 Note, however, that the sentence (5) is itself a proposition, in that its denotation is the value true or false.
M. WOOLDRIDGE AND NICHOLAS JENNINGS 120
Being an intentional system seems to be a necessary condition for agenthood, but is it a sufficient
condition? In his Master's thesis, Shardlow trawled through the literature of cognitive science and
its component disciplines in an attempt to find a unifying concept that underlies the notion of
agenthood. He was forced to the following conclusion:
"Perhaps there is something more to an agent than its capacity for beliefs and desires, but whatever that
thing is, it admits no unified account within cognitive science." (Shardlow, 1990)
So, an agent is a system that is most conveniently described by the intentional stance; one whose
simplest consistent description requires the intentional stance. Before proceeding, it is worth
considering exactly which attitudes are appropriate for representing agents. For the purposes of
this survey, the two most important categories are information attitudes and pro-attitudes:
    information attitudes: belief, knowledge
    pro-attitudes: desire, intention, obligation, commitment, choice
Thus information attitudes are related to the information that an agent has about the world it
occupies, whereas pro-attitudes are those that in some way guide the agent's actions. Precisely
which combination of attitudes is most appropriate to characterise an agent is, as we shall see later,
an issue of some debate. However, it seems reasonable to suggest that an agent must be
represented in terms of at least one information attitude, and at least one pro-attitude. Note that
pro- and information attitudes are closely linked, as a rational agent will make choices and form
intentions, etc., on the basis of the information it has about the world. Much work in agent theory is
concerned with sorting out exactly what the relationship between the different attitudes is.
The next step is to investigate methods for representing and reasoning about intentional
notions.
2.2 Representing intentional notions
Suppose one wishes to reason about intentional notions in a logical framework. Consider the
following statement (after Genesereth & Nilsson, 1987, pp. 210-211):
Janine believes Cronos is the father of Zeus. (1)
A naive attempt to translate (1) into first-order logic might result in the following:
Bel(Janine, Father(Zeus, Cronos)) (2)
Unfortunately, this naive translation does not work, for two reasons. The first is syntactic: the
second argument to the Bel predicate is a formula of first-order logic, and is not, therefore, a term.
So (2) is not a well-formed formula of classical first-order logic. The second problem is semantic,
and is potentially more serious. The constants Zeus and Jupiter, by any reasonable interpretation,
denote the same individual: the supreme deity of the classical world. It is therefore acceptable to
write, in first-order logic:
(Zeus = Jupiter). (3)
Given (2) and (3), the standard rules of first-order logic would allow the derivation of the following:
Bel(Janine, Father(Jupiter, Cronos)) (4)
But intuition rejects this derivation as invalid: believing that the father of Zeus is Cronos is not the
same as believing that the father of Jupiter is Cronos. So what is the problem? Why does first-order
Intelligent agents: theory and practice 119
These statements make use of a folk psychology, by which human behaviour is predicted and
explained through the attribution of attitudes, such as believing and wanting (as in the above
examples), hoping, fearing and so on. This folk psychology is well established: most people reading
the above statements would say they found their meaning entirely clear, and would not give them a
second glance.
The attitudes employed in such folk psychological descriptions are called the intentional notions.
The philosopher Daniel Dennett has coined the term intentional system to describe entities "whose
behaviour can be predicted by the method of attributing belief, desires and rational acumen"
(Dennett, 1987, p. 49). Dennett identifies different "grades" of intentional system:
"A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires (and no doubt other
intentional states) about beliefs and desires .... A second-order intentional system is more sophisticated; it
has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other
intentional states), both those of others and its own" (Dennett, 1987, p. 243)
One can carry on this hierarchy of intentionality as far as required.
An obvious question is whether it is legitimate or useful to attribute beliefs, desires, and so on, to
artificial agents. Isn't this just anthropomorphism? McCarthy, among others, has argued that there
are occasions when the intentional stance is appropriate:
"To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when
such an ascription expresses the same information about the machine that it expresses about a person. It is
useful when the ascription helps us understand the structure of the machine, its past or future behaviour, or
how to repair or improve it. It is perhaps never logically required even for humans, but expressing
reasonably briefly what is actually known about the state of the machine in a particular situation may
require mental qualities or qualities isomorphic to them. Theories of belief, knowledge and wanting can be
constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of
mental qualities is most straightforward for machines of known structure such as thermostats and computer
operating systems, but is most useful when applied to entities whose structure is incompletely known."
(McCarthy, 1978) (quoted in (Shoham, 1990))
What objects can be described by the intentional stance? As it turns out, more or less anything can.
In his doctoral thesis, Seel showed that even very simple, automata-like objects can be consistently
ascribed intentional descriptions (Seel, 1989); similar work by Rosenschein and Kaelbling (albeit
with a different motivation), arrived at a similar conclusion (Rosenschein & Kaelbling, 1986). For
example, consider a light switch:
"It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of
transmitting current at will, who invariably transmits current when it believes that we want it transmitted
and not otherwise; flicking the switch is simply our way of communicating our desires." (Shoham, 1990, p.
6)
And yet most adults would find such a description absurd, perhaps even infantile. Why is this?
The answer seems to be that while the intentional stance description is perfectly consistent with the
observed behaviour of a light switch, and is internally consistent,
"... it does not buy us anything, since we essentially understand the mechanism sufficiently to have a
simpler, mechanistic description of its behaviour." (Shoham, 1990, p. 6)
Put crudely, the more we know about a system, the less we need to rely on animistic, intentional
explanations of its behaviour. However, with very complex systems, even if a complete, accurate
picture of the system's architecture and working is available, a mechanistic, design stance
explanation of its behaviour may not be practicable. Consider a computer. Although we might
have a complete technical description of a computer available, it is hardly practicable to appeal to
such a description when explaining why a menu appears when we click a mouse on an icon. In such
situations, it may be more appropriate to adopt an intentional stance description, if that description
is consistent, and simpler than the alternatives. The intentional notions are thus abstraction tools,
which provide us with a convenient and familiar way of describing, explaining, and predicting the
behaviour of complex systems.
M. WOOLDRIDGE AND NICHOLAS JENNINGS 118
convenience, we identify three key issues, and structure our survey around these (cf. Seel, 1989,
p. 1):
• Agent theories are essentially specifications. Agent theorists address such questions as: How are
we to conceptualise agents? What properties should agents have, and how are we to formally
represent and reason about these properties?
• Agent architectures represent the move from specification to implementation. Those working in
the area of agent architectures address such questions as: How are we to construct computer
systems that satisfy the properties specified by agent theorists? What software and/or hardware
structures are appropriate? What is an appropriate separation of concerns?
• Agent languages are programming languages that may embody the various principles proposed
by theorists. Those working in the area of agent languages address such questions as: How are
we to program agents? What are the right primitives for this task? How are we to effectively
compile or execute agent programs?
As we pointed out above, the distinctions between these three areas are occasionally unclear. The
issue of agent theories is discussed in section 2. In section 3, we discuss architectures, and in
section 4, we discuss agent languages. A brief discussion of applications appears in section 5, and
some concluding remarks appear in section 6. Each of the three major sections closes with a
discussion, in which we give a brief critical review of current work and open problems, and a section
pointing the reader to further relevant reading.
Finally, some notes on the scope and aims of the article. First, it is important to realise that we
are writing very much from the point of view of AI, and the material we have chosen to review
clearly reflects this bias. Secondly, the article is not intended as a review of Distributed AI,
although the material we discuss arguably falls under this banner. We have deliberately avoided
discussing what might be called the macro aspects of agent technology (i.e., those issues relating to
the agent society, rather than the individual) (Gasser, 1991), as these issues are reviewed more
thoroughly elsewhere (see Bond and Gasser, 1988, pp. 1-56, and Chaib-draa et al., 1992). Thirdly,
we wish to reiterate that agent technology is, at the time of writing, one of the most active areas of
research in AI and computer science generally. Thus, work on agent theories, architectures, and
languages is very much ongoing. In particular, many of the fundamental problems associated with
agent technology can by no means be regarded as solved. This article therefore represents only a
snapshot of past and current work in the field, along with some tentative comments on open
problems and suggestions for future work areas. Our hope is that the article will introduce the
reader to some of the different ways that agency is treated in (D)AI, and in particular to current
thinking on the theory and practice of such agents.
2 Agent theories
In the preceding section, we gave an informal overview of the notion of agency. In this section, we
turn our attention to the theory of such agents, and in particular, to formal theories. We regard an
agent theory as a specification for an agent; agent theorists develop formalisms for representing the
properties of agents, and using these formalisms, try to develop theories that capture desirable
properties of agents. Our starting point is the notion of an agent as an entity 'which appears to be
the subject of beliefs, desires, etc.' (Seel, 1989, p. 1). The philosopher Dennett has coined the term
intentional system to denote such systems.
2. 1 Agents as intentional systems
When explaining human activity, it is often useful to make statements such as the following:
Janine took her umbrella because she believed it was going to rain.
Michael worked hard because he wanted to possess a PhD.
Intelligent agents: theory and practice 117
A simple way of conceptualising an agent is thus as a kind of UNIX-like software process, that
exhibits the properties listed above. This weak notion of agency has found currency with a
surprisingly wide range of researchers. For example, in mainstream computer science, the notion
of an agent as a self-contained, concurrently executing software process, that encapsulates some
state and is able to communicate with other agents via message passing, is seen as a natural
development of the object-based concurrent programming paradigm (Agha, 1986; Agha et al.,
1993).
This weak notion of agency is also that used in the emerging discipline of agent-based software
engineering:
"[Agents} communicate with their peers by exchanging messages in an expressive agent communication
Language. While agents can be as simple as subroutines, typically they are larger entities with some sort of
persistent control." (Gcncscrcth & Kctchpcl, 1994, p.48)
A softbot (software robot) is a kind of agent:
"A softbot is an agent that interacts with a software environment by issuing commands and interpreting the
environments feedback. A softbot's effectors arc commands (e.g. Unix shell commands such as mv or
compress) meant to change the external environments state. A softbot's sensors are commands (e.g. pwd
or ls in Unix) meant to provide . . information." (Etzioni et al., 1994, p.10)
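The sense/act loop described in this quotation is straightforward to picture in code. The sketch below is purely illustrative (the SimpleSoftbot name, the command choices and the trivial rule are our assumptions, not part of the cited work); its sensors and effectors are both shell commands:

```python
import subprocess

class SimpleSoftbot:
    """A toy softbot: its sensors and effectors are both shell commands."""

    def sense(self, command):
        # Sensor: run a read-only command (e.g. 'pwd' or 'ls') and
        # interpret the environment's feedback (its standard output).
        result = subprocess.run(command, capture_output=True, text=True)
        return result.stdout.splitlines()

    def act(self, command):
        # Effector: run a command intended to change the environment's state
        # (e.g. ['mkdir', 'scratch'] or ['gzip', 'report.txt']).
        return subprocess.run(command).returncode == 0

bot = SimpleSoftbot()
files = bot.sense(['ls'])       # perceive the current directory
if 'scratch' not in files:      # a trivially reactive rule
    bot.act(['mkdir', 'scratch'])
```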
1.1.2 A stronger notion of agency
For some researchers, particularly those working in AI, the term "agent" has a stronger and
more specific meaning than that sketched out above. These researchers generally mean an agent to
be a computer system that, in addition to having the properties identified above, is either
conceptualised or implemented using concepts that are more usually applied to humans. For
example, it is quite common in AI to characterise an agent using mentalistic notions, such as
knowledge, belief, intention, and obligation (Shoham, 1993). Some AI researchers have gone
further, and considered emotional agents (Bates et al., 1992a; Bates, 1994). (Lest the reader
suppose that this is just pointless anthropomorphism, it should be noted that there are good
arguments in favour of designing and building agents in terms of human-like mental states; see
section 2.) Another way of giving agents human-like attributes is to represent them visually,
perhaps by using a cartoon-like graphical icon or an animated face (Maes, 1994a, p. 36); for
obvious reasons, such agents are of particular importance to those interested in human-computer
interfaces.
1.1.3 Other attributes of agency
Various other attributes are sometimes discussed in the context of agency. For example:
• mobility is the ability of an agent to move around an electronic network (White, 1994);
• veracity is the assumption that an agent will not knowingly communicate false information
(Galliers, 1988b, pp. 159-164);
• benevolence is the assumption that agents do not have conflicting goals, and that every agent will
therefore always try to do what is asked of it (Rosenschein and Genesereth, 1985, p. 91); and
• rationality is (crudely) the assumption that an agent will act in order to achieve its goals, and will
not act in such a way as to prevent its goals being achieved, at least insofar as its beliefs permit
(Galliers, 1988b, pp. 49-54).
(A discussion of some of these notions is given below; various other attributes of agency are
formally defined in (Goodwin, 1993).)
1.2 The structure of this article
Now that we have at least a preliminary understanding of what an agent is, we can embark on a
more detailed look at their properties, and how we might go about constructing them. For
M. WOOLDRIDGE AND NICHOLAS JENNINGS 116
control has long been a research domain in distributed artificial intelligence (DAI) (Steeb et al.,
1988); various types of information manager, that filter and obtain information on behalf of their
users, have been prototyped (Maes, 1994a); and systems such as those that appear in the third
scenario are discussed in (McGregor, 1992; Levy et al., 1994). The key computer-based
components that appear in each of the above scenarios are known as agents. It is interesting to note that
one way of defining AI is by saying that it is the subfield of computer science which aims to construct
agents that exhibit aspects of intelligent behaviour. The notion of an "agent" is thus central to AI. It
is perhaps surprising, therefore, that until the mid to late 1980s, researchers from mainstream AI
gave relatively little consideration to the issues surrounding agent synthesis. Since then, however,
there has been an intense flowering of interest in the subject: agents are now widely discussed by
researchers in mainstream computer science, as well as those working in data communications and
concurrent systems research, robotics, and user interface design. A British national daily paper
recently predicted that:
"Agent-hased computing (ABC) is likely to be the next significant breakthrough in software development.··
(Sargent, 1992)
Moreover, the UK-based consultancy firm Ovum has predicted that the agent technology industry
would be worth some US$3.5 billion worldwide by the year 2000 (Houlder, 1994). Researchers
from both industry and academia are thus taking agent technology seriously: our aim in this paper is
to survey what we perceive to be the most important issues in the design and construction of
intelligent agents, of the type that might ultimately appear in applications such as those suggested by
the fictional scenarios above. We begin our article, in the following sub-section, with a discussion
on the subject of exactly what an agent is.
1.1 What is an agent?
Carl Hewitt recently remarked1 that the question what is an agent? is embarrassing for the agent-
based computing community in just the same way that the question what is intelligence? is
embarrassing for the mainstream AI community. The problem is that although the term is widely
used, by many people working in closely related areas, it defies attempts to produce a single
universally accepted definition. This need not necessarily be a problem: after all, if many people
are successfully developing interesting and useful applications, then it hardly matters that they do
not agree on potentially trivial terminological details. However, there is also the danger that unless
the issue is discussed, "agent" might become a "noise" term, subject to both abuse and misuse, to
the potential confusion of the research community. It is for this reason that we briefly consider the
question.
We distinguish two general usages of the term "agent": the first is weak, and relatively
uncontentious; the second is stronger, and potentially more contentious.
1.1.1 A weak notion of agency
Perhaps the most general way in which the term agent is used is to denote a hardware or (more
usually) software-based computer system that enjoys the following properties:
• autonomy: agents operate without the direct intervention of humans or others, and have some
kind of control over their actions and internal state (Castelfranchi, 1995);
• social ability: agents interact with other agents (and possibly humans) via some kind of agent
communication language (Genesereth & Ketchpel, 1994);
• reactivity: agents perceive their environment (which may be the physical world, a user via a
graphical user interface, a collection of other agents, the Internet, or perhaps all of these
combined), and respond in a timely fashion to changes that occur in it;
• pro-activeness: agents do not simply act in response to their environment, they are able to exhibit
goal-directed behaviour by taking the initiative.
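As a rough picture of how these four properties might map onto a software interface (a sketch of our own; every name here is an assumption rather than a design from the literature), a weak-notion agent can be viewed as a concurrently executing object with the following shape:

```python
from abc import ABC, abstractmethod

class WeakAgent(ABC):
    """Interface sketch for the weak notion of agency."""

    # Autonomy: the agent owns its internal state and its control loop.
    @abstractmethod
    def step(self) -> None:
        """Execute one cycle of the agent's own control loop."""

    # Social ability: interaction via some agent communication language.
    @abstractmethod
    def send(self, recipient: str, message: str) -> None: ...

    @abstractmethod
    def receive(self, message: str) -> None: ...

    # Reactivity: perceive the environment and respond in a timely fashion.
    @abstractmethod
    def perceive(self, percept: object) -> None: ...

    # Pro-activeness: take the initiative in pursuit of internally held goals.
    @abstractmethod
    def adopt_goal(self, goal: str) -> None: ...
```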
1 At the Thirteenth International Workshop on Distributed AI.
The Knowledge Engineering Review, Vol. 10:2, 1995, 115-152
Intelligent agents: theory and practice
MICHAEL WOOLDRIDGE1 and NICHOLAS R. JENNINGS2
1 Department of Computing, Manchester Metropolitan University, Chester Street, Manchester M1 5GD, UK
(M.Wooldridge@doc.mmu.ac.uk)
2 Department of Electronic Engineering, Queen Mary & Westfield College, Mile End Road, London E1 4NS, UK
(N.R.Jennings@qmw.ac.uk)
Abstract
The concept of an agent has become important in both artificial intelligence (AI) and mainstream
computer science. Our aim in this paper is to point the reader at what we perceive to be the most
important theoretical and practical issues associated with the design and construction of intelligent
agents. For convenience, we divide these issues into three areas (though as the reader will see, the
divisions are at times somewhat arbitrary). Agent theory is concerned with the question of what an
agent is, and the use of mathematical formalisms for representing and reasoning about the
properties of agents. Agent architectures can be thought of as software engineering models of
agents; researchers in this area are primarily concerned with the problem of designing software or
hardware systems that will satisfy the properties specified by agent theorists. Finally, agent
languages are software systems for programming and experimenting with agents; these languages
may embody principles proposed by theorists. The paper is not intended to serve as a tutorial
introduction to all the issues mentioned; we hope instead simply to identify the most important
issues, and point to work that elaborates on them. The article includes a short review of current and
potential applications of agent technology.
1 Introduction
We begin our article with descriptions of three events that occur sometime in the future:
1. The key air-traffic control systems in the country of Ruritania suddenly fail, due to freak
weather conditions. Fortunately, computerised air-traffic control systems in neighbouring
countries negotiate between themselves to track and deal with all affected flights, and the
potentially disastrous situation passes without major incident.
2. Upon logging in to your computer, you are presented with a list of email messages, sorted into
order of importance by your personal digital assistant (PDA). You are then presented with a
similar list of news articles; the assistant draws your attention to one particular article, which
describes hitherto unknown work that is very close to your own. After an electronic discussion
with a number of other PDAs, your PDA has already obtained a relevant technical report for
you from an FTP site, in the anticipation that it will be of interest.
3. You are editing a file, when your PDA requests your attention: an email message has arrived,
that contains notification about a paper you sent to an important conference, and the PDA
correctly predicted that you would want to see it as soon as possible. The paper has been
accepted, and without prompting, the PDA begins to look into travel arrangements, by
consulting a number of databases and other networked information sources. A short time later,
you are presented with a summary of the cheapest and most convenient travel options.
We shall not claim that computer systems of the sophistication indicated in these scenarios are just
around the corner, but serious academic research is underway into similar applications: air-traffic
M. WOOLDRIDGE AND NICHOLAS JENNINGS 152
Weerasooriya, D, Rao, A and Ramamohanarao, K, 1995. "Design of a concurrent agent-oriented language" In: M Wooldridge and NR Jennings (eds.) Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pp 386-402, Springer-Verlag.
Weihmayer, R and Velthuijsen, H, 1994. "Application of distributed AI and cooperative problem solving to telecommunications" In: J Liebowitz and D Prerau (eds.) AI Approaches to Telecommunications and Network Management, IOS Press.
Werner, E, 1988. "Toward a theory of communication and cooperation for multiagent planning" In: MY Vardi (ed.) Proceedings of the Second Conference on Theoretical Aspects of Reasoning About Knowledge, pp 129-144, Morgan Kaufmann.
Werner, E, 1989. "Cooperating agents: A unified theory of communication and social structure" In: L Gasser and M Huhns (eds.) Distributed Artificial Intelligence Volume II, pp 3-36, Pitman.
Werner, E, 1990. "What can agents do together: A semantics of cooperative ability" In: Proceedings of the Ninth European Conference on Artificial Intelligence (ECAI-90), pp 694-701, Stockholm, Sweden.
Werner, E, 1991. "A unified view of information, intention and ability" In: Y Demazeau and JP Müller (eds.) Decentralized AI 2 - Proceedings of the Second European Workshop on Modelling Autonomous Agents and Multi-Agent Worlds (MAAMAW-90), pp 109-126, Elsevier.
White, JE, 1994. "Telescript technology: The foundation for the electronic marketplace", White paper, General Magic, Inc., 2465 Latham Street, Mountain View, CA 94040.
Wilkins, D, 1988. Practical Planning: Extending the Classical AI Planning Paradigm, Morgan Kaufmann.
Wittig, T (ed.) 1992. ARCHON: An Architecture for Multi-Agent Systems, Ellis Horwood.
Wood, S, 1993. Planning and Decision Making in Dynamic Domains, Ellis Horwood.
Wooldridge, M, 1992. The Logical Modelling of Computational Multi-Agent Systems, PhD thesis, Department of Computation, UMIST, Manchester, UK. (Also available as Technical Report MMU-DOC-94-01, Department of Computing, Manchester Metropolitan University, Chester Street, Manchester, UK.)
Wooldridge, M, 1994. "Coherent social action" In: Proceedings of the Eleventh European Conference on Artificial Intelligence (ECAI-94), pp 279-283, Amsterdam, The Netherlands.
Wooldridge, M, 1995. "This is MYWORLD: The logic of an agent-oriented testbed for DAI" In: M Wooldridge and NR Jennings (eds.) Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pp 160-178, Springer-Verlag.
Wooldridge, M and Fisher, M, 1992. "A first-order branching time logic of multi-agent systems" In: Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI-92), pp 234-238, Vienna, Austria.
Wooldridge, M and Fisher, M, 1994. "A decision procedure for a temporal belief logic" In: DM Gabbay and HJ Ohlbach (eds.) Temporal Logic - Proceedings of the First International Conference (LNAI Volume 827), pp 317-331, Springer-Verlag.
Wooldridge, M and Jennings, NR, 1994. "Formalizing the cooperative problem solving process" In: Proceedings of the Thirteenth International Workshop on Distributed Artificial Intelligence (IWDAI-94), pp 403-417, Lake Quinault, WA.
Yonezawa, A (ed.) 1990. ABCL - An Object-Oriented Concurrent System, MIT Press.
Intelligent agents: theory and practice 151
Sargent, P, 1992. "Back to school for a brand new ABC" In: The Guardian, 12 March, p 28.
Schoppers, MJ, 1987. "Universal plans for reactive robots in unpredictable environments" In: Proceedings of the Tenth International Joint Conference on Artificial Intelligence (IJCAI-87), pp 1039-1046, Milan, Italy.
Schwuttke, UM and Quan, AG, 1993. "Enhancing performance of cooperating agents in real-time diagnostic systems" In: Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), pp 332-337, Chambéry, France.
Searle, JR, 1969. Speech Acts: An Essay in the Philosophy of Language, Cambridge University Press.
Seel, N, 1989. Agent Theories and Architectures, PhD thesis, Surrey University, Guildford, UK.
Segerberg, K, 1989. "Bringing it about" Journal of Philosophical Logic 18 327-347.
Shardlow, N, 1990. "Action and agency in cognitive science", Master's thesis, Department of Psychology, University of Manchester, Oxford Road, Manchester M13 9PL, UK.
Shoham, Y, 1988. Reasoning About Change: Time and Causation from the Standpoint of Artificial Intelligence, MIT Press.
Shoham, Y, 1989. "Time for action: on the relation between time, knowledge and action" In: Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), pp 954-959, Detroit, MI.
Shoham, Y, 1990. "Agent-oriented programming", Technical Report STAN-CS-1335-90, Computer Science Department, Stanford University, Stanford, CA 94305.
Shoham, Y, 1993. "Agent-oriented programming" Artificial Intelligence 60 (1) 51-92.
Singh, MP, 1990a. "Group intentions" In: Proceedings of the Tenth International Workshop on Distributed Artificial Intelligence (IWDAI-90).
Singh, MP, 1990b. "Towards a theory of situated know-how" In: Proceedings of the Ninth European Conference on Artificial Intelligence (ECAI-90), pp 604-609, Stockholm, Sweden.
Singh, MP, 1991a. "Group ability and structure" In: Y Demazeau and JP Müller (eds.) Decentralized AI 2 - Proceedings of the Second European Workshop on Modelling Autonomous Agents and Multi-Agent Worlds (MAAMAW-90), pp 127-146, Elsevier.
Singh, MP, 1991b. "Towards a formal theory of communication for multi-agent systems" In: Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), pp 69-74, Sydney, Australia.
Singh, MP, 1992. "A critical examination of the Cohen-Levesque theory of intention" In: Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI-92), pp 364-368, Vienna, Austria.
Singh, MP, 1994. Multiagent Systems: A Theoretical Framework for Intentions, Know-How, and Communications (LNAI Volume 799), Springer-Verlag.
Singh, MP and Asher, NM, 1991. "Towards a formal theory of intentions" In: Logics in AI - Proceedings of the European Workshop JELIA-90 (LNAI Volume 478), pp 472-486, Springer-Verlag.
Smith, RG, 1980. A Framework for Distributed Problem Solving, UMI Research Press.
Steeb, R, Cammarata, S, Hayes-Roth, FA, Thorndyke, PW and Wesson, RB, 1988. "Distributed intelligence for air fleet control" In: AH Bond and L Gasser (eds.) Readings in Distributed Artificial Intelligence, pp 90-101, Morgan Kaufmann.
Steels, L, 1990. "Cooperation between distributed agents through self organization" In: Y Demazeau and JP Müller (eds.) Decentralized AI - Proceedings of the First European Workshop on Modelling Autonomous Agents in Multi-Agent Worlds (MAAMAW-89), pp 175-196, Elsevier.
Thomas, SR, 1993. PLACA, an Agent Oriented Programming Language, PhD thesis, Computer Science Department, Stanford University, Stanford, CA 94305. (Available as Technical Report STAN-CS-93-1487.)
Thomas, SR, Shoham, Y, Schwartz, A and Kraus, S, 1991. "Preliminary thoughts on an agent description language" International Journal of Intelligent Systems 6 497-508.
Thomason, R, 1980. "A note on syntactical treatments of modality" Synthese 44 391-395.
Turner, R, 1990. Truth and Modality for Knowledge Representation, Pitman.
Varga, LZ, Jennings, NR and Cockburn, D, 1994. "Integrating intelligent systems into a cooperating community for electricity distribution management" International Journal of Expert Systems with Applications 1 (4) 563-579.
Vere, S and Bickmore, T, 1990. "A basic agent" Computational Intelligence 6 41-60.
Voorhees, EM, 1994. "Software agents for information retrieval" In: O Etzioni (ed.) Software Agents - Papers from the 1994 Spring Symposium (Technical Report SS-94-03), pp 126-129, AAAI Press.
Wainer, J, 1994. "Yet another semantics of goals and goal priorities" In: Proceedings of the Eleventh European Conference on Artificial Intelligence (ECAI-94), pp 269-273, Amsterdam, The Netherlands.
Wavish, P, 1992. "Exploiting emergent behaviour in multi-agent systems" In: E Werner and Y Demazeau (eds.) Decentralized AI 3 - Proceedings of the Third European Workshop on Modelling Autonomous Agents and Multi-Agent Worlds (MAAMAW-91), pp 297-310, Elsevier.
Wavish, P and Graham, M, 1995. "Role, skills, and behaviour: a situated action approach to organising systems of interacting agents" In: M Wooldridge and NR Jennings (eds.) Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pp 371-385, Springer-Verlag.
M. WOOLDRIDGE AND NICHOLAS JENNINGS 150
Müller, JP and Pischel, M, 1994. "Modelling interacting agents in dynamic environments" In: Proceedings of the Eleventh European Conference on Artificial Intelligence (ECAI-94), pp 709-713, Amsterdam, The Netherlands.
Müller, JP, Pischel, M and Thiel, M, 1995. "Modelling reactive behaviour in vertically layered agent architectures" In: M Wooldridge and NR Jennings (eds.) Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pp 261-276, Springer-Verlag.
Newell, A and Simon, HA, 1976. "Computer science as empirical enquiry" Communications of the ACM 19 113-126.
Nilsson, NJ, 1992. "Towards agent programs with circuit semantics", Technical Report STAN-CS-92-1412, Computer Science Department, Stanford University, Stanford, CA 94305.
Norman, TJ and Long, D, 1995. "Goal creation in motivated agents" In: M Wooldridge and NR Jennings (eds.) Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pp 277-290, Springer-Verlag.
Papazoglou, MP, Laufman, SC and Sellis, TK, 1992. "An organizational framework for cooperating intelligent information systems" Journal of Intelligent and Cooperative Information Systems 1 (1) 169-202.
Parunak, HVD, 1995. "Applications of distributed artificial intelligence in industry" In: GMP O'Hare and NR Jennings (eds.) Foundations of Distributed AI, John Wiley.
Patil, RS, Fikes, RE, Patel-Schneider, PF, McKay, D, Finin, T, Gruber, T and Neches, R, 1992. "The DARPA knowledge sharing effort: Progress report" In: C Rich, W Swartout and B Nebel (eds.) Proceedings of Knowledge Representation and Reasoning (KR&R-92), pp 777-788.
Perlis, D, 1985. "Languages with self reference I: Foundations" Artificial Intelligence 25 301-322.
Perlis, D, 1988. "Languages with self reference II: Knowledge, belief, and modality" Artificial Intelligence 34 179-212.
Perloff, M, 1991. "STIT and the language of agency" Synthese 86 379-408.
Poggi, A, 1995. "DAISY: An object-oriented system for distributed artificial intelligence" In: M Wooldridge and NR Jennings (eds.) Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pp 341-354, Springer-Verlag.
Pollack, ME and Ringuette, M, 1990. "Introducing the Tileworld: Experimentally evaluating agent architectures" In: Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pp 183-189, Boston, MA.
Rao, AS and Georgeff, MP, 1991a. "Asymmetry thesis and side-effect problems in linear time and branching time intention logics" In: Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), pp 498-504, Sydney, Australia.
Rao, AS and Georgeff, MP, 1991b. "Modeling rational agents within a BDI-architecture" In: R Fikes and E Sandewall (eds.) Proceedings of Knowledge Representation and Reasoning (KR&R-91), pp 473-484, Morgan Kaufmann.
Rao, AS and Georgeff, MP, 1992a. "An abstract architecture for rational agents" In: C Rich, W Swartout and B Nebel (eds.) Proceedings of Knowledge Representation and Reasoning (KR&R-92), pp 439-449.
Rao, AS and Georgeff, MP, 1992b. "Social plans: Preliminary report" In: E Werner and Y Demazeau (eds.) Decentralized AI 3 - Proceedings of the Third European Workshop on Modelling Autonomous Agents and Multi-Agent Worlds (MAAMAW-91), pp 57-76, Elsevier.
Rao, AS and Georgeff, MP, 1993. "A model-theoretic approach to the verification of situated reasoning systems" In: Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), pp 318-324, Chambéry, France.
Reichgelt, H, 1989a. "A comparison of first-order and modal logics of time" In: P Jackson, H Reichgelt and F van Harmelen (eds.) Logic Based Knowledge Representation, pp 143-176, MIT Press.
Reichgelt, H, 1989b. "Logics for reasoning about knowledge and belief" Knowledge Engineering Review 4 (2) 119-139.
Rosenschein, JS and Genesereth, MR, 1985. "Deals among rational agents" In: Proceedings of the Ninth International Joint Conference on Artificial Intelligence (IJCAI-85), pp 91-99, Los Angeles, CA.
Rosenschein, S, 1985. "Formal theories of knowledge in AI and robotics" New Generation Computing, pp 345-357.
Rosenschein, S and Kaelbling, LP, 1986. "The synthesis of digital machines with provable epistemic properties" In: JY Halpern (ed.) Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning About Knowledge, pp 83-98, Morgan Kaufmann.
Russell, SJ and Wefald, E, 1991. Do the Right Thing - Studies in Limited Rationality, MIT Press.
Sacerdoti, E, 1974. "Planning in a hierarchy of abstraction spaces" Artificial Intelligence 5 115-135.
Sacerdoti, E, 1975. "The non-linear nature of plans" In: Proceedings of the Fourth International Joint Conference on Artificial Intelligence (IJCAI-75), pp 206-214, Stanford, CA.
Sadek, MD, 1992. "A study in the logic of intention" In: C Rich, W Swartout and B Nebel (eds.) Proceedings of Knowledge Representation and Reasoning (KR&R-92), pp 462-473.
Intelligent agents: theory and practice 149
Kiss, G and Reichgelt, H, 1992. "Towards a semantics of desires" In: E Werner and Y Demazeau (eds.) Decentralized AI 3 - Proceedings of the Third European Workshop on Modelling Autonomous Agents and Multi-Agent Worlds (MAAMAW-91), pp 115-128, Elsevier.
Konolige, K, 1982. "A first-order formalization of knowledge and action for a multi-agent planning system" In: JE Hayes, D Michie and Y Pao (eds.) Machine Intelligence 10, pp 41-72, Ellis Horwood.
Konolige, K, 1986a. A Deduction Model of Belief, Pitman.
Konolige, K, 1986b. "What awareness isn't: A sentential view of implicit and explicit belief (position paper)" In: JY Halpern (ed.) Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning About Knowledge, pp 241-250, Morgan Kaufmann.
Konolige, K and Pollack, ME, 1993. "A representationalist theory of intention" In: Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), pp 390-395, Chambéry, France.
Kraus, S and Lehmann, D, 1988. "Knowledge, belief and time" Theoretical Computer Science 58 155-174.
Kripke, S, 1963. "Semantical analysis of modal logic" Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 9 67-96.
Lakemeyer, G, 1991. "A computationally attractive first-order logic of belief" In: JELIA-90: Proceedings of the European Workshop on Logics in AI (LNAI Volume 478), pp 333-347, Springer-Verlag.
Lespérance, Y, 1989. "A formal account of self knowledge and action" In: Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), pp 868-874, Detroit, MI.
Levesque, HJ, 1984. "A logic of implicit and explicit belief" In: Proceedings of the Fourth National Conference on Artificial Intelligence (AAAI-84), pp 198-202, Austin, TX.
Levesque, HJ, Cohen, PR and Nunes, JHT, 1990. "On acting together" In: Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pp 94-99, Boston, MA.
Levy, AY, Sagiv, Y and Srivastava, D, 1994. "Towards efficient information gathering agents" In: O Etzioni (ed.) Software Agents - Papers from the 1994 Spring Symposium (Technical Report SS-94-03), pp 64-70, AAAI Press.
Mack, D, 1994. "A new formal model of belief" In: Proceedings of the Eleventh European Conference on Artificial Intelligence (ECAI-94), pp 573-577, Amsterdam, The Netherlands.
Maes, P, 1989. "The dynamics of action selection" In: Proceedings of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), pp 991-997, Detroit, MI.
Maes, P (ed.) 1990a. Designing Autonomous Agents, MIT Press.
Maes, P, 1990b. "Situated agents can have goals" In: P Maes (ed.) Designing Autonomous Agents, pp 49-70, MIT Press.
Maes, P, 1991. "The agent network architecture (ANA)" SIGART Bulletin 2 (4) 115-120.
Maes, P, 1994a. "Agents that reduce work and information overload" Communications of the ACM 37 (7) 31-40.
Maes, P, 1994b. "Social interface agents: Acquiring competence by learning from users and other agents" In: O Etzioni (ed.) Software Agents - Papers from the 1994 Spring Symposium (Technical Report SS-94-03), pp 71-78, AAAI Press.
McCabe, FG and Clark, KL, 1995. "April - agent process interaction language" In: M Wooldridge and NR Jennings (eds.) Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pp 324-340, Springer-Verlag.
McCarthy, J, 1978. "Ascribing mental qualities to machines", Technical report, Stanford University AI Lab., Stanford, CA 94305.
McGregor, SL, 1992. "Prescient agents" In: D Coleman (ed.) Proceedings of Groupware-92, pp 228-230.
Montague, R, 1963. "Syntactical treatments of modality, with corollaries on reflexion principles and finite axiomatizations" Acta Philosophica Fennica 16 153-167.
Moore, RC, 1990. "A formal theory of knowledge and action" In: JF Allen, J Hendler and A Tate (eds.) Readings in Planning, pp 480-519, Morgan Kaufmann.
Morgenstern, L, 1987. "Knowledge preconditions for actions and plans" In: Proceedings of the Tenth International Joint Conference on Artificial Intelligence (IJCAI-87), pp 867-874, Milan, Italy.
Mori, K, Torikoshi, H, Nakai, K and Masuda, T, 1988. "Computer control system for iron and steel plants" Hitachi Review 37 (4) 251-258.
Morley, RE and Schelberg, C, 1993. "An analysis of a plant-specific dynamic scheduler" In: Proceedings of the NSF Workshop on Dynamic Scheduling, Cocoa Beach, Florida.
Mukhopadhyay, U, Stephens, L and Huhns, M, 1986. "An intelligent system for document retrieval in distributed office environments" Journal of the American Society for Information Science 37 123-135.
Müller, JP, 1994. "A conceptual model for agent interaction" In: SM Deen (ed.) Proceedings of the Second International Working Conference on Cooperating Knowledge Based Systems (CKBS-94), pp 213-234, DAKE Centre, University of Keele, UK.
M. WOOLDRIDGE AND NICHOLAS JENNINGS 148
Halpern, JY, 1986. "Reasoning about knowledge: An overview" In: JY Halpern (ed.) Proceedings oft he 1986
Conference on Theoretical Aspects of Reasoning About Knowledge, pp 1-18, Morgan Kaufmann.
Halpern, JY, 1987. "Using reasoning about knowledge to analyze distributed systems" Annual Review of
Computer Science 2 37--68.
Halpern, JY and Moses, Y, 1992. "A guide to completeness and complexity for modal logics of knowledge and
belief' Artificial Intelligence 54 319-379.
Halpern, JY and Vardi, MY, 1989. "The complexity of reasoning about knowledge and time. I. Lower
bounds" Journal of Computer and System Sciences 38 195-237.
Hare!, D, 1984. "Dynamic logic" In: D Gabbay and F Guenther (eds.) Handbook of Philosophical Logic
Volume ll-Extensions of Classical Logic, pp 497-604, Reidel.
Haugeneder, H, 1994. IMAGINE final project report.
Haugeneder, Hand Steiner, D, 1994. "A multi-agent approach to cooperation in urban traffic" In: SM Deen
(ed.) Proceedings of the 1993 Workshop on Cooperating Knowledge Based Systems (CKBS-93), pp 83-98,
DAKE Centre, University of Keele, UK.
Haugeneder, H, Steiner, D and Mc Cabe, FG, 1994. "IMAGINE: A framework for building multi-agent
systems" In: SM Deen (ed.) Proceedings of the 1994 Jnternational Working Conference on Cooperating
Knowledge Based Systems (CKBS-94), pp 31--64, DAKE Centre, University of Keele, UK.
Hayes-Roth, B, 1990. "Architectural foundations for real-time performance in intelligent agents" The
Journal of Real-Time Systems 2 99-125.
Hendler, J (ed.) 1992. Artificial intelligence Planning: Proceedings of the First International Conference,
Morgan Kaufmann.
Henz, M, Smolka, G and Wuertz, J, 1993. "Oz-a programming language for multi-agent systems" ln:
Proceedings of the Thirteenth international Joint Conference on Artificial Intelligence ( IJCAI-93), pp 404409, Chamb Cry, France.
Hewitt, C, 1977. "Viewing control structures as patterns of passing messages" Artificial intelligence 8 (3) 323364.
Hintikka, J, 1962. Knowledge and Belief, Cornell University Press.
Houlder, V, 1994. "Special agents" In: Financial Times, 15 August, p 12.
Huang, J, Jennings, NR and Fox. J, 1995. "An ag-ent architecture for distributed medical care" In: M
Wooldridge and NR Jennings (eds.) Intelligent Agents: Theories, Architectures, and Languages (LNA!
Volume 890), pp 219-232, Springer-Verlag.
Hughes, GE and Cresswell, MJ, 1968. lntroduction to Modal Logic, Methuen.
Huhns, MN, Jacobs, N, Ksiezyk, T, Shen, WM, Singh, MP and Cannata, PE, 1992. "Integrating enterprise
information models in Carnot" In: Proceedings of the International Conference on Intelligent and
Cooperative information Systems, pp 32-42, Rotterdam, The Netherlands.
Israel, DJ, 1993. "The role(s) of logic in artificial intelligence" In: DM Gabbay, CJ Hogger and JA Robinson
(eds.) Handbook of Logic in Artificial Intelligence and Logic Programming, pp 1-29, Oxford University
Press.
Jennings, NR, 1992. "On being responsible" In: E Werner and Y Demazeau (eds.) Decentralized Al]
Proceedings of the Third European Workshop on Modelling Autonomous Agents and Multi-Agent Worlds
(MAAMA W-91), pp 93-102, Elsevier.
Jennings. NR, 1993a. "Commitments and conventions: The foundation of coordination in multi-agent
systems" Knowledge Engineering Review 8 (3) 223-250.
Jennings, NR, 1993b. "Specification and implementation of a belief desire joint-intention architecture for
collaborative problem solving" Journal of Intelligent and Cooperative Information Systems 2 (3) 289-318.
Jennings, NR, 1995. "Controlling cooperative problem solving in industrial multi-agent systems using joint
intentions" Artificial Intelligence 14 (2) (to appear).
Jennings, NR, Varga, LZ, Aarnts, RP, Fuchs, J and Skarek, P, 1993. "Transforming standalone expert
systems into a community of cooperating agents" International Journal of Engineering Applications of
Artificial Intelligence 6 (4) 317-331.
Kaelbling, LP, 1986. "An architecture for intelligent reactive systems" In: MP Georgeff and AL Lansky (eds.)
Reasoning About Actions and Plans-Proceedings of the 1986 Workshop, pp 395-410, Morgan Kaufmann.
Kaelbling, LP, 1991. "A situated automata approach to the design of embedded agents" SIGART Bulletin 2
(4) 85-88.
Kaelbling, LP and Rosenschein, SJ, 1990. "Action and planning in embedded agents" In: P Maes (ed.)
Designing Autonomous Agents, pp 35-48, MIT Press.
Kinny, D, Ljungberg, M, Rao, AS, Sonenberg, E, Tidhar, G and Werner, E, 1992. "Planned team activity"
In: C Castelfranchi and E Werner (eds.) Artificial Social Systems-Selected Papers from the Fourth
European Workshop on Modelling Autonomous Agents and Multi-Agent Worlds, MAAMAW-92 (LNAI
Volume 830), pp 226-256, Springer-Verlag.
Fikes, RE and Nilsson, N, 1971. "STRIPS: A new approach to the application of theorem proving to problem
solving" Artificial Intelligence 5 (2) 189-208.
Firby, JA, 1987. "An investigation into reactive planning in complex domains" In: Proceedings of the Tenth
International Joint Conference on Artificial Intelligence (IJCAI-87), pp 202-206, Milan, Italy.
Fischer, K, Kuhn, N, Müller, HJ, Müller, JP and Pischel, M, 1993. "Sophisticated and distributed: The
transportation domain" In: Proceedings of the Fifth European Workshop on Modelling Autonomous
Agents and Multi-Agent Worlds (MAAMAW-93), Neuchâtel, Switzerland.
Fisher, M, 1994. "A survey of Concurrent MetateM-the language and its applications" In: DM Gabbay and
HJ Ohlbach (eds.) Temporal Logic-Proceedings of the First International Conference (LNAI Volume
827), pp 480-505, Springer-Verlag.
Fisher, M, 1995. "Representing and executing agent-based systems" In: M Wooldridge and NR Jennings
(eds.) Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pp 307-323,
Springer-Verlag.
Fisher, M and Wooldridge, M, 1993. "Specifying and verifying distributed intelligent systems" In: M
Filgueiras and L Damas (eds.) Progress in Artificial Intelligence-Sixth Portuguese Conference on Artificial
Intelligence (LNAI Volume 727), pp 13-28, Springer-Verlag.
Galliers, JR, 1988a. "A strategic framework for multi-agent cooperative dialogue" In: Proceedings of the
Eighth European Conference on Artificial Intelligence (ECAI-88), pp 415-420, Munich, Germany.
Galliers, JR, 1988b. A Theoretical Framework for Computer Models of Cooperative Dialogue, Acknowledging
Multi-Agent Conflict, PhD thesis, Open University, UK.
Gasser, L, 1991. "Social conceptions of knowledge and action: DAI foundations and open systems semantics"
Artificial Intelligence 47 107-138.
Gasser, L, Braganza, C and Hermann, N, 1987. "MACE: A flexible testbed for distributed AI research" In:
M Huhns (ed.) Distributed Artificial Intelligence, pp 119-152, Pitman.
Gasser, L and Briot, JP, 1992. "Object-based concurrent programming and DAI" In: Distributed Artificial
Intelligence: Theory and Praxis, pp 81-108, Kluwer Academic.
Geissler, C and Konolige, K, 1986. "A resolution method for quantified modal logics of knowledge and
belief" In: JY Halpern (ed.) Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning About
Knowledge, pp 309-324, Morgan Kaufmann.
Genesereth, MR and Ketchpel, SP, 1994. "Software agents" Communications of the ACM 37 (7) 48-53.
Genesereth, MR and Nilsson, N, 1987. Logical Foundations of Artificial Intelligence, Morgan Kaufmann.
Georgeff, MP, 1987. "Planning" Annual Review of Computer Science 2 359-400.
Georgeff, MP and Ingrand, FF, 1989. "Decision-making in an embedded reasoning system" In: Proceedings
of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), pp 972-978, Detroit,
MI.
Georgeff, MP and Lansky, AL (eds.) 1986. Reasoning About Actions & Plans-Proceedings of the 1986
Workshop, Morgan Kaufmann.
Georgeff, MP and Lansky, AL, 1987. "Reactive reasoning and planning" In: Proceedings of the Sixth National
Conference on Artificial Intelligence (AAAI-87), pp 677-682, Seattle, WA.
Ginsberg, M, 1993. Essentials of Artificial Intelligence, Morgan Kaufmann.
Gmytrasiewicz, P and Durfee, EH, 1993. "Elements of a utilitarian theory of knowledge and action" In:
Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI-93), pp 396-402, Chambéry, France.
Goldblatt, R, 1987. Logics of Time and Computation, Centre for the Study of Language and Information
Lecture Notes Series. (Distributed by Chicago University Press.)
Goldman, RP and Lang, RR, 1991. "Intentions in time", Technical Report TUTR 93-101, Tulane University.
Goodwin, R, 1993. "Formalizing properties of agents", Technical Report CMU-CS-93-159, School of
Computer Science, Carnegie-Mellon University, Pittsburgh, PA.
Greif, I, 1994. "Desktop agents in group-enabled products" Communications of the ACM 37 (7) 100-105.
Grosz, BJ and Sidner, CL, 1990. "Plans for discourse" In: PR Cohen, J Morgan and ME Pollack (eds.)
Intentions in Communication, pp 417-444, MIT Press.
Gruber, TR, 1991. "The role of common ontology in achieving sharable, reusable knowledge bases" In: R
Fikes and E Sandewall (eds.) Proceedings of Knowledge Representation and Reasoning (KR&R-91),
Morgan Kaufmann.
Guha, RV and Lenat, DB, 1994. "Enabling agents to work together" Communications of the ACM 37 (7) 127-142.
Haas, A, 1986. "A syntactic theory of belief and knowledge" Artificial Intelligence 28 (3) 245-292.
Haddadi, A, 1994. "A hybrid architecture for multi-agent systems" In: SM Deen (ed.) Proceedings of the 1993
Workshop on Cooperating Knowledge Based Systems (CKBS-93), pp 13-26, DAKE Centre, University of
Keele, UK.
Chaib-draa, B, Moulin, B, Mandiau, R and Millot, P, 1992. "Trends in distributed artificial intelligence"
Artificial Intelligence Review 6 35-66.
Chang, E, 1987. "Participant systems" In: M Huhns (ed.) Distributed Artificial Intelligence, pp 311-340,
Pitman.
Chapman, D, 1987. "Planning for conjunctive goals" Artificial Intelligence 32 333-378.
Chapman, D and Agre, P, 1986. "Abstract reasoning as emergent from concrete activity" In: MP Georgeff
and AL Lansky (eds.) Reasoning About Actions & Plans-Proceedings of the 1986 Workshop, pp 411-424,
Morgan Kaufmann.
Chellas, B, 1980. Modal Logic: An Introduction, Cambridge University Press.
Chu, D, 1993. "IC PROLOG II: A language for implementing multi-agent systems" In: SM Deen (ed.)
Proceedings of the 1992 Workshop on Cooperating Knowledge Based Systems (CKBS-92), pp 61-74,
DAKE Centre, University of Keele, UK.
Cohen, PR, Greenberg, ML, Hart, DM and Howe, AE, 1989. "Trial by fire: Understanding the design
requirements for agents in complex environments" AI Magazine 10 (3) 32-48.
Cohen, PR and Levesque, HJ, 1990a. "Intention is choice with commitment" Artificial Intelligence 42 213-261.
Cohen, PR and Levesque, HJ, 1990b. "Rational interaction as the basis for communication" In: PR Cohen, J
Morgan and ME Pollack (eds.) Intentions in Communication, pp 221-256, MIT Press.
Cohen, PR and Perrault, CR, 1979. "Elements of a plan based theory of speech acts" Cognitive Science 3 177-212.
Connah, D and Wavish, P, 1990. "An experiment in cooperation" In: Y Demazeau and J-P Müller (eds.)
Decentralized AI-Proceedings of the First European Workshop on Modelling Autonomous Agents in
Multi-Agent Worlds (MAAMAW-89), pp 197-214, Elsevier.
Cutkosky, MR, Engelmore, RS, Fikes, RE, Genesereth, MR, Gruber, T, Mark, WS, Tenenbaum, JM and
Weber, JC, 1993. "PACT: An experiment in integrating concurrent engineering systems" IEEE Computer
26 (1) 28-37.
Davies, NJ, 1993. Truth, Modality, and Action, PhD thesis, Department of Computer Science, University of
Essex, Colchester, UK.
Dean, TL and Wellman, MP, 1991. Planning and Control, Morgan Kaufmann.
Dennett, DC, 1978. Brainstorms, MIT Press.
Dennett, DC, 1987. The Intentional Stance, MIT Press.
des Rivieres, J and Levesque, HJ, 1986. "The consistency of syntactical treatments of knowledge" In: JY
Halpern (ed.) Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning About Knowledge,
pp 115-130, Morgan Kaufmann.
Devlin, K, 1991. Logic and Information, Cambridge University Press.
Dongha, P, 1995. "Toward a formal model of commitment for resource-bounded agents" In: M Wooldridge
and NR Jennings (eds.) Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pp
86-101, Springer-Verlag.
Downs, J and Reichgelt, H, 1991. "Integrating classical and reactive planning within an architecture for
autonomous agents" In: J Hertzberg (ed.) European Workshop on Planning (LNAI Volume 522), pp 13-26.
Doyle, J, Shoham, Y and Wellman, MP, 1991. "A logic of relative desire" In: ZW Ras and M Zemankova
(eds.) Methodologies for Intelligent Systems-Sixth International Symposium, ISMIS-91 (LNAI Volume
542), Springer-Verlag.
Emerson, EA, 1990. "Temporal and modal logic" In: J van Leeuwen (ed.) Handbook of Theoretical Computer
Science, pp 996-1072, Elsevier.
Emerson, EA and Halpern, JY, 1986. "'Sometimes' and 'not never' revisited: on branching time versus linear
time temporal logic" Journal of the ACM 33 (1) 151-178.
Etzioni, O, Lesh, N and Segal, R, 1994. "Building softbots for UNIX" In: O Etzioni (ed.) Software Agents-
Papers from the 1994 Spring Symposium (Technical Report SS-94-03), pp 9-16, AAAI Press.
Fagin, R and Halpern, JY, 1985. "Belief, awareness, and limited reasoning" In: Proceedings of the Ninth
International Joint Conference on Artificial Intelligence (IJCAI-85), pp 480-490, Los Angeles, CA.
Fagin, R, Halpern, JY and Vardi, MY, 1992. "What can machines know? On the properties of knowledge in
distributed systems" Journal of the ACM 39 (2) 328-376.
Ferguson, IA, 1992a. TouringMachines: An Architecture for Dynamic, Rational, Mobile Agents, PhD thesis,
Clare Hall, University of Cambridge, UK. (Also available as Technical Report No. 273, University of
Cambridge Computer Laboratory.)
Ferguson, IA, 1992b. "Towards an architecture for adaptive, rational, mobile agents" In: E Werner and Y
Demazeau (eds.) Decentralized AI 3-Proceedings of the Third European Workshop on Modelling
Autonomous Agents and Multi-Agent Worlds (MAAMAW-91), pp 249-262, Elsevier.
Agre, P and Chapman, D, 1987. "PENGI: An implementation of a theory of activity" In: Proceedings of the
Sixth National Conference on Artificial Intelligence (AAAI-87), pp 268-272, Seattle, WA.
Allen, JF, 1984. "Towards a general theory of action and time" Artificial Intelligence 23 (2) 123-154.
Allen, JF, Hendler, J and Tate, A (eds.), 1990. Readings in Planning, Morgan Kaufmann.
Allen, JF, Kautz, H, Pelavin, R and Tenenberg, J, 1991. Reasoning About Plans, Morgan Kaufmann.
Ambros-Ingerson, J and Steel, S, 1988. "Integrating planning, execution and monitoring" In: Proceedings of
the Seventh National Conference on Artificial Intelligence (AAAI-88), pp 83-88, St. Paul, MN.
Austin, JL, 1962. How to Do Things With Words, Oxford University Press.
Aylett, R and Eustace, D, 1994. "Multiple cooperating robots-combining planning and behaviours" In: SM
Deen (ed.) Proceedings of the 1993 Workshop on Cooperating Knowledge Based Systems (CKBS-93), pp 3-11,
DAKE Centre, University of Keele, UK.
Baecker, RM (ed.) 1993. Readings in Groupware and Computer-Supported Cooperative Work, Morgan
Kaufmann.
Barringer, H, Fisher, M, Gabbay, D, Gough, G and Owens, R, 1989. "MetateM: A framework for
programming in temporal logic" In: REX Workshop on Stepwise Refinement of Distributed Systems:
Models, Formalisms, Correctness (LNCS Volume 430), pp 94-129, Springer-Verlag.
Barwise, J and Perry, J, 1983. Situations and Attitudes, MIT Press.
Bates, J, 1994. "The role of emotion in believable agents" Communications of the ACM 37 (7) 122-125.
Bates, J, Bryan Loyall, A and Scott Reilly, W, 1992a. "An architecture for action, emotion, and social
behaviour", Technical Report CMU-CS-92-144, School of Computer Science, Carnegie-Mellon University,
Pittsburgh, PA.
Bates, J, Bryan Loyall, A and Scott Reilly, W, 1992b. "Integrating reactivity, goals, and emotion in a broad
agent", Technical Report CMU-CS-92-142, School of Computer Science, Carnegie-Mellon University,
Pittsburgh, PA.
Bell, J, 1995. "Changing attitudes" In: M Wooldridge and NR Jennings (eds.) Intelligent Agents: Theories,
Architectures, and Languages (LNAI Volume 890), pp 40-55, Springer-Verlag.
Belnap, N, 1991. "Backwards and forwards in the modal logic of agency" Philosophy and Phenomenological
Research LI (4) 777-807.
Belnap, N and Perloff, M, 1988. "Seeing to it that: a canonical form for agentives" Theoria 54 175-199.
Bond, AH and Gasser, L (eds.) 1988. Readings in Distributed Artificial Intelligence, Morgan Kaufmann.
Bratman, ME, 1987. Intentions, Plans, and Practical Reason, Harvard University Press.
Bratman, ME, 1990. "What is intention?" In: PR Cohen, JL Morgan and ME Pollack (eds.) Intentions in
Communication, pp 15-32, MIT Press.
Bratman, ME, Israel, DJ and Pollack, ME, 1988. "Plans and resource-bounded practical reasoning"
Computational Intelligence 4 349-355.
Brooks, RA, 1986. "A robust layered control system for a mobile robot" IEEE Journal of Robotics and
Automation 2 (1) 14-23.
Brooks, RA, 1990. "Elephants don't play chess" In: P Maes (ed.) Designing Autonomous Agents, pp 3-15,
MIT Press.
Brooks, RA, 1991a. "Intelligence without reason" In: Proceedings of the Twelfth International Joint
Conference on Artificial Intelligence (IJCAI-91), pp 569-595, Sydney, Australia.
Brooks, RA, 1991b. "Intelligence without representation" Artificial Intelligence 47 139-159.
Burmeister, B and Sundermeyer, K, 1992. "Cooperative problem solving guided by intentions and perception"
In: E Werner and Y Demazeau (eds.) Decentralized AI 3-Proceedings of the Third European
Workshop on Modelling Autonomous Agents and Multi-Agent Worlds (MAAMAW-91), pp 77-92,
Elsevier.
Bussmann, S and Demazeau, Y, 1994. "An agent model combining reactive and cognitive capabilities" In:
Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS-94), Munich,
Germany.
Castelfranchi, C, 1990. "Social power" In: Y Demazeau and J-P Müller (eds.) Decentralized AI-Proceedings
of the First European Workshop on Modelling Autonomous Agents in Multi-Agent Worlds (MAAMAW-89), pp 49-62, Elsevier.
Castelfranchi, C, 1995. "Guarantees for autonomy in cognitive agent architecture" In: M Wooldridge and NR
Jennings (eds.) Intelligent Agents: Theories, Architectures, and Languages (LNAI Volume 890), pp 56-70,
Springer-Verlag.
Castelfranchi, C, Miceli, M and Cesta, A, 1992. "Dependence relations among autonomous agents" In: E
Werner and Y Demazeau (eds.) Decentralized AI 3-Proceedings of the Third European Workshop on
Modelling Autonomous Agents and Multi-Agent Worlds (MAAMAW-91), pp 215-231, Elsevier.
Catach, L, 1988. "Normal multimodal logics" In: Proceedings of the Seventh National Conference on Artificial
Intelligence (AAAI-88), pp 491-495, St. Paul, MN.
specified articles from a range of document repositories (Voorhees, 1994). Another important
system in this area is called Carnot (Huhns et al., 1992), which allows pre-existing and heterogeneous
database systems to work together to answer queries that are outside the scope of any of
the individual databases.
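By way of illustration, the following sketch shows the basic shape of such an information agent: a query is fanned out to several registered sources and the answers are collated by origin. The sketch is purely illustrative Python; it is not the IRA or Carnot implementation, and the class, source and query names are invented for the example.

# Illustrative sketch of an information agent that collates answers from
# several sources; it is not the IRA or Carnot system described above.
from typing import Callable, Dict, List

Source = Callable[[str], List[str]]  # a source maps a query to a list of answers

class InformationAgent:
    def __init__(self) -> None:
        self.sources: Dict[str, Source] = {}

    def register(self, name: str, source: Source) -> None:
        """Add an information source (a database wrapper, another agent, ...)."""
        self.sources[name] = source

    def query(self, question: str) -> Dict[str, List[str]]:
        """Fan the query out to every source and collate the answers by origin."""
        answers: Dict[str, List[str]] = {}
        for name, source in self.sources.items():
            results = source(question)
            if results:
                answers[name] = results
        return answers

# Usage: each "source" is just a callable; a real system would wrap databases
# or remote agents behind the same interface.
agent = InformationAgent()
agent.register("tech-reports", lambda q: ["aop-tr.ps"] if "agent-oriented" in q else [])
agent.register("staff-directory", lambda q: ["Y. Shoham, Stanford"] if "Stanford" in q else [])
print(agent.query("agent-oriented programming at Stanford"))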
5.4 Believable agents
There is obvious potential for marrying agent technology with that of the cinema, computer games,
and virtual reality. The Oz project6 was initiated to develop:
"... artistically interesting, highly interactive, simulated worlds ... to give users the experience of living in
(not merely watching) dramatically rich worlds that include moderately competent, emotional agents"
(Bates et al., 1992b, p. 1)
To construct such simulated worlds, one must first develop believable agents: agents that "provide
the illusion of life, thus permitting the audience's suspension of disbelief" (Bates, 1994, p. 122). A
key component of such agents is emotion: agents should not be represented in a computer game or
animated film as the flat, featureless characters that appear in current computer games. They need
to show emotions; to act and react in a way that resonates in tune with our empathy and
understanding of human behaviour. The Oz group have investigated various architectures for
emotion (Bates et al., 1992a), and have developed at least one prototype implementation of their
ideas (Bates, 1994).
6 Concluding remarks
This paper has reviewed the main concepts and issues associated with the theory and practice of
intelligent agents. It has drawn together a very wide range of material, and has hopefully provided
an insight into what an agent is, how the notion of an agent can be formalised, how appropriate
agent architectures can be designed and implemented, how agents can be programmed, and the
types of applications for which agent-based solutions have been proposed. The subject matter of
this review is important because it is increasingly felt, both within academia and industry, that
intelligent agents will be a key technology as computing systems become ever more distributed,
interconnected, and open. In such environments, the ability of agents to autonomously plan and
pursue their actions and goals, to cooperate, coordinate, and negotiate with others, and to respond
flexibly and intelligently to dynamic and unpredictable situations will lead to significant improvements
in the quality and sophistication of the software systems that can be conceived and
implemented, and the application areas and problems which can be addressed.
Acknowledgements
Much of this paper was adapted from the first author's 1992 PhD thesis (Wooldridge, 1992), and as
such this work was supported by the UK Science and Engineering Research Council (now the
EPSRC). We are grateful to those people who read and commented on earlier drafts of this article,
and in particular to the participants of the 1994 workshop on agent theories, architectures, and
languages for their encouragement, enthusiasm, and helpful feedback. Finally, we would like to
thank the referees of this paper for their perceptive and helpful comments.
References
Adorni, G and Poggi, A, 1993. "An object-oriented language for distributed artificial intelligence" International
Journal of Man-Machine Studies 38 435-453.
Agha, G, 1986. ACTORS: A Model of Concurrent Computation in Distributed Systems, MIT Press.
Agha, G, Wegner, P and Yonezawa, A (eds.), 1993. Research Directions in Concurrent Object-Oriented
Programming, MIT Press.
6 Not to be confused with the Oz programming language (Henz et al., 1993).
accelerator control (Jennings et al., 1993), intelligent document retrieval (Mukhopadhyay et al.,
1986), patient care (Huang et al., 1995), telecommunications network management (Weihmayer &
Velthuijsen, 1994), spacecraft control (Schwuttke & Quan, 1993), computer integrated manufacturing
(Parunak, 1995), concurrent engineering (Cutkosky et al., 1993), transportation management
(Fischer et al., 1993), job shop scheduling (Morley & Schelberg, 1993), and steel coil
processing control (Mori et al., 1988). The classic reference to DAI is Bond and Gasser (1988),
which includes both a comprehensive review article and a collection of significant papers from the
field; a more recent review article is Chaib-draa et al. (1992).
5.2 Interface agents
Maes defines interface agents as:
"[C]omputer programs that employ artificial intelligence techniques in order to provide assistance to a user
dealing with a particular application.... The metaphor is that of a personal assistant who is collaborating
with the user in the same work environment." (Maes, 1994b, p. 71)
There are many interface agent prototype applications: for example, the NewT system is a
USENET news filter (along the lines mentioned in the second scenario that introduced this article)
(Maes, 1994a, pp. 38-39). A NewT agent is trained by giving it a series of examples, illustrating
articles that the user would and would not choose to read. The agent then begins to make
suggestions to the user, and is given feedback on its suggestions. NewT agents are not intended to
remove human choice, but to represent an extension of the human's wishes: the aim is for the agent
to be able to bring to the attention of the user articles of the type that the user has shown a
consistent interest in. Similar ideas have been proposed by McGregor, who imagines prescient
agents-intelligent administrative assistants that predict our actions, and carry out routine or
repetitive administrative procedures on our behalf (McGregor, 1992).
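The train-by-example idea can be caricatured as follows. The sketch is illustrative only (a simple feedback-weighted word score in Python); it is not Maes' actual learning algorithm, and the class name and threshold are invented for the example.

# Minimal sketch of a feedback-trained news filter in the spirit of NewT;
# the per-word weighting scheme here is illustrative, not Maes' algorithm.
from collections import defaultdict

class NewsFilterAgent:
    def __init__(self, threshold: float = 1.0) -> None:
        self.weights = defaultdict(float)   # word -> learned relevance weight
        self.threshold = threshold

    def feedback(self, article: str, liked: bool) -> None:
        """Train on an example: reinforce words from liked articles, penalise others."""
        delta = 1.0 if liked else -1.0
        for word in article.lower().split():
            self.weights[word] += delta

    def score(self, article: str) -> float:
        return sum(self.weights[w] for w in article.lower().split())

    def suggest(self, article: str) -> bool:
        """Suggest an article only if its learned score clears the threshold."""
        return self.score(article) >= self.threshold

agent = NewsFilterAgent()
agent.feedback("new results on mobile agent architectures", liked=True)
agent.feedback("celebrity gossip roundup", liked=False)
print(agent.suggest("survey of agent architectures"))  # True: overlaps the liked example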
There is much related work being done by the computer supported cooperative work (CSCW)
community. CSCW is informally defined by Baecker to be "computer assisted coordinated activity
such as problem solving and communication carried out by a group of collaborating individuals"
(Baecker, 1993, p. 1). The primary emphasis of CSCW is on the development of (hardware and)
software tools to support collaborative human work-the term groupware has been coined to
describe such tools. Various authors have proposed the use of agent technology in groupware. For
example, in his participant systems proposal, Chang suggests systems in which humans collaborate
with not only other humans, but also with artificial agents (Chang, 1987). We refer the interested
reader to the collection of papers edited by Baecker (1993) and the article by Greif (1994) for more
details on CSCW.
5.3 Information agents and cooperative information systems
An information agent is an agent that has access to at least one, and potentially many information
sources, and is able to collate and manipulate information obtained from these sources to answer
queries posed by users and other information agents (the network of interoperating information
sources is often referred to as an intelligent and cooperative information system (Papazoglou et al.,
1992)). The information sources may be of many types, including, for example, traditional
databases as well as other information agents. Finding a solution to a query might involve an agent
accessing information sources over a network. A typical scenario is that of a user who has heard
about somebody at Stanford who has proposed something called agent-oriented programming.
The agent is asked to investigate, and, after a careful search of various FTP sites, returns with an
appropriate technical report, as well as the name and contact details of the researcher involved. A
number of studies have been made of information agents, including a theoretical study of how
agents are able to incorporate information from different sources (Levy et al., 1994; Gruber, 1991 ),
as well as a prototype system called IRA (information retrieval agent) that is able to search for loosely
agency discussed in this paper) is particularly important, as it potentially makes agent technology
available to a user base that is industrially (rather than academically) oriented.
While the development of various languages for agent-based applications is of undoubted
importance, it is worth noting that all of the academically produced languages mentioned above are
in some sense prototypes. Each was designed either to illustrate or examine some set of principles,
and these languages were not, therefore, intended as production tools. Work is thus needed, both
to make the languages more robust and usable, and to investigate the usefulness of the concepts
that underpin them. As with architectures, work is needed to investigate the kinds of domain for
which the different languages are appropriate.
Finally, we turn to the relationship between an agent language and the corresponding theories
that we discussed in section 2. As with architectures, it is possible to divide agent languages into
various different categories. Thus AGENT0, PLACA, Concurrent MetateM, APRIL, and MAIL
are deliberative languages, as they are all based on traditional symbolic AI techniques. ABLE, on
the other hand, is a purely reactive language. With AGENT0 and PLACA, there is a clear (if
informal) relationship between the programming language and the logical theory the language is
intended to realise. In both cases, the programming language represents a subset of the
corresponding logic, which can be interpreted directly. However, the relationship between logic
and language is not formally defined. Like these two languages, Concurrent MetateM is intended
to correspond to a logical theory. But the relationship between Concurrent MetateM and the
corresponding logic is much more closely defined, as this language is intended to be a directly
executable version of the logic. Agents in Concurrent MetateM, however, are not defined in terms
of mentalistic constructs. For a discussion on the relationship between Concurrent MetateM and
AGENT0-like languages, see Fisher (1995).
4.2 Further reading
A recent collection of papers on concurrent object systems is Agha et al. (1993). Various languages
have been proposed that marry aspects of object-based systems with aspects of Shoham's agent-oriented
proposal. Two examples are AGENTSPEAK and DAISY. AGENTSPEAK is loosely
based on the PRS agent architecture, and incorporates aspects of concurrent-object technology
(Weerasooriya et al., 1995). In contrast, DAISY is based on the concurrent-object language CUBL
(Adorni & Poggi, 1993), and incorporates aspects of the agent-oriented proposal (Poggi, 1995).
Other languages of interest include OZ (Henz et al., 1993) and IC PROLOG II (Chu, 1993).
The latter, as its name suggests, is an extension of PROLOG, which includes multiple-threads,
high-level communication primitives, and some object-oriented features.
5 Applications
Although this article is not intended primarily as an applications review, it is nevertheless worth
pausing to examine some of the current and potential applications of agent technology.
5.1 Cooperative problem solving and distributed Al
As we observed in section 1, there has been a marked flowering of interest in agent technology
since the mid-1980s. This interest is in part due to the upsurge of interest in Distributed AI.
Although DAI encompasses most of the issues we have discussed in this paper, it should be stressed
that the classical emphasis in DAI has been on macro phenomena (the social level), rather than the
micro phenomena (the agent level) that we have been concerned with in this paper. DAI thus looks
at such issues as how a group of agents can be made to cooperate in order to efficiently solve
problems, and how the activities of such a group can be efficiently coordinated. DAI researchers
have applied agent technology in a variety of areas. Example applications include power systems
management (Wittig, 1992; Varga et al., 1994), air-traffic control (Steeb et al., 1988), particle
designed to provide the core features required to realise most agent architectures and systems.
Thus APRIL provides facilities for multi-tasking (via processes, which are treated as first-class
objects, and a Unix-like fork facility), communication (with powerful message-passing facilities
supporting network-transparent agent-to-agent links); and pattern matching and symbolic processing
capabilities. The generality of APRIL comes at the expense of powerful abstractions-an
APRIL system builder must implement an agent or system architecture from scratch using
APRIL's primitives. In contrast, the MAIL language provides a rich collection of pre-defined
abstractions, including plans and multi-agent plans. APRIL was originally envisaged as the
implementation language for MAIL. The MAIL system has been used to implement several
prototype multi-agent systems, including an urban traffic management scenario (Haugeneder and
Steiner, 1994).
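The flavour of APRIL's process-and-message-passing style can be imitated with ordinary threads and queues, as in the sketch below. This is not APRIL's own syntax or semantics; the spawn/send/receive helpers are invented stand-ins for its primitives.

# A rough imitation of APRIL's process-plus-message-passing style using
# ordinary Python threads and queues; APRIL's actual primitives and syntax differ.
import queue
import threading

mailboxes = {}  # process name -> message queue

def spawn(name, behaviour):
    """Create a named process with its own mailbox (loosely like APRIL's fork)."""
    mailboxes[name] = queue.Queue()
    t = threading.Thread(target=behaviour, args=(name,), daemon=True)
    t.start()
    return t

def send(to, message):
    mailboxes[to].put(message)

def receive(name):
    return mailboxes[name].get()

def echo_agent(name):
    # Pattern-match on the message structure, as an APRIL handler might.
    sender, content = receive(name)
    send(sender, (name, f"echo: {content}"))

def main_agent(name):
    send("echo", (name, "hello"))
    print(receive(name))

spawn("echo", echo_agent)
t = spawn("main", main_agent)
t.join()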
4.0.6 General Magic, Inc.-TELESCRIPT
TELESCRIPT is a language-based environment for constructing agent societies that has been
developed by General Magic, Inc.: it is perhaps the first commercial agent language.
TELESCRIPT technology is the name given by General Magic to a family of concepts and
techniques they have developed to underpin their products. There are two key concepts in
TELESCRIPT technology: places and agents. Places are virtual locations that are occupied by
agents. Agents are the providers and consumers of goods in the electronic marketplace applications
that TELESCRIPT was developed to support. Agents are software processes, and are mobile: they
are able to move from one place to another, in which case their program and state are encoded and
transmitted across a network to another place, where execution recommences. Agents are able to
communicate with one-another: if they occupy different places, then they can connect across a
network, in much the standard way; if they occupy the same location, then they can meet one
another.
Four components have been developed by General Magic to support TELESCRIPT tech
nology. The first is the TELESCRIPT language. This language "is designed for carrying out
complex communication tasks: navigation, transportation, authentication, access control, and so
on" (White, 1994, p.17). The second component is the TELESCRIPT engine. An engine acts as an
interpreter for the TELESCRIPT language, maintains places, schedules agents for execution,
manages communication and agent transport, and finally, provides an interface with other
applications. The third component is the TELESCRIPT protocol set. These protocols deal
primarily with the encoding and decoding of agents, to support transport between places. The final
component is a set of software tools to support the development of TELESCRIPT applications.
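The underlying idea of places and migrating agents can be sketched as follows. This is emphatically not the TELESCRIPT language or engine, whose API is not reproduced here; Python pickling merely stands in for the encoding and transmission of an agent's program and state, and the class names are invented.

# Conceptual sketch of places and migrating agents. This is NOT TELESCRIPT;
# pickling a Python object merely stands in for "encoding program and state
# for transmission to another place".
import pickle

class Place:
    """A virtual location that can accept and resume visiting agents."""
    def __init__(self, name):
        self.name = name

    def accept(self, encoded_agent: bytes):
        agent = pickle.loads(encoded_agent)   # decode program + state
        agent.resume(self)                    # execution recommences here

class ShoppingAgent:
    def __init__(self, item):
        self.item = item
        self.offers = []                      # state travels with the agent

    def go(self, place: Place):
        place.accept(pickle.dumps(self))      # encode self and "transmit"

    def resume(self, place: Place):
        # On arrival the agent would meet local agents and gather offers.
        self.offers.append(f"offer for {self.item} at {place.name}")
        print(self.offers)

market = Place("electronic-market")
ShoppingAgent("flight ticket").go(market)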
4.0.7 Connah and Wavish-ABLE
A group at Philips research labs in the UK have developed an Agent Behaviour Language (ABLE),
in which agents are programmed in terms of simple, rule-like licences (Connah & Wavish, 1990;
Wavish, 1992). Licences may include some representation of time (though the language is not
based on any kind of temporal logic): they loosely resemble behaviours in the subsumption
architecture (see above). ABLE can be compiled down to a simple digital machine, realised in the
"C" programming language. The idea is similar to situated automata, though there appears to
be no equivalent theoretical foundation. The result of the compilation process is a very fast
implementation, which has been used to control a Compact Disk-Interactive (CD-I) application.
ABLE has recently been extended to a version called Real-Time ABLE (RTA) (Wavish &
Graham, 1995).
4.1 Discussion
The emergence of various language-based software tools for building agent applications is clearly
an important development for the wider acceptance and use of agent technology. The release of
TELESCRIPT, a commercial agent language (albeit one that does not embody the strong notion of
CAN_a^5 open(door)^8 ⇒ B_b^5 CAN_a^5 open(door)^8
This formula is read: "if at time 5 agent a can ensure that the door is open at time 8, then at time 5
agent b believes that at time 5 agent a can ensure that the door is open at time 8".
Corresponding to the logic is the AGENT0 programming language. In this language, an agent is
specified in terms of a set of capabilities (things the agent can do), a set of initial beliefs and
commitments, and a set of commitment rules. The key component, which determines how the agent
acts, is the commitment rule set. Each commitment rule contains a message condition, a mental
condition, and an action. To determine whether such a rule fires, the message condition is matched
against the messages the agent has received; the mental condition is matched against the beliefs of
the agent. If the rule fires, then the agent becomes committed to the action. Actions may be private,
corresponding to an internally executed subroutine, or communicative, i.e., sending messages.
Messages are constrained to be one of three types: "requests" or "unrequests" to perform or refrain
from actions, and "inform" messages, which pass on information-Shoham indicates that he took
his inspiration for these message types from speech act theory (Searle, 1969; Cohen & Perrault,
1979). Request and unrequest messages typically result in the agent's commitments being
modified; inform messages result in a change to the agent's beliefs.
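To make the cycle concrete, the sketch below shows one way commitment rules of this shape might be interpreted. It is illustrative Python, not Shoham's AGENT0 interpreter, and the rule and message encodings are invented for the example.

# Sketch of an AGENT0-style agent: commitment rules fire on (message condition,
# mental condition) pairs and commit the agent to actions. Illustrative only;
# this is not Shoham's AGENT0 interpreter.
class Agent0Style:
    def __init__(self, beliefs, capabilities, rules):
        self.beliefs = set(beliefs)          # initial beliefs
        self.capabilities = capabilities     # action name -> callable
        self.rules = rules                   # (message condition, mental condition, action)
        self.commitments = []

    def handle(self, message):
        kind, content = message
        if kind == "inform":                 # inform messages change beliefs
            self.beliefs.add(content)
            return
        if kind == "unrequest":              # unrequests retract matching commitments
            self.commitments = [c for c in self.commitments if c != content]
            return
        for msg_cond, mental_cond, action in self.rules:
            # A commitment rule fires when the message matches its message
            # condition and the mental condition holds of the current beliefs.
            if msg_cond == (kind, content) and mental_cond <= self.beliefs:
                self.commitments.append(action)

    def step(self):
        while self.commitments:              # discharge commitments as actions
            self.capabilities[self.commitments.pop(0)]()

agent = Agent0Style(
    beliefs={"door-unlocked"},
    capabilities={"open-door": lambda: print("opening the door")},
    rules=[(("request", "open-door"), {"door-unlocked"}, "open-door")],
)
agent.handle(("request", "open-door"))
agent.step()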
4.0.3 Thomas-PLACA
AGENT0 was only ever intended as a prototype, to illustrate the principles of AOP. A more
refined implementation was developed by Thomas, for her 1993 doctoral thesis (Thomas, 1993).
Her Planning Communicating Agents (PLACA) language was intended to address one severe
drawback to AGENT0: the inability of agents to plan, and communicate requests for action via
high-level goals. Agents in PLACA are programmed in much the same way as in AGENT0, in
terms of mental change rules. The logical component of PLACA is similar to AGENT0's, but
includes operators for planning to do actions and achieve goals. The semantics of the logic and its
properties are examined in detail. However, PLACA is not at the "production" stage; it is an
experimental language.
4.0.4 Fisher-Concurrent MetateM
One drawback with both AGENT0 and PLACA is that the relationship between the logic and
interpreted programming language is only loosely defined: in neither case can the programming
language be said to truly execute the associated logic. The Concurrent MetateM language
developed by Fisher can make a stronger claim in this respect (Fisher, 1994). A Concurrent
MetateM system contains a number of concurrently executing agents, each of which is able to
communicate with its peers via asynchronous broadcast message passing. Each agent is programmed
by giving it a temporal logic specification of the behaviour that it is intended the agent
should exhibit. An agent's specification is executed directly to generate its behaviour. Execution of
the agent program corresponds to iteratively building a logical model for the temporal agent
specification. It is possible to prove that the procedure used to execute an agent specification is
correct, in that if it is possible to satisfy the specification, then the agent will do so (Barringer et al.,
1989).
The logical semantics of Concurrent MetateM are closely related to the semantics of temporal
logic itself. This means that, amongst other things, the specification and verification of Concurrent
MetateM systems is a realistic proposition (Fisher & Wooldridge, 1993). At the time of writing,
only prototype implementations of the language are available; full implementations are expected
soon.
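The execution idea (on the basis of the past, do the future) can be caricatured as follows. The sketch is illustrative only, its rule representation is invented for the example, and Fisher's Concurrent MetateM is far richer (full temporal syntax, and broadcast communication between concurrent agents).

# Caricature of MetateM-style execution: each rule says "if this condition held
# in the most recent past state, commit to making this proposition true in the
# future". Illustrative only; not Fisher's Concurrent MetateM implementation.
def execute(rules, initial, steps):
    history = [set(initial)]
    commitments = set()                      # outstanding eventualities
    for _ in range(steps):
        last = history[-1]
        # Past-time antecedents satisfied in the last state fire their rules.
        for past, future in rules:
            if past <= last:
                commitments.add(future)
        # Build the next state by satisfying what we can (here: everything).
        now = set(commitments)
        commitments -= now
        history.append(now)
    return history

rules = [
    ({"ask"}, "give"),        # whenever asked, eventually give
    ({"give"}, "thank"),      # whenever something is given, eventually thank
]
print(execute(rules, initial={"ask"}, steps=3))
# [{'ask'}, {'give'}, {'thank'}, set()]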
4.0.5 The IMAGINE Project-APRIL and MAIL
APRIL (McCabe & Clark, 1995) and MAIL (Haugeneder et al., 1994) are two languages for
developing multi-agent applications that were developed as part of the ESPRIT project IMAGINE
(Haugeneder, 1994). The two languages are intended to fulfil quite different roles. APRIL was
plans, which are essentially decision trees that can be used to efficiently determine an appropriate
action in any situation (Schoppers, 1987). Another proposal for building "reactive planners"
involves the use of reactive action packages (Firby, 1987).
Other hybrid architectures are described in Hayes-Roth (1990), Downs and Reichgelt (1991),
Aylett and Eustace (1994) and Bussmann and Demazeau (1994).
4 Agent languages
As agent technology becomes more established, we might expect to see a variety of software tools
become available for the design and construction of agent-based systems; the need for software
support tools in this area was identified as long ago as the mid-1980s (Gasser et al., 1987). The
emergence of a number of prototypical agent languages is one sign that agent technology is
becoming more widely used, and that many more agent-based applications are likely to be
developed in the near future. By an agent language, we mean a system that allows one to program
hardware or software computer systems in terms of some of the concepts developed by agent
theorists. At the very least, we expect such a language to include some structure corresponding to
an agent. However, we might also expect to see some other attributes of agency (beliefs, goals, or
other mentalistic notions) used to program agents. Some of the languages we consider below
embody this strong notion of agency; others do not. However, all have properties that make them
interesting from the point of view of this review.
4.0.1 Concurrent object languages
Concurrent object languages are in many respects the ancestors of agent languages. The notion of a
self-contained concurrently executing object, with some internal state that is not directly accessible
to the outside world, responding to messages from other such objects, is very close to the concept of
an agent as we have defined it. The earliest concurrent object framework was Hewitt's Actor model
(Hewitt, 1977; Agha, 1986); another well-known example is the ABCL system (Yonezawa, 1990).
For a discussion on the relationship between agents and concurrent object programming, see
Gasser and Briot (1992).
4.0.2 Shoham-agent-oriented programming
Yoav Shoham has proposed a "new programming paradigm, based on a societal view of
computation" (Shoham, 1990, p. 4; 1993). The key idea that informs this agent-oriented program
ming (AOP) paradigm is that of directly programming agents in terms of the mentalistic,
intentional notions that agent theorists have developed to represent the properties of agents. The
motivation behind such a proposal is that, as we observed in section 2, humans use the intentional
stance as an abstraction mechanism for representing the properties of complex systems. In the same
way that we use the intentional stance to describe humans, it might be useful to use the intentional
stance to program machines.
Shoham proposes that a fully developed AOP system will have three components:
• a logical system for defining the mental state of agents;
• an interpreted programming language for programming agents;
• an "agentification" process, for compiling agent programs into low-level executable systems.
At the time of writing, Shoham has only published results on the first two components. (In Shoham
(1990, p. 12), he wrote that "the third is still somewhat mysterious to me", though later in the paper
he indicated that he was thinking along the lines of Rosenschein and Kaelbling's situated automata
paradigm (Rosenschein & Kaelbling, 1986).) Shoham's first attempt at an AOP language was
the AGENT0 system. The logical component of this system is a quantified multi-modal logic,
allowing direct reference to time. No semantics are given, but the logic appears to be based on
Thomas et al. (1991). The logic contains three modalities: belief, commitment and ability. The
following is an acceptable formula of the logic, illustrating its key properties:
relationships in Al, of which a particularly relevant example is Rao and Georgeff (1992a). This
article discusses the relationship between the abstract BDI logics developed by Rao et al. for
reasoning about agents, and an abstract "agent interpreter", based on the PRS. However, the
relationship between the logic and the architecture is not formalised; the BDI logic is not used to
give a formal semantics to the architecture, and in fact it is difficult to see how such a logic could be
used for this purpose. A serious attempt to define the semantics of a (somewhat simple) agent
architecture is presented in Wooldridge (1995), where a formal model of the system MyWorld, in
which agents are directly programmed in terms of beliefs and intentions, is used as the basis upon
which to develop a logic for reasoning about MyWorld systems. Although the logic contains
modalities for representing beliefs and intentions, the semantics of these modalities are given in
terms of the agent architecture itself, and the problems associated with possible worlds do not,
therefore, arise; this work builds closely on Konolige's models of the beliefs of symbolic AI systems
(Konolige, 1986a). However, more work needs to be done using this technique to model more
complex architectures, before the limitations and advantages of the approach are well-understood.
Like purely deliberative architectures, some reactive systems are also underpinned by a
relatively transparent theory. Perhaps the best example is the situated automata paradigm, where
an agent is specified in terms of a logic of knowledge, and this specification is compiled down to a
simple digital machine that can be realistically said to realise its corresponding specification.
However, for other purely reactive architectures, based on more ad hoc principles, it is not clear
that there is any transparent underlying theory. It could be argued that hybrid systems also tend to
be ad hoc, in that while their structures are well-motivated from a design point of view, it is not clear
how one might reason about them, or what their underlying theory is. In particular, architectures
that contain a number of independent activity producing subsystems, which compete with each
other in real time to control the agent's activities, seem to defy attempts at formalisation. It is a
matter of debate whether this need be considered a serious disadvantage, but one argument is that
unless we have a good theoretical model of a particular agent or agent architecture, then we shall
never really understand why it works. This is likely to make it difficult to generalise and reproduce
results in varying domains.
3.5 Further reading
Most introductory textbooks on AI discuss the physical symbol system hypothesis; a good recent
example of such a text is Ginsberg (1993). A detailed discussion of the way that this hypothesis has
affected thinking in symbolic AI is provided in Shardlow (1990). There are many objections to the
symbolic AI paradigm, in addition to those we have outlined above. Again, introductory textbooks
provide the stock criticisms and replies.
There is a wealth of material on planning and planning agents. See Georgeff (1987) for an
overview of the state of the art in planning (as it was in 1987), Allen et al. (1990) for a thorough
collection of papers on planning (many of the papers cited above are included), and Wilkins (1988)
for a detailed description of SIPE, a sophisticated planning system used in a real-world application
(the control of a brewery!). Another important collection of planning papers is Georgeff and
Lansky (1986). The book by Dean and Wellman and the book by Allen et al. contain much useful
related material (Dean and Wellman, 1991; Allen et al., 1991). There is now a regular international
conference on planning; the proceedings of the first were published as Hendler (1992).
The collection of papers edited by Maes (1990a) contains many interesting papers on alterna
tives to the symbolic AI paradigm. Kaelbling (1986) presents a clear discussion of the issues
associated with developing resource-bounded rational agents, and proposes an agent architecture
somewhat similar to that developed by Brooks. A proposal by Nilsson for teleo-reactive programs-
goal-directed programs that nevertheless respond to their environment-is described in Nilsson
(1992). The proposal draws heavily on the situated automata paradigm; other work based on this
paradigm is described in Shoham (1990) and Kiss and Reichgelt (1992). Schoppers has proposed
compiling plans in advance, using traditional planning techniques, in order to develop universal
model, various patterns of behaviour may be activated, dropped, or executed. As a result of PoB
execution, the plan-based component and cooperation component may be asked to generate plans
and joint plans respectively, in order to achieve the goals of the agent. This ultimately results in
primitive actions and messages being generated by the world interface.
3.4 Discussion
The deliberative, symbolic paradigm is, at the time of writing, the dominant approach in (D)AI.
This state of affairs is likely to continue, at least for the near future. There seem to be several
reasons for this. Perhaps most importantly, many symbolic AI techniques (such as rule-based
systems) carry with them an associated technology and methodology that is becoming familiar to
mainstream computer scientists and software engineers. Despite the well-documented problems
with symbolic AI systems, this makes symbolic AI agents (such as GRATE*, Jennings, 1993b) an
attractive proposition when compared to reactive systems, which have as yet no associated
methodology. The need for a development methodology seems to be one of the most pressing
requirements for reactive systems. Anecdotal descriptions of current reactive systems implemen
tations indicate that each such system must be individually hand-crafted through a potentially
lengthy period of experimentation (Wavish and Graham, 1995). This kind of approach seems
unlikely to be usable for large systems. Some researchers have suggested that techniques from the
domain of genetic algorithms or machine learning might be used to get around these development
problems, though this work is at a very early stage.
There is a pressing need for research into the capabilities of reactive systems, and perhaps in
particular to the types of application for which these types of system are best suited; some
preliminary work has been done in this area, using a problem domain known as the Tileworld
(Pollack & Ringuette, 1990). With respect to reactive systems, Ferguson suggests that:
"[T]he strength of purely non-deliberative architectures lies in their ability to exploit local patterns of
activity in their current surroundings in order to generate more or less hardwired action responses .. for a
given set of stimuli Successful operation using this method pre-supposes: i that the complete set of
environmental stimuli required for unambiguously determining action sequences is always present and
readily identifiable-in other words, that the agent's activity can be situationally determined; ii that the
agent has no global task constraints ... which need to be reasoned about at run time; and iii that the agent's
goal or desire system is capable of being represented implicitly in the agent's structure according to a fixed,
pre-compiled ranking scheme." (Ferguson. 1992a, pp. 29-30}
Hybrid architectures, such as the PRS, TouringMachines, InteRRaP, and COSY, are currently a
very active area of work, and arguably have some advantages over both purely deliberative and
purely reactive architectures. However, an outstanding problem with such architectures is that of
combining multiple interacting subsystems (deliberative and reactive) cleanly, in a well-motivated
control framework. Humans seem to manage different levels of abstract behaviour with comparative
ease; it is not clear that current hybrid architectures can do so.
Another area where as yet very little work has been done is the generation of goals and
intentions. Most work in AI assumes that an agent has a single, well-defined goal that it must
achieve. But if agents are ever to be really autonomous, and act pro-actively, then they must be
able to generate their own goals when either the situation demands, or the opportunity arises.
Some preliminary work in this area is Norman and Long (1995). Similarly, little work has yet been
done into the management and scheduling of multiple, possibly conflicting goals; some preliminary
work is reported in Dongha (1995).
Finally, we turn to the relationship between agent theories and agent architectures. To what
extent do the agent architectures reviewed above correspond to the theories discussed in section 2?
What, if any, is the theory that underpins an architecture? With respect to purely deliberative
architectures, there is a wealth of underlying theory. The close relationship between symbolic
processing systems and mathematical logic means that the semantics of such architectures can often
be represented as a logical system of some kind. There is a wealth of work establishing such
layers, and in particular, to deal with conflicting action proposals from the different layers. The
control framework does this by using control rules.
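To suggest what such mediation might look like, the sketch below has each layer propose an action and applies simple suppression rules to resolve conflicts. It is a schematic illustration only, not Ferguson's actual control-rule language; the layers and rules are invented for the example.

# Schematic sketch of a layered agent whose layers each propose actions, with
# control rules resolving conflicts between them. Not Ferguson's TouringMachines
# rule language; the layer and rule definitions are invented for illustration.
def reactive_layer(percepts):
    return "swerve" if "obstacle-ahead" in percepts else None

def planning_layer(percepts):
    return "follow-route"                     # always has a plan step to offer

def modelling_layer(percepts):
    return "replan" if "goal-conflict" in percepts else None

LAYERS = [reactive_layer, planning_layer, modelling_layer]

# Control rules: a (condition, suppressed action) pair drops a proposal when
# the condition holds, mediating between the layers.
CONTROL_RULES = [
    (lambda percepts: "obstacle-ahead" in percepts, "follow-route"),
]

def select_action(percepts):
    proposals = [p for p in (layer(percepts) for layer in LAYERS) if p]
    for condition, suppressed in CONTROL_RULES:
        if condition(percepts):
            proposals = [p for p in proposals if p != suppressed]
    return proposals[0] if proposals else "idle"

print(select_action({"obstacle-ahead"}))      # swerve (the plan step is suppressed)
print(select_action(set()))                   # follow-route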
3.3.3 Burmeister et al.-COSY
The COSY architecture is a hybrid BDI-architecture that includes elements of both the PRS and
IRMA, and was developed specifically for a multi-agent testbed called DASEDIS (Burmeister &
Sundermeyer, 1992; Haddadi, 1994). The architecture has five main components: (i) sensors; (ii)
actuators; (iii) communications; (iv) cognition; and (v) intention. The first three components are
straightforward: the sensors receive non-communicative perceptual input, the actuators allow the
agent to perform non-communicative actions, and the communications component allows the
agent to send messages. Of the remaining two components, the intention component contains
"long-term goals, attitudes, responsibilities and the like ... the control elements taking part in the
reasoning and decision-making of the cognition component" (Haddadi, 1994, p. 15), and the
cognition component is responsible for mediating between the intentions of the agent and its beliefs
about the world, and choosing an appropriate action to perform. Within the cognition component
is the knowledge base containing the agent's beliefs, and three procedural components: a script
execution component, a protocol execution component, and a reasoning, deciding and reacting
component. A script is very much like a script in Schank's original sense: it is a stereotypical recipe
or plan for achieving a goal. Protocols are stereotypical dialogues representing cooperation
frameworks such as the contract net (Smith, 1980). The reasoning, deciding and reacting
component is perhaps the key component in COSY. It is made up of a number of other subsystems,
and is structured rather like the PRS and IRMA (see above). An agenda is maintained, that
contains a number of active scripts. These scripts may be invoked in a goal-driven fashion (to satisfy
one of the agent's intentions), or a data-driven fashion (in response to the agent's current
situation). A filter component chooses between competing scripts for execution.
3.3.4 Müller et al.-InteRRaP
InteRRaP, like Ferguson's TouringMachines, is a layered architecture, with each successive layer
representing a higher level of abstraction than the one below it (Müller & Pischel, 1994; Müller et
al., 1995; Müller, 1994). In InteRRaP, these layers are further subdivided into two vertical layers:
one containing layers of knowledge bases, the other containing various control components, that
interact with the knowledge bases at their level. At the lowest level is the world interface control
component, and the corresponding world level knowledge base. The world interface component,
as its name suggests, manages the interface between the agent and its environment, and thus deals
with acting, communicating, and perception.
Above the world interface component is the behaviour-based component. The purpose of this
component is to implement and control the basic reactive capability of the agent. This component
manipulates a set of patterns of behaviour (PoB). A PoB is a structure containing a pre-condition
that defines when the PoB is to be activated, various conditions that define the circumstances under
which the PoB is considered to have succeeded or failed, a post-condition (a la STRIPS (Fikes &
Nilsson, 1971)), and an executable body, that defines what action should be performed if the PoB is
executed. (The action may be a primitive, resulting in a call on the agent's world interface, or may
involve calling on a higher-level layer to generate a plan.)
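A PoB can be pictured as a record of this kind (an illustrative sketch only; InteRRaP's real data structures and control regime are considerably more elaborate, and the example PoB below is invented):

# Illustrative sketch of an InteRRaP-style pattern of behaviour (PoB) as a
# record with activation, success/failure conditions, a post-condition and a
# body; the actual InteRRaP structures and control cycle are more elaborate.
from dataclasses import dataclass
from typing import Callable, Set

State = Set[str]

@dataclass
class PoB:
    precondition: Callable[[State], bool]     # when to activate the PoB
    succeeded: Callable[[State], bool]        # circumstances counting as success
    failed: Callable[[State], bool]           # circumstances counting as failure
    postcondition: Set[str]                   # effects, a la a STRIPS add-list
    body: Callable[[State], str]              # primitive action or request to plan

def step(pobs, world: State) -> State:
    """Fire every active PoB once and apply its post-condition to the world."""
    for pob in pobs:
        if pob.precondition(world) and not pob.succeeded(world) and not pob.failed(world):
            print("executing:", pob.body(world))
            world = world | pob.postcondition
    return world

avoid = PoB(
    precondition=lambda w: "obstacle" in w,
    succeeded=lambda w: "clear-path" in w,
    failed=lambda w: "stuck" in w,
    postcondition={"clear-path"},
    body=lambda w: "turn-left",               # a primitive world-interface call
)
print(step([avoid], {"obstacle"}))            # executes turn-left, adds clear-path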
Above the behaviour-based component in InteRRaP is the plan-based component. This
component contains a planner that is able to generate single-agent plans in response to requests
from the behaviour-based component. The knowledge-base at this layer contains a set of plans,
including a plan library. The highest layer of Inte RRa P is the cooperation component. This
component is able to generate joint plans, that satisfy the goals of a number of agents, by
elaborating plans selected from a plan library. These plans are generated in response to requests
from the plan-based component.
Control in InteRRaP is both data- and goal-driven. Perceptual input is managed by the world
interface, and typically results in a change to the world model. As a result of changes to the world
some kind of precedence over the deliberative one, so that it can provide a rapid response to
important environmental events. This kind of structuring leads naturally to the idea of a layered
architecture, of which TouringMachines (Ferguson, 1992) and InteRRaP (Müller & Pischel, 1994)
are good examples. (These architectures are described below.) In such an architecture, an agent's
control subsystems are arranged into a hierarchy, with higher layers dealing with information at
increasing levels of abstraction. Thus, for example, the very lowest layer might map raw sensor
data directly onto effector outputs, while the uppermost layer deals with long-term goals. A key
problem in such architectures is what kind of control framework to embed the agent's subsystems
in, to manage the interactions between the various layers.
3.3.1 Georgeff and Lansky-PRS
One of the best-known agent architectures is the Procedural Reasoning System (PRS), developed
by Georgeff and Lansky (1987). Like IRMA (see above), the PRS is a belief-desire-intention
architecture, which includes a plan library, as well as explicit symbolic representations of beliefs,
desires, and intentions. Beliefs are facts, either about the external world or the system's internal
state. These facts are expressed in classical first-order logic. Desires are represented as system
behaviours (rather than as static representations of goal states). A PRS plan library contains a set of
partially-elaborated plans, called knowledge areas (KA), each of which is associated with an
invocation condition. This condition determines when the KA is to be activated. KAs may be
activated in a goal-driven or data-driven fashion; KAs may also be reactive, allowing the PRS to
respond rapidly to changes in its environment. The set of currently active KAs in a system represent
its intentions. These various data structures are manipulated by a system interpreter, which is
responsible for updating beliefs, invoking KAs, and executing actions. The PRS has been
evaluated in a simulation of maintenance procedures for the space shuttle, as well as other domains
(Georgeff & Ingrand, 1989).
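A highly simplified sketch of that interpreter loop is given below; it is illustrative only (the real PRS is far more sophisticated), and the example knowledge area is invented.

# Highly simplified PRS-flavoured interpreter loop: knowledge areas (KAs)
# carry invocation conditions; those whose conditions match the current
# beliefs and goals are adopted as intentions and their bodies executed.
# Illustrative only; the real PRS is considerably more sophisticated.
class KA:
    def __init__(self, name, invocation, body):
        self.name = name
        self.invocation = invocation      # predicate over (beliefs, goals)
        self.body = body                  # sequence of primitive actions

def interpreter(beliefs, goals, plan_library, cycles=2):
    intentions = []
    for _ in range(cycles):
        # 1. Invoke KAs whose invocation condition matches the current state.
        for ka in plan_library:
            if ka.invocation(beliefs, goals) and ka not in intentions:
                intentions.append(ka)
        # 2. Execute an adopted intention and record its effects as beliefs.
        if intentions:
            ka = intentions.pop(0)
            for action in ka.body:
                print("executing:", action)
                beliefs.add("done-" + action)
    return beliefs

library = [
    KA("fix-jet",
       invocation=lambda b, g: ("jet-failed" in b and "shuttle-safe" in g
                                and "done-switch-to-backup" not in b),
       body=["diagnose-jet", "switch-to-backup"]),
]
print(interpreter({"jet-failed"}, {"shuttle-safe"}, library))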
3.3.2 Ferguson-TouringMachines
For his 1992 doctoral thesis, Ferguson developed the TouringMachines hybrid agent architecture
(Ferguson, 1992a,b).5 The architecture consists of perception and action subsystems, which
interface directly with the agent's environment, and three control layers, embedded in a control
framework, which mediates between the layers. Each layer is an independent, activity-producing,
concurrently executing process.
The reactive layer generates potential courses of action in response to events that happen too
quickly for other layers to deal with. It is implemented as a set of situation-action rules, in the style
of Brooks' subsumption architecture (see above).
The planning layer constructs plans and selects actions to execute in order to achieve the agent's
goals. This layer consists of two components: a planner, and a focus of attention mechanism. The
planner integrates plan generation and execution, and uses a library of partially elaborated plans,
together with a topological world map, in order to construct plans that will accomplish the agent's
main goal. The purpose of the focus of attention mechanism is to limit the amount of information
that the planner must deal with, and so improve its efficiency. It does this by filtering out irrelevant
information from the environment.
The modelling layer contains symbolic representations of the cognitive state of other entities in
the agent's environment. These models are manipulated in order to identify and resolve goal
conflicts-situations where an agent can no longer achieve its goals, as a result of unexpected
interference.
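A rough sketch of the three-layer arrangement might look like this; the concrete behaviours and the simple priority-based mediation rule are assumptions for illustration, not Ferguson's actual control framework.

```python
# Sketch of three proposing layers mediated by a control framework.
# The priority order used here is an assumption made for the example.
from typing import Dict, Optional

def reactive_layer(percept: Dict) -> Optional[str]:
    return "swerve" if percept.get("kerb_ahead") else None

def planning_layer(percept: Dict) -> Optional[str]:
    return "follow_route"            # stands in for plan selection from a library

def modelling_layer(percept: Dict) -> Optional[str]:
    # stands in for conflict resolution against models of other agents
    return "yield" if percept.get("goal_conflict") else None

def mediate(percept: Dict) -> str:
    proposals = {
        "reactive":  reactive_layer(percept),
        "modelling": modelling_layer(percept),
        "planning":  planning_layer(percept),
    }
    for layer in ("reactive", "modelling", "planning"):   # assumed priority order
        if proposals[layer] is not None:
            return proposals[layer]
    return "noop"

print(mediate({"kerb_ahead": True}))     # -> swerve
print(mediate({"goal_conflict": True}))  # -> yield
```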
The three layers are able to communicate with each other (via message passing), and are
embedded in a control framework. The purpose of this framework is to mediate between the
5 It is worth noting that Ferguson's thesis gives a good overview of the problems and issues associated with
building rational, resource-bounded agents. Moreover, the description given of the Touring Machines
architecture is itself extremely clear. We recommend it as a point of departure for further reading.
M. WOOLDRIDGE AND NICHOLAS JENNINGS 134
"[An agent} ... x iss aid to carry the information that p inw orld states, written s I= K(x,p), if for all world
states in which x has the same value as it does ins, the proposition pis true." (Kae!bling & Rosenschcin,
1990, p. 36)
An agent is specified in terms of two components: perception and action. Two programs are then
used to synthesise agents: RULER is used to specify the perception component of an agent;
GAPPS is used to specify the action component.
RULER takes as its input three components:
"[A j specification of the semantics of the [a gent's} inputs ("whenever bit 1 is on, it is raining"); a set of static
facts ("whenever it is raining, the ground is wet"); and a specification of the state transitions of the world ("if
the ground is wet, it stays wet until the sun comes out"). The programmer then specifies the desired
semantics for the output ("if this bit is on, the ground is wet"), and the compiler ... [synthesises] a circuit
whose output will have the correct semantics .... All that declarative '"knowledge" has been reduced to a
very simple circuit." (Kaelb!ing, 1991, p. 86)
The GAPPS program takes as its input a set of goal reduction rules (essentially rules that encode
information about how goals can be achieved), and a top level goal, and generates a program that
can be translated into a digital circuit to realise the goal. Once again, the generated circuit does not
represent or manipulate symbolic expressions; all symbolic manipulation is done at compile time.
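The following toy sketch gestures at the compile-time idea: goal-reduction rules are resolved once against a top-level goal, and only a fixed boolean function of the input bits (standing in for the synthesised circuit) is evaluated at run time. The rules and bit names are invented for the example and are not the RULER or GAPPS languages themselves.

```python
# Loose illustration of compiling a goal against reduction rules, so that only
# a fixed function over input bits runs at execution time.
from typing import Callable, Dict

# Goal-reduction rules: a goal is achieved when its reduction over input bits holds.
REDUCTIONS: Dict[str, Callable[[Dict[str, bool]], bool]] = {
    "ground_wet":  lambda bits: bits["raining"] or bits["sprinkler_on"],
    "stay_inside": lambda bits: bits["raining"],
}

def compile_goal(goal: str) -> Callable[[Dict[str, bool]], bool]:
    """Resolve the goal against the rule set once, returning a fixed function
    of the input bits; no symbolic structures survive to run time."""
    return REDUCTIONS[goal]

circuit = compile_goal("ground_wet")
print(circuit({"raining": False, "sprinkler_on": True}))   # -> True
print(circuit({"raining": False, "sprinkler_on": False}))  # -> False
```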
The situated automata paradigm has attracted much interest, as it appears to combine the best
elements of both reactive and symbolic, declarative systems. However, at the time of writing, the
theoretical limitations of the approach are not well understood; there are similarities with the
automatic synthesis of programs from temporal logic specifications, a complex area of much
ongoing work in mainstream computer science (see the comments in Emerson, 1990).
3.2.4 Maes-Agent network architecture
Pattie Maes has developed an agent architecture in which an agent is defined as a set of competence
modules (Maes, 1989, 1990b, 1991). These modules loosely resemble the behaviours of Brooks'
subsumption architecture (above). Each module is specified by the designer in terms of pre- and
post-conditions (rather like STRIPS operators), and an activation level, which gives a real-valued
indication of the relevance of the module in a particular situation. The higher the activation level of
a module, the more likely it is that this module will influence the behaviour of the agent. Once
specified, a set of competence modules is compiled into a spreading activation network, in which the
modules are linked to one another in ways defined by their pre- and post-conditions. For example,
if module a has post-condition φ, and module b has pre-condition φ, then a and b are connected by
a successor link. Other types of link include predecessor links and conflicter links. When an agent is
executing, various modules may become more active in given situations, and may be executed. The
result of execution may be a command to an effector unit, or perhaps the increase in activation level
of a successor module.
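A much-simplified sketch of such a spreading activation network is given below; the update rule, numeric constants, and threshold are arbitrary choices for illustration and do not reproduce Maes' actual algorithm.

```python
# Simplified spreading-activation network of competence modules.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Module:
    name: str
    pre: Set[str]            # pre-conditions
    post: Set[str]           # post-conditions
    activation: float = 0.0

def spread(modules: List[Module], state: Set[str], boost: float = 1.0) -> None:
    """Modules whose pre-conditions hold gain activation from the situation,
    and pass some of it forward along successor links (shared post/pre atoms)."""
    for m in modules:
        if m.pre <= state:
            m.activation += boost
    for a in modules:
        for b in modules:
            if a is not b and a.post & b.pre:      # successor link a -> b
                b.activation += 0.5 * a.activation

def select(modules: List[Module], state: Set[str], threshold: float = 1.0):
    runnable = [m for m in modules if m.pre <= state and m.activation >= threshold]
    return max(runnable, key=lambda m: m.activation, default=None)

mods = [
    Module("pick_up", pre={"at_object"}, post={"holding"}),
    Module("carry",   pre={"holding"},   post={"at_goal"}),
]
spread(mods, {"at_object"})
chosen = select(mods, {"at_object"})
print(chosen.name if chosen else "no module above threshold")   # -> pick_up
```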
There are obvious similarities between the agent network architecture and neural network
architectures. Perhaps the key difference is that it is difficult to say what the meaning of a node in a
neural net is; it only has a meaning in the context of the net itself. Since competence modules are
defined in declarative terms, however, it is very much easier to say what their meaning is.
3.3 Hybrid architectures
Many researchers have suggested that neither a completely deliberative nor completely reactive
approach is suitable for building agents. They have argued the case for hybrid systems, which
attempt to marry classical and alternative approaches.
An obvious approach is to build an agent out of two (or more) subsystems: a deliberative one,
containing a symbolic world model, which develops plans and makes decisions in the way proposed
by mainstream symbolic AI; and a reactive one, which is capable of reacting to events that occur in
the environment without engaging in complex reasoning. Often, the reactive component is given
Intelligent agents: theory and practice 133
1. Intelligent behaviour can be generated without explicit representations of the kind that symbolic
AI proposes.
2. Intelligent behaviour can be generated without explicit abstract reasoning of the kind that
symbolic AI proposes.
3. Intelligence is an emergent property of certain complex systems.
Brooks identifies two key ideas that have informed his research:
1. Situatedness and embodiment: "Real" intelligence is situated in the world, not in disembodied
systems such as theorem provers or expert systems.
2. Intelligence and emergence: "Intelligent" behaviour arises as a result of an agent's interaction
with its environment. Also, intelligence is "in the eye of the beholder"; it is not an innate,
isolated property.
If Brooks was just a Dreyfus-style critic of AI, his ideas might not have gained much currency.
However, to demonstrate his claims, he has built a number of robots, based on the subsumption
architecture. A subsumption architecture is a hierarchy of task-accomplishing behaviours. Each
behaviour "competes" with others to exercise control over the robot. Lower layers represent more
primitive kinds of behaviour (such as avoiding obstacles), and have precedence over layers further
up the hierarchy. It should be stressed that the resulting systems are, in terms of the amount of
computation they need to do, extremely simple, with no explicit reasoning of the kind found in
symbolic AI systems. But despite this simplicity, Brooks has demonstrated the robots doing tasks
that would be impressive if they were accomplished by symbolic AI systems. Similar work has been
reported by Steels, who described simulations of "Mars explorer" systems, containing a large
number of subsumption-architecture agents, that can achieve near-optimal performance in certain
tasks (Steels, 1990).
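The control regime can be sketched very simply: a fixed hierarchy of situation-action rules in which the first (most primitive) rule that fires determines the action. The behaviours below are invented examples for illustration, not Brooks' robots.

```python
# Minimal sketch of subsumption-style control: lower behaviours suppress higher ones.
from typing import Callable, Dict, List, Optional, Tuple

Behaviour = Tuple[str, Callable[[Dict], Optional[str]]]

BEHAVIOURS: List[Behaviour] = [            # index 0 has the highest precedence
    ("avoid_obstacle", lambda s: "turn_away" if s.get("bump") else None),
    ("wander",         lambda s: "random_walk"),
    ("explore",        lambda s: "head_to_frontier"),
]

def act(sensors: Dict) -> str:
    """No world model, no search: the first behaviour whose situation-action
    rule fires subsumes everything below it."""
    for _, rule in BEHAVIOURS:
        action = rule(sensors)
        if action is not None:
            return action
    return "idle"

print(act({"bump": True}))    # -> turn_away
print(act({}))                # -> random_walk
```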
3.2.2 Agre and Chapman-PENGI
At about the same time as Brooks was describing his first results with the subsumption architecture,
Chapman was completing his Master's thesis, in which he reported the theoretical difficulties with
planning described above, and was coming to similar conclusions about the inadequacies of the
symbolic AI model himself. Together with his co-worker Agre, he began to explore alternatives to
the AI planning paradigm (Chapman & Agre, 1986).
Agre observed that most everyday activity is "routine" in the sense that it requires little-if
any-new abstract reasoning. Most tasks, once learned, can be accomplished in a routine way, with
little variation. Agre proposed that an efficient agent architecture could be based on the idea of
''running arguments". Crudely, the idea is that as most decisions are routine, they can be encoded
into a low-level structure (such as a digital circuit), which only needs periodic updating, perhaps to
handle new kinds of problems. His approach was illustrated with the celebrated PENGI system
(Agre & Chapman, 1987). PENGI is a simulated computer game, with the central character
controlled using a scheme such as that outlined above.
3.2.3 Rosenschein and Kaelbling-situated automata
Another sophisticated approach is that of Rosenschein and Kaelbling (Rosenschein, 1985;
Rosenschein & Kaelbling, 1986; Kaelbling & Rosenschein, 1990; Kaelbling, 1991). In their situated
automata paradigm, an agent is specified in declarative terms. This specification is then compiled
down to a digital machine, which satisfies the declarative specification. This digital machine can
operate in a provably time-bounded fashion; it does not do any symbol manipulation, and in fact no
symbolic expressions are represented in the machine at all. The logic used to specify an agent is
essentially a modal logic of knowledge (see above). The technique depends upon the possibility of
giving the worlds in possible worlds semantics a concrete interpretation in terms of the states of an
automaton:
M. WOOLDRIDGE AND NICHOLAS JENNINGS 132
which monitors the environment in order to determine further options for the agent; a filtering
process; and a deliberation process. The filtering process is responsible for determining the subset
of the agent's potential courses of action that have the property of being consistent with the agent's
current intentions. The choice between competing options is made by the deliberation process. The
IRMA architecture has been evaluated in an experimental scenario known as the Tileworld
(Pollack & Ringuette, 1990).
3.1.3 Vere and Bickmore-HOMER
An interesting experiment in the design of intelligent agents was conducted by Vere and Bickmore
(1990). They argued that the enabling technologies for intelligent agents are sufficiently developed
to be able to construct a prototype autonomous agent, with linguistic ability, planning and acting
capabilities, and so on. They developed such an agent, and christened it HOMER. This agent is a
simulated robot submarine, which exists in a two-dimensional "Seaworld", about which it has only
partial knowledge. HOMER takes instructions from a user in a limited subset of English with about
an 800-word vocabulary; instructions can contain moderately sophisticated temporal references.
HOMER can plan how to achieve its instructions (which typically relate to collecting and moving
items around the Seaworld), and can then execute its plans, modifying them as required during
execution. The agent has a limited episodic memory, and using this, is able to answer questions
about its past experiences.
3.1.4 Jennings-GRATE*
GRATE* is a layered architecture in which the behaviour of an agent is guided by the mental
attitudes of beliefs, desires, intentions and joint intentions (Jennings, 1993b). Agents are divided
into two distinct parts: a domain level system and a cooperation and control layer. The former
solves problems for the organisation, be it in the domain of industrial control, finance or
transportation. The latter is a meta-level controller which operates on the domain level system with
the aim of ensuring that the agent's domain level activities are coordinated with those of others
within the community. The cooperation layer is composed of three generic modules: a control
module which interfaces to the domain level system, a situation assessment module and a
cooperation module. The assessment and cooperation modules provide an implementation of a
model of joint responsibility (Jennings, 1992), which specifies how agents should act both locally
and towards other agents whilst engaged in cooperative problem solving. The performance of a
GRATE* community has been evaluated against agents which only have individual intentions, and
agents which behave in a selfish manner, in the domain of electricity transportation management.
A significant improvement was noted when the situation became complex and dynamic (Jennings,
1995).
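Schematically, the two-part organisation might be sketched as below; the decision rule shown is a placeholder assumption and does not implement the joint responsibility model itself.

```python
# Schematic two-layer agent: a domain-level problem solver plus a
# cooperation-and-control layer that decides when to involve other agents.
from typing import Dict, List

def domain_level_solve(task: Dict) -> str:
    return f"local_solution_for_{task['name']}"

def cooperation_layer(task: Dict, peers: List[str]) -> str:
    """Meta-level control: coordinate with the community when the task is
    flagged as shared, otherwise pass it straight to the domain level."""
    if task.get("joint") and peers:
        return f"request_joint_action_with_{peers[0]}"
    return domain_level_solve(task)

print(cooperation_layer({"name": "restore_feeder", "joint": True}, ["agent_B"]))
print(cooperation_layer({"name": "log_reading", "joint": False}, ["agent_B"]))
```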
3.2 Alternative approaches: reactive architectures
As we observed above, there are many unsolved (some would say insoluble) problems associated
with symbolic AI. These problems have led some researchers to question the viability of the whole
paradigm, and to the development of what are generally known as reactive architectures. For our
purposes, we shall define a reactive architecture to be one that does not include any kind of central
symbolic world model, and does not use complex symbolic reasoning.
3.2.1 Brooks-behaviour languages
Possibly the most vocal critic of the symbolic AI notion of agency has been Rodney Brooks, a
researcher at MIT who apparently became frustrated by AI approaches to building control
mechanisms for autonomous mobile robots. In a 1985 paper, he outlined an alternative architecture
for building agents, the so-called subsumption architecture (Brooks, 1986). The review of
alternative approaches begins with Brooks' work.
In recent papers, Brooks (1990, 1991a,b) has propounded three key theses:
Intelligent agents: theory and practice 131
reasoning, have turned out to be extremely difficult (cf. the CYC project (Guha & Lenat, 1994)).
The underlying problem seems to be the difficulty of theorem proving in even very simple logics,
and the complexity of symbol manipulation algorithms in general: recall that first-order logic is not
even decidable, and modal extensions to it (including representations of belief, desire, time, and so
on) tend to be highly undecidable. Thus, the idea of building "agents as theorem provers"-what
might be called an extreme logicist view of agency-although it is very attractive in theory, seems,
for the time being at least, to be unworkable in practice. Perhaps more troubling for symbolic AI is
that many symbol manipulation algorithms of interest are intractable. It seems hard to build
useful symbol manipulation algorithms that will be guaranteed to terminate with useful results in an
acceptable fixed time bound. And yet such algorithms seem essential if agents are to operate in any
real-world, time-constrained domain. Good discussions of this point appear in Kaelbling (1986)
and Russell and Wefald (1991).
It is because of these problems that some researchers have looked to alternative techniques for
building agents; such alternatives are discussed in section 3.2. First, however, we consider efforts
made within the symbolic AI community to construct agents.
3.1.1 Planning agents
Since the early 1970s, the AI planning community has been closely concerned with the design of
artificial agents; in fact, it seems reasonable to claim that most innovations in agent design have
come from this community. Planning is essentially automatic programming: the design of a course of
action that, when executed, will result in the achievement of some desired goal. Within the
symbolic AI community, it has long been assumed that some form of AI planning system will be a
central component of any artificial agent. Perhaps the best-known early planning system was
STRIPS (Fikes & Nilsson, 1971). This system takes a symbolic description of both the world and a
desired goal state, and a set of action descriptions, which characterise the pre- and post-conditions
associated with various actions. It then attempts to find a sequence of actions that will achieve the
goal, by using a simple means-ends analysis, which essentially involves matching the post-
conditions of actions against the desired goal. The STRIPS planning algorithm was very simple,
and proved to be ineffective on problems of even moderate complexity. Much effort was
subsequently devoted to developing more effective techniques. Two major innovations were
hierarchical and non-linear planning (Sacerdoti, 1974, 1975). However, in the mid 1980s, Chapman
established some theoretical results which indicate that even such refined techniques will ultimately
turn out to be unusable in any time-constrained system (Chapman, 1987). These results have had a
profound influence on subsequent AI planning research; perhaps more than any other, they have
caused some researchers to question the whole symbolic AI paradigm, and have thus led to the
work on alternative approaches that we discuss in section 3.2.
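The flavour of STRIPS-style operators and means-ends analysis can be conveyed with a toy example; the operators and the tiny regression search below are invented for illustration, and are far simpler than the planners discussed in the text.

```python
# Toy STRIPS-style operators and a tiny means-ends (goal regression) search.
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Operator:
    name: str
    pre: Set[str]
    add: Set[str]
    delete: Set[str]

def apply(state: Set[str], op: Operator) -> Set[str]:
    return (state - op.delete) | op.add

def plan(state: Set[str], goal: Set[str], ops: List[Operator], depth: int = 5) -> Optional[List[str]]:
    """Pick an unsatisfied goal atom, find an operator whose add list achieves
    it, recursively achieve that operator's pre-conditions, apply it, then
    plan for whatever remains."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    target = next(iter(goal - state))
    for op in ops:
        if target not in op.add:
            continue
        prefix = plan(state, op.pre, ops, depth - 1)
        if prefix is None:
            continue
        mid_state = state
        for name in prefix:                       # simulate the prefix
            mid_state = apply(mid_state, next(o for o in ops if o.name == name))
        mid_state = apply(mid_state, op)
        suffix = plan(mid_state, goal, ops, depth - 1)
        if suffix is not None:
            return prefix + [op.name] + suffix
    return None

ops = [
    Operator("pickup(a)", pre={"clear(a)", "handempty"},
             add={"holding(a)"}, delete={"handempty", "clear(a)"}),
    Operator("stack(a,b)", pre={"holding(a)", "clear(b)"},
             add={"on(a,b)", "handempty"}, delete={"holding(a)", "clear(b)"}),
]
print(plan({"clear(a)", "clear(b)", "handempty"}, {"on(a,b)"}, ops))
# -> ['pickup(a)', 'stack(a,b)']
```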
In spite of these difficulties, various attempts have been made to construct agents whose primary
component is a planner. For example: the Integrated Planning, Execution and Monitoring (IPEM)
system is based on a sophisticated non-linear planner (Ambros-Ingerson and Steel, 1988); Wood's
AUTODRIVE system has planning agents operating in a highly dynamic environment (a traffic
simulation) (Wood, 1993); Etzioni has built "softbots" that can plan and act in a Unix environment
(Etzioni et al., 1994); and finally, Cohen's PHOENIX system includes planner-based agents that
operate in the domain of simulated forest fire management (Cohen et al., 1989).
3.1.2 Bratman, Israel and Pollack-IRMA
In section 2, we saw that some researchers have considered frameworks for agent theory based on
beliefs, desires, and intentions (Rao & Georgeff, 1991b). Some researchers have also developed
agent architectures based on these attitudes. One example is the Intelligent Resource-bounded
Machine Architecture (IRMA) (Bratman et al., 1988). This architecture has four key symbolic data
structures: a plan library, and explicit representations of beliefs, desires, and intentions. Additionally,
the architecture has: a reasoner, for reasoning about the world; a means-end analyser, for
determining which plans might be used to achieve the agent's intentions; an opportunity analyser,
M. WOOLDRIDGE AND NICHOLAS JENNINGS 130
3 Agent architectures
Until now, this article has been concerned with agent theory-the construction of formalisms for
reasoning about agents, and the properties of agents expressed in such formalisms. Our aim in this
section is to shift the emphasis from theory to practice. We consider the issues surrounding the
construction of computer systems that satisfy the properties specified by agent theorists. This is the
area of agent architectures. Maes defines an agent architecture as:
"[A] particular methodology for building [agents]. It specifies how ... the agent can be decomposed into
the construction of a set of component modules and how these modules should be made to interact. The
total set of modules and their interactions has to provide an answer to the question of how the sensor data
and the current internal state of the agent determine the actions ... and future internal state of the agent.
An architecture encompasses techniques and algorithms that support this methodology." (Maes, 1991,
p. 115)
Kaelbling considers an agent architecture to be:
"[A] specific collection of software (or hardware) modules, typically designated by boxes with arrows
indicating the data and control flow among the modules. A more abstract view of an architecture is as a
general methodology for designing particular modular decompositions for particular tasks." (Kaelbling,
1991, p.86)
The classical approach to building agents is to view them as a particular type of knowledge-based
system. This paradigm is known as symbolic AI: we begin our review of architectures with a look at
this paradigm, and the assumptions that underpin it.
3.1 Classical approaches: deliberative architectures
The foundation upon which the symbolic AI paradigm rests is the physical-symbol system
hypothesis, formulated by Newell and Simon (1976). A physical symbol system is defined to be a
physically realisable set of physical entities (symbols) that can be combined to form structures, and
which is capable of running processes that operate on those symbols according to symbolically
coded sets of instructions. The physical-symbol system hypothesis then says that such a system is
capable of general intelligent action.
It is a short step from the notion of a physical symbol system to McCarthy's dream of a sentential
processing automaton, or deliberative agent. (The term "deliberative agent" seems to have derived
from Genesereth's use of the term "deliberate agent" to mean a specific type of symbolic
architecture (Genesereth and Nilsson, 1987, pp. 325-327).) We define a deliberative agent or agent
architecture to be one that contains an explicitly represented, symbolic model of the world, and in
which decisions (for example about what actions to perform) are made via logical (or at least
pseudo-logical) reasoning, based on pattern matching and symbolic manipulation. The idea of
deliberative agents based on purely logical reasoning is highly seductive: to get an agent to realise
some theory of agency one might naively suppose that it is enough to simply give it a logical
representation of this theory and "get it to do a bit of theorem proving" (Shardlow, 1990, section
3.2). If one aims to build an agent in this way, then there are at least two important problems to be
solved:
1. The transduction problem: that of translating the real world into an accurate, adequate
symbolic description, in time for that description to be useful.
2. The representation/reasoning problem: that of how to symbolically represent information
about complex real-world entities and processes, and how to get agents to reason with this
information in time for the results to be useful.
The former problem has led to work on vision, speech understanding, learning, etc. The latter has
led to work on knowledge representation, automated reasoning, automatic planning, etc. Despite
the immense volume of work that these problems have generated, most researchers would accept
that neither is anywhere near solved. Even seemingly trivial problems, such as commonsense
Intention
1957
12 segments
330 G. E. M. ANSCOMBE
motive. Plato saying to a slave ' I should beat you if I were
not angry ' would be a case. Or a man might have a
policy of never making remarks about a certain person
because he could not speak about that man unenviously
or unadmiringly.
We have now distinguished between a backward-looking
motive and a mental cause, and found that here at any rate
what the agent reports in answer to the question ' Why? ' is
a reason-for-acting if, in treating it as a reason, he conceives
it as something good or bad, and his own action as doing
good or harm. If you could e.g. show that either the action
for which he has revenged himself, or that in which he has
revenged himself, was quite harmless or beneficial, he ceases
to offer a reason, except prefaced by ' I thought '. If
it is a proposed revenge he either gives it up or changes his
reasons. No such discovery would affect an assertion of
mental causality. Whether in general good and harm
play an essential part in the concept of intention is something
it still remains to find out. So far good and harm have only
been introduced as making a clear difference between a
backward-looking motive and a mental cause. When the
question ' Why? ' about a present action is answered by
description of a future state of affairs, this is already
distinguished from a mental cause just by being future.
Here there does not so far seem to be any need to characterise
intention as being essentially of good or of harm.
Now, however, let us consider this case:
Why did you do it?
Because he told me to.
Is this a cause or a reason? It appears to depend very much
on what the action was or what the circumstances were.
And we should often refuse to make any distinction at all
between something's being a reason and its being a cause
of the kind in question; for that was explained as what one
is after if one asks the agent what led up to and issued in an
action, but being given a reason and accepting it might be
INTENTION 329
of ... often comes to the same as saying he did so lest ...
or in order that . . . should not happen.
Leaving then, the topic of motive-in-general or 'interpretative'
motive, let us return to backward-looking
motives. Why is it that in revenge and gratitude, pity and
remorse, the past event (or present situation) is a reason
for acting, not just a mental cause?
Now the most striking thing about these four is the way
in which good and evil are involved in them. E.g. if I am
grateful to someone, it is because he has done me some
good, or at least I think he has, and I cannot show gratitude
by something that I intend to harm him. In remorse, I
hate some good things for myself; I could not express remorse
by getting myself plenty of enjoyments, or for something that
I did not find bad. If I do something out of revenge which
is in fact advantageous rather than harmful to my enemy,
my action, in its description of being advantageous to him,
is involuntary.
These facts are the clue to our present problem. If an
action has to be thought of by the agent as doing good or
harm of some sort, and the thing in the past as good or bad,
in order for the thing in the past to be the reason for the
action, then this reason shows not a mental cause but a
motive. This will come out in the agent's elaborations on
his answer to the question 'Why?'
It might seem that this is not the most important point,
but that the important point is that a proposed action can be
questioned and the answer be a mention of something past.
'I am going to kill him.'-' Why?'-' He killed my father.'
But do we yet know what a proposal to act is; other than a
prediction which the predictor justifies, if he does justify it,
by mentioning a reason for acting? and the meaning of the
expression ' reason for acting ' is precisely what we are at
present trying to elucidate. Might one not predict mental
causes and their effects? Or even their effects after the
causes have occurred? E.g. 'This is going to make me
angry.' Here it may be worth while to remark that it is a
mistake to think one cannot choose whether to act from a
INTENTION
may easily be inclined to deny both that there is any such
thing as mental causality, and that ' motive ' means anything
but intention. But both of these inclinations are mistaken.
We shall create confusion if we do not notice (a) that
phenomena deserving the name of mental causality exist,
for we can make the question 'Why?' into a request for
the sort of answer that I considered under that head;
( b) that mental causality is not restricted to choices or
voluntary or intentional actions but is of wider application;
it is restricted to the wider field of things the agent knows
about not as an observer, so that it includes some involuntary
actions; (c) that motives are not mental causes; and (d) that
there is application for ' motive ' other than the applications
of' the intention with which a man acts '.
Revenge and gratitude are motives; if I kill a man as an
act of revenge I may say I do it in order to be revenged,
or that revenge is my object; but revenge is not some further
thing obtained by killing him, it is rather that killing him is
revenge. Asked why I killed him, I reply ' Because he
killed my brother.' We might compare this answer, which
describes a concrete past event, to the answer describing a
concrete future state of affairs which we sometimes get in
statements of objectives. It is the same with gratitude,
and remorse, and pity for something specific. These motives
differ from, say, love or curiosity or despair in just this way:
something that has happened (or is at present happening) is
given as the ground of an action or abstention that is good
or bad for the person (it may be oneself, as with remorse) at
whom it is aimed. And if we wanted to explain e.g. revenge,
we should say it was harming someone because he had done
one some harm; we should not need to add some description
of the feelings prompting the action or of the thoughts that
had gone with it. Whereas saying that someone does
something out of, say, friendship cannot be explained in any
such way. I will call revenge and gratitude and remorse
and pity backward-looking motives, and contrast them with
motive-in-general.
Motive-in-general is a very difficult topic which I do
326 G. E. M. ANSCOMBE
him from this awful suffering ', or ' to get rid of the swine ';
but though these are forms of expression suggesting objectives,
they are perhaps expressive of the spirit in which the man
killed rather than descriptive of the end to which the killing
was a means-a future state of affairs to be produced by the
killing. And this shows us part of the distinction that there
is between the popular senses of motive and intention. We
should say: popularly, ' motive for an action ' has a rather
wider and more diverse application than ' intention with
which the action was done '.
When a man says what his motive was, speaking popularly,
and in a sense in which 'motive' is not interchangeable
with ' intention ', he is not giving a ' mental cause ' in the
sense that I have given to that phrase. The fact that the
mental causes were such-and-such may indeed help to make
his claim intelligible. And further, though he may say
that his motive was this or that one straight off and without
lying-i.e. without saying what he knows or even half knows
to be untrue-yet a consideration of various things, which
may include the mental causes, might possibly lead both
him and other people to judge that his declaration of his
own motive was false. But it appears to me that the mental
causes are seldom more than a very trivial item among the
things that it would be reasonable to consider. As for the
importance of considering the motives of an action, as
opposed to considering the intention, I am very glad not to
be writing either ethics or literary criticism, to which this
question belongs.
Motives may explain actions to us; but that is not to say
that they' determine', in the sense of causing, actions. We
do say: ' His love of truth caused him to . . . ' and similar
things, and no doubt such expressions help us to think that
a motive must be what produces or brings about a choice.
But this means rather ' He did this in that he loved the
truth '; it interprets his action.
Someone who sees the confusions involved in radically
distinguishing between motives and intentions and in
defining motives, so distinct, as the determinants of choice,
INTENTION
thought or feeling in you? i.e., what did you see or hear or
feel, or what ideas or images cropped up in your mind, and
led up to it? I have isolated this notion of a mental cause
because there is such a thing as this question with this sort
of answer, and because I want to distinguish it from the
ordinary senses of ' motive ' and 'intention', rather than
because it is in itself of very great importance; for I believe
that it is of very little. But it is important to have a clear
idea of it, partly because a very natural conception of
' motive' is that it is what moves (the very word suggests
that)-glossed as ' what causes ' a man's actions etc. And
' what causes ' them is perhaps then thought of as an event
that brings the effect about-though how-i.e. whether it
should be thought of as a kind of pushing in another
medium, or in some other way-is of course completely
obscure.
In philosophy a distinction has sometimes been drawn
between ' motives' and ' intentions in acting' as referring
to quite different things. A man's intention is what he
aims at or chooses; his motive is what determines the aim
or choice; and I suppose that ' determines ' must here be
another word for ' causes '.
Popularly, ' motive ' and ' intention ' are not treated as
so distinct in meaning. E.g. we hear of 'the motive of
gain '; some philosophers have wanted to say that such an
expression must be elliptical; gain must be the intention, and
desire of gain the motive. Asked for a motive, a man might
say' I wanted to . . . 'which would please such philosophers;
or 'I did it in order to ... ' which would not; and yet
the meaning of the two phrases is here identical. When a
man's motives are called good, this may be in no way distinct
from calling his intentions good-e.g. ' he only wanted to
make peace among his relations '.
Nevertheless there is even popularly a distinction
between the meaning of ' motive ' and the meaning of
'intention'. E.g. if a man kills someone, he may be said
to have done it out of love and pity, or to have done it out
of hatred; these might indeed be cast in the forms 'to release
332 G. E. M. ANSCOMBE
cases are the right ones to consider in order to see the distinction between reason and cause. But it is worth noticing that
what is so commonly said, that reason and cause are everywhere sharply distinct notions, is not true.
324 G. E. M. ANSCOMBE
saying that it mentions something future-this is also a case
of a mental cause. For couldn't it be recast in the form:
' Because I wanted . . . ' or ' Out of a desire that . . . '?
If a feeling of desire for an apple affects me and I get up and
go to a cupboard where I think there are some, I might
answer the question what led to this action by mentioning
the desire as having made me ... etc. But it is not in all
cases that ' I did so and so in order to . . . ' can be backed
up by ' I felt a desire that . . . ' I may e.g. simply hear
a knock on the door and go downstairs to open it without
experiencing any such desire. Or suppose I feel an upsurge
of spite against someone and destroy a message he has
received so that he shall miss an appointment. If I describe
this by saying' I wanted to make him miss that appointment ',
this does not necessarily mean that I had the thought ' If I
do this, he will . . . ' and that it affected me with a desire
of bringing that about which led up to my action. This may
have happened, but need not. It could be that all that
happened was this: I read the message, had the thought
' That unspeakable man ! ' with feelings of hatred, tore the
message up, and laughed. Then if the question ' Why did
you do that? ' is put by someone who makes it clear that
he wants me to mention the mental causes-i.e., what went
on in my mind and issued in the action-I should perhaps
give this account; but normally the reply would be no such
thing. That particular enquiry is not very often made.
Nor do I wish to say that it always has an answer in cases
where it can be made. One might shrug or say 'I don't
know that there was any definite history of the kind you
mean ', or ' It merely occurred to me . . . '
A ' mental cause ', of course, need not be a mental
event, i.e., a thought or feeling or image; it might be a
knock on the door. But if it is not a mental event, it must
be something perceived by the person affected-e.g. the
knock on the door must be heard-so if in this sense anyone
wishes to say it is always a mental event, I have no objection.
A mental cause is what someone would describe if he were
asked the specific question: what produced this action or
322 G. E. M. ANSCOMBE
we cannot say 'Ah, but not a reason for acting;' we should
be going round in circles. We need to find the difference
between the two kinds of' reason' without talking about
' acting '; and if we do, perhaps we shall discover what
is meant by ' acting' when it is said with this special
emphasis.
It will hardly be enlightening to say ' in the case of the
sudden start the "reason'' is a cause'; the topic of causality
is in a state of too great confusion; all we know is that this
is one of the places where we do use the word ' cause '.
But we also know that this is rather a strange case of
causality; the subject is able to give a cause of a thought
or feeling or bodily movement in the same kind of
way as he is able to state the place of his pain or the
position of his limbs. Such statements are not based on
observation.
Nor can we say: 'Well, the "reason" for a movement
is a cause, and not a reason in the sense of "reason for
acting ", when the movement is involuntary; it is a reason
as opposed to a cause, when the movement is voluntary and
intentional.' This is partly because in any case the object
of the whole enquiry is really to delineate such concepts
as the voluntary and the intentional, and partly because
one can also give a ' reason ' which is only a ' cause ' for
what is voluntary and intentional. E.g. ' Why are you
walking up and down like that? ' - ' It's that military band;
it excites me.' Or 'What made you sign the document
at last?'-' The thought:" It is my duty" kept hammering
away in my mind until I said to myself" I can do no other",
and so signed.'
Now we can see that the cases where this difficulty
arises are just those where the cause itself, qua cause, (or
perhaps one should rather say the causation itself) is in the
class of things known without observation.
I will call the type of cause in question a ' mental cause '.
Mental causes are possible, not only for actions (' The martial
music excites me, that is why I walk up and down ') but
Meeting of the Aristotelian Society at 21, Bedford Square, London,
W.C.1, on 3rd June, 1957, at 7.30 p.m.
XIV.-INTENTION
By G. E. M. ANSCOMBE
What distinguishes actions which are intentional from those
which are not? The answer that suggests itself is that they
are the actions to which a certain sense of the question
'Why?' is given application; the sense is defined as that in
which the answer, if positive, gives a reason for acting. But
this hardly gets us any further, because the questions
' What is the relevant sense of the question " Why? " ' and
'What is meant by "reason for acting"?' are one and the
same.
To see the difficulties here, consider the question ' Why
did you knock the cup off the table?' answered by' I thought
I saw a face at the window and it made me jump.' Now
we cannot say that since the answer mentions something
previous to the action, this will be a cause as opposed to a
reason; for if you ask ' Why did you kill him? ' the answer
' he killed my father ' is surely a reason rather than a cause,
but what it mentions is previous to the action. It is true
that we don't ordinarily think of a case like giving a sudden
start when we speak of a reason for acting. ' Giving a sudden
start ', someone might say, ' is not acting in the sense suggested
by the expression "reason for acting".' Hence, though
indeed we readily say e.g. ' What was the reason for your
starting so violently? ' this is totally unlike ' What is your
reason for excluding so-and-so from your will? ' or ' What
is your reason for sending for a taxi? ' But what is the
difference ? Why is giving a start or gasp not an ' action ',
while sending for a taxi or crossing the road is one? The
answer cannot be ' Because an answer to the question
" why? " may give a reason in the latter cases ', for the
answer may ' give a reason ' in the former cases too; and
INTENTION 331
such a thing. And how would one distinguish between cause
and reason in such a case as having hung one's hat on a peg
because one's host said ' Hang up your hat on that peg '?
Nor, I think, would it be correct to say that this is a reason
and not a mental cause because of the understanding of the
words that went into obeying the suggestion. Here one
would be attempting a contrast between this case and, say,
turning round at hearing someone say Boo ! But this case
would not in fact be decisively on one side or the other;
forced to say whether the noise was a reason or a cause,
one would probably decide by how sudden one's reaction
was. Further, there is no question of understanding a
sentence in the following case: ' Why did you waggle your
two fore-fingers by your temples?'-' Because he was doing
it; ' but this is not particularly different from hanging one's
hat up because one's host said ' Hang your hat up.'
Roughly speaking, if one were forced to go on with the
distinction, the more the action is described as a mere
response, the more inclined one would be to the word
' cause '; while the more it is described as a response to
something as having a significance that is dwelt on by the
agent, or as a response surrounded with thoughts and
questions, the more inclined one would be to use the word
' reason '. But in very many cases the distinction would have
no point.
This, however, does not mean that it never has a point.
The cases on which we first grounded the distinction might
be called ' full-blown ': that is to say, the case of e.g. revenge
on the one hand, and of the thing that made me jump and
knock a cup off a table on the other. Roughly speaking,
it establishes something as a reason to object to it, not as
when one says ' Noises should not make you jump like that:
hadn't you better see a doctor? ' but in such a way as to
link it up with motives and intentions. ' You did it because
he told you to? But why do what he says? ' Answers like
' he has done a lot for me '; ' he is my father '; ' it would
have been the worse for me if I hadn't ' give the original
answer a place among reasons. Thus the full-blown
INTENTION 323
also for feelings and even thoughts. In considering actions,
it is important to distinguish between mental causes and
motives; in considering feelings, such as fear or anger, it
is important to distinguish between mental causes and
objects of feeling. To see this, consider the following
cases:
A child saw a bit of red stuff on a turn in a stairway and
asked what it was. He thought his nurse told him it was a
bit of Satan and felt dreadful fear of it. (No doubt she said
it was a bit of satin.) What he was frightened of was the
bit of stuff; the cause of his fright was his nurse's remark.
The object of fear may be the cause of fear, but, as
Wittgenstein1 remarks, is not as such the cause of fear. (A
hideous face appearing at the window would of course be
both cause and object, and hence the two are easily confused.)
Or again, you may be angry at someone's action, when
what makes you angry is some reminder of it, or someone's
telling you of it.
This sort of cause of a feeling or reaction may be reported
by the person himself, as well as recognised by someone
else, even when it is not the same as the object. Note that
this sort of causality or sense of ' causality ' is so far from
accommodating itself to Hume's explanations that people
who believe that Hume pretty well dealt with the topic of
causality would entirely leave it out of their calculations;
if their attention were drawn to it they might insist that the
word 'cause' was inappropriate or was quite equivocal.
Or conceivably they might try to give a Humeian
account of the matter as far as concerned the outside
observer's recognition of the cause; but hardly for the
patient's.
Now one might think that when the question ' Why? '
is answered by giving the intention with which a person
acts-a case of which I will here simply characterise by
1 Philosophical Investigations, § 476.
328 G. E. M. ANSCOMBE
not want to discuss at any length. Consider the statement
that one motive for my signing a petition was admiration
for its promoter, X. Asked 'Why did you sign it?' I
might well say' Well, for one thing, X, who is promoting it,
did ... ' and describe what he did in an admiring way.
I might add ' Of course, I know that is not a ground for
signing it, but I am sure it was one of the things that most
influenced me '-which need not mean:' I thought explicitly
of this before signing.' I say ' Consider this ' really with a
view to saying ' let us not consider it here.' It is too
complicated. The account of motive popularised by
Professor Ryle does not appear satisfactory. He recommends
construing ' he boasted from vanity ' as saying ' he boasted
... and his doing so satisfies the law-like proposition that
whenever he finds a chance of securing the admiration and
envy of others, he does whatever he thinks will produce this
admiration and envy.'2 This passage is rather curious and
roundabout in its way of putting what it seems to say, but
I can't understand it unless it implies that a man could not
be said to have boasted from vanity unless he always behaved
vainly, or at least very often did so. But this does not seem
to be true.
To give a motive (of the sort I have labelled 'motive-in-general',
as opposed to backward-looking motives and
intentions) is to say something like ' See the action in this
light.' To explain one's own actions by an account indicating
a motive is to put them in a certain light. This sort of
explanation is often elicited by the question ' Why? ' The
question whether the light in which one so puts one's action
is a true light is a notoriously difficult one.
The motives admiration, curiosity, spite, friendship, fear,
love of truth, despair and a host of others are either of this
extremely complicated kind, or are forward-looking or
mixed. I call a motive forward-looking if it is an intention.
For example, to say that someone did something for fear
2 The Concept of Mind, p. 89.
2 segments
AGENT
AGE. Signifies those periods in the lives of persons of both sexes which enable them to do certain
acts which, before they had arrived at those periods, they were prohibited from doing.
The length of time during which a person has lived or a thing has existed.
In the old books, "age" is commonly used to signify "full age;" that is, the age of twenty-one years. Litt. § 259.
-Legal age. The age at which the person acquires full capacity to make his own contracts and deeds
and transact business generally (age of majority) or to enter into some particular contract or relation,
as, the "legal age of consent" to marriage. See Capwell v. Capwell, 21 R. I. 101, 41 Atl. 1005; Montoya
de Antonio v. Miller, 7 N. M. 289, 84 Pac. 40, 21 L. R. A. 699.
AGE, Awe, Aln. L. Fr. Water. Kelham.
AGE PRAYER. A suggestion of nonage, made by an infant party to a real action, with a prayer that
the proceedings may be deferred until his full age. It is now abolished. St. 11 Geo. IV.; 1 Wm. IV.
c. 37, § 10; 1 Lil. Reg. 54; 3 Bl. Comm. 300.
AGENCY. A relation, created either by express or implied contract or by law, whereby one party
(called the principal or constituent) delegates the transaction of some lawful business or the
authority to do certain acts for him or in relation to his rights or property, with more or less
discretionary power, to another person (called the agent, attorney, proxy, or delegate) who
undertakes to manage the affair and render him an account thereof. State v. Hubbard, 58 Kan. 797,
61 Pac. 290, 39 L. R. A. 860; Sternaman v. Insurance Co., 170 N. Y. 13, 62 N. E. 763, 57 L. R. A. 318,
88 Am. St. Rep. 625; Wynegar v. State, 157 Ind. 577, 62 N. E. 38.
The contract of agency may be defined to be a contract by which one of the contracting parties
confides the management of some affair, to be transacted on his account, to the other party, who
undertakes to do the business and render an account of it. 1 Liverm. Prin. & Ag. 2.
A contract by which one person, with greater or less discretionary power, undertakes to represent
another in certain business relations. Whart. Ag. 1.
A relation between two or more persons, by which one party, usually called the agent or attorney,
is authorized to do certain acts for, or in relation to the rights or property of the other, who is
denominated the principal, constituent, or employer. Bouvier.
-Agency, deed of. A revocable and voluntary trust for payment of debts. Wharton.
-Agency of necessity. A term sometimes applied to the kind of implied agency which enables a wife
to procure what is reasonably necessary for her maintenance and support on her husband's credit
and at his expense, when he fails to make proper provision for her necessities. Bostwick v. Brower,
22 Misc. Rep. 700, 49 N. Y. Supp. 1046.
AGENESIA. In medical jurisprudence. Impotentia generandi; sexual impotence; incapacity for
reproduction, existing in either sex, and whether arising from structural or other causes.
AGENFRIDA. Sax. The true master or owner of a thing. Spelman.
AGENHINA. In Saxon law. A guest at an inn, who, having stayed there for three nights, was then
accounted one of the family. Cowell.
AGENS. Lat. An agent, a conductor, or manager of affairs. Distinguished from factor, a workman.
A plaintiff. Fleta, lib. 4, c. 15, § 8.
AGENT. One who represents and acts for another under the contract or relation of agency, q. v.
Classification. Agents are either general or special. A general agent is one employed in his capacity
as a professional man or master of an art or trade, or one to whom the principal confides his whole
business or all transactions or functions of a designated class; or he is a person who is authorized
by his principal to execute all deeds, sign all contracts, or purchase all goods, required in a particular
trade, business, or employment. See Story, Ag. § 17; Butler v. Maples, 9 Wall. 706, 19 L. Ed. 822;
Jaques v. Todd, 3 Wend. (N. Y.) 90; Springfield Engine Co. v. Kennedy, 7 Ind. App. 502, 84 N. E.
866; Cruzan v. Smith, 41 Ind. 297; Godshaw v. Struck, 109 Ky. 285, 58 S. W. 781, 61 L. R. A. 668.
A special agent is one employed to conduct a particular transaction or piece of business for his
principal or authorized to perform a specified act. Bryant v. Moore, 26 Me. 87, 46 Am. Dec. 90;
Gibson v. Snow Hardware Co., 94 Ala. 346, 10 South. 304; Cooley v. Perrine, 41 N. J. Law, 325,
32 Am. Rep. 210.
Agents employed for the sale of goods or merchandise are called "mercantile agents," and are of two
principal classes, brokers and factors, (q. v.;) a factor is sometimes called a "commission agent," or
"commission merchant." Russ. Merc. Ag. 1.
Synonyms. The term "agent" is to be distinguished from its synonyms "servant," "representative,"
and "trustee." A servant acts in behalf of his master and under the latter's direction and authority,
but is regarded as a mere instrument, and not as the substitute or proxy of the master. Turner v.
Cross, 83 Tex. 218, 18 S. W. 578, 16 L. R. A. 262; People v. Treadwell, 69 Cal. 226, 10 Pac. 502.
A representative (such as an executor or an assignee in bankruptcy) owes his power and authority
to the law, which puts him in the place of the person represented, although the latter may have
designated or chosen the representative. A trustee acts in the interest and for the benefit of one
person, but by an authority derived from another person.
In international law. A diplomatic agent is a person employed by a sovereign to manage his private
affairs, or those of his subjects in his name, at the court of a foreign government. Wolff, Inst. Nat.
§ 1287.
In the practice of the house of lords and privy council. In appeals, solicitors and other persons
admitted to practise in those courts in a similar capacity to that of
AGENT 51 AG ILLAR IUS
solicitors in ordinary courts, are technically called "agents." Macph. Priv. Coun. 65.
-Agent and patient. A phrase indicating the state of a person who is required to do a thing, and is
at the same time the person to whom it is done.
-Local agent. One appointed to act as the representative of a corporation and transact its business
generally (or business of a particular character) at a given place or within a defined district. See
Frick Co. v. Wright, 23 Tex. Civ. App. 340, 55 S. W. 608; Moore v. Freeman's Nat. Bank, 92 N. C.
594; Western, etc., Organ Co. v. Anderson, 97 Tex. 432, 79 S. W. 517.
-Managing agent. A person who is invested with general power, involving the exercise of judgment
and discretion, as distinguished from an ordinary agent or employé, who acts in an inferior capacity,
and under the direction and control of superior authority, both in regard to the extent of the work
and the manner of executing the same. Reddington v. Mariposa Land & Min. Co., 19 Hun (N. Y.)
406; Taylor v. Granite State Prov. Ass'n, 136 N. Y. 343, 32 N. E. 992, 32 Am. St. Rep. 749; U. S. v.
American Bell Tel. Co. (C. C.) 29 Fed. 33; Upper Mississippi Transp. Co. v. Whittaker, 16 Wis. 220;
Foster v. Charles Betcher Lumber Co., 5 S. D. 57, 58 N. W. 9, 23 L. R. A. 400, 49 Am. St. Rep. 859.
-Private agent. An agent acting for an individual in his private affairs; as distinguished from a
public agent, who represents the government in some administrative capacity.
-Public agent. An agent of the public, the state, or the government; a person appointed to act for
the public in some matter pertaining to the administration of government or the public business.
See Story, Ag. § 300; Whiteside v. United States, 98 U. S. 264, 23 L. Ed. 882.
-Real-estate agent. Any person whose business it is to sell, or offer for sale, real estate for others, or
to rent houses, stores, or other buildings, or real estate, or to collect rent for others. Act July 13,
1866, c. 49; 14 St. at Large, 118. Carstens v. McReavy, 1 Wash. St. 359, 25 Pac. 471.
Agentes et consentientes pari poena plectentur. Acting and consenting parties are liable to the
same punishment. 5 Coke, 80.
AGER. Lat. In the civil law. A field; land generally. A portion of land inclosed by definite
boundaries. Municipality No. 2 v. Orleans Cotton Press, 18 La. 167, 36 Am. Dec.
In old English law. An acre. Spelman.
AGGER. Lat. In the civil law. A dam, bank or mound. Cod. 9, 38; Townsh. Pl. 48.
AGGRAVATED ASSAULT. An assault with circumstances of aggravation, or of a heinous character,
or with intent to commit another crime. In re Burns (C. C.) 113 Fed. 992; Norton v. State, 14 Tex.
303. See ASSAULT.
Defined in Pennsylvania as follows: "If any person shall unlawfully and maliciously inflict upon
another person, either with or without any weapon or instrument, any grievous bodily harm, or
unlawfully cut, stab, or wound any other person, he shall be guilty of a misdemeanor ..."
AGGRAVATION. Any circumstance attending the commission of a crime or tort which increases its
guilt or enormity or adds to its injurious consequences, but which is above and beyond the essential
constituents of the crime or tort itself.
Matter of aggravation, correctly understood, does not consist in acts of the same kind and
description as those constituting the gist of the action, but in something done by the defendant, on
the occasion of committing the trespass, which is, to some extent, of a different legal character from
the principal act complained of. Hathaway v. Rice, 19 Vt. 107.
In pleading. The introduction of matter into the declaration which tends to increase the amount of
damages, but does not affect the right of action itself. Steph. Pl. 257; 12 Mod. 597.
AGGREGATE. Composed of several; consisting of many persons united together. 1 Bl. Comm. 469.
-Aggregate corporation. See CORPORATION.
AGGREGATIO MENTIUM. The meeting of minds. The moment when a contract is complete. A
supposed derivation of the word "agreement."
AGGRESSOR. The party who first offers violence or offense. He who begins a quarrel or dispute,
either by threatening or striking another.
AGGRIEVED. Having suffered loss or injury; damnified; injured.
AGGRIEVED PARTY. Under statutes granting the right of appeal to the party aggrieved by an
order or judgment, the party aggrieved is one whose pecuniary interest is directly affected by the
adjudication; one whose right of property may be established or divested thereby. Ruff v.
Montgomery, 83 Miss. 185, 36 South. 67; McFarland v. Pierce, 161 Ind. 546, 45 N. E. 706; Lamar v.
Lamar, 118 Ga. 684, 45 S. E. 498; Smith v. Bradstreet, 16 Pick. (Mass.) 264; Bryant v. Allen, 6 N. H.
116; Wiggin v. Swett, 6 Metc. (Mass.) 194, 39 Am. Dec. 716; Tillinghast v. Brown University, 24 R. I.
179, 62 Atl. 891; Lowery v. Lowery, 64 N. C. 110; Raleigh v. Rogers, 25 N. J. Eq. 506. Or one against
whom error has been committed. Kinealy v. Macklin, 67 Mo. 93.
AGILD. In Saxon law. Free from penalty, not subject to the payment of gild, or weregild; that is,
the customary fine or pecuniary compensation for an offense. Spelman; Cowell.
AGILER. In Saxon law. An observer or informer.
AGILLARIUS. L. Lat. In old English law. A hayward, herdward, or keeper of the herd of cattle in
a common field. Cowell.
Document Summary
Segmentation Methods:
paragraph: Natural paragraph breaks
sentence: Individual sentences
semantic: Meaning-based chunks
langextract: LLM-guided extraction