This seminar describes a general theory of rational cognition and its implementation in OSCAR. The jumping-off point for this theory will be the theory of rational agency developed in my book Cognitive Carpentry (MIT Press, 1995), but a great deal of the material presented will be of a more recent vintage. A more philosophical discussion of some of this material can be found in Contemporary Theories of Knowledge, 2nd edition, John L. Pollock and Joe Cruz, Rowman and Littlefield, 1999.
This site contains links to a number of files that can be downloaded. Many of these are .pdf files. I will also make my PowerPoint slides available, both in PowerPoint format (which requires PowerPoint) and as .pdf files. The .pdf versions come in two forms: full-sized files that replicate the slides exactly, and black-and-white handouts reduced to six slides per page (pdf handouts). Participants in the seminar will find it useful to print the latter and bring them to the seminar.
This is a general paper describing rational cognition in OSCAR:
We begin with a general discussion of how agent architectures are to be evaluated. This leads to a discussion of the concept of rationality, and its use in AI. Finally, the outlines of the OSCAR architecture are presented.
OSCAR's greatest virtue is its ability to reason defeasibly, but this is best understood against the background of deductive reasoning. From that perspective, OSCAR is a natural deduction theorem prover.
The general theory of defeasible reasoning that forms the basis for OSCAR, and its implementation.
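The core idea of defeasible reasoning is that conclusions can be defeated by other conclusions, and a conclusion is justified only if its defeaters are themselves defeated. The following Python sketch illustrates that idea with a simple fixed-point computation over a defeat graph; it is an illustrative toy, not OSCAR's actual algorithm, and all names in it are my own assumptions:

```python
def undefeated(conclusions, defeats):
    """Compute the undefeated conclusions in a defeat graph.

    conclusions: iterable of conclusion labels.
    defeats: set of (attacker, target) pairs.

    A conclusion is undefeated when all of its attackers are defeated,
    and defeated when some undefeated conclusion attacks it. Iterate to
    a fixed point (the grounded labelling)."""
    attackers = {c: set() for c in conclusions}
    for a, t in defeats:
        attackers[t].add(a)
    undef, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for c in attackers:
            # all attackers defeated -> c is undefeated
            if c not in undef and attackers[c] <= defeated:
                undef.add(c)
                changed = True
            # some undefeated attacker -> c is defeated
            if c not in defeated and attackers[c] & undef:
                defeated.add(c)
                changed = True
    return undef

# A defeats B, and B defeats C: since B is defeated, C is reinstated.
print(undefeated({"A", "B", "C"}, {("A", "B"), ("B", "C")}))
```

The example shows the characteristic reinstatement behavior: because B's defeater A stands, B is defeated, so B's attack on C does not count and C comes out undefeated along with A.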
OSCAR's defeasible reasoner is employed to enable an autonomous agent to reason about the world around it.
To plan for making the world more to its liking, an agent must be able to reason about the causal consequences of its actions and, more generally, about the causal consequences of events in the world. This gives rise to the Frame Problem. OSCAR incorporates a simple solution to the Frame Problem.
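The intuition behind a defeasible treatment of the Frame Problem is temporal projection: a fact observed at one time is presumed to persist to later times unless some known intervening event changed it. A toy Python sketch of that presumption (illustrative only, with hypothetical names, not OSCAR's implementation):

```python
def project(fluent, observed_at, query_time, events):
    """Defeasibly project a fluent forward in time.

    fluent: name of the fact observed to hold at observed_at.
    events: list of (time, set_of_fluents_changed) pairs.

    The presumption of persistence is defeated only by a known event,
    strictly after the observation and no later than the query time,
    that changes the fluent."""
    for t, changed in events:
        if observed_at < t <= query_time and fluent in changed:
            return False  # persistence defeated by an intervening change
    return True  # no known intervening change: presume the fluent persists

# The door was open at time 0; an unrelated event at time 3 leaves the
# presumption intact, while an event changing the door defeats it.
print(project("door_open", 0, 5, [(3, {"light_on"})]))
print(project("door_open", 0, 5, [(3, {"door_open"})]))
```

The point of the defeasible formulation is that the agent need not enumerate everything an event does *not* change; persistence is the default, and only changes need to be represented.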
Traditional planning algorithms produce recursively enumerable sets of plans, but it is argued that this is impossible for autonomous rational agents operating in complex environments. Planning must instead be done defeasibly.
"The logical foundations of goal-regression planning in autonomous agents", Artificial Intelligence 106 (1998), 267-335.
Classical planning aims at the construction of plans, but plans are not automatically adoptable even if they will achieve their goals (with appropriate probabilities). We must worry about the probability that the goals will be achieved, the value of the goals, and the execution costs. In other words, planning must be "decision-theoretic". A general theory of decision-theoretic planning is developed.
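The comparison the preceding paragraph describes can be sketched as a back-of-the-envelope calculation: weigh the probability of achieving each goal by the goal's value, and subtract the execution cost. This is only an illustration of the general idea, with invented numbers, not the theory developed in the paper:

```python
def expected_value(goal_outcomes, execution_cost):
    """Expected value of a plan.

    goal_outcomes: list of (probability_of_achievement, goal_value) pairs.
    execution_cost: total cost of executing the plan."""
    return sum(p * v for p, v in goal_outcomes) - execution_cost

# Two hypothetical plans for the same goal, valued at 100:
# an expensive plan that very probably succeeds, and a cheap one
# that succeeds less often.
risky = expected_value([(0.9, 100.0)], execution_cost=30.0)  # 60.0
safe = expected_value([(0.6, 100.0)], execution_cost=5.0)    # 55.0
```

On these made-up numbers the expensive plan is preferable despite its higher cost, which is exactly the kind of trade-off that mere goal-achievement cannot capture.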
OSCAR is written in Common LISP. It has been tested in a variety of LISP environments. The preferred environment is Macintosh Common LISP, where it supports graphics not available in other environments. The source code can be downloaded below. The files were prepared on a Mac and should display properly on Mac, PC, or UNIX systems. The line breaks will have to be converted for use with Windows NT. (If you are using Netscape Navigator, when downloading the files, hold down the option key while clicking on the file name, and then select "Save File".)
THIS IS NOT FREEWARE. The underlying technology is protected by US patent No. 5,706,406, and the software is copyrighted by John L. Pollock.
Permission is granted to use this software for educational and research purposes. For use in a commercial product, a license must be obtained from Artilects L.L.C. Contact John L. Pollock for the details.
UPDATE 9/8/2013. With John's passing, any intellectual property rights, including patents, associated with OSCAR and the OSCAR project were transferred to his widow, Lilian Jacques. It is Lilian's wish that the materials be made available under similar terms; that is:
Permission is granted to use this software for educational and research purposes. For use in a commercial product, a license must be obtained from Lilian Jacques [firstname.lastname@example.org].