Early Arizona AI Researcher
Petroglyph in Santa Catalina foothills, near Finger Rock Canyon,
north of Tucson (age unknown)


The OSCAR Project

John L. Pollock

The objective of the OSCAR Project is twofold. On the one hand, it is to construct a general theory of rational cognition. On the other hand, it is to construct an artificial rational agent (an "artilect") implementing that theory. This is a joint project in philosophy and AI.

These two objectives go hand-in-hand:

To get the theory of rational cognition right, one must test it by applying it to concrete examples. In effect, this is a search for counter-examples. Philosophers have traditionally undertaken such a search "from their armchairs" -- by sitting and thinking about the theory and how it applies to concrete examples. Unfortunately, the complexity of the examples that one can deal with in this way is strictly limited, and experience proves that the accuracy of the assessment of even simple examples is low. The only way to really test the theory is to implement it and apply it to complex examples mechanically. In other words, philosophy needs AI.

Conversely, AI needs philosophy. Our objective in building an artificial rational agent is to construct an agent that draws conclusions and makes decisions that we regard as rational. To make progress in the construction of an agent capable of functioning in a wide range of environments, we need a general characterization of what it is to make rational decisions and draw rational conclusions. That is, we need a general theory of rational cognition.

This web page describes the current state of development of the general theory of rational cognition and its implementation in OSCAR. The jumping-off point for this theory is the theory of rational agency developed in my book Cognitive Carpentry (MIT Press, 1995), but a great deal of the material presented will be of a more recent vintage. A more philosophical discussion of some of this material can be found in Contemporary Theories of Knowledge, 2nd edition, John L. Pollock and Joe Cruz, Rowman and Littlefield, 1999.

PDF and PPT files

This site contains links to a number of files that can either be viewed in your web browser or downloaded. The main presentations are in the form of .ppt.zip files, which are zipped PowerPoint slide shows. You can download these and view them on your own computer using PowerPoint.

There are also links to papers that expand on the material discussed in the presentations. The papers are generally .pdf files.

For a number of the presentations, there is also associated LISP code, which can be downloaded and run in any Common LISP environment. However, the LISP platform of choice is Macintosh Common LISP.


Rational Cognition in OSCAR

A brief overview of the material presented on this site can be found in:

"Rational Cognition in OSCAR", in the proceedings of ATAL-99, Springer Verlag. You can download the associated powerpoint slides.


1. Evaluating Agent Architectures

OSCAR is a complex architecture for the construction of artificial rational agents. Before undertaking the construction of such an architecture, it is desirable to consider how a proposed architecture is to be evaluated. Once we have constructed an architecture, how do we know that it is any good?

A distinction can be made between two kinds of agents. Anthropomorphic agents are those that can help human beings rather directly in their intellectual endeavors. These endeavors consist of decision making and data processing. An agent that can help humans in these enterprises must make decisions and draw conclusions that are rational by human standards of rationality. Goal-oriented agents are those that can carry out certain narrowly-defined tasks in the world. Here the objective is to get the job done, and it makes little difference how the agent achieves its design goal. OSCAR is an architecture for anthropomorphic agents, and it is argued that such agents must engage in cognition that is at least loosely modeled on human rational cognition.

Download powerpoint slides.

Relevant papers:

Material on Bayesian epistemology from chapter four of Contemporary Theories of Knowledge, 2nd edition, John L. Pollock and Joe Cruz, Rowman and Littlefield, 1999.

2. Two Concepts of Rationality

The traditional philosophical methodology for constructing theories of rationality involves the use of "thought experiments" and "philosophical intuitions". It is argued that this is best understood as producing a competence theory of human cognition, in the same sense that linguists propose grammatical theories as competence theories of language production.

This methodology produces a theory that incorporates idiosyncratic features of human cognition. For example, psychologists tell us that humans have modus ponens as a built-in inference rule, but not modus tollens. But surely there would be nothing wrong with building modus tollens into an artificial agent. This leads to a more generic concept of rationality better suited to the purposes of AI.
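The contrast between the two rules is easy to make concrete. The following is only an illustrative sketch (OSCAR itself is written in Common LISP; the representation of beliefs as tuples here is hypothetical), showing that modus tollens is just as mechanical as modus ponens, so nothing bars building it into an artificial agent:

```python
# Beliefs are atoms (strings), negations ("not", P), or conditionals ("if", P, Q).

def modus_ponens(beliefs):
    """From 'if P then Q' and P, infer Q."""
    new = set()
    for b in beliefs:
        if isinstance(b, tuple) and b[0] == "if" and b[1] in beliefs:
            new.add(b[2])
    return new

def modus_tollens(beliefs):
    """From 'if P then Q' and not-Q, infer not-P."""
    new = set()
    for b in beliefs:
        if isinstance(b, tuple) and b[0] == "if" and ("not", b[2]) in beliefs:
            new.add(("not", b[1]))
    return new
```

Given {"if rain then wet", "rain"}, modus ponens yields "wet"; given {"if rain then wet", "not wet"}, modus tollens yields "not rain".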

Download powerpoint slides.

Relevant papers:

Chapter One of Rational Cognition, a book in the works.

"Irrationality and Cognition". Presented at the Inland Northwest Philosophy Conference on Knowledge and Skepticism, held April 30-May 2, 2004, in Moscow, ID and Pullman, WA. The strategy of this paper is to throw light on rational cognition and epistemic justification by examining irrationality. I argue that practical irrationality derives from a general difficulty we have in overriding conditioned likings. Epistemic irrationality is possible because we are reflexive cognizers, able to reason about and redirect some aspects of our own cognition. This has the consequence that practical irrationality can affect our epistemic cognition. I argue that all epistemic irrationality can be traced to this single source. The upshot is that one cannot give a theory of epistemic rationality or epistemic justification without simultaneously giving a theory of practical rationality. A consequence of this account is that a theory of rationality is a descriptive theory, describing contingent features of a cognitive architecture, and it forms the core of a general theory of "voluntary" cognition -- those aspects of cognition that are under voluntary control. It also follows that most of the so-called "rules for rationality" that philosophers have proposed are really just rules describing default (non-reflexive) cognition. It can be perfectly rational for a reflexive cognizer to break these rules. The "normativity" of rationality is a reflection of a built-in feature of reflexive cognition -- when we detect violations of rationality, we have a tendency to desire to correct them. This is just another part of the descriptive theory of rationality. Although theories of rationality are descriptive, the structure of reflexive cognition gives philosophers, as human cognizers, privileged access to certain aspects of rational cognition. Philosophical theories of rationality are really scientific theories, based on inference to the best explanation, that take contingent introspective data as the evidence to be explained.

3. A Schematic View of the OSCAR Architecture

The OSCAR architecture distinguishes between epistemic cognition -- cognition about what to believe -- and practical cognition -- cognition about what to do. The two aspects of cognition interact in several different ways.

Download powerpoint slides.

Relevant papers:

Chapter Two of Rational Cognition, a book in the works.

4. Deductive Reasoning

OSCAR's greatest virtue is its ability to reason defeasibly, but this is best understood against the background of deductive reasoning. From that perspective, OSCAR is a natural deduction theorem prover.
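The flavor of natural deduction, as opposed to resolution-style proving, can be suggested with a toy example. This is a hypothetical Python sketch, not OSCAR's reasoner (which is written in Common LISP and far more sophisticated): it proves formulas in the implicational fragment by combining backward modus ponens with conditionalization (to prove P -> Q, assume P and prove Q), the two moves most characteristic of natural deduction:

```python
# Formulas: atoms are strings; ("->", P, Q) is the conditional P -> Q.

def prove(goal, premises, depth=10):
    """Attempt a natural-deduction proof of `goal` from the set `premises`.
    `depth` bounds the search so circular conditionals cannot loop forever."""
    if depth == 0:
        return False
    if goal in premises:
        return True
    # Conditionalization: to prove P -> Q, assume P and prove Q.
    if isinstance(goal, tuple) and goal[0] == "->":
        return prove(goal[2], premises | {goal[1]}, depth - 1)
    # Backward modus ponens: to prove Q, find a premise P -> Q and prove P.
    for p in premises:
        if isinstance(p, tuple) and p[0] == "->" and p[2] == goal:
            if prove(p[1], premises, depth - 1):
                return True
    return False
```

For instance, it proves q from {p -> q, p}, and proves a -> a from no premises at all, by assuming a and discharging the assumption.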

Download powerpoint slides. LISP code and problem-set can be downloaded below.

Relevant papers:

"Natural Deduction", a technical report describing the logic underlying OSCAR's deductive reasoner.

5. Defeasible Reasoning

The general theory of defeasible reasoning that forms the basis for OSCAR, and its implementation.
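The core idea of defeat-status computation can be sketched very compactly. The following is an illustration only, under the simplifying assumption that the defeat graph is acyclic (OSCAR's actual status computation, described in the papers below, also handles collective and provisional defeat): an argument is undefeated just in case every argument that defeats it is itself defeated.

```python
def undefeated(arg, defeaters, memo=None):
    """True iff every defeater of `arg` is itself defeated.
    `defeaters` maps each argument to the set of arguments attacking it.
    Assumes the defeat graph is acyclic."""
    if memo is None:
        memo = {}
    if arg not in memo:
        memo[arg] = all(not undefeated(d, defeaters, memo)
                        for d in defeaters.get(arg, ()))
    return memo[arg]
```

This captures reinstatement: if A is defeated by B, but B is in turn defeated by an undefeated C, then A comes out undefeated after all.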

Download powerpoint slides. LISP code and problem-set can be downloaded below.

Relevant papers:

Chapter Three of Cognitive Carpentry.

"Defeasible Reasoning with Variable Degrees of Justification", John L. Pollock, proposing modifications to OSCAR's defeat status computation.

"Defeasible reasoning". To appear in Reasoning: Studies in Human Inference and its Foundations, (eds.) Lance Rips and Jonathan Adler, Cambridge Univ. Press. This provides an overview of the OSCAR theory of defeasible reasoning and its use in solving the frame problem.

"Logics for defeasible argumentation" by Henry Prakken & Gerard Vreeswijk, a very nice comparison of different logics of defeasible argumentation.

6. Reasoning Defeasibly about Perception and Time

OSCAR's defeasible reasoner is employed to enable an autonomous agent to reason about the world around it.

Download powerpoint slides. LISP code and problem-set can be downloaded below.

Relevant papers:

"Perceiving and reasoning about a changing world", Computational Intelligence 14 (1998), 498-562.

7. Reasoning Defeasibly about Causation -- the Frame Problem

To plan for making the world more to its liking, an agent must be able to reason about the causal consequences of its actions, and more generally, about the causal consequences of events in the world. This gave rise to the Frame Problem. OSCAR incorporates a simple solution to the Frame Problem.
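The intuition behind temporal projection can be illustrated with a small sketch. This is not OSCAR's solution (which is a defeasible inference scheme, so its conclusions can be defeated by conflicting information); it is a hypothetical Python illustration of the underlying default: a fluent is presumed to retain the value it had at its most recent observation.

```python
def project(fluent, t, observations):
    """Presumed truth value of `fluent` at time `t`, inherited defeasibly
    from the latest observation at or before t. `observations` maps
    (fluent, time) pairs to True/False. Returns None if never observed."""
    times = [s for (f, s) in observations if f == fluent and s <= t]
    if not times:
        return None
    return observations[(fluent, max(times))]
```

For example, if the door was observed open at time 0 and closed at time 5, the agent presumes it open at time 3 and closed at time 7, pending any defeating information.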

Download powerpoint slides. LISP code and problem-set can be downloaded below.

Relevant papers:

"Perceiving and reasoning about a changing world", Computational Intelligence 14 (1998), 498-562.

8. Goal-Regression Planning in Autonomous Agents

Traditional planning algorithms produce recursively enumerable sets of plans, but it is argued that this is impossible for autonomous rational agents operating in complex environments. Planning must instead be done defeasibly.
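To fix ideas, here is what a single goal-regression step looks like in a STRIPS-style setting. This is only an illustrative sketch with a hypothetical action representation; the point of the paper below is precisely that, for autonomous agents, such steps must be drawn as defeasible conclusions rather than steps of an exhaustive algorithm.

```python
def regress(goals, action):
    """Regress a set of goals backward through an action.
    `action` is (preconditions, add_effects, delete_effects), each a set.
    Returns the subgoals that must hold before the action, or None if the
    action deletes one of the goals (i.e., clobbers it)."""
    preconds, adds, deletes = action
    if goals & deletes:
        return None                       # action undoes a goal
    return (goals - adds) | preconds      # remaining goals plus preconditions
```

For instance, regressing the goal {door_open} through an action with precondition {have_key} and add-effect {door_open} yields the subgoal set {have_key}.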

Download powerpoint slides.

Relevant papers:

"The logical foundations of goal-regression planning in autonomous agents", Artificial Intelligence 106 (1998), 267-335.

9. The OSCAR Planner

The OSCAR planner turns out to be surprisingly efficient.

Download powerpoint slides, reason-schemas. LISP code and problem-set can be downloaded below.

10. Adopting Plans -- Decision Theoretic Planning

Classical planning aims at the construction of plans, but plans are not automatically adoptable even if they will achieve their goals (with appropriate probabilities). We must worry about the probability that the goals will be achieved, the value of the goals, and the execution costs. In other words, planning must be "decision-theoretic". A general theory of decision-theoretic planning is developed.
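The three quantities just mentioned combine in the obvious way in the simplest, direct expected-value assessment of a plan. The sketch below (hypothetical, not from OSCAR) shows that naive computation; note that the paper cited below argues that plans cannot be adopted on this basis alone, since plans differ in scope and interact with previously adopted plans, which is what motivates "locally global planning":

```python
def plan_expected_value(goal_outcomes, execution_cost):
    """Naive expected value of a plan.
    `goal_outcomes` is a list of (probability_of_achieving_goal, value_of_goal)
    pairs; `execution_cost` is the expected cost of executing the plan."""
    return sum(p * v for p, v in goal_outcomes) - execution_cost
```

For a plan with a 0.8 chance at a goal worth 10, a 0.5 chance at a goal worth 4, and execution cost 3, this gives 0.8*10 + 0.5*4 - 3 = 7.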

Download powerpoint slides.

Relevant papers:

"Against Optimality: The Logical Foundations of Decision-Theoretic Planning". Computational Intelligence. "This paper investigates decision-theoretic planning in sophisticated autonomous agents operating in environments of real-world complexity. An example might be a planetary rover exploring a largely unknown planet. It is argued that existing algorithms for decision-theoretic planning are based on a logically incorrect theory of rational decision making. Plans cannot be evaluated directly in terms of their expected values, because plans can be of different scopes, and they can interact with other previously adopted plans. Furthermore, in the real world, the search for optimal plans is completely intractable. An alternative theory of rational decision making is proposed, called 'locally global planning'."

This is also the main topic of my new book, Thinking about Acting: Logical Foundations for Rational Decision-Making, Oxford University Press, 2006.

11. Conclusions




OSCAR is written in Common LISP and has been tested in a variety of LISP environments. The preferred environment is Macintosh Common LISP, where it supports graphics not available in other environments. The source code can be downloaded below. The files were prepared on a Mac; the code should run on a Mac, PC, or UNIX, but the line breaks will have to be converted for use with Windows or UNIX. Start by opening the file "OSCAR-loader.lisp" and editing the oscar-pathname as instructed. Loading that file then loads the other files.

THIS IS NOT FREEWARE. The software is copyrighted by John L. Pollock.

Permission is granted to use this software for educational and research purposes. For use in a commercial product, a license must be obtained from John L. Pollock.

UPDATE 9/8/2013. With John's passing, any intellectual property rights, including patents, associated with OSCAR and the OSCAR project were transferred to his widow, Lilian Jacques. It is Lilian's wish that the materials be made available under similar terms; that is:

Permission is granted to use this software for educational and research purposes. For use in a commercial product, a license must be obtained from Lilian Jacques [support@johnpollock.us].

Download Code (last updated on 8/04/05)

The OSCAR Manual

This describes the details of the implementation.