Some Notes on Reasoning
by Sunil Vemuri
Propositional Logic
For a more detailed review of Propositional Logic, see Genesereth's CS 157 Course Notes.
Some stuff to know:
- Propositional Logic has Logical constants, but no variables, predicates, or functions
- Each Logical constant can be interpreted as either True or False
- Operators include negation (-), conjunction (&), disjunction (|), implication (=>), reverse implication (<=), equivalence (<=>)
- The satisfiability of a propositional logic statement can be tested using a truth table (see the sketch after this list). If a given propositional logic sentence has n logical constants, the truth table has 2^n entries. Since we can enumerate all possible interpretations of all variables, the problem of satisfiability in propositional logic is decidable.
- Validities (Double Negation, De Morgan's Laws, Implication definition, Implication introduction, Implication Distribution, Contradiction)
- Inference rules (Modus Ponens, Modus Tollens, And Introduction, And Elimination)
- Metatheorems: Allow us to determine if something is provable without actually performing a proof. This is nice because doing proofs is time consuming and if we are only interested in knowing whether a sentence is true, we can use metatheorems.
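Since truth-table satisfiability checking just enumerates interpretations, it is easy to sketch in code. Below is a minimal Python sketch, assuming (purely for illustration, not as any standard representation) that a sentence is encoded as a Python function from an interpretation dict to a boolean:

```python
# A minimal sketch of truth-table satisfiability checking.
# A sentence is represented as a function from an interpretation
# (a dict mapping constant names to booleans) to a boolean; this
# encoding is an assumption for illustration only.
from itertools import product

def satisfiable(sentence, constants):
    """Enumerate all 2^n interpretations of the logical constants
    and report whether any of them satisfies the sentence."""
    for values in product([True, False], repeat=len(constants)):
        interpretation = dict(zip(constants, values))
        if sentence(interpretation):
            return True, interpretation
    return False, None

# Example: (P | Q) & -P is satisfiable (P=False, Q=True).
sat, model = satisfiable(lambda i: (i["P"] or i["Q"]) and not i["P"],
                         ["P", "Q"])
print(sat, model)  # True {'P': False, 'Q': True}
```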
First Order Predicate Logic (FOPL)
For a more detailed review of FOPL, see Genesereth's CS 157 Course Notes.
Some stuff to know:
- Propositional Logic is nice, but it is not expressive enough for more interesting examples.
- FOPL has object constants, variables, terms, relations, functions, universal quantifier, existential quantifier
- Ground sentences have no variables
- A variable is free if it is not within the scope of a quantifier
- A sentence is open if it has free variables, otherwise it is closed
- Interpretation, Variable Assignment, Term Assignment, Satisfiability, Models, Herbrand Universe, Herbrand Interpretation, Homomorphism.
For example, if our database contained {P(A,x), Q(B), ForAll y P(A,C) & Q(y) => R(y)}
We can clearly see that R(B) is logically entailed. However, this would take several reasoning steps. With generalized Modus Ponens (using the substitution {x <- C, y <- B}), we can do it in one reasoning step; a small sketch follows. Intuitively, this seems like a macro operation.
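Here is a minimal Python sketch of a single generalized Modus Ponens step, simplified to ground facts (so the rule premises are matched against P(A,C) and Q(B) directly). The tuple encoding of sentences, the "?"-prefix convention for variables, and the helper names match() and gmp() are assumptions for illustration:

```python
# A minimal sketch of one generalized Modus Ponens step over ground facts.
# Sentences are tuples like ("P", "A", "C"); "?"-prefixed strings are variables.
from itertools import product

def match(pattern, fact, subst):
    """One-way matching of a rule premise against a ground fact,
    extending the substitution or returning None on failure."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    for p, f in zip(pattern[1:], fact[1:]):
        if isinstance(p, str) and p.startswith("?"):
            if subst.get(p, f) != f:
                return None
            subst = {**subst, p: f}
        elif p != f:
            return None
    return subst

def gmp(premises, conclusion, facts):
    """Try every way of matching all premises against facts at once;
    yield each instantiated conclusion."""
    for combo in product(facts, repeat=len(premises)):
        subst = {}
        for prem, fact in zip(premises, combo):
            subst = match(prem, fact, subst)
            if subst is None:
                break
        if subst is not None:
            yield tuple(subst.get(t, t) for t in conclusion)

# {P(A,C), Q(B)} with ForAll y: P(A,C) & Q(y) => R(y) yields R(B) in one step:
facts = [("P", "A", "C"), ("Q", "B")]
print(list(gmp([("P", "A", "C"), ("Q", "?y")], ("R", "?y"), facts)))
# [('R', 'B')]
```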
Soundness and Completeness
Soundness
- Source (Ginsberg p. 119, section 6.4)
- An inference procedure |- will be called sound if whenever p |- q, it is also the case that p |= q.
Completeness
- Source (Ginsberg p. 119, section 6.4)
- An inference procedure |- will be called complete if whenever p |= q, it is also the case that p |- q.
- Modus Ponens alone is sound, but not complete. From a given database S there are sentences logically entailed that cannot be proven using Modus Ponens alone.
Gödel's Completeness Theorem
Gödel's completeness theorem basically states that for first order logic, any sentence that is entailed by a set of sentences can be proven from that set, which is essentially the definition of completeness stated above.
Gödel's Incompleteness Theorem
- Source (Ginsberg p. 123, section 6.6). Ginsberg also references an "excellent popular introduction to the topic": Hofstadter, Douglas R. "Gödel, Escher, Bach: An Eternal Golden Braid." Harmondsworth, England: Penguin, 1979.
- Sufficiently powerful extensions of our description of first-order logic (including equality and the axioms of arithmetic, for example) are necessarily incomplete in that for any particular procedural scheme |-, there will always be sentences that are entailed in the sense of |= but not derivable using |-.
Gödel's Incompleteness Theorem is, in effect, a proof of the impossibility of proving everything. Gödel's completeness proof gives us completeness for first order logic in general. However, Gödel proved that when we restrict our attention to the axiomatization of arithmetic (which includes the mathematical induction schema), this axiomatization is provably incomplete. This means there are true statements of arithmetic that we cannot prove. The significance of this result is that it shows we can never prove all the theorems of mathematics within any given system of axioms. Mathematicians at the time believed otherwise, but Gödel proved them wrong.
Resolution
Introduction
- Resolution is also known as the "Two-Fingered method"
- One must first take a set of sentences S and put them in clausal form. Once in clausal form, one negates the goal g and adds the negated goal -g to the original set of sentences S, then performs resolution on the set S U {-g}. If one can infer the empty clause, then the original goal sentence g is logically entailed by the original set of sentences S. (A small propositional sketch appears after this list.)
- Resolution is sound (LFAI, Theorem 4.1, page 85)
- However, resolution is not complete in the conventional sense. For example, the tautology {P v -P} is logically implied by every database, but resolution will not produce this clause from the empty database.
- Without sentences involving the equality and inequality relations, resolution is refutation complete; i.e., given an unsatisfiable set of sentences, resolution is guaranteed to produce the empty clause (LFAI, page 85).
- Refutation completeness is based on Theorem 4.5 (page 89) in LFAI: "If a set S of clauses is unsatisfiable, then there is a resolution deduction of the empty clause from S."
- Is resolution complete? In resolution, we talk about refutation completeness, which is different from the traditional view of completeness. In the traditional view of completeness we say that if S |= a then S |- a. Resolution is not complete in this sense. For example, using resolution, one cannot derive {P, -P} from the empty database, even though it is logically entailed by the empty database.
- To handle equality with resolution, we need any one of (see LFAI, section 4.11):
- Equality axioms - add reflexivity, symmetry and transitivity axioms
- Paramodulation - guarantees refutation completeness with equality
- Demodulation - weaker form of paramodulation, but more efficient
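Here is a minimal Python sketch of propositional resolution refutation, assuming clauses are frozensets of string literals with a leading "-" marking negation. It is illustrative only, not an efficient prover:

```python
# A minimal sketch of propositional resolution refutation
# (the "two-fingered method"). Clauses are frozensets of literals.

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one literal."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def refute(clauses):
    """Return True iff the empty clause is derivable, i.e. the set
    (already including the negated goal) is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolvents(a, b):
                    if not r:          # empty clause: contradiction found
                        return True
                    new.add(frozenset(r))
        if new <= clauses:             # fixpoint: no refutation exists
            return False
        clauses |= new

# Prove R from {P, P => R} by refuting S U {-R} = {P, -P v R, -R}:
print(refute({frozenset({"P"}), frozenset({"-P", "R"}), frozenset({"-R"})}))
# True
```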
Unification and Occurs Check
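Unification finds a substitution that makes two expressions identical; the occurs check rejects binding a variable to a term containing that same variable (e.g., x = F(x)), since such a binding has no finite solution. Below is a minimal Python sketch, assuming variables are strings starting with "?" and compound terms are tuples headed by a function or relation symbol (a representation chosen for illustration):

```python
# A minimal sketch of unification with occurs check.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow variable bindings in the substitution."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """True if variable v occurs inside term t (the occurs check)."""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t[1:])
    return False

def unify(a, b, subst=None):
    """Return a most general unifier of a and b, or None on failure."""
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and \
       a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

print(unify(("P", "?x", ("F", "?y")), ("P", "A", ("F", "B"))))
# {'?x': 'A', '?y': 'B'}
print(unify("?x", ("F", "?x")))  # None -- occurs check rejects x = F(x)
```

Skipping the occurs check makes unification faster but unsound, which is exactly the trade-off Prolog makes (see below).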
Resolution Strategies
- Unconstrained resolution is somewhat inefficient. We can adopt certain strategies to make resolution more efficient. All strategies mentioned here preserve soundness. Some strategies sacrifice completeness for the sake of efficiency and some preserve completeness only for Horn clauses.
Deletion Strategies
- Pure Literal Elimination - A literal in a database is pure iff it has no instance that is complementary to an instance of another literal in the database. A clause that contains a pure literal is useless for the purpose of refutation, hence we may remove all clauses with pure literals from the database without losing completeness (LFAI, p. 96-97)
- Tautology Elimination - The presence or absence of tautologies in a set of clauses has no effect on the set's satisfiability. Hence we can eliminate tautologies.
- Subsumption Elimination - A clause M subsumes a clause N iff there exists a substitution σ such that Mσ is a subset of N. Subsumed clauses can be eliminated. (A small propositional sketch of these three strategies follows.)
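A minimal propositional Python sketch of the three deletion strategies, again assuming clauses are frozensets of string literals; in the first order case, subsumption would additionally require finding the substitution σ, but propositionally set inclusion suffices:

```python
# A minimal propositional sketch of the three deletion strategies.
# Clauses are frozensets of string literals ("-" marks negation).

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def is_tautology(clause):
    """A clause containing a complementary pair of literals."""
    return any(negate(lit) in clause for lit in clause)

def pure_literals(clauses):
    """Literals with no complementary instance anywhere in the database."""
    lits = {l for c in clauses for l in c}
    return {l for l in lits if negate(l) not in lits}

def delete(clauses):
    """Drop tautologies, clauses with pure literals, and subsumed clauses."""
    pure = pure_literals(clauses)
    kept = {c for c in clauses
            if not is_tautology(c) and not (c & pure)}
    # Propositional subsumption: a strict subset subsumes its superset.
    return {c for c in kept if not any(other < c for other in kept)}

db = {frozenset({"P", "-P"}),   # tautology
      frozenset({"Q", "R"}),    # subsumed by {Q}
      frozenset({"-Q", "R"}),
      frozenset({"Q"}),
      frozenset({"-R", "S"})}   # S is pure
print(delete(db))  # leaves {frozenset({'Q'}), frozenset({'-Q', 'R'})}
```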
Unit Resolution
- Source: LFAI p. 98
- At least one of the parent clauses is a unit clause; i.e. one containing a single literal
- Not refutation complete in general - one can have an unsatisfiable database containing no unit clauses; since there are no unit clauses, no unit resolution steps can be made.
- "it can be shown that there is a unit refutation of a set of horn clauses if and only if it is unsatisfiable"
Input Resolution
- Source: LFAI p. 99
- At least one of the two parent clauses is a member of the initial (i.e., input) database
- Not complete
- Refutation Complete for Horn clauses
Linear Resolution
- Source: LFAI p. 99
- Slight generalization of Input Resolution
- "at least one of the parents is either in the initial database or is an ancestor of the other parent"
- Look on p. 100 of LFAI for an example of this. This method isn't really clear unless you look at that diagram
- This is refutation complete
Set of Support Resolution
- Source: LFAI p. 100
- Take a subset Gamma of the initial database Delta. Gamma is called the set of support.
- "A set of support resolvent is one in which at least one parent is selected from Gamma or is a descendant of Gamma.
- "A set of support deduction is one in which each derived clause is a set of support resolvant
- Set of Support is complete if we choose the set of support carefully. If we choose a set of support such that the remaining sentences are satisfiable, then refutation completeness is guaranteed. Typically, one just declares the goal as the set of support to ensure this.
Ordered Resolution
- Source: LFAI p. 102
- "Each clause is treated as a linearly ordered set. Resolution is permitted only on the first literal of each clause"
- Completeness not guaranteed in general, but it is guaranteed for Horn clauses
- "we can get refutation completeness in general by considering resolvents in which the remaining literals from the positive parent follow the remaining literals from the negative parent, as well as the other way around."
Directed Resolution
- Source: LFAI p.102
- I didn't cover this and I'm not sure if I need to.
Prolog
- Genesereth in CS 157 covered this as part of FOPC resolution strategies. He gave a handout in class and I am using this handout and my notes from the lectures as my source (Vishal Sikka gave the lectures)
- Uses ordered input resolution on Horn clauses, hence Prolog is not complete
- Sacrifices soundness and completeness for fast execution
- It isn't sound because it does not perform the occurs check during unification
- There have been attempts to make variations of Prolog complete for FOPC by
- adding occurs check
- adding contrapositive clauses
- model elimination
Resolution with Equality
As noted above, without sentences involving the equality and inequality relations, resolution is refutation complete: given an unsatisfiable set of sentences, resolution is guaranteed to produce the empty clause (LFAI, page 85, and Theorem 4.5, page 89). Sentences involving equality require more machinery.
Paramodulation and demodulation are methods of handling equality while still allowing us to use resolution. The other common way is adding equality axioms to the database (reflexivity, symmetry, transitivity).
Paramodulation preserves refutation completeness, but demodulation sacrifices it. Demodulation is in fact a restricted form of paramodulation. Paramodulation can be computationally expensive, so demodulation is often used in its place.
So, what is paramodulation and demodulation? One way to think about demodulation is performing a substitution. For example, if we have an equality u=v, then wherever u appears in a clause, we can replace it with v. Well, it's not that simple, but it's close. Using the example found in Genesereth's notes:
{P(F(F(A)),B), Q(B)}
{F(F(x))=G(x)}
From here, we can substitute G(x) wherever we find F(F(x)). F(F(x)) does not appear exactly, but we can perform a substitution x<-A on the equality clause and get F(F(A))=G(A). Now, wherever we find F(F(A)) in the first clause, we can replace it with G(A). The resulting clause is:
{P(G(A),B), Q(B)}
There are a few restrictions on demodulation. First, the equality clause must be a unit clause. So, we cannot do demodulation if the clause was {F(F(x))=G(x), R(x)}. Second, when we unify the clauses, we can only perform the substitution on the unit equality clause. (A small sketch of demodulation as term rewriting follows.)
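Here is a minimal Python sketch of demodulation viewed as one-way term rewriting, reusing the tuple term encoding from the unification sketch above; the helper names are assumptions for illustration:

```python
# A minimal sketch of demodulation: rewrite with a unit equality lhs = rhs,
# applying the substitution only to the equality's variables.

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def match(pattern, term, subst):
    """One-way matching: bind pattern variables so that pattern == term."""
    if is_var(pattern):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) and \
       pattern[0] == term[0] and len(pattern) == len(term):
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(term, subst):
    if is_var(term):
        return subst.get(term, term)
    if isinstance(term, tuple):
        return (term[0],) + tuple(substitute(t, subst) for t in term[1:])
    return term

def demodulate(term, lhs, rhs):
    """Rewrite each outermost subterm of `term` that matches lhs into rhs."""
    subst = match(lhs, term, {})
    if subst is not None:
        return substitute(rhs, subst)
    if isinstance(term, tuple):
        return (term[0],) + tuple(demodulate(t, lhs, rhs) for t in term[1:])
    return term

# F(F(A)) = G(A) under x <- A, so P(F(F(A)),B) rewrites to P(G(A),B):
clause = ("P", ("F", ("F", "A")), "B")
print(demodulate(clause, ("F", ("F", "?x")), ("G", "?x")))
# ('P', ('G', 'A'), 'B')
```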
Paramodulation relaxes these restrictions. Obviously demodulation is not complete because if we have any non-unit clauses with equality, we would not be able to prove some facts that are in fact entailed.
LFAI does not cover this in any detail, but Genesereth does cover it in his CS 157 Course Notes and it is covered in Russell & Norvig p. 284.
Forward vs. Backward Reasoning
For logical reasoning tasks, one can either search forward from the initial state (forward chaining) or search backward from the goal state (backward chaining).
Forward Chaining
According to Rich and Knight, "Begin by building a tree of move sequences that might be solutions by starting iwth the initial configuration(s) at the root of the tree. Generate the next level of the tree by finding all the rules whose left sides match the root node and using their right sides to create the new configurations. Generate the next level by taking each node generated at the previous level and applying to it all of the rules whose left sides match it. Continue until a configuration that matches the goal state is generated."
Backward Chaining
According to Rich and Knight, "Begin building a tree of move sequences that might be solutions by starting with the goal configuration(s) at the root of the tree. Generate the next level of the tree by finding all the rules whoseright sides match the root node. These are all the rules that, if only we could apply them, would generate the state we wanted. Use the left sides of the rules to generate the nodes at this second level of the tree. Generate the next level of the tree by taking each node at the previous level and finding all the rules whose right sides match it. Then using the corresponding left sides to generate the new nodes. Continue until a node that matches the initial state is generated. This method of reasoning backward from the desired final state is often called goal-directed reasoning ."
Which is better?
Rich and Knight cite four factors:
- Are there more possible start states or goal states? We want to move from the smaller to the larger
- In which direction is the branching factor? We want to proceed in the direction of the smaller branching factor
- Will the program be asked to justify its reasoning process to the user? Proceed in the direction that corresponds to the way the user thinks
- What kind of event is going to trigger a problem-solving episode? If a new fact triggers reasoning, forward chaining is better. If a query triggers reasoning, backward chain.
Rich and Knight cite theorem proving as an application more suited to backward chaining because the branching factor is significantly greater going forward from the axioms than it is going backward from the theorems to the axioms. For example, Prolog uses backward chaining.
Rich and Knight cite symbolic integration as a domain suited for forward chaining for similar branching factor reasons. Symbolic integration involves the problem of taking a mathematical expression with integrals and finding an equivalent integral-free expression. Also, production systems are forward chaining.
Opportunistic Reasoning
There is also the notion of opportunistic reasoning, where one chooses either backward or forward reasoning at the most "opportune" time. According to Penny Nii, "[The] blackboard systems model of problem solving is a highly structured special case of opportunistic problem solving. In addition to opportunistic reasoning as a knowledge-application strategy, the blackboard model prescribes the organization of the domain knowledge and all the input and intermediate partial solutions needed to solve the problem."
References: Rich & Knight, Winston's AI Text, Penny Nii's Blackboard Systems paper.
Situation Calculus
Reasoning about action.
Frame Problem
Qualification Problem
Ramification Problem
Higher Order Logics
First order logics can quantify over objects in the universe of discourse. Second order logics can quantify over functions and relations in the universe of discourse. (LFAI, p. 20)
"The language of (full) second order logic is simply the language of first order logic augmented with second order variables , that is, variables ranging over relations and functions (of all arities)." [Leivant]
Some examples of second order logic statements would be:
x = y <--> ForAll R (R(x) <--> R(y))
(ForAll P)(Exists x)(P(x) --> (ForAll x) P(x))
Here, we are quantifying over the relation R, whereas in first order logic, this would not be allowed. The consequence is we have a language which is more expressive than first order logic. The notion of satisfiability must be extended to handle quantified functions and predicates.
So, what are some neat things we can do with second order logic that we cannot do with first order? Leivant states a theorem: "The collection of finite structures is not first order definable, but is definable already by a single second order sentence":
ForAll f (Inj(f) --> Surj(f)), where
Inj(f) is defined to be ForAll x,y (f(x) = f(y) --> x = y)
Surj(f) is defined to be ForAll u Exists v (f(v) = u)
This sentence holds in exactly the finite structures: a domain is finite iff every injective function from the domain to itself is also surjective.
Non-Monotonic Reasoning
Closed World Assumption
Inheritance Based Non-Logical Formalisms
Semantic Nets
Frames
Non-Monotonic Logics
Non-Monotonic Logic
- McDermott & Doyle, 1980
Default Logic
- Reiter, 1980
Circumscription
- McCarthy, 1980 & 1986
Autoepistemic Logic
- Moore, 1985