Scalable Knowledge Representation and Reasoning Systems
Henry Kautz
AT&T Shannon Laboratories
Introduction
In recent years, we've seen substantial progress in scaling up knowledge representation and reasoning systems
Shift from toy domains to real-world applications
• autonomous systems - NASA Remote Agent
• "just in time" manufacturing - I2, PeopleSoft
• deductive approaches to verification - Nitpick (D. Jackson), bounded model checking (E. Clarke)
• solutions to open problems in mathematics - group theory (W. McCune, H. Zhang)
New emphasis on propositional reasoning and search
Approaches to Scaling Up KR&R
Traditional approach: specialized languages / specialized reasoning algorithms
• difficult to share / evaluate results
New direction:
• compile combinatorial reasoning problems into a common propositional form (SAT)
• apply new, highly efficient general search engines
[Diagram: Combinatorial Task → SAT Encoding → SAT Solver → Decoder]
Methodology
Compare with use of linear / integer programming packages:
• emphasis on mathematical modeling
• after modeling, problem is handed to a state of the art solver
Compare with reasoning under uncertainty:
• convergence to Bayes nets and MDPs
Would a specialized solver not be better?
Perhaps theoretically, but often not in practice
Rapid evolution of fast solvers:
• 1990: 100-variable hard SAT problems
• 1999: 10,000 - 100,000 variables
• competitions encourage sharing of algorithms and implementations
Germany 91 / China 96 / DIMACS-93/97/98
Encodings can compensate for much of the loss due to going to a uniform representation
Two Kinds of Knowledge Compilation
Compilation to a tractable subset of logic:
• shift inference costs offline
• guaranteed fast run-time response
E.g.: real-time diagnosis for NASA Deep Space One - 35 msec response time!
• fundamental limits to tractable compilation
Compilation to a minimal combinatorial core:
• can reduce SAT size by compiling together problem spec & control knowledge
• inference for core still NP-hard
• new randomized SAT algorithms - low exponential growth
E.g.: optimal planning with 10^18 states!
OUTLINE
I. Compilation to tractable languages
• Horn approximations
• Fundamental limits
II. Compilation to a combinatorial core
• SATPLAN
III. Improved encodings
• Compiling control knowledge
IV. Improved SAT solvers
• Randomized restarts
Consider problem of determining if a query follows from a knowledge base
KB ⊨ q ?
Highly expressive KB languages make querying intractable
( ignition_on & engine_off ) → ( battery_dead V tank_empty )
• require general CNF - query answering is NP-complete
Less expressive languages allow polynomial time query answering
• Horn clauses, binary clauses, DNF
Expressiveness vs. Complexity Tradeoff
Tractable Knowledge Compilation
Goal: guaranteed fast online query answering
• cost shifted to offline compilation
Exact compilation often not possible
Can approximate original theory, yet retain soundness / completeness for queries
(Kautz & Selman 1993, 1996, 1999; Papadimitriou 1994)
[Diagram: expressive source language → tractable target language]
Example: Compilation into Horn
Source: clausal propositional theories
• Inference: NP-complete
– example: (~a V ~b V c V d)
– equivalently: (a & b) → (c V d)
Target: Horn theories
• Inference: linear time
– at most one positive literal per clause
– example: (a & b) → c
• strictly less expressive
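A minimal sketch of why Horn inference is linear time: forward chaining with a per-clause counter of unsatisfied body atoms touches each body literal once. The data layout (body-set/head pairs, `None` head for all-negative clauses) is an illustrative assumption, not from the talk.

```python
def horn_entails(clauses, query):
    """clauses: list of (body, head) where body is a set of atoms and
    head is an atom, or None for an all-negative clause.
    Returns True if the conjunction of the clauses entails `query`."""
    count = [len(body) for body, _ in clauses]      # unsatisfied body atoms
    watch = {}                                      # atom -> clause indices
    for i, (body, _) in enumerate(clauses):
        for a in body:
            watch.setdefault(a, []).append(i)
    agenda = [head for body, head in clauses if not body and head]
    derived = set()
    while agenda:
        a = agenda.pop()
        if a in derived:
            continue
        derived.add(a)
        if a == query:
            return True
        for i in watch.get(a, []):                  # bodies a appears in
            count[i] -= 1
            if count[i] == 0 and clauses[i][1] is not None:
                agenda.append(clauses[i][1])        # head becomes derivable
    return query in derived

# facts a, b plus the Horn clause (a & b) -> c entail c:
kb = [(set(), 'a'), (set(), 'b'), ({'a', 'b'}, 'c')]
```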
Horn Bounds
Idea: compile CNF into a pair of Horn theories that approximate it
• Model = truth assignment which satisfies a theory
Can logically bound theory from above and below:
LB ⊨ S ⊨ UB
• lower bound = fewer models = logically stronger
• upper bound = more models = logically weaker
BEST bounds: LUB and GLB
S ⊨ q ?
If LUB ⊨ q then S ⊨ q (linear time)
If GLB ⊭ q then S ⊭ q (linear time)
Otherwise, use S directly (or return "don't know")
Queries answered in linear time lead to improvement in overall response time over a series of queries
Using Approximations for Query Answering
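The three-step bound-based query procedure above can be sketched as follows. The brute-force `models`/`entails` helpers are stand-ins: in practice the LUB/GLB tests are the linear-time Horn procedure and the fallback is a full (NP-hard) test. The clause representation is an assumption for illustration.

```python
from itertools import product

def models(clauses, atoms):
    """All assignments over `atoms` satisfying a clause set; each clause
    is a list of (atom, sign) pairs, e.g. [('a', True), ('b', False)]
    for (a V ~b)."""
    out = []
    for vals in product([True, False], repeat=len(atoms)):
        m = dict(zip(atoms, vals))
        if all(any(m[a] == s for a, s in c) for c in clauses):
            out.append(m)
    return out

def entails(clauses, atoms, q):
    return all(m[q] for m in models(clauses, atoms))

def query(lub, glb, source, atoms, q):
    if entails(lub, atoms, q):        # LUB |= q implies S |= q
        return True
    if not entails(glb, atoms, q):    # GLB |/= q implies S |/= q
        return False
    return entails(source, atoms, q)  # fall back to the original theory

# S = (a V b); its Horn GLB can be taken as {a}, its Horn LUB is empty
source = [[('a', True), ('b', True)]]
lub, glb = [], [[('a', True)]]
```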
Computing Horn Approximations
Theorem: Computing LUB or GLB is NP-hard
• Amortize cost over total set of queries
• Query-algorithm still correct if weaker bounds are used
anytime computation of bounds desirable
Computing the GLB
Horn strengthening:
• r → (p V q) has two Horn-strengthenings:
r → p
r → q
• Horn-strengthening of a theory = conjunction of one Horn-strengthening of each clause
• Theorem: Each LB of S is equivalent to some Horn-strengthening of S.
Algorithm: search space of Horn-strengthenings for a local maximum (GLB)
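A brute-force illustration of the search over Horn-strengthenings. A real GLB computation uses greedy local search; here we simply take the strengthening with the most models, i.e. the logically weakest lower bound. Names and the clause encoding are invented for illustration.

```python
from itertools import product

def horn_strengthenings(clause):
    """clause: list of (atom, sign) pairs. Keep at most one positive literal."""
    pos = [l for l in clause if l[1]]
    neg = [l for l in clause if not l[1]]
    if len(pos) <= 1:
        return [clause]
    return [neg + [p] for p in pos]

def count_models(clauses, atoms):
    """Number of satisfying assignments (weaker theory = more models)."""
    return sum(
        all(any(dict(zip(atoms, vals))[a] == s for a, s in c) for c in clauses)
        for vals in product([True, False], repeat=len(atoms))
    )

def glb(theory, atoms):
    """Exhaustively pick one Horn-strengthening per clause, keep the
    combination with the most models."""
    best, best_n = None, -1
    for choice in product(*(horn_strengthenings(c) for c in theory)):
        n = count_models(list(choice), atoms)
        if n > best_n:
            best, best_n = list(choice), n
    return best

# r -> (p V q), i.e. (~r V p V q), strengthens to (~r V p) or (~r V q)
theory = [[('r', False), ('p', True), ('q', True)]]
```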
Computing the LUB
Basic strategy:
• Compute all resolvents of original theory, and collect all Horn resolvents
Problem:
• Even a Horn theory can have exponentially many Horn resolvents
Solution:
• Resolve only pairs of clauses where exactly one clause is Horn
• Theorem: Method is complete
Properties of Bounds
GLB
• Anytime algorithm
• Not unique - any GLB may be used for query answering
• Size of GLB ≤ size of original theory
LUB
• Anytime algorithm
• Is unique
• No space blow-up for Horn
• Can construct non-Horn theories with exponentially larger LUB
Empirical Evaluation
1. Hard random theories, random queries
2. Plan-recognition domain
e.g.: query (obs1 & obs2) → (goal1 V goal2) ?
Time to answer 1000 queries:
           original   with bounds
rand100       340          45
rand200      8600          51
plan500      8950         620
• Cost of compilation amortized in less than 500 queries
Limits of Tractable Compilation
Some theories have an exponentially-larger clausal form LUB
QUESTION: Can we always find a clever way to keep the LUB small (new variables, non-clausal form, structure sharing, ...)?
Theorem: There do exist theories whose Horn LUB is inherently large:
• any representation that enables polytime inference is exponentially large
– Proof based on non-uniform circuit complexity - if false, polynomial hierarchy collapses to the second level
Other Tractable Target Languages
Model-based representations
• (Kautz & Selman 1992, Dechter & Pearl 1992, Papadimitriou 1994, Roth & Khardon 1996, Mannila 1999, Eiter 1999)
Prime Implicates
• (Reiter & DeKleer 1987, del Val 1995, Marquis 1996, Williams 1998)
Compilation from nonmonotonic logics
• (Nerode 1995, Cadoli & Donini 1996)
Similar limits to compilability hold for all!
Truly Combinatorial Problems
Tractable compilation not a universal solution for building scalable KRR systems
• often useful, but theoretical and empirical limits
• not applicable if you only care about a single query: no opportunity to amortize cost of compilation
Sometimes must face NP-hard reasoning problems head on
• will describe how advances in modeling and SAT solvers are pushing the envelope of the size of problems that can be handled in practice
Example: Planning
Planning: find a (partially) ordered set of actions that transform a given initial state to a specified goal state.
• in most general case, can cover most forms of problem solving
• scheduling: fixes set of actions, need to find optimal total ordering
• planning problems typically highly non-linear, require combinatorial search
Some Applications of Planning
Autonomous systems
• Deep Space One Remote Agent (Williams & Nayak 1997)
• Mission planning (Muscettola 1998)
Natural language understanding
• TRAINS (Allen 1998) - mixed initiative dialog
Software agents
• Softbots (Etzioni 1994)
• Goal-driven characters in games (Nareyek 1998)
• Help systems - plan recognition (Kautz 1989)
Manufacturing
• Supply chain management (Crawford 1998)
Software understanding / verification
• Bug-finding (goal = undesired state) (Jackson 1998)
State-space Planning
State = complete truth assignment to a set of variables (fluents)
Goal = partial truth assignment (set of states)
Operator = a partial function State → State, specified by three sets of variables: precondition, add list, delete list
(STRIPS, Fikes & Nilsson 1971)
Abundance of Negative Complexity Results
I. Domain-independent planning: PSPACE-complete or worse
• (Chapman 1987; Bylander 1991; Backstrom 1993)
II. Bounded-length planning: NP-complete
• (Chenoweth 1991; Gupta and Nau 1992)
III. Approximate planning: NP-complete or worse
• (Selman 1994)
Practice
Traditional domain-independent planners can generate plans of only a few steps.
Most practical systems try to eliminate search:
• Tractable compilation
• Custom, domain-specific algorithms
Scaling remains problematic when state space is large or not well understood!
Planning as Satisfiability
SAT encodings are designed so that plans correspond to satisfying assignments
Use recent efficient satisfiability procedures (systematic and stochastic) to solve
Evaluate performance on benchmark instances
SATPLAN
[Diagram: problem description + plan length → axiom schemas → instantiate → propositional clauses → SAT engine(s) → satisfying model → interpret → plan]
SAT Encodings
Propositional CNF: no variables or quantifiers
Sets of clauses specified by axiom schemas
• Use modeling conventions (Kautz & Selman 1996)
• Compile STRIPS operators (Kautz & Selman 1999)
Discrete time, modeled by integers
• upper bound on number of time steps
• predicates indexed by time at which fluent holds / action begins
– each action takes 1 time step
– many actions may occur at the same step
fly(Plane, City1, City2, i) → at(Plane, City2, i+1)
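Grounding such an effect schema into propositional clauses can be sketched as below. The objects, horizon, and helper names are invented for illustration; each ground implication becomes one CNF clause.

```python
from itertools import product

planes, cities, horizon = ['p1'], ['c1', 'c2'], 2

def lit(pred, *args):
    """Name a ground fluent/action atom, e.g. 'at(p1,c2,1)'."""
    return pred + '(' + ','.join(map(str, args)) + ')'

clauses = []
for plane, c1, c2, i in product(planes, cities, cities, range(horizon)):
    if c1 == c2:
        continue
    # fly(plane,c1,c2,i) -> at(plane,c2,i+1), in clause form:
    # ~fly(plane,c1,c2,i) V at(plane,c2,i+1)
    clauses.append(['~' + lit('fly', plane, c1, c2, i),
                    lit('at', plane, c2, i + 1)])
```

With one plane, two cities, and two time steps this yields four ground clauses.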
Solution to a Planning Problem
A solution is specified by any model (satisfying truth assignment) of the conjunction of the axioms describing the initial state, goal state, and operators
Easy to convert back to a STRIPS-style plan
Satisfiability Testing Procedures
Systematic, complete procedures
• Davis-Putnam (DP)
– backtrack search + unit propagation (1961)
– little progress until 1993 - then explosion of improved algorithms & implementations
satz (1997) - best branching heuristic
See SATLIB 1998 / Hoos & Stutzle:
csat, modoc, rel_sat, sato, ...
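A minimal sketch of the DP core named above: backtrack search plus unit propagation. It assumes DIMACS-style integer literals and a naive branching rule, not the satz heuristic.

```python
def dpll(clauses, assignment=None):
    """clauses: list of lists of ints (positive/negative literals).
    Returns a satisfying set of literals, or None if unsatisfiable."""
    assignment = set(assignment or [])
    clauses = [c for c in clauses]
    while True:                          # unit propagation
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment.add(lit)
        new = []
        for c in clauses:
            if lit in c:
                continue                 # clause satisfied, drop it
            if -lit in c:
                c = [l for l in c if l != -lit]
                if not c:
                    return None          # empty clause: conflict
            new.append(c)
        clauses = new
    if not clauses:
        return assignment                # all clauses satisfied
    v = clauses[0][0]                    # branch on some literal
    for lit in (v, -v):
        r = dpll(clauses + [[lit]], assignment)
        if r is not None:
            return r
    return None
```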
Stochastic, incomplete procedures
• Walksat (Kautz, Selman & Cohen 1993)
– greedy local search + noise to escape local minima
– outperforms systematic algorithms on random formulas, graph coloring, … (DIMACS 1993, 1997)
Walksat Procedure
Start with random initial assignment.
Pick a random unsatisfied clause.
Select and flip a variable from that clause:
• With probability p, pick a random variable.
• With probability 1-p, pick greedily: a variable that minimizes the number of unsatisfied clauses.
Repeat until time limit reached.
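The loop above, sketched directly in Python. DIMACS-style integer literals and the parameter names are assumptions; this is an illustration, not the original implementation.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    rng = random.Random(seed)
    # random initial assignment
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda c: any(assign[abs(l)] == (l > 0) for l in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign                   # all clauses satisfied
        c = rng.choice(unsat)               # a random unsatisfied clause
        if rng.random() < p:
            v = abs(rng.choice(c))          # noise: random variable
        else:                               # greedy: fewest unsat clauses
            def cost(v):
                assign[v] = not assign[v]   # tentatively flip
                bad = sum(not sat(cl) for cl in clauses)
                assign[v] = not assign[v]   # undo
                return bad
            v = min((abs(l) for l in c), key=cost)
        assign[v] = not assign[v]           # flip the chosen variable
    return None                             # time limit reached
```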
Planning Benchmark Test Set
Extension of Graphplan benchmark set
Graphplan (Blum & Furst 1995) - best domain-independent state-space planning algorithm
logistics - complex, highly-parallel transportation domain, ranging up to
• 14 time slots, unlimited parallelism
• 2,165 possible actions per time slot
• optimal solutions containing 150 distinct actions
Problems of this size (10^18 configurations) not previously handled by any state-space planning system
Scaling Up Logistics Planning
[Plot: solution time (log scale, 0.01-10000) for Graphplan, DP, DP/Satz, and Walksat on rocket.a, rocket.b, log.a, log.b, log.c, log.d]
What SATPLAN Shows
General propositional theorem provers can compete with state of the art specialized planning systems
• New, highly tuned variations of DP surprisingly powerful
– result of sharing ideas and code in large SAT/CSP research community
– specialized engines can catch up, but by then new general techniques appear
• Radically new stochastic approaches to SAT can provide very low exponential scaling
– 2+ orders of magnitude speedup on hard benchmark problems
Reflects general shift from first-order & non-standard logics to propositional logic as basis of scalable KRR systems
Further Paths to Scale-Up
Efficient representations and new SAT engines extend the range of domain-independent planning
Ways for further improvement:
• Better SAT encodings
• Better general search algorithms
III. Improved Encodings: Compiling Control Knowledge
Kinds of Control Knowledge
About the domain itself
• a truck is only in one location
• airplanes are always at some airport
About good plans
• do not remove a package from its destination location
• do not unload a package and immediately load it again
About how to search
• plan air routes before land routes
• work on hardest goals first
Expressing Knowledge
Such information is traditionally incorporated in the planning algorithm itself
– or in a special programming language
Instead: use additional declarative axioms
– (Bacchus 1995; Kautz 1998; Chen, Kautz, & Selman 1999)
• Problem instance: operator axioms + initial and goal axioms + control axioms
• Control knowledge = constraints on search and solution spaces
• Independent of any search engine strategy
Axiomatic Control Knowledge
State Invariant: A truck is at only one location
at(truck,loc1,i) & loc1 ≠ loc2 → ~at(truck,loc2,i)
Optimality: Do not return a package to a location
at(pkg,loc,i) & ~at(pkg,loc,i+1) & i<j → ~at(pkg,loc,j)
Simplifying Assumption: Once a truck is loaded, it should immediately move
in(pkg,truck,i) & in(pkg,truck,i+1) & at(truck,loc,i+1) → ~at(truck,loc,i+2)
Adding Control Kx to SATPLAN
[Diagram: problem specification axioms + control knowledge axioms → instantiated clauses → SAT simplifier → SAT "core" → SAT engine]
As control knowledge increases, Core shrinks!
Logistics - Control Knowledge
[Plot: solution time (log scale, 0.01-10000) for DP, DP+Kx, Walksat, and Walksat+Kx on rocket.a, rocket.b, log.a, log.b, log.c, log.d]
Scale Up with Compiled Control Knowledge
Significant scale-up using axiomatic control knowledge
• Same knowledge useful for both systematic and local search engines
– simple DP now scales from 10^10 to 10^16 states
– order of magnitude speedup for Walksat
• Control axioms summarize general features of domain / good plans: not a detailed program!
• Obtained benefits using only "admissible" control axioms: no loss in solution quality (Cheng, Kautz, & Selman 1999)
Many kinds of control knowledge can be created automatically
• Machine learning (Minton 1988, Etzioni 1993, Weld 1994, Kambhampati 1996)
• Type inference (Fox & Long 1998, Rintanen 1998)
• Reachability analysis (Kautz & Selman 1999)
Background
Combinatorial search methods often exhibit a remarkable variability in performance. It is common to observe significant differences between:
• different heuristics
• same heuristic on different instances
• different runs of same heuristic with different random seeds
Example: SATZ
[Plot: solution time (log scale, 0.01-10000) for DP/Satz and Walksat on rocket.a, rocket.b, log.a, log.b, log.c, log.d]
Preview of Strategy
We’ll put variability / unpredictability to our advantage via randomization / averaging.
Cost Distributions
Consider distribution of running times of backtrack search on a large set of “equivalent” problem instances
• renumber variables
• change random seed used to break ties
Observation (Gomes 1997): distributions often have heavy tails
• infinite variance
• mean increases without limit
• probability of long runs decays by power law (Pareto-Levy), rather than exponentially (Normal)
Heavy-Tailed Distributions
… infinite variance … infinite mean
Introduced by Pareto in the 1920's as a "probabilistic curiosity"
Mandelbrot established the use of heavy-tailed distributions to model real-world fractal phenomena.
Examples: stock-market, earth-quakes, weather,...
How to Check for "Heavy Tails"?
Log-log plot of tail of distribution should be approximately linear.
Slope gives value of α:
• α ≤ 1: infinite mean and infinite variance
• 1 < α ≤ 2: infinite variance
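A sketch of this check on synthetic data: draw Pareto samples with a known α = 1.5 and fit the log-log slope of the empirical survival function over the tail. The sample size and 5% tail fraction are arbitrary choices for illustration.

```python
import math, random

rng = random.Random(42)
samples = sorted(rng.paretovariate(1.5) for _ in range(20000))
n = len(samples)

# log-log points of the empirical survival function P(X > x),
# restricted to the largest 5% of the sample (the tail)
pts = [(math.log(x), math.log(1 - (i + 1) / n))
       for i, x in enumerate(samples)
       if i >= int(0.95 * n) and i < n - 1]

# least-squares slope of the log-log tail; for a Pareto-Levy tail
# the slope is approximately -alpha
mx = sum(x for x, _ in pts) / len(pts)
my = sum(y for _, y in pts) / len(pts)
slope = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))
# here alpha = 1.5, so 1 < alpha <= 2: finite mean, infinite variance
```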
Heavy Tails
Bad scaling of systematic solvers can be caused by heavy tailed distributions
Deterministic algorithms get stuck on particular instances
• but that same instance might be easy for a different deterministic algorithm!
Expected (mean) solution time increases without limit over large distributions
Randomized Restarts
Solution: randomize the systematic solver
• Add noise to the heuristic branching (variable choice) function
• Cutoff and restart search after a fixed number of backtracks
Provably eliminates heavy tails
In practice: rapid restarts with low cutoff can dramatically improve performance
(Gomes 1996, Gomes, Kautz, and Selman 1997, 1998)
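The restart policy above can be sketched as a simple wrapper: run the randomized solver with a fixed cutoff and a fresh seed until it succeeds. `toy_solve` is an invented stand-in whose run-time distribution mimics a heavy tail (most runs are hopeless, some are very short).

```python
import random

def with_restarts(solve, cutoff, max_restarts=100):
    """Restart `solve` with a new seed until it succeeds within `cutoff`."""
    for attempt in range(max_restarts):
        result = solve(seed=attempt, cutoff=cutoff)
        if result is not None:          # solved within the cutoff
            return result, attempt + 1
    return None, max_restarts

def toy_solve(seed, cutoff):
    """Heavy-tailed stand-in: a run is either very short or very long."""
    rng = random.Random(seed)
    runtime = 5 if rng.random() < 0.3 else 10**6
    return "sat" if runtime <= cutoff else None
```

With a low cutoff the expected total work is a few short runs, instead of the unbounded mean of a single unrestarted run.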
Rapid Restart on LOG.D
[Plot: log(backtracks) vs. log(cutoff) for rapid restarts on log.d]
Note log scale: exponential speedup!
Increased Predictability
[Plot: solution time (log scale, 0.01-10000) for Satz and Satz/Rand on rocket.a, rocket.b, log.a, log.b, log.c, log.d]
Overall insight:
Randomized tie-breaking with rapid restarts can boost systematic search
• Related analysis: Luby & Zuckerman 1993; Alt & Karp 1996.
• Other applications: sports scheduling, circuit synthesis, quasigroup completion, …
Conclusions
Discussed approaches to scalable KRR systems based on propositional reasoning and search
Shift to 10,000+ variables and 10^6 clauses has opened up new applications
Methodology:
• Model as SAT
• Compile away as much complexity as possible
• Use off-the-shelf SAT solver for remaining core
– Analogous to LP approaches
Conclusions, cont.
Example: AI planning / SATPLAN system
Order of magnitude improvement (last 3 yrs): 10-step to 200-step optimal plans
Huge economic impact possible with 2 orders of magnitude more: up to 20,000 steps ...
Discussed themes in Encodings & Solvers
• Local search
• Control knowledge
• Heavy-tails / Randomized restarts
Tractable Knowledge Compilation: Summary
Many techniques have been developed for compiling general KR languages to computationally tractable languages
• Horn approximations (Kautz & Selman 1993, Cadoli 1994, Papadimitriou 1994)
• Model-based representations (Kautz & Selman 1992, Dechter & Pearl 1992, Roth & Khardon 1996, Mannila 1999, Eiter 1999)
• Prime Implicates (Reiter & DeKleer 1987, del Val 1995, Marquis 1996, Williams 1998)
Limits to Compilability
While practical for some domains, there are fundamental theoretical limitations to the approach
• some KB’s cannot be compiled into a tractable form unless polynomial hierarchy collapses (Kautz)
• Sometimes must face NP-hard reasoning problems head on
• will describe how advances in modeling and SAT solvers are pushing the envelope of the size of problems that can be handled in practice
Logistics: Increased Predictability
[Plot: solution time (log scale, 0.01-10000) for Satz, Satz/Rand, and Walksat on rocket.a, rocket.b, log.a, log.b, log.c, log.d]