Constraint logic programming

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Tizio (talk | contribs) at 11:45, 2 March 2006 (a variant of logic programming that include constraint satisfaction). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
Constraint logic programming is a variant of logic programming that incorporates constraints, as used in constraint satisfaction.

A constraint logic program is a logic program that includes constraints in the bodies of its clauses. A clause can be used to prove the goal if its constraints are satisfied and its literals can be proved. More precisely, for a derivation to be valid, the set of all constraints of the clauses used in it must be satisfiable.

When the interpreter scans the body of a clause, it backtracks if a constraint is not satisfied or a literal cannot be proved. Constraints and literals are handled differently: literals are proved by recursively evaluating other clauses, while constraints are checked by placing them in a set called the constraint store, which is required to remain satisfiable. The constraint store thus contains all constraints assumed satisfiable during execution.
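The behavior of the constraint store can be illustrated with a small sketch. The class below, with its brute-force satisfiability check over finite domains, is a hypothetical illustration rather than how real CLP systems are implemented (they use far more efficient techniques, discussed later in the article):

```python
# Illustrative sketch: a constraint store over finite-domain variables.
# Constraints are predicates over an assignment dict; satisfiability is
# checked by brute-force enumeration (real systems are far cleverer).
from itertools import product

class ConstraintStore:
    def __init__(self, domains):
        self.domains = domains          # e.g. {"X": [1, 2, 3], "Y": [1, 2, 3]}
        self.constraints = []

    def add(self, constraint):
        """Add a constraint; return False (signal to backtrack) if the
        store would become unsatisfiable."""
        self.constraints.append(constraint)
        if not self.satisfiable():
            self.constraints.pop()      # undo the addition; caller backtracks
            return False
        return True

    def satisfiable(self):
        """Does some assignment of values satisfy every constraint?"""
        names = list(self.domains)
        for values in product(*(self.domains[n] for n in names)):
            assignment = dict(zip(names, values))
            if all(c(assignment) for c in self.constraints):
                return True
        return False

store = ConstraintStore({"X": [1, 2, 3], "Y": [1, 2, 3]})
store.add(lambda a: a["X"] < a["Y"])    # satisfiable, e.g. X=1, Y=2
store.add(lambda a: a["X"] > 2)         # forces X=3, but then X < Y fails
```

The second `add` call returns `False`, the signal for the interpreter to backtrack rather than continue down an impossible derivation.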

If the constraint store becomes unsatisfiable, the interpreter should backtrack, as the line of reasoning it is following to prove the goal is wrong. In practice, some form of local consistency of the constraint store is used as an approximation of satisfiability. However, the goal is truly proved only if the constraint store is actually satisfiable.

Formally, constraint logic programs are like regular logic programs, but the body of a clause can contain:

  1. logic programming literals (the regular literals of logic programming)
  2. constraints
  3. labeling literals

During evaluation, a pair is maintained. The first element is initially the goal, and is replaced with subgoals during execution. The second element is an initially empty set of constraints, called the constraint store, which accumulates all constraints that the algorithm assumes satisfiable during execution.

At each step, the first element of the goal is considered and removed from the current goal. If it is a constraint, it is added to the constraint store. If it is a literal, it is treated as in regular logic programming: a clause whose head has the same top-level predicate as the literal is chosen, its body is placed in front of the current goal, and the equality between the literal and the head of the clause is added to the constraint store. The choice of clause is in principle nondeterministic; in practice, interpreters try the clauses in order and revisit the choice upon backtracking.
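This step can be sketched as a small function. The representation below, with goal elements tagged `"constraint"` or `"literal"` and a brute-force satisfiability check, is an assumed simplification for illustration; in particular it omits unification of arguments, which a real interpreter performs by adding head = literal equations to the store:

```python
# Hypothetical sketch of one derivation step on the pair (goal, store).
from itertools import product

def satisfiable(constraints, domains):
    """Brute-force check: does some assignment satisfy every constraint?"""
    names = list(domains)
    return any(
        all(c(dict(zip(names, values))) for c in constraints)
        for values in product(*(domains[n] for n in names))
    )

def step(goal, store, clauses, domains):
    """Process the first element of the goal. Returns the new
    (goal, store) pair, or None to signal backtracking."""
    (kind, item), rest = goal[0], goal[1:]
    if kind == "constraint":
        new_store = store + [item]
        if not satisfiable(new_store, domains):
            return None                 # store unsatisfiable: backtrack
        return rest, new_store
    # kind == "literal": choose a clause whose head matches the literal
    # and place its body in front of the goal (argument unification omitted).
    for head, body in clauses:
        if head == item:
            return body + rest, store
    return None                         # no matching clause: backtrack

# Example: clause A :- X < Y, with X and Y ranging over {1, 2}.
clauses = [("A", [("constraint", lambda a: a["X"] < a["Y"])])]
domains = {"X": [1, 2], "Y": [1, 2]}
goal, store = step([("literal", "A")], [], clauses, domains)
goal, store = step(goal, store, clauses, domains)   # store gains X < Y
```

After the two steps the goal is empty and the store holds the satisfiable constraint X &lt; Y (witnessed by X=1, Y=2).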

If the constraint store is unsatisfiable, the interpreter should backtrack. However, checking satisfiability at each step would be inefficient. For this reason, some form of local consistency is checked instead.

When the current goal is empty, a regular logic program interpreter stops and outputs the current substitution. In the same situation, a constraint logic program interpreter stops and outputs the current constraint store, typically in the form of the variable domains as reduced by the local consistency conditions; since only local consistency has been checked, this store is not guaranteed satisfiable. Actual satisfiability is enforced via labeling literals: whenever the interpreter encounters a labeling literal over some variables, it runs a satisfiability checker on the current constraint store, searching for an assignment to these variables that satisfies all constraints (equivalently, finding a complete satisfying assignment and restricting it to these variables).
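The effect of a labeling literal can be sketched as an exhaustive search over the listed variables' domains. The function name `labeling` and the example constraints are illustrative assumptions; real CLP systems interleave this search with further constraint propagation:

```python
# Sketch: labeling enforces actual (not just local) satisfiability by
# enumerating assignments to the listed variables.
from itertools import product

def labeling(variables, domains, constraints):
    """Return an assignment to `variables` satisfying every constraint,
    or None if the constraint store is unsatisfiable."""
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

constraints = [lambda a: a["X"] + a["Y"] == 4, lambda a: a["X"] < a["Y"]]
domains = {"X": [1, 2, 3], "Y": [1, 2, 3]}
labeling(["X", "Y"], domains, constraints)   # finds {"X": 1, "Y": 3}
```

If `labeling` returns `None`, the store was unsatisfiable after all, and the interpreter backtracks just as it would on a failed constraint.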