

2.2.2. Constraint Satisfaction Problems

On the other hand, constraint propagation has been the core algorithm used in solving a large class of problems termed constraint satisfaction problems (CSPs). Standard CSPs have a fixed finite number of problem variables, and each variable has a finite set of possible values it can take (called its domain). The constraints between the variables can always be expressed as a set of admissible combinations of values. These constraints can be directly represented as relations in a relational database, in which case each admissible combination is a tuple of the relation. CSPs have inspired a fascinating variety of research because despite their simplicity they can be used to express, in a very natural way, real, difficult problems.1


1The class of CSPs is NP-complete.
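The extensional representation described above, in which each constraint is a set of admissible value combinations (a relation, as in a relational database), can be sketched in a few lines. The variable names and relations here are illustrative, not taken from any particular system:

```python
# Variables and their finite domains
domains = {
    "X": {1, 2, 3},
    "Y": {1, 2, 3},
    "Z": {1, 2, 3},
}

# Constraints as relations: a scope (tuple of variables) plus the set of
# admissible tuples. Here: X < Y and Y != Z.
constraints = [
    (("X", "Y"), {(a, b) for a in range(1, 4) for b in range(1, 4) if a < b}),
    (("Y", "Z"), {(a, b) for a in range(1, 4) for b in range(1, 4) if a != b}),
]

def satisfies(assignment):
    """Check a total assignment against every constraint relation."""
    return all(
        tuple(assignment[v] for v in scope) in rel
        for scope, rel in constraints
    )

print(satisfies({"X": 1, "Y": 2, "Z": 3}))  # True: 1 < 2 and 2 != 3
```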

One line of research has focused on constraint propagation, showing how to propagate more and more information (forward checking, arc-consistency, path-consistency, k-consistency, relational consistency, and so on). For example, arc-consistency is achieved by reducing the domains of the problem variables until the remaining values are all supported; a value is supported if every constraint on the variable includes a tuple in which the variable takes this value and the other variables all take supported values.
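The arc-consistency computation just described, reducing domains until every remaining value is supported, is commonly presented as the AC-3 algorithm. A minimal sketch, assuming binary constraints encoded as sets of admissible (x, y) pairs stored in both directions:

```python
from collections import deque

def revise(domains, x, y, allowed):
    """Remove values of x that have no supporting value in y under the
    binary relation `allowed` (a set of admissible (x, y) pairs)."""
    removed = False
    for vx in set(domains[x]):
        if not any((vx, vy) in allowed for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """AC-3: revise arcs until every remaining value is supported.
    `constraints` maps an ordered pair (x, y) to its admissible tuples."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, constraints[(x, y)]):
            if not domains[x]:
                return False  # a domain wiped out: no solution exists
            # x's domain shrank, so arcs pointing at x must be rechecked
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

# Example: X < Y over {1, 2, 3}, stored in both directions
lt = {(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a < b}
doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
cons = {("X", "Y"): lt, ("Y", "X"): {(b, a) for (a, b) in lt}}
ac3(doms, cons)
print(doms)  # {'X': {1, 2}, 'Y': {2, 3}}
```

Note that 3 is removed from X's domain (no larger value remains in Y) and 1 from Y's, even though no variable has been assigned yet: propagation only discards values that cannot appear in any solution.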

Even if none of the arc-consistent domains are empty, this does not imply the CSP has a solution! To find a solution it is still necessary to try out specific values for the problem variables. Only if all the variables can be assigned a specific value, such that they all support each other, has a solution been found. One algorithm for solving CSPs, which has proved useful in practice, is to select a value for each variable in turn, but, after making each selection, to re-establish arc-consistency. Thus, search is interleaved with constraint propagation. In this way, the domains of the remaining variables are reduced further and further until either one becomes empty, in which case some previous choice must be changed, or else each remaining domain contains only one value, in which case the problem is solved.
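This interleaving of labeling and propagation can be sketched compactly. The solver below is self-contained and uses the same relational constraint encoding as above (ordered variable pairs mapped to admissible tuples); it is an illustration of the idea, not any production algorithm:

```python
import copy

def propagate(domains, constraints):
    """Prune unsupported values until a fixpoint (arc-consistency).
    Returns False if some domain becomes empty."""
    changed = True
    while changed:
        changed = False
        for (x, y), allowed in constraints.items():
            for vx in list(domains[x]):
                if not any((vx, vy) in allowed for vy in domains[y]):
                    domains[x].remove(vx)
                    changed = True
            if not domains[x]:
                return False
    return True

def solve(domains, constraints):
    """Label variables one at a time, re-establishing consistency after
    each choice; backtrack when a domain empties."""
    if not propagate(domains, constraints):
        return None
    if all(len(d) == 1 for d in domains.values()):
        return {v: next(iter(d)) for v, d in domains.items()}
    # choose an unlabeled variable (one with more than one value left)
    var = next(v for v, d in domains.items() if len(d) > 1)
    for value in sorted(domains[var]):
        trial = copy.deepcopy(domains)
        trial[var] = {value}  # commit to one value, then re-propagate
        result = solve(trial, constraints)
        if result is not None:
            return result
    return None  # every value failed: a previous choice must change

# Example: X < Y < Z over {1, 2, 3}, constraints stored in both directions
lt = {(a, b) for a in (1, 2, 3) for b in (1, 2, 3) if a < b}
gt = {(b, a) for (a, b) in lt}
cons = {("X", "Y"): lt, ("Y", "X"): gt, ("Y", "Z"): lt, ("Z", "Y"): gt}
print(solve({"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {1, 2, 3}}, cons))
# {'X': 1, 'Y': 2, 'Z': 3}
```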

Another line of research has investigated the global shape of the problem. This shape can be viewed as a graph, where each variable is a node and each constraint an edge (or hyper-edge) in the graph. Tree-structured problems are relatively easy to solve, but research has also revealed a variety of ways of dealing with more awkward structures, by breaking down a problem into easier subproblems, whose results can be combined into a solution of the original problem. Picturesque names have been invented for these techniques such as "perfect relaxation" and "hinge decomposition."
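The graph view mentioned above is cheap to compute, and the tree property is equally cheap to test, which is what makes it a useful structural signal. A small sketch (names illustrative), assuming binary constraints only:

```python
def constraint_graph(variables, constraint_scopes):
    """Adjacency sets of the constraint graph: variables are nodes,
    each binary constraint scope (x, y) contributes an edge."""
    adj = {v: set() for v in variables}
    for x, y in constraint_scopes:
        adj[x].add(y)
        adj[y].add(x)
    return adj

def is_tree(adj):
    """A connected graph is a tree iff it has |V| - 1 edges."""
    edges = sum(len(neighbors) for neighbors in adj.values()) // 2
    return edges == len(adj) - 1  # assumes the graph is connected

adj = constraint_graph("ABCD", [("A", "B"), ("B", "C"), ("B", "D")])
print(is_tree(adj))  # True: this problem has the easy, tree-shaped structure
```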

More recently, researchers have begun to explore the structure of the individual constraints. If the constraints belong to certain classes, propagation can be much more efficient -- or it can even be used to find globally consistent solutions in polynomial time. Indeed, there are elegant sufficient conditions that distinguish NP-complete problem classes from problem classes solvable by known polynomial algorithms.

2.2.3. Constraint Propagation in Logic Programming

The practical benefits of constraint propagation really began to emerge when it was embedded in a programming language (Van Hentenryck, 1989). Again it was logic programming that was first used as a host language, producing impressive results first on some difficult puzzles and then on industrial problems such as scheduling, warehouse location, cutting stock, and so on (Dincbas et al., 1988). The embedding suggested new kinds of propagation, new ways of interleaving propagation and search, and new ways of varying the propagation according to the particular features of the problem at hand.

These advantages were very clearly illustrated when, using lessons learned from the Operations Research community, constraint logic programming began to outperform specialized algorithms on a variety of benchmark problems. However, the main advantage of constraint programming is not the good performance that can be obtained on benchmarks, but its flexibility in modeling and efficiently solving complex problems.

A constraint program for an application such as Vehicle Scheduling, or Bin Packing, not only admits the standard constraints typically associated with that class of problems, but it also admits other side-constraints that cause severe headaches for Operations Research approaches.

Currently, several companies offer constraint programming tools and constraint programming solutions for complex industrial applications. The host languages on offer include not only logic programming but also LISP and C++.

2.3. SEARCH

The topic of search has been at the heart of AI since GPS. Fundamental search algorithms include generate-and-test, branch and bound, the A* algorithm, iterative deepening, and tree search guided by the global problem structure, by information elicited during search, or by intelligent backtracking.

The contribution of constraint programming is to allow the end-user to control the search, combining generic techniques and problem-specific heuristics. A nice illustration of this is the n-queens problem: how to place n queens on an n x n chess board so that no two queens can take each other. For n = 8, depth-first generate-and-test with backtracking is quite sufficient, finding a solution in a few seconds. When n = 16, it is necessary to interleave constraint propagation with the search so as to obtain a solution quickly, and when n = 32, it is necessary to add more intelligence to the search algorithm. A very general technique is to choose as the next variable to label the one with the smallest domain, i.e., the smallest number of possible values. This is called first-fail. It is particularly effective in conjunction with constraint propagation, which reduces the size of the domains of the unlabeled variables (as described for arc-consistency above). For n-queens, the first-fail technique works very well, yielding a solution within a second. Unfortunately, even first-fail doesn't scale up beyond n = 70. However, there is a problem-specific heuristic that starts by placing queens in the middle of the board and then moves outward. With the combination of depth-first search, interleaved with constraint propagation, using the first-fail ordering for choosing which queen to place next, and placing queens in the center of the board first, the 70-queens problem is solved within a second, and the algorithm scales up easily to 200 queens.2


2Solutions for n queens can be generated by a deterministic algorithm; however, this problem is useful for providing a simple and illuminating example of techniques that also pay off where there are no such alternative algorithms!
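The combination described above can be sketched as follows. This is an illustration, not the historical constraint logic programming code: propagation here is forward checking (pruning attacked columns from future rows) rather than full arc-consistency, and all names are this sketch's own.

```python
def queens(n):
    """n-queens by depth-first search with forward checking, first-fail
    variable choice, and a center-out value ordering."""
    # domains[r] = candidate columns for the queen in row r
    domains = {r: set(range(n)) for r in range(n)}
    center = (n - 1) / 2

    def search(domains, placed):
        if len(placed) == n:
            return placed  # every row has a queen: solved
        # first-fail: pick the unplaced row with the fewest remaining columns
        row = min((r for r in domains if r not in placed),
                  key=lambda r: len(domains[r]))
        # problem-specific heuristic: try columns nearest the center first
        for col in sorted(domains[row], key=lambda c: abs(c - center)):
            pruned = {}
            ok = True
            for r in domains:
                if r in placed or r == row:
                    continue
                # forward checking: remove attacked columns from future rows
                bad = {c for c in domains[r]
                       if c == col or abs(c - col) == abs(r - row)}
                if bad:
                    pruned[r] = bad
                    domains[r] -= bad
                if not domains[r]:
                    ok = False  # a domain emptied: this column cannot work
                    break
            if ok:
                result = search(domains, {**placed, row: col})
                if result is not None:
                    return result
            for r, bad in pruned.items():  # undo pruning on backtrack
                domains[r] |= bad
        return None

    return search(domains, {})
```

The returned value maps each row to the column of its queen; removing either heuristic leaves the solver correct but, as the text notes, much slower on large boards.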



Copyright (c) 1996-1999 EarthWeb, Inc. All rights reserved.