Decision-tree pruning

Decision-tree pruning is a search-optimization technique from artificial intelligence. It is used in many programs, including route planners that compute shortest paths, medical diagnosis sites such as WebMD, legal research sites, and chess engines. Sudoku puzzles can also benefit from an efficient solution search, since the toughest puzzles seem to defy logic (e.g., the infamous "Qassim Hamza"). [Lewis, R. (2007) "Metaheuristics Can Solve Sudoku Puzzles", Journal of Heuristics, vol. 13 (4), pp. 387-401.]

Some claim you can create a valid Sudoku puzzle that cannot be solved with logic. That claim is itself not logical: assuming it is a fair puzzle (i.e., it has a valid completed grid from which the givens were removed), the puzzle was created by a logical and repeatable process, so it should be impossible to create an unsolvable one, much as you can never scramble a Rubik's Cube into a state that cannot be solved.

With a decision-tree pruning method, a human could solve the "Qassim Hamza" in 1,599 logical steps. Each step involves scanning rows, columns, and sections, maintaining a log of candidates for each cell, and counting and comparing the candidates to decide the next value to test. A person working quickly can complete a step in 45 to 60 seconds, so the total solve time is estimated at 20 to 27 hours of continuous work.
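That time estimate is simple arithmetic on the figures above (1,599 steps at 45 to 60 seconds each):

```python
steps = 1599
low_hours = steps * 45 / 3600   # fastest pace: just under 20 hours
high_hours = steps * 60 / 3600  # slowest pace: just under 27 hours
```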

Below are the human logic steps and a link to a computer solver that uses the same logic. It will produce a plain-text proof for any puzzle you type in.

For humans, use this method:

1. Determine all possible values for each unsolved square. Use the standard scanning method, which checks the square's section, row, and column for givens already placed and eliminates those values from the square's candidates. This step alone can solve most puzzles.

2. For puzzles that dead-end after step 1, next find the square with the fewest possible outcomes and secondary effects, so that the shortest solution path is found. To do this, for each square on the board with more than one possible value (a square with only one value is either a given or already resolved), add the number of outcomes for that square and for the squares in its section, row, and column. Compute this total for all unsolved squares and choose the one with the smallest number. In case of a tie, choose either one.

3. Now that you have determined the best square to start your search, choose its first value and return to step 1 to see whether the solution is found. If you dead-end without producing an invalid value, recurse forward into steps 2 and 3. If you produce an invalid value, backtrack and try the next value.

Remember, one of those values is correct. It has to be, since you listed every possible value that square can take given the puzzle's givens. Sometimes the first value turns out to be the correct one. For the Qassim Hamza puzzle, it does not: the proof shows you will need to recurse six cells deep to find the correct solution, which is why it is such a difficult puzzle.
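The three steps above can be sketched in Python. This is a minimal illustration under our own naming, not the linked solver's actual code: `propagate` performs the step-1 candidate scan and fills any square forced to a single value, `branch_cell` implements the step-2 scoring (a square's own candidate count plus those of its row, column, and section), and `solve` does the step-3 recursion with backtracking.

```python
from copy import deepcopy

def peers(r, c):
    """All cells sharing a row, column, or 3x3 section with (r, c)."""
    ps = {(r, i) for i in range(9)} | {(i, c) for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    ps |= {(i, j) for i in range(br, br + 3) for j in range(bc, bc + 3)}
    ps.discard((r, c))
    return ps

def candidates(grid):
    """Step 1: scan row, column, and section to list possible values per cell."""
    cand = {}
    for r in range(9):
        for c in range(9):
            if grid[r][c]:
                cand[(r, c)] = {grid[r][c]}
            else:
                used = {grid[i][j] for i, j in peers(r, c) if grid[i][j]}
                cand[(r, c)] = set(range(1, 10)) - used
    return cand

def propagate(grid):
    """Repeatedly fill squares that have exactly one candidate."""
    changed = True
    while changed:
        changed = False
        cand = candidates(grid)
        for (r, c), vals in cand.items():
            if not vals:
                return None              # contradiction: caller must backtrack
            if grid[r][c] == 0 and len(vals) == 1:
                grid[r][c] = vals.pop()
                changed = True
    return candidates(grid)

def branch_cell(grid, cand):
    """Step 2: score each unsolved square by its candidate count plus those
    of its row, column, and section; return the lowest-scoring square."""
    best, best_score = None, None
    for (r, c), vals in cand.items():
        if grid[r][c]:
            continue
        score = len(vals) + sum(len(cand[p]) for p in peers(r, c))
        if best_score is None or score < best_score:
            best, best_score = (r, c), score
    return best

def solve(grid):
    """Step 3: recurse on the best square, backtracking on contradictions."""
    cand = propagate(grid)
    if cand is None:
        return None
    cell = branch_cell(grid, cand)
    if cell is None:
        return grid                      # every square filled: solved
    r, c = cell
    for v in sorted(cand[cell]):
        trial = deepcopy(grid)
        trial[r][c] = v
        result = solve(trial)
        if result:
            return result
    return None                          # all values failed: prune this branch
```

The grid is a 9×9 list of lists with 0 marking an unsolved square; `solve` returns a completed grid or `None` if the branch is contradictory, which is exactly the backtrack signal described in step 3.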

To solve any 4×4, 9×9, or 16×16 Sudoku puzzle, go to this [Sudoku Solver]


Wikimedia Foundation. 2010.

