# Shortest path problem

*Figure: A graph with 6 vertices and 7 edges.*

In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized. An example is finding the quickest way to get from one location to another on a road map; in this case, the vertices represent locations and the edges represent segments of road and are weighted by the time needed to travel that segment.

There are several variations according to whether the given graph is undirected, directed, or mixed. For undirected graphs, the shortest path problem can be formally defined as follows. Given a weighted graph (that is, a set V of vertices, a set E of edges, and a real-valued weight function f : E → R) and two elements v and v' of V, find a path P from v to v' so that $\sum_{p\in P} f(p)$

is minimal among all paths connecting v to v' .

The problem is also sometimes called the single-pair shortest path problem, to distinguish it from the following variations:

• The single-source shortest path problem, in which we have to find shortest paths from a source vertex v to all other vertices in the graph.
• The single-destination shortest path problem, in which we have to find shortest paths from all vertices in the directed graph to a single destination vertex v. This can be reduced to the single-source shortest path problem by reversing the arcs in the directed graph.
• The all-pairs shortest path problem, in which we have to find shortest paths between every pair of vertices v, v' in the graph.

These generalizations have significantly more efficient algorithms than the simplistic approach of running a single-pair shortest path algorithm on all relevant pairs of vertices.
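The arc-reversal reduction mentioned above can be sketched in Python (the adjacency-list representation is an assumption made for this example): single-destination distances to a vertex t equal single-source distances from t in the reversed graph.

```python
def reverse_graph(graph):
    """Reverse every arc: an edge u -> v with weight w becomes v -> u.

    graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
    """
    reversed_graph = {u: [] for u in graph}
    for u, edges in graph.items():
        for v, w in edges:
            reversed_graph.setdefault(v, []).append((u, w))
    return reversed_graph
```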

## Algorithms

The most important algorithms for solving this problem are:

• Dijkstra's algorithm solves the single-source shortest path problem when all edge weights are nonnegative.
• The Bellman–Ford algorithm solves the single-source problem even if some edge weights are negative.
• The A* search algorithm solves the single-pair shortest path problem, using heuristics to try to speed up the search.
• The Floyd–Warshall algorithm solves the all-pairs shortest path problem.
• Johnson's algorithm solves the all-pairs shortest path problem, and may be faster than Floyd–Warshall on sparse graphs.
• Perturbation theory finds (at worst) the locally shortest path.

Additional algorithms and associated evaluations may be found in Cherkassky et al.
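As an illustration of the single-source case, here is a minimal Python sketch of Dijkstra's algorithm with a binary heap (the adjacency-list representation and function name are choices made for this example, not part of the article):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest distances for nonnegative edge weights.

    graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source to every reachable vertex.
    """
    dist = {source: 0}
    pq = [(0, source)]  # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

With a binary heap this runs in O((E + V) log V) time; the Fibonacci-heap variant in the table below improves this to O(E + V log V).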

### Single-source shortest paths

#### Directed graphs with nonnegative weights

| Algorithm | Time complexity | Author |
|---|---|---|
|  | $O(V^4)$ | Shimbel 1955 |
|  | $O(V^2 E L)$ | Ford 1956 |
| Bellman–Ford algorithm | $O(VE)$ | Bellman 1958, Moore 1959 |
|  | $O(V^2 \log V)$ | Dantzig 1958, Dantzig 1960, Minty (cf. Pollack & Wiebenson 1960), Whiting & Hillier 1960 |
| Dijkstra's algorithm | $O(V^2)$ | Leyzorek et al. 1957, Dijkstra 1959 |
| ... | ... | ... |
| Dijkstra's algorithm with Fibonacci heaps | $O(E + V \log V)$ | Fredman & Tarjan 1984, Fredman & Tarjan 1987 |
|  | $O(E \log \log L)$ | Johnson 1982, Karlsson & Poblete 1983 |
| Gabow's algorithm | $O(E \log_{E/V} L)$ | Gabow 1983b, Gabow 1985b |
|  | $O(E + V\sqrt{\log L})$ | Ahuja et al. 1990 |

### All-pairs shortest paths

The Floyd–Warshall algorithm solves the all-pairs shortest path problem.
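A minimal Python sketch of the Floyd–Warshall dynamic program (the vertex numbering and edge-list format are choices made for this example; it assumes the graph has no negative cycles):

```python
def floyd_warshall(n, edges):
    """All-pairs shortest distances on vertices 0..n-1.

    edges: iterable of (u, v, weight) directed arcs.
    Returns an n x n matrix dist with dist[i][j] = shortest distance
    from i to j (float('inf') if j is unreachable from i).
    """
    INF = float('inf')
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)  # keep the cheapest parallel arc
    # Allow intermediate vertices 0..k, one k at a time.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

The triple loop gives the O(V³) running time characteristic of this algorithm.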

## Applications

Shortest path algorithms are applied to automatically find directions between physical locations, such as driving directions on web mapping websites like MapQuest or Google Maps. For this application, fast specialized algorithms are available.

If one represents a nondeterministic abstract machine as a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if vertices represent the states of a puzzle like a Rubik's Cube and each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves.
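When every move has equal cost, the puzzle example above reduces to breadth-first search on the implicit state graph. A minimal Python sketch (the toy move set in the usage below is invented for illustration):

```python
from collections import deque

def min_moves(start, goal, neighbors):
    """Breadth-first search: fewest moves from start to goal.

    neighbors(state) yields the states reachable in one move.
    Returns the number of moves, or None if goal is unreachable.
    """
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        state, d = queue.popleft()
        for nxt in neighbors(state):
            if nxt == goal:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None
```

For example, with states as integers and moves "+1" or "×2", `min_moves(1, 10, lambda s: [s + 1, s * 2])` returns 4 (1 → 2 → 4 → 5 → 10).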

In networking or telecommunications, this shortest path problem is sometimes called the min-delay path problem and is usually coupled with a widest path problem. For example, the algorithm may seek the shortest (min-delay) widest path, or the widest shortest (min-delay) path.

A more lighthearted application is the game of "six degrees of separation", which seeks the shortest path in graphs such as the graph of movie stars who have appeared in the same film.

Other applications include "operations research, plant and facility layout, robotics, transportation, and VLSI design".

## Related problems

For shortest path problems in computational geometry, see Euclidean shortest path.

The travelling salesman problem is the problem of finding the shortest path that goes through every vertex exactly once and returns to the start. Unlike the shortest path problem, which can be solved in polynomial time in graphs without negative cycles (cycles whose edge weights sum to a negative value), the travelling salesman problem is NP-complete and, as such, is believed not to be efficiently solvable (see P = NP problem). The problem of finding the longest path in a graph is also NP-complete.

The Canadian traveller problem and the stochastic shortest path problem are generalizations where either the graph isn't completely known to the mover, changes over time, or where actions (traversals) are probabilistic.

The shortest multiple disconnected path is a representation of the primitive path network within the framework of reptation theory.

The problem of recalculating shortest paths arises when certain graph transformations (e.g., shrinkage of nodes) are applied to a graph.

The widest path problem seeks a path so that the minimum label of any edge is as large as possible.

## Linear programming formulation

There is a natural linear programming formulation for the shortest path problem, given below. It is simpler than most other uses of linear programs in discrete optimization, but it illustrates connections to other concepts.

Given a directed graph (V, A) with source node s, target node t, and cost $w_{ij}$ for each arc $(i, j) \in A$, consider the program with variables $x_{ij}$:

minimize $\sum_{ij \in A} w_{ij} x_{ij}$ subject to $x \ge 0$ and for all i, $\sum_j x_{ij} - \sum_j x_{ji} = \begin{cases}1, &\mbox{if }i=s;\\ -1, &\mbox{if }i=t;\\ 0, &\mbox{ otherwise.}\end{cases}$

This LP, which is common fodder for operations research courses, has the special property that it is integral; more specifically, every basic optimal solution (when one exists) has all variables equal to 0 or 1, and the set of arcs whose variables equal 1 forms an s–t path. A feasible dual solution y corresponds to a consistent heuristic for the A* algorithm: for any feasible dual y, the reduced costs $w'_{ij} = w_{ij} - y_j + y_i$ are nonnegative, and A* essentially runs Dijkstra's algorithm on these reduced costs.
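The reduced-cost reweighting can be illustrated in Python (the graph and potentials below are hypothetical; taking the potentials to be shortest-path distances from s yields a feasible dual, which is also the idea behind Johnson's algorithm):

```python
def reduced_costs(edges, y):
    """Reweight arcs with potentials y: w'_{ij} = w_{ij} - y[j] + y[i].

    edges: dict mapping an arc (i, j) to its cost w_{ij}.
    If y is dual-feasible (y[j] - y[i] <= w_{ij} for every arc), every
    reduced cost is nonnegative. The reduced length of any i-to-j path
    differs from its true length by the constant y[i] - y[j], so the
    reweighting preserves shortest paths.
    """
    return {(i, j): w - y[j] + y[i] for (i, j), w in edges.items()}
```

For instance, with arcs s→a (cost 2), a→t (cost 3), s→t (cost 6) and potentials y = {s: 0, a: 2, t: 5} (the shortest distances from s), the reduced costs are 0, 0, and 1: all nonnegative, and arcs on shortest paths get reduced cost 0.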