Time hierarchy theorem

In computational complexity theory, the time hierarchy theorems are important statements about time-bounded computation on Turing machines. Informally, these theorems say that given more time, a Turing machine can solve more problems. For example, there are problems that can be solved with n^2 time but not n time.

The time hierarchy theorem for deterministic multi-tape Turing machines was first proven by Richard Stearns and Juris Hartmanis in 1965.[1] It was improved a year later when F. C. Hennie and Richard Stearns improved the efficiency of the Universal Turing machine.[2] As a consequence, for every deterministic time-bounded complexity class, there is a strictly larger time-bounded complexity class, and so the time-bounded hierarchy of complexity classes does not completely collapse. More precisely, the time hierarchy theorem for deterministic Turing machines states that for all time-constructible functions f(n),

\operatorname{DTIME}\left(o\left(\frac{f(n)}{\log f(n)}\right)\right) \subsetneq \operatorname{DTIME}(f(n)).

The time hierarchy theorem for nondeterministic Turing machines was originally proven by Stephen Cook in 1972.[3] It was improved to its current form via a complex proof by Joel Seiferas, Michael Fischer, and Albert Meyer in 1978.[4] Finally in 1983, Stanislav Žák achieved the same result with the simple proof taught today.[5] The time hierarchy theorem for nondeterministic Turing machines states that if g(n) is a time-constructible function, and f(n+1) = o(g(n)), then

\operatorname{NTIME}(f(n)) \subsetneq \operatorname{NTIME}(g(n)).

The analogous theorems for space are the space hierarchy theorems. A similar theorem is not known for time-bounded probabilistic complexity classes, unless the class also has advice.[6]

Background

Both theorems use the notion of a time-constructible function. A function f:\mathbb{N}\rightarrow\mathbb{N} is time-constructible if there exists a deterministic Turing machine such that for every n\in\mathbb{N}, if the machine is started with an input of n ones, it will halt after precisely f(n) steps. All polynomials with non-negative integral coefficients are time-constructible, as are exponential functions such as 2^n.
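To make time-constructibility concrete, the following Python sketch (illustrative, not from the article) treats f as a step budget: a machine started on an input of n ones ticks exactly f(n) times and then halts.

```python
# Sketch: using a time-constructible function f as a step "clock".
# The generator ticks exactly f(n) times, mimicking a machine that
# halts after precisely f(n) steps on an input of n ones.

def clock(f, n):
    """Yield exactly f(n) ticks for an input of length n."""
    budget = f(n)
    for tick in range(budget):
        yield tick

# Example: f(n) = n^2 is time-constructible.
ticks = sum(1 for _ in clock(lambda n: n * n, 5))
print(ticks)  # 25
```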

Proof overview

We need to prove that some time class TIME(g(n)) is strictly larger than some time class TIME(f(n)). We do this by constructing a machine which cannot be in TIME(f(n)), by diagonalization. We then show that the machine is in TIME(g(n)), using a simulator machine.

Deterministic time hierarchy theorem

Statement

The theorem states that if f(n) is a time-constructible function, then there exists a decision problem which cannot be solved in worst-case deterministic time f(n) but can be solved in worst-case deterministic time f(n)^2. In other words, the complexity class DTIME(f(n)) is a strict subset of DTIME(f(n)^2). Note that f(n) is at least n, since smaller functions are never time-constructible.

Even more generally, it can be shown that if f(n) is time-constructible, then \operatorname{DTIME}\left(o\left(\frac{f(n)}{\log f(n)}\right)\right) is properly contained in \operatorname{DTIME}(f(n)). For example, there are problems solvable in time n^2 but not time n, since n is in o\left(\frac{n^2}{\log {n^2}}\right).

Proof

We include here a proof that DTIME(f(n)) is a strict subset of DTIME(f(2n + 1)^3) as it is simpler. See the bottom of this section for information on how to extend the proof to f(n)^2.

To prove this, we first define a language as follows:

 H_f = \left\{ ([M], x)\ |\ M \ \mbox{accepts}\ x \ \mbox{in}\ f(|x|) \ \mbox{steps} \right\}.

Here, M is a deterministic Turing machine, and x is its input (the initial contents of its tape). [M] denotes an input that encodes the Turing machine M. Let m be the size of the tuple ([M], x).

We know that we can decide membership of Hf by way of a deterministic Turing machine that first calculates f(|x|), then writes out a row of 0s of that length, and then uses this row of 0s as a "clock" or "counter" to simulate M for at most that many steps. At each step, the simulating machine needs to look through the definition of M to decide what the next action would be. It is safe to say that this takes at most f(m)^3 operations, so

 H_f \in \mathsf{TIME}(f(m)^3).
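The clocked simulation just described can be sketched in Python. This is an illustrative stand-in for the Turing-machine construction, not the article's machine itself: "machines" are modeled as generators that yield once per simulated step and return their accept/reject verdict when they halt.

```python
# Hedged sketch of deciding H_f: simulate machine M on input x for at
# most f(|x|) steps; accept iff M accepts within that budget.

def decide_Hf(M, x, f):
    sim = M(x)
    for _ in range(f(len(x))):       # the row of 0s used as a counter
        try:
            next(sim)                # advance M by one simulated step
        except StopIteration as halt:
            return bool(halt.value)  # M halted within the budget
    return False                     # budget exhausted: not in H_f

# Toy machine: accepts any input x after |x| steps.
def M(x):
    for _ in x:
        yield
    return True

print(decide_Hf(M, "111", lambda n: n * n))  # True: 3 steps <= 9
print(decide_Hf(M, "111", lambda n: 1))      # False: clock runs out
```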

The rest of the proof will show that

 H_f \notin \mathsf{TIME}(f( \left\lfloor m/2 \right\rfloor ))

so that if we substitute 2n + 1 for m, we get the desired result. Let us assume that Hf is in this time complexity class, and we will attempt to reach a contradiction.

If Hf is in this time complexity class, it means we can construct some machine K which, given some machine description [M] and input x, decides whether the tuple ([M], x) is in Hf within  \mathsf{TIME}(f( \left\lfloor m/2 \right\rfloor )) .

Therefore we can use this K to construct another machine, N, which takes a machine description [M] and runs K on the tuple ([M], [M]), and then accepts only if K rejects, and rejects if K accepts. If now n is the length of the input to N, then m (the length of the input to K) is twice n plus some delimiter symbol, so m = 2n + 1. N's running time is thus  \mathsf{TIME}(f( \left\lfloor m/2 \right\rfloor )) = \mathsf{TIME}(f( \left\lfloor (2n+1)/2 \right\rfloor )) = \mathsf{TIME}(f(n)).

Now if we feed [N] as input into N itself (which makes n the length of [N]) and ask the question whether N accepts its own description as input, we get:

  • If N accepts [N] (which we know it does in at most f(n) operations), this means that K rejects ([N], [N]), so ([N], [N]) is not in Hf, and thus N does not accept [N] in f(n) steps. Contradiction!
  • If N rejects [N] (which we know it does in at most f(n) operations), this means that K accepts ([N], [N]), so ([N], [N]) is in Hf, and thus N does accept [N] in f(n) steps. Contradiction!
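The inversion at the heart of this diagonal argument can be illustrated in Python (hypothetical names; the real proof manipulates Turing-machine encodings, and the point is precisely that no actual K can exist):

```python
# Hypothetical sketch: suppose K(desc, x) decided H_f within the time
# bound. N takes a machine description and does the opposite of what K
# predicts about that machine run on its own description.

def make_N(K):
    def N(desc):
        return not K(desc, desc)  # accept iff K rejects ([M], [M])
    return N

def K(desc, x):
    # Any fixed answer works for the illustration; the flip is the point.
    return True

N = make_N(K)
# N's verdict on its own description always disagrees with K's prediction,
# which is the contradiction in the proof:
print(N("[N]") != K("[N]", "[N]"))  # True
```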

We thus conclude that the machine K does not exist, and so

 H_f \notin \mathsf{TIME}(f( \left\lfloor m/2 \right\rfloor )).

Extension

The reader may have realised that the proof is simpler because we have chosen a simple Turing machine simulation for which we can be certain that

 H_f \in \mathsf{TIME}(f(m)^3).

It has been shown[7] that a more efficient model of simulation exists which establishes that

 H_f \in \mathsf{TIME}(f(m) \log f(m))

but since this model of simulation is rather involved, it is not included here.

Non-deterministic time hierarchy theorem

If g(n) is a time-constructible function, and f(n+1) = o(g(n)), then there exists a decision problem which cannot be solved in non-deterministic time f(n) but can be solved in non-deterministic time g(n). In other words, the complexity class NTIME(f(n)) is a strict subset of NTIME(g(n)).

Consequences

The time hierarchy theorems guarantee that the deterministic and non-deterministic versions of the exponential hierarchy are genuine hierarchies: in other words P ⊂ EXPTIME ⊂ 2-EXP ⊂ ... and NP ⊂ NEXPTIME ⊂ 2-NEXP ⊂ ....

For example, P ⊂ EXPTIME, since P ⊆ DTIME(2^n) ⊂ DTIME(2^(2n)) ⊆ EXPTIME.

The theorem also guarantees that there are problems in P requiring arbitrarily large exponents to solve; in other words, P does not collapse to DTIME(n^k) for any fixed k. For example, there are problems solvable in n^5000 time but not n^4999 time. This is one argument against Cobham's thesis, the convention that P is a practical class of algorithms. If such a collapse did occur, we could deduce that P ≠ PSPACE, since it is a well-known theorem that DTIME(f(n)) is strictly contained in DSPACE(f(n)).

However, the time hierarchy theorems provide no means to relate deterministic and non-deterministic complexity, or time and space complexity, so they cast no light on the great unsolved questions of computational complexity theory: whether P and NP, NP and PSPACE, PSPACE and EXPTIME, or EXPTIME and NEXPTIME are equal or not.

References

  1. ^ Hartmanis, J.; Stearns, R. E. (1 May 1965). "On the computational complexity of algorithms". Transactions of the American Mathematical Society (American Mathematical Society) 117: 285–306. doi:10.2307/1994208. ISSN 00029947. JSTOR 1994208. MR0170805. 
  2. ^ Hennie, F. C.; Stearns, R. E. (October 1966). "Two-Tape Simulation of Multitape Turing Machines". J. ACM (New York, NY, USA: ACM) 13 (4): 533–546. doi:10.1145/321356.321362. ISSN 0004-5411. 
  3. ^ Cook, Stephen A. (1972). "A hierarchy for nondeterministic time complexity". Proceedings of the fourth annual ACM symposium on Theory of computing. STOC '72. Denver, Colorado, United States: ACM. pp. 187–192. doi:10.1145/800152.804913. 
  4. ^ Seiferas, Joel I.; Fischer, Michael J.; Meyer, Albert R. (January 1978). "Separating Nondeterministic Time Complexity Classes". J. ACM (New York, NY, USA: ACM) 25 (1): 146–167. doi:10.1145/322047.322061. ISSN 0004-5411. 
  5. ^ Stanislav, Žák (October 1983). "A Turing machine time hierarchy". Theoretical Computer Science (Elsevier Science B.V.) 26 (3): 327–333. doi:10.1016/0304-3975(83)90015-4. 
  6. ^ Fortnow, L.; Santhanam, R. (2004). Hierarchy Theorems for Probabilistic Polynomial Time. pp. 316. doi:10.1109/FOCS.2004.33. 
  7. ^ Luca Trevisan, Notes on Hierarchy Theorems, U.C. Berkeley
  • Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X.  Pages 310–313 of section 9.1: Hierarchy theorems.
  • Christos Papadimitriou (1993). Computational Complexity (1st ed.). Addison Wesley. ISBN 0-201-53082-1.  Section 7.2: The Hierarchy Theorem, pp. 143–146.
