# Time-evolving block decimation

The time-evolving block decimation (TEBD) algorithm is a numerical scheme used to simulate one-dimensional quantum many-body systems with at most nearest-neighbour interactions. It is dubbed "time-evolving block decimation" because it dynamically identifies the relevant low-dimensional Hilbert subspaces of an exponentially larger original Hilbert space. The algorithm, based on the matrix product states formalism, is highly efficient when the amount of entanglement in the system is limited, a requirement fulfilled by a large class of quantum many-body systems in one dimension.

Introduction

There is nowadays considerable interest in computational methods well suited to the physics of quantum many-body systems. Given the inherent difficulty of simulating general quantum many-body systems, namely the exponential increase in the number of parameters with the size of the system and the correspondingly high computational cost, one solution is to look for numerical methods that deal with special cases, where one can profit from the physics of the system. The raw approach of directly handling all the parameters used to fully characterize a quantum many-body system is seriously impeded by the exponential growth, with system size, of the number of variables needed for simulation, which leads, in the best cases, to unreasonably long computational times and extensive use of memory. To get around this problem a number of methods have been developed and put into practice over time, one of the most successful being the quantum Monte Carlo (QMC) method. The density matrix renormalization group (DMRG) method is, next to QMC, another very reliable method, with an expanding community of users and an increasing number of applications to physical systems.

When the first quantum computer is plugged in and functioning, the prospects for computational physics will look rather promising, but until that day one has to restrict oneself to the mundane tools offered by classical computers. While experimental physicists are putting a lot of effort into trying to build the first quantum computer, theoretical physicists are searching, in the field of quantum information theory (QIT), for genuine quantum algorithms, appropriate for problems that would perform badly on a classical computer but quickly and successfully on a quantum one. The search for such algorithms is still ongoing, the best known (and almost the only ones found) being Shor's algorithm for factoring large numbers and Grover's search algorithm.

In the field of QIT one has to identify the primary resources necessary for genuine quantum computation. Since such a resource may be responsible for the speedup of quantum over classical computation, identifying it also means identifying systems that can be simulated reasonably efficiently on a classical computer. One such resource is quantum entanglement; hence, it is possible to establish a lower bound on the amount of entanglement needed for quantum computational speedups.

Guifré Vidal, then at the Institute for Quantum Information, Caltech, has recently proposed a scheme useful for simulating a certain category of quantum systems [Guifré Vidal, "Efficient Classical Simulation of Slightly Entangled Quantum Computations", PRL 91, 147902 (2003) [http://www.citebase.org/cgi-bin/citations?id=oai:arXiv.org:quant-ph/0301063] ]. He asserts that "any quantum computation with pure states can be efficiently simulated with a classical computer provided the amount of entanglement involved is sufficiently restricted". This happens to be the case for generic Hamiltonians displaying local interactions, for example Hubbard-like Hamiltonians. The method's computational time scales as a low-degree polynomial in the amount of entanglement present in the system. The algorithm exploits the fact that in these one-dimensional systems the eigenvalues of the reduced density matrix on a bipartite split of the system decay exponentially, allowing us to work in a resized space spanned by the eigenvectors corresponding to the selected eigenvalues.
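The key fact, an exponentially decaying Schmidt spectrum, can be illustrated with a short numerical experiment. The following sketch (a toy construction for illustration, not Vidal's algorithm itself) builds a weakly entangled bipartite state and shows that keeping only a few Schmidt coefficients leaves the state almost intact:

```python
# Illustration: a bipartite state with an exponentially decaying
# Schmidt spectrum is almost unchanged by aggressive truncation.
# The state construction below is a hypothetical example.
import numpy as np

rng = np.random.default_rng(0)
d = 64  # dimension of each half of the bipartite split

# Build a state whose Schmidt coefficients decay exponentially.
schmidt = np.exp(-0.8 * np.arange(d))
schmidt /= np.linalg.norm(schmidt)          # normalize the state
U, _ = np.linalg.qr(rng.normal(size=(d, d)))
V, _ = np.linalg.qr(rng.normal(size=(d, d)))
psi = (U * schmidt) @ V.T                   # d x d coefficient matrix

# The Schmidt decomposition is the SVD of the coefficient matrix.
s = np.linalg.svd(psi, compute_uv=False)

chi_c = 8                                   # keep the 8 largest coefficients
truncation_error = np.sum(s[chi_c:] ** 2)   # discarded weight
fidelity = np.sum(s[:chi_c] ** 2)           # squared overlap with truncated state
print(truncation_error, fidelity)
```

With a spectrum decaying this fast, eight of sixty-four coefficients already give a truncation error of order 10^-6.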

One can also estimate the amount of computational resources required to simulate a quantum system on a classical computer once one knows how the entanglement contained in the system scales with its size. The simulations feasible classically (and on quantum computers as well) are those involving only slightly entangled systems; the strongly entangled ones are, on the other hand, good candidates only for genuine quantum computations.

The numerical method is efficient for simulating real-time dynamics and for calculating ground states using imaginary-time evolution or isentropic interpolations between a target Hamiltonian and a Hamiltonian with an already known ground state. The computational time scales linearly with the system size, so many-particle systems in 1D can be investigated.
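As a minimal illustration of the imaginary-time idea, stripped of all the TEBD machinery and using a hypothetical two-level Hamiltonian: repeatedly applying $e^{-H\tau}$ and renormalizing projects any state with nonzero ground-state overlap onto the ground state of $H$:

```python
# Imaginary-time evolution on a single two-level system: repeated
# application of exp(-H*tau), with renormalization, filters out all
# excited states. The Hamiltonian here is a hypothetical example.
import numpy as np

H = np.array([[0.5, 1.0],
              [1.0, -0.5]])          # Hermitian toy Hamiltonian

psi = np.array([1.0, 0.0])           # arbitrary starting state
tau = 0.1                            # imaginary-time step

# exp(-H*tau) via eigendecomposition (valid since H is Hermitian)
w, v = np.linalg.eigh(H)
step = v @ np.diag(np.exp(-tau * w)) @ v.T

for _ in range(200):                 # apply the step and renormalize
    psi = step @ psi
    psi /= np.linalg.norm(psi)

energy = psi @ H @ psi               # converges to the lowest eigenvalue
print(energy, w[0])
```

The excited-state component is suppressed by a factor exponential in the spectral gap times the total imaginary time, which is why the convergence is so fast here.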

A very useful feature of the TEBD algorithm is that it can be reliably employed for time-evolution simulations with time-dependent Hamiltonians, describing systems that can be realized with cold atoms in optical lattices, or systems far from equilibrium in quantum transport. In this respect, TEBD had a certain ascendancy over DMRG, a very powerful technique but, until recently, not very well suited to simulating time evolution. With the matrix product states formalism at the mathematical heart of DMRG, the TEBD scheme was adopted by the DMRG community, giving birth to time-dependent DMRG [http://www.citebase.org/cgi-bin/citations?id=oai:arXiv.org:cond-mat/0403313] , t-DMRG for short.

After Vidal's proposal, other groups developed similar approaches in which quantum information plays a predominant role, for example for studying mixed-state dynamics in one-dimensional quantum lattice systems [http://www.citebase.org/cgi-bin/citations?id=oai:arXiv.org:cond-mat/0406440] , for calculating finite-temperature and dissipative 1D systems [http://www.citebase.org/cgi-bin/citations?id=oai:arXiv.org:cond-mat/0501493] , or in DMRG implementations for periodic boundary conditions [http://www.citebase.org/cgi-bin/citations?id=oai:arXiv.org:cond-mat/0404706] .

The decomposition of state

Let us consider a chain of "N" qubits, described by the state $|\Psi\rangle \in \mathcal{H}^{\otimes N}$. After truncating the Schmidt decomposition at bond $l$ to its $\chi_c$ largest eigenvalues and renormalizing by $R = \sum\limits_{\alpha_l=1}^{\chi_c}(\lambda^{[l]}_{\alpha_l})^2 = 1 - \epsilon_l$, the squared norm of the resulting state is:

:$n_2 = \sum\limits_{\alpha_l=1}^{\chi_c}(c_{\alpha_{l-1}\alpha_l})^2(\lambda'^{[l]}_{\alpha_l})^2 = \sum\limits_{\alpha_l=1}^{\chi_c}(c_{\alpha_{l-1}\alpha_l})^2\frac{(\lambda^{[l]}_{\alpha_l})^2}{R} = \frac{S_1}{R}$

with $S_1 = \sum\limits_{\alpha_l=1}^{\chi_c}(c_{\alpha_{l-1}\alpha_l})^2(\lambda^{[l]}_{\alpha_l})^2 \leq 1$.

Taking the difference, $\epsilon = n_2 - n_1 = n_2 - 1$, we get:

:$\epsilon = \frac{S_1}{R} - 1 \leq \frac{1-R}{R} = \frac{\epsilon_l}{1-\epsilon_l} \rightarrow 0$ as $\epsilon_l \rightarrow 0$

Hence, each time the reduced density matrix is constructed and its spectrum truncated, the squared overlap between the truncated state and the original one acquires the factor:

:$|\langle\psi_{D'}|\psi_{D}\rangle|^2 = 1 - \frac{\epsilon_l}{1-\epsilon_l} = \frac{1-2\epsilon_l}{1-\epsilon_l}$

The total truncation error

The total truncation error, considering both sources, is upper bounded by:

:$\epsilon(D) = 1 - \prod\limits_{n=1}^{N-1}(1-\epsilon_n)\prod\limits_{n=1}^{N-1}\frac{1-2\epsilon_n}{1-\epsilon_n} = 1 - \prod\limits_{n=1}^{N-1}(1-2\epsilon_n)$
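This product identity is easy to check numerically, for instance with a few hypothetical per-bond truncation errors:

```python
# Numerical check: the norm factors (1 - eps_n) and the overlap factors
# (1 - 2*eps_n)/(1 - eps_n) telescope to prod(1 - 2*eps_n).
# The per-bond error values are hypothetical.
import numpy as np

eps = np.array([1e-4, 5e-5, 2e-4, 1e-5])   # per-bond truncation errors

lhs = 1 - np.prod(1 - eps) * np.prod((1 - 2 * eps) / (1 - eps))
rhs = 1 - np.prod(1 - 2 * eps)
print(lhs, rhs)   # equal; both close to 2 * eps.sum() for small errors
```

For small per-bond errors the total error is, to first order, just twice the sum of the individual truncation errors.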

When using the Trotter expansion we do not move from bond to bond, but between bonds of the same parity; moreover, for the second-order expansion (ST2), we make one sweep of the even bonds and two of the odd ones. Nevertheless, the calculation presented above still holds: the error is evaluated by repeatedly multiplying by the renormalization constant each time we build the reduced density matrix and select its relevant eigenvalues.
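The sweep order just described can be written down explicitly. A sketch, assuming bonds are indexed from 0 and that the odd bonds take the two half-steps (the parity convention varies between implementations):

```python
# Schematic ST2 (second-order Suzuki-Trotter) sweep schedule for an
# open chain: a half time step on the odd bonds, a full step on the
# even bonds, then another half step on the odd bonds.
def st2_sweep(n_sites, dt):
    bonds = range(n_sites - 1)
    odd = [b for b in bonds if b % 2 == 1]
    even = [b for b in bonds if b % 2 == 0]
    schedule = []
    schedule += [(b, dt / 2) for b in odd]    # first odd half-sweep
    schedule += [(b, dt) for b in even]       # full even sweep
    schedule += [(b, dt / 2) for b in odd]    # second odd half-sweep
    return schedule

print(st2_sweep(5, 0.1))
```

Each bond receives a total evolution time of exactly `dt` per step, which is what makes the error analysis above carry over unchanged.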

One thing that can save a lot of computational time without loss of accuracy is to use a different Schmidt dimension for each bond instead of a fixed one for all bonds, keeping only the necessary number of relevant coefficients, as usual. For example, at the first bond of a qubit chain the Schmidt dimension is just two. Hence, instead of futilely diagonalizing, say, 10 by 10 or 20 by 20 matrices there, we can restrict ourselves to ordinary 2 by 2 ones, making the algorithm generally faster. This can be done by setting a threshold for the eigenvalues of the Schmidt decomposition, keeping only those that lie above it.
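Such an adaptive truncation can be sketched as follows; the function name and threshold value are illustrative, not part of the original scheme:

```python
# Per-bond adaptive truncation: SVD a two-site tensor and keep every
# singular value whose squared weight exceeds a threshold, instead of
# a fixed Schmidt dimension for all bonds.
import numpy as np

def truncate(theta, threshold=1e-10):
    """SVD a coefficient matrix, keeping only the significant values."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    keep = (s ** 2) > threshold          # adaptive Schmidt dimension
    chi = max(1, int(np.sum(keep)))      # always keep at least one state
    return u[:, :chi], s[:chi], vh[:chi, :]

# At the first bond of a qubit chain the rank is at most 2, so only
# (up to) two values survive regardless of the matrix's outer size.
theta = np.kron(np.eye(2), np.ones((1, 8)))  # rank-2, 2 x 16 matrix
u, s, vh = truncate(theta)
print(s.shape[0])
```

The threshold plays the role of the discarded-weight tolerance: lowering it trades speed for accuracy, bond by bond.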

