This series consists of talks in the area of Quantum Information Theory.
The original motivation to build a quantum computer came from Feynman who envisaged a machine capable of simulating generic quantum mechanical systems, a task that is intractable for classical computers. Such a machine would have tremendous applications in all physical sciences, including condensed matter physics, chemistry, and high energy physics. Part of Feynman's challenge was met by Lloyd who showed how to approximately decompose the time-evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer.
In quantum information, entanglement has often been viewed as a resource. But in this talk, I will look at (pure bipartite) entanglement through the lens of superselection rules. The idea is that it requires quantum communication not only to create entanglement, but also to destroy it in a way that doesn't leak information to the environment. As a result, when communication is scarce, superpositions of different numbers of EPR pairs can be difficult to obtain.
I'll discuss some work-in-progress about the computational complexity of simulating the extremely "simple" quantum systems that arise in linear optics experiments. I'll show that *either* one can describe an experiment, vastly easier than building a universal quantum computer, that would test whether Nature is performing nontrivial quantum computation, or else one can give surprising new algorithms for approximating the permanent. I welcome audience feedback as to which of these possibilities is the right one. Joint work with Alex Arkhipov.
Alongside the effort underway to build quantum computers, it is important to better understand which classes of problems they will find easy and which will remain intractable even for them. Inspired by the success of the statistical study of classical constraint optimization problems, we study random ensembles of the QMA$_1$-complete quantum satisfiability (QSAT) problem introduced by Bravyi. QSAT appropriately generalizes the NP-complete classical satisfiability (SAT) problem.
If a pure quantum state is drawn at random, it will almost surely be almost maximally entangled. This is a well-known example of the "concentration of measure" phenomenon, which has proved tremendously helpful in recent years in quantum information theory. It has also been used as a new method to justify some foundational aspects of statistical mechanics.
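This concentration is easy to observe numerically. The sketch below (an illustration I am adding, not part of the talk) draws a Haar-random pure state on $C^d \otimes C^d$, extracts its Schmidt coefficients, and compares its entanglement entropy with the maximal value $\log d$; the chosen dimension $d = 64$ is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # local dimension of each subsystem

# Haar-random pure state on C^d (x) C^d: a normalized complex Gaussian vector
psi = rng.standard_normal(d * d) + 1j * rng.standard_normal(d * d)
psi /= np.linalg.norm(psi)

# Schmidt coefficients are the singular values of the d x d coefficient matrix
s = np.linalg.svd(psi.reshape(d, d), compute_uv=False)
p = s**2  # eigenvalues of the reduced density matrix (sum to 1)

# Entanglement entropy, to be compared with the maximum log(d)
S = -np.sum(p * np.log(p))
print(f"S = {S:.3f}, log(d) = {np.log(d):.3f}")
```

Running this for a few random seeds shows the entropy clustering tightly just below $\log d$, exactly as concentration of measure predicts.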
Solving linear systems of equations is a common problem that arises both on its own and as a subroutine in more complex problems: given a matrix A and a vector b, find a vector x such that Ax=b. Often, one does not need to know the solution x itself, but rather an approximation of the expectation value of some operator associated with x, e.g., x'Mx for some matrix M. In this case, when A is sparse and well-conditioned, with largest dimension N, the best known classical algorithms can find x and estimate x'Mx in O(N * poly(log(N))) time.
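As a concrete (classical) illustration of the task just described, the sketch below solves a small well-conditioned system Ax=b and evaluates the expectation value x'Mx; the particular tridiagonal A, the vector b, and the observable M are placeholders I have chosen for the example, not data from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8  # a toy problem size

# A sparse (tridiagonal), Hermitian, well-conditioned matrix
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-0.5 * np.ones(N - 1), 1)
     + np.diag(-0.5 * np.ones(N - 1), -1))
b = rng.standard_normal(N)
M = np.eye(N)  # example observable; x'Mx is then just ||x||^2

# Classical solve (dense here; iterative solvers exploit sparsity at scale)
x = np.linalg.solve(A, b)
expectation = x @ M @ x  # the quantity x'Mx that one typically wants
print(expectation)
```

The point of the quantum algorithm is that, for suitable A and M, this expectation value can be estimated without ever writing down the N-dimensional vector x classically.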
In topological quantum computation, a quantum algorithm is performed by braiding and fusion of certain quasi-particles called anyons. Therein, the performed quantum circuit is encoded in the topology of the braid. Thus, small inaccuracies in the world-lines of the braided anyons do not adversely affect the computation. For this reason, topological quantum computation has often been regarded as error-resilient per se, with no need for quantum error-correction. However, newer work shows that even topological computation is plagued with (small) errors.
In the closed system setting I will show how to obtain extremely accurate adiabatic QC by proper choice of the interpolation between the initial and final Hamiltonians. Namely, given an analytic interpolation whose first N initial and final time derivatives vanish, the error can be made to be smaller than 1/N^N, with an evolution time which scales as N and the square of the norm of the time-derivative of the Hamiltonian, divided by the cube of the gap (joint work with Ali Rezakhani and Alioscia Hamma).
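In symbols, the stated bound can be written as follows (the notation is mine: $\epsilon$ is the adiabatic error, $T$ the evolution time, $\dot H$ the time-derivative of the Hamiltonian, and $\Delta$ the spectral gap):

```latex
\epsilon \lesssim \frac{1}{N^{N}},
\qquad
T = O\!\left( \frac{N \, \|\dot{H}\|^{2}}{\Delta^{3}} \right).
```

That is, the error decreases super-exponentially in the number N of vanishing boundary derivatives, at the price of an evolution time growing only linearly in N.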
There are many results showing that the probability of entanglement is high in large dimensions. Recently, Arveson showed that the probability of entanglement is zero when the rank of a bipartite state is no larger than half the dimension of the smaller space. We show that the probability of entanglement is zero when the rank of a bipartite state is no larger than half the maximum of the ranks of its two reduced density matrices. Our approach is quite different from Arveson's and uses a different measure.
A recent breakthrough in quantum computing has been the realization that quantum computation can proceed solely through single-qubit measurements on an appropriate quantum state - for example, the ground state of an interacting many-body system. It would be unfortunate, however, if the usefulness of a ground state for quantum computation was critically dependent on the details of the system's Hamiltonian; a much more powerful result would be the existence of a robust ordered phase which is characterized by the ability to perform measurement-based quantum computation (MBQC).