This series consists of weekly discussion sessions on the foundations of quantum theory and quantum information theory. Each session starts with an informal exposition of an interesting topic, research result, or important question in the field. Everyone is strongly encouraged to participate with questions and comments.
Given a vector in a representation, can it be distinguished from zero by an invariant polynomial? This classical question in invariant theory relates to a diverse set of problems in mathematics and computer science. In quantum information, it captures the quantum marginal problem and recent bounds on tensor ranks. We will see that the general question can be usefully thought of as an optimization problem and discuss how this perspective leads to efficient algorithms for solving it.
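As a toy instance of this question (an illustration, not from the talk): for GL(2) acting on 2x2 matrices by conjugation, the invariant polynomials are generated by trace and determinant, so a matrix is indistinguishable from zero by invariants exactly when it is nilpotent. Equivalently, the infimum of the norm over its orbit is zero, which is the optimization reformulation:

```python
import numpy as np

# Toy null-cone question: GL(2) acts on 2x2 matrices by conjugation.
# Invariants are generated by trace and determinant, so a matrix is
# "indistinguishable from zero" by invariants iff it is nilpotent.
N = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: tr = det = 0
M = np.array([[1.0, 0.0], [0.0, 2.0]])   # not in the null cone

print(np.trace(N), np.linalg.det(N))     # both 0
print(np.trace(M), np.linalg.det(M))     # 3.0 and 2.0

# Optimization viewpoint: N is in the null cone iff the infimum of the
# norm over its orbit is 0. Conjugating by diag(t, 1/t) scales N down:
for t in [1.0, 0.1, 0.01]:
    g = np.diag([t, 1.0 / t])
    orbit_pt = g @ N @ np.linalg.inv(g)  # equals [[0, t^2], [0, 0]]
    print(t, np.linalg.norm(orbit_pt))
```

Here the norm along the orbit tends to 0 for the nilpotent matrix but is bounded away from 0 for the diagonalizable one, matching the invariant-polynomial test.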
Many quantum information protocols require the implementation of random unitaries. Because it takes exponential resources to produce Haar-random unitaries drawn from the full n-qubit group, one often resorts to t-designs. Unitary t-designs mimic the Haar measure up to its t-th moments. It is known that Clifford operations can implement at most 3-designs. In this work, we quantify the non-Clifford resources required to break this barrier.
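As a concrete illustration of the design property (a standard numerical check, not taken from the talk): a finite set G is a unitary t-design exactly when its frame potential, the average of |tr(U†V)|^(2t) over pairs from G, equals the Haar value. For a single qubit the Haar values are 1 for t = 1 and 2 for t = 2; the Pauli group passes the first test but fails the second:

```python
import numpy as np
from itertools import product

# Frame-potential test for unitary t-designs (single qubit, d = 2):
# F_t(G) = (1/|G|^2) * sum_{U,V in G} |tr(U^dag V)|^(2t)
# equals the Haar value iff G is a t-design.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

def frame_potential(group, t):
    vals = [abs(np.trace(U.conj().T @ V)) ** (2 * t)
            for U, V in product(group, repeat=2)]
    return sum(vals) / len(group) ** 2

print(frame_potential(paulis, 1))  # 1.0 = Haar value: the Paulis are a 1-design
print(frame_potential(paulis, 2))  # 4.0 > 2.0: not a 2-design
```

The same style of test, applied to Clifford circuits doped with non-Clifford gates, is one way to make the "breaking the 3-design barrier" question quantitative.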
Given the large push by academia and industry (e.g., IBM and Google), quantum computers with hundreds of qubits are on the brink of existence, with the promise of outperforming any classical computer. Demonstrating a computational advantage of noisy near-term quantum computers over classical computers is an imperative near-term goal. The foremost candidate task for showing this is Random Circuit Sampling (RCS): the task of sampling from the output distribution of a random circuit. This is exactly the task that Google recently performed experimentally on 53 qubits.
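As a toy version of the task (3 qubits rather than 53, with a brute-force state-vector simulation that is exactly what becomes intractable classically at larger sizes), RCS can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d):
    # Haar-random unitary via QR of a complex Gaussian matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

# 3-qubit random circuit: alternate 2-qubit gates on pairs (0,1) and (1,2)
state = np.zeros(8, dtype=complex)
state[0] = 1.0
I2 = np.eye(2)
for layer in range(4):
    if layer % 2 == 0:
        U = np.kron(haar_unitary(4), I2)   # gate on qubits 0, 1
    else:
        U = np.kron(I2, haar_unitary(4))   # gate on qubits 1, 2
    state = U @ state

probs = np.abs(state) ** 2                  # output distribution of the circuit
samples = rng.choice(8, size=10, p=probs)   # the RCS task: sample from it
print(np.round(probs, 3), samples)
```

For n qubits the state vector has 2^n entries, which is why sampling from deep random circuits on ~50 qubits is believed to be hard classically.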
Basic statistical properties of quantum many-body systems in thermal equilibrium can be obtained from their partition function. In this talk, I will present a quasi-polynomial time classical algorithm that estimates the partition function of quantum many-body systems at temperatures above the thermal phase transition point. It is known that in the worst case, the same problem is NP-hard below this temperature. This shows that the transition in the phase of a quantum system is also accompanied by a transition in the computational hardness of estimating its statistical properties.
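For intuition about the object being estimated (a brute-force sketch, not the talk's algorithm: exact diagonalization is feasible only for a handful of sites, which is exactly the exponential cost an efficient algorithm must avoid), the partition function Z(β) = Tr e^(−βH) of a small transverse-field Ising chain can be computed directly:

```python
import numpy as np

# Partition function Z(beta) = Tr exp(-beta * H) for a toy many-body system:
# a 3-site transverse-field Ising chain, H = -sum Z_i Z_{i+1} - g * sum X_i.
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I = np.eye(2)

def op(single, site, n=3):
    # embed a single-site operator at `site` in the n-site chain
    mats = [single if k == site else I for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

g = 1.0
H = -(op(Z, 0) @ op(Z, 1) + op(Z, 1) @ op(Z, 2)) - g * sum(op(X, k) for k in range(3))

def partition_function(beta):
    evals = np.linalg.eigvalsh(H)
    return np.sum(np.exp(-beta * evals))

for beta in [0.1, 1.0, 10.0]:
    print(beta, partition_function(beta))
# As beta -> 0, Z -> 2^3 (the Hilbert-space dimension); as beta -> infinity,
# the ground-state term exp(-beta * E0) dominates, so -ln(Z)/beta -> E0.
```

The talk's result concerns the regime of small β (high temperature), where such traces admit quasi-polynomial-time classical estimates despite the exponential dimension.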
Schur-Weyl duality is a ubiquitous tool in quantum information. At its heart is the statement that the space of operators that commute with the tensor powers of all unitaries is spanned by the permutations of the tensor factors. In this work, we describe a similar duality theory for tensor powers of Clifford unitaries. The Clifford group is a central object in many subfields of quantum information, most prominently in the theory of fault-tolerance. The duality theory has a simple and clean description in terms of finite geometries.
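The classical statement can be checked numerically on two tensor factors (an illustration, not part of the talk): the commutant of {U ⊗ U} is the null space of the linear constraints [X, U ⊗ U] = 0 for a few random unitaries, and its dimension should equal 2, matching the span of the identity and the swap of the factors:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2        # local dimension; we work on (C^d) tensor (C^d)
D = d * d

def haar_unitary(n):
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

# Stack the linear constraints [X, U(x)U] = 0 for several random U, using
# the column-stacking identity vec(AX - XA) = (I (x) A - A^T (x) I) vec(X).
blocks = []
for _ in range(5):
    UU = np.kron(*(2 * [haar_unitary(d)]))
    blocks.append(np.kron(np.eye(D), UU) - np.kron(UU.T, np.eye(D)))
A = np.vstack(blocks)

# The solution space is the commutant; Schur-Weyl duality says it is
# span{identity, SWAP}, so its dimension should be 2.
svals = np.linalg.svd(A, compute_uv=False)
dim_commutant = int(np.sum(svals < 1e-10))
print(dim_commutant)  # 2
```

The duality for Clifford tensor powers described in the work replaces the permutations by a larger commutant with a finite-geometry description.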
It is believed that active quantum error correction will be an essential ingredient to build a scalable quantum computer. The currently favored scheme is the surface code due to its high decoding threshold and efficient decoding algorithm. However, it suffers from large overheads which are even more severe when parity check measurements are subject to errors and have to be repeated. Furthermore, the number of encoded qubits in the surface code does not grow with system size, leading to a sub-optimal use of the physical qubits.
The area law for entanglement provides one of the most important connections between information theory and quantum many-body physics. It is related not only to the universality of quantum phases but also to efficient numerical simulation of ground states (i.e., lowest-energy states). Various numerical observations have led to a strong belief that the area law holds for every non-critical phase in short-range interacting systems [1].
We investigate weak coin flipping, a fundamental cryptographic primitive in which two distrustful parties need to remotely establish a shared random bit. A cheating player can try to bias the output bit towards a preferred value. A weak coin-flipping protocol has bias ϵ if neither player can force the outcome towards their preferred value with probability more than 1/2 + ϵ. While classically the best achievable bias is ϵ = 1/2, Mochon showed in 2007 [arXiv:0711.4114] that quantum weak coin flipping can be achieved with arbitrarily small bias, i.e., ϵ can be made arbitrarily close to 0.
The quantum Fisher information (QFI) measures the amount of information that a quantum state carries about an unknown parameter. Given a quantum channel, its entanglement-assisted QFI is the maximum QFI of the output state, assuming an input state over the system and an arbitrarily large ancilla. Given N identical copies of a quantum channel, the channel QFI asymptotically grows either linearly or quadratically with N. Here we obtain a simple criterion that determines whether the scaling is linear or quadratic.
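The two scalings can already be seen for pure states (a standard textbook illustration, not the talk's criterion): for the family |ψ_θ⟩ = e^(−iθG)|ψ⟩ the QFI is F = 4 Var_ψ(G), and for N parallel phase channels with G the sum of the single-qubit generators Z_j/2, a product input gives F = N while a GHZ input gives F = N²:

```python
import numpy as np

# Pure-state QFI for |psi_theta> = exp(-i*theta*G)|psi>:  F = 4 * Var_psi(G).
# For N parallel phase channels, the total generator is G = sum_j Z_j / 2.
Z = np.array([[1., 0.], [0., -1.]])
I = np.eye(2)

def total_generator(n):
    G = np.zeros((2 ** n, 2 ** n))
    for site in range(n):
        mats = [Z / 2 if k == site else I for k in range(n)]
        term = mats[0]
        for m in mats[1:]:
            term = np.kron(term, m)
        G += term
    return G

def qfi(psi, G):
    mean = np.vdot(psi, G @ psi).real
    return 4 * (np.vdot(psi, G @ (G @ psi)).real - mean ** 2)

for n in [2, 3, 4]:
    G = total_generator(n)
    plus = np.full(2 ** n, 2 ** (-n / 2))      # product state |+>^n
    ghz = np.zeros(2 ** n)
    ghz[0] = ghz[-1] = 2 ** -0.5               # GHZ state
    print(n, qfi(plus, G), qfi(ghz, G))        # N (linear) vs N^2 (quadratic)
```

The channel QFI question asks which of these two behaviors a given noisy channel permits in the limit of many copies, which is what the criterion in the talk decides.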
Motivated by puzzles in quantum gravity and AdS/CFT, Lenny Susskind posed the following question: supposing one had the technological ability to distinguish a macroscopic superposition of two given states |v> and |w> from an incoherent mixture of those states, would one also have the technological ability to map |v> to |w> and vice versa? More precisely, how does the quantum circuit complexity of one task relate to that of the other? Here we resolve Susskind's question, showing that the two complexities are essentially identical, even for approximate versions of these tasks.