This series consists of weekly discussion sessions on the foundations of quantum theory and quantum information theory. Each session starts with an informal exposition of an interesting topic, research result or important question in the field. Everyone is strongly encouraged to participate with questions and comments.
In a quantum measurement process, classical information about the measured system spreads through the environment. In contrast, quantum information about the system becomes inaccessible to local observers. In this talk, I will present a result about quantum channels indicating that an aspect of this phenomenon is completely general. We show that for any evolution of the system and environment, everywhere in the environment except for an O(1)-sized region we call the "quantum Markov blanket," any locally accessible information about the system must be approximately classical.
"Analogue" Hamiltonian simulation involves engineering a Hamiltonian of interest in the laboratory and studying its properties experimentally. Large-scale Hamiltonian simulation experiments have been carried out in optical lattices, ion traps and other systems for two decades. Despite this, the theoretical basis for Hamiltonian simulation is surprisingly sparse. Even a precise definition of what it means to simulate a Hamiltonian was lacking.
The Fermi-Hubbard model is of fundamental importance in condensed-matter physics, yet is extremely challenging to solve numerically. Finding the ground state of the Hubbard model using variational methods has been predicted to be one of the first applications of near-term quantum computers. In this talk, I will discuss recent work which carried out a detailed analysis and optimisation of the complexity of variational quantum algorithms for finding the ground state of the Hubbard model, including extensive numerical experiments for systems with up to 12 sites.
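To make the target of such variational algorithms concrete, the smallest nontrivial instance, the two-site Hubbard model, can be diagonalised exactly in a few lines of NumPy via a Jordan-Wigner encoding. This is a toy sketch for illustration (the parameter values t = 1, U = 4 are arbitrary choices, not taken from the talk):

```python
import numpy as np
from functools import reduce

I2, Z = np.eye(2), np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # lowering operator |0><1|

def a(p, n=4):
    # Jordan-Wigner annihilation operator for fermionic mode p (of n modes)
    return reduce(np.kron, [Z] * p + [sm] + [I2] * (n - p - 1))

# Modes: 0 = site1 up, 1 = site1 down, 2 = site2 up, 3 = site2 down
t, U = 1.0, 4.0
ops = [a(p) for p in range(4)]
num = [op.T @ op for op in ops]           # number operators n_p = a_p^dag a_p

# Hopping between like spins on the two sites, plus on-site repulsion
hop = sum(ops[p].T @ ops[q] + ops[q].T @ ops[p] for p, q in [(0, 2), (1, 3)])
H = -t * hop + U * (num[0] @ num[1] + num[2] @ num[3])

# Restrict to half filling (two particles) and take the lowest energy there
evals, evecs = np.linalg.eigh(H)
Ntot = sum(num)
occ = np.array([vec.conj() @ Ntot @ vec for vec in evecs.T]).real
E0 = evals[np.isclose(occ, 2.0)].min()
exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2   # known two-site closed form
```

A variational algorithm would instead minimise the energy of a parametrised circuit state against this H; the exact value above is what the 12-site numerical experiments generalise.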
I will review work by myself and others in recent years on the use of randomization in quantum circuit optimization. I will present general results showing that any deterministic compiler for an approximate synthesis problem can be lifted to a better random compiler. I will discuss the subtle issue of what "better" means and how it is sensitive to the metric and computational task at hand. I will then review specific randomized algorithms for quantum simulations, including randomized Trotter (Su & Childs) and my group's work on the qDRIFT and SPARSTO algorithms. The qDRIFT algorithm implements Hamiltonian simulation with a random product formula, sampling terms of the Hamiltonian with probabilities proportional to their interaction strengths.
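The core qDRIFT idea can be sketched numerically. The following is my own toy reconstruction of the published algorithm (a hypothetical single-qubit Hamiltonian, not code from the talk): each gate is drawn from the Hamiltonian's terms with probability proportional to its strength, and the *average* over random circuits approximates the exact channel.

```python
import numpy as np
rng = np.random.default_rng(0)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Hypothetical Hamiltonian H = sum_j h_j P_j with strengths h_j > 0
terms = [(0.5, X), (0.3, Z)]
lam = sum(h for h, _ in terms)          # lambda = total strength
probs = [h / lam for h, _ in terms]
t, N = 1.0, 50                          # total time, gates per random circuit

def gate(P):
    # exp(-i (lam*t/N) P) for a Pauli P (uses P^2 = I)
    th = lam * t / N
    return np.cos(th) * np.eye(2) - 1j * np.sin(th) * P

gates = [gate(P) for _, P in terms]

def qdrift_circuit():
    V = np.eye(2, dtype=complex)
    for j in rng.choice(len(terms), size=N, p=probs):
        V = gates[j] @ V
    return V

# Compare the averaged random channel against exact evolution on |0><0|
rho = np.diag([1.0, 0.0]).astype(complex)
H = sum(h * P for h, P in terms)
w, v = np.linalg.eigh(H)
U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T
exact = U @ rho @ U.conj().T

samples = 4000
avg = np.zeros((2, 2), complex)
for _ in range(samples):
    V = qdrift_circuit()
    avg += V @ rho @ V.conj().T
avg /= samples
err = np.abs(avg - exact).max()   # O(lam^2 t^2 / N) bias plus sampling noise
```

Note that any single sampled circuit is a poor approximation; the guarantee is for the mixed channel, which is why the comparison averages over many runs.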
Discriminating between unknown objects in a given set is a fundamental task in experimental science. Suppose you are given a quantum system which is in one of two given states with equal probability. Determining the actual state of the system amounts to doing a measurement on it which would allow you to discriminate between the two possible states. It is known that unless the two states are mutually orthogonal, perfect discrimination is possible only if you are given arbitrarily many identical copies of the state.
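The single-copy limit is quantified by the Helstrom bound, which caps the success probability strictly below 1 for non-orthogonal states. A quick NumPy check for an illustrative pair of states, |0⟩ and |+⟩ (my own example, not from the talk):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)   # |+>, non-orthogonal to |0>
rho0, rho1 = np.outer(ket0, ket0), np.outer(ketp, ketp)

# Helstrom bound for equal priors: p_succ = 1/2 + (1/4) * ||rho0 - rho1||_1
trace_norm = np.abs(np.linalg.eigvalsh(rho0 - rho1)).sum()
p_succ = 0.5 + 0.25 * trace_norm   # about 0.854, strictly below 1
```

With n identical copies the bound is applied to the tensor powers, and the overlap (here 1/√2) decays exponentially in n, which is why arbitrarily many copies are needed for perfect discrimination.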
Unitary t-designs are the bread and butter of quantum information theory and beyond. An important issue in practice is that of efficiently constructing good approximations of such unitary t-designs. Building on results by Aubrun (Comm. Math. Phys. 2009), we prove that sampling d^t · poly(t, log d, 1/ε) unitaries from an exact t-design provides, with positive probability, an ε-approximate t-design, if the error is measured in the one-to-one norm.
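The theorem itself concerns sampling from an exact design, but the moment-matching property it preserves is easy to illustrate numerically. As a loose toy check for t = 1 only (my own example, with Haar-random unitaries standing in for the design), the empirical twirl of sampled unitaries converges to the depolarising channel ρ ↦ I/d:

```python
import numpy as np
rng = np.random.default_rng(1)

def haar_unitary(d):
    # Haar-random unitary: QR of a complex Ginibre matrix, with the
    # standard phase correction on the diagonal of R (Mezzadri's recipe)
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, R = np.linalg.qr(G)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

d, samples = 2, 5000
rho = np.diag([1.0, 0.0]).astype(complex)   # any fixed input state
avg = np.zeros((d, d), complex)
for _ in range(samples):
    U = haar_unitary(d)
    avg += U @ rho @ U.conj().T
avg /= samples
err = np.abs(avg - np.eye(d) / d).max()   # 1-design twirl sends rho to I/d
```

An ε-approximate t-design makes the analogous statement for all moments up to t, with the error measured in the norm stated above.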
The Petz recovery channel plays an important role in quantum information science as an operation that approximately reverses the effect of a quantum channel. The pretty good measurement is a special case of the Petz recovery channel, and it allows for near-optimal state discrimination. A hurdle to the experimental realization of these vaunted theoretical tools is the lack of a systematic and efficient method to implement them.
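The pretty good measurement itself is simple to write down, even if implementing it efficiently on hardware is the hard part the talk addresses. A small NumPy sketch for a hypothetical two-state ensemble (|0⟩ and |+⟩ with equal priors, my own example):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
states = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
priors = [0.5, 0.5]

rho = sum(p * s for p, s in zip(priors, states))   # ensemble average state
w, v = np.linalg.eigh(rho)
rho_isqrt = v @ np.diag(w ** -0.5) @ v.T           # rho^{-1/2} (full rank here)

# Pretty good measurement elements: E_i = rho^{-1/2} (p_i rho_i) rho^{-1/2}
povm = [rho_isqrt @ (p * s) @ rho_isqrt for p, s in zip(priors, states)]
completeness = np.abs(sum(povm) - np.eye(2)).max()   # POVM sums to identity
p_succ = sum(p * np.trace(E @ s).real
             for p, s, E in zip(priors, states, povm))
```

The conjugation by ρ^{-1/2} is exactly the structure of the Petz recovery channel for this ensemble, which is why the PGM arises as a special case.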
Zero-knowledge proofs are one of the cornerstones of modern cryptography. It is well known that any language in NP admits a zero-knowledge proof. In the quantum setting, it is possible to go beyond NP. Zero-knowledge proofs for QMA were first studied in a work of Broadbent et al. (FOCS'16). There, the authors show that any language in QMA has an (interactive) zero-knowledge proof. In this talk, I will describe an idea, based on quantum teleportation, to remove interaction at the cost of adding an instance-independent preprocessing step.
We study the problem of learning the Hamiltonian of a quantum many-body system given samples from its Gibbs (thermal) state. The classical analog of this problem, known as learning graphical models or Boltzmann machines, is a well-studied question in machine learning and statistics. In this work, we give the first sample-efficient algorithm for the quantum Hamiltonian learning problem. In particular, we prove that a number of samples polynomial in the number of particles (qudits) is necessary and sufficient for learning the parameters of a spatially local Hamiltonian in the l_2-norm.
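The flavour of the problem can be seen in a deliberately trivial one-qubit toy version (my own illustration, far simpler than the many-body setting of the paper): for H = a·Z at known inverse temperature β, the Gibbs expectation ⟨Z⟩ = −tanh(βa) pins down the coupling a, so an estimate of ⟨Z⟩ from samples yields an estimate of a.

```python
import numpy as np

# Toy Hamiltonian learning from a Gibbs state: one qubit, H = a * Z
beta, a_true = 1.0, 0.7
Z = np.diag([1.0, -1.0])
H = a_true * Z
expH = np.diag(np.exp(-beta * np.diag(H)))
rho = expH / np.trace(expH)          # Gibbs state e^{-beta H} / tr(...)

z_avg = np.trace(rho @ Z).real       # in practice, estimated from samples
a_est = -np.arctanh(z_avg) / beta    # invert <Z> = -tanh(beta * a)
```

The many-body difficulty, which the sample-complexity result addresses, is that local expectation values depend on all the couplings at once, so the inversion is no longer a closed-form one-parameter formula.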
We reconsider the black hole firewall puzzle, emphasizing that quantum error-correction, computational complexity, and pseudorandomness are crucial concepts for understanding the black hole interior. We assume that the Hawking radiation emitted by an old black hole is pseudorandom, meaning that it cannot be distinguished from a perfectly thermal state by any efficient quantum computation acting on the radiation alone. We then infer the existence of a subspace of the radiation system which we interpret as an encoding of the black hole interior.