
Quantum Machine Learning

Conference Date: 
Monday, August 8, 2016 (All day) to Friday, August 12, 2016 (All day)
Scientific Areas: 
Condensed Matter
Tensor Networks

 

The advent of modern machine learning has ushered in rapid advances in the classification and interpretation of large data sets, sparking a revolution in areas such as image and natural language processing.  Much of our current understanding of the techniques that underlie this revolution owes a great debt to insights first gleaned from condensed matter and statistical physics. This raises the important question of what further insights remain to be found at the intersection of machine learning and fields such as statistical physics, condensed matter, and quantum information.  In response to this question, this workshop aims to bring together experts from a variety of backgrounds who are interested in connections between many-body physics, quantum computing and machine learning.  The scope of the conference will include:

- The use of techniques from machine learning, such as neural networks or statistical learning, to tackle quantum many-body problems, such as discriminating phases of matter, analyzing phase transitions, and addressing the inverse Hamiltonian problem.

- Physics-inspired algorithms for machine learning and neural networks, such as extensions of Boltzmann machines (classical statistical mechanical learning) and connections between deep learning, the renormalization group, and tensor networks/MERA.

- Opportunities for machine learning that quantum computing will enable. This includes algorithmic advances for fault-tolerant computers, as well as currently available hardware systems such as quantum annealers.

Registration for this event is now closed.
 
 
 
Speakers:

  • Mohammad Amin, D-Wave Systems
  • Peter Broecker, University of Cologne
  • Kieron Burke, University of California, Irvine
  • Matthew Fisher, Kavli Institute for Theoretical Physics
  • Christopher Granade, University of Sydney
  • Sergei Isakov, Google
  • Ashish Kapoor, Microsoft Research
  • Rosemary Ke, University of Montreal
  • Seth Lloyd, Massachusetts Institute of Technology
  • Andrew Millis, Simons Foundation
  • Alejandro Perdomo-Ortiz, NASA Ames Research Center
  • Barry Sanders, University of Calgary
  • Maria Schuld, University of KwaZulu-Natal
  • David Schwab, Northwestern University
  • Cyril Stark, Massachusetts Institute of Technology
  • James Steck, Wichita State University
  • Damian Steiger, ETH Zurich & Google
  • Miles Stoudenmire, University of California, Irvine
  • Giacomo Torlai, University of Waterloo
Participants:

  • Mohammad Amin, D-Wave Systems
  • Louis-Francois Arsenault, Columbia University
  • Jacob Barnett, Perimeter Institute
  • Matt Beach, University of British Columbia
  • Stefanie Beale, Institute for Quantum Computing
  • Oleg Boulanov, Université Laval
  • Daniel Brod, Perimeter Institute
  • Peter Broecker, University of Cologne
  • Kieron Burke, University of California, Irvine
  • Juan Carrasquilla, Perimeter Institute
  • Chen-Fu Chiang, SUNY
  • Joshua Combes, Perimeter Institute
  • Alexandre Day, Boston University
  • Matthew Fisher, Kavli Institute for Theoretical Physics
  • Wenbo Fu, Harvard University
  • Martin Ganahl, Perimeter Institute
  • Sevag Gharibian, Virginia Commonwealth University
  • Victor Godet, Google
  • Christopher Granade, University of Sydney
  • Zhengcheng Gu, Perimeter Institute
  • Gian Giacomo Guerreschi, Intel
  • Guiyang Han, University of Waterloo
  • Lauren Hayward-Sierens, Perimeter Institute
  • Yejin Huh, University of Toronto
  • Sergei Isakov, Google
  • Bryan Jacobs, IARPA
  • Ying-Jer Kao, National Taiwan University
  • Ashish Kapoor, Microsoft Research
  • Hemant Katiyar, Institute for Quantum Computing
  • Rosemary Ke, University of Montreal
  • Adrian Kent, Cambridge University
  • Ehsan Khatami, San Jose State University
  • Aaram Kim, Goethe-Universität Frankfurt am Main
  • Alexandre Krajenbrink, Cambridge Quantum Computing
  • Bohdan Kulchytskyy, University of Waterloo
  • Joel Lamy-Poirier, Perimeter Institute
  • Jaehoon Lee, University of British Columbia
  • Junhyun Lee, Harvard University
  • Ipsita Mandal, Perimeter Institute
  • Roger Melko, Perimeter Institute & University of Waterloo
  • Andrew Millis, Simons Foundation
  • Ryan Mishmash, California Institute of Technology
  • Robert Myers, Perimeter Institute
  • Apurva Narayan, University of Waterloo
  • Nam Nguyen, Wichita State University
  • Chan Y. Park, Rutgers University
  • Alejandro Perdomo-Ortiz, NASA Ames Research Center
  • Anthony Polloreno, Rigetti Computing
  • Pedro Ponte, Perimeter Institute
  • Andrew Reeves, Grand River Regional Cancer Center
  • Trevor Rempel, Perimeter Institute
  • Julian Rincon, Perimeter Institute
  • Nicholas Rubin, Rigetti Computing
  • Wojciech Rzadkowski, University of Warsaw
  • Subir Sachdev, Harvard University
  • Barry Sanders, University of Calgary
  • Norbert Schuch, Max-Planck-Institute of Quantum Optics
  • Maria Schuld, University of KwaZulu-Natal
  • David Schwab, Northwestern University
  • Ivan Sergienko, Scotiabank
  • Todd Sierens, Perimeter Institute
  • Rajiv Singh, University of California, Davis
  • Cyril Stark, Massachusetts Institute of Technology
  • James Steck, Wichita State University
  • Damian Steiger, ETH Zurich & Google
  • Miles Stoudenmire, University of California, Irvine
  • Yongchao Tang, University of Waterloo
  • Giacomo Torlai, University of Waterloo
  • Jordan Venderley, Cornell University
  • Guillaume Verdon-Akzam, University of Waterloo
  • Guifre Vidal, Perimeter Institute
  • Yuan Wan, Perimeter Institute
  • Chenjie Wang, Perimeter Institute
  • Ching-Hao Wang, Boston University
  • Shuo Yang, Perimeter Institute
  • Chuck-Hou Yee, Rutgers University

Monday, August 8, 2016

Time

Event

Location

9:00 – 9:30am

Registration

Reception

9:30 – 9:35am

Welcome and Opening Remarks

Theatre

9:35 – 10:15am

Ashish Kapoor, Microsoft Research
Comparing Classical and Quantum Methods for Supervised Machine Learning

Theatre

10:15 – 11:00am

Coffee Break

Bistro – 1st Floor

11:00 – 11:45am

Maria Schuld, University of KwaZulu-Natal
Classification on a quantum computer: Linear regression and ensemble methods

Theatre

11:45 – 12:30pm

Christopher Granade, University of Sydney
Rejection and Particle Filtering for Hamiltonian Learning

Theatre

12:30 – 2:30pm

Lunch

Bistro – 2nd Floor

2:30 – 3:15pm

Barry Sanders, University of Calgary
Learning in Quantum Control: High-Dimensional Global Optimization for Noisy Quantum Dynamics

Theatre

 

Tuesday, August 9, 2016

Time

Event

Location

9:30 – 10:15am

Cyril Stark, Massachusetts Institute of Technology
Physics-inspired techniques for association rule mining

Theatre

10:15 – 11:00am

Coffee Break

Bistro – 1st Floor

11:00 – 11:45am

David Schwab, Northwestern University
Physical approaches to the extraction of relevant information

Theatre

11:45 – 12:30pm

Miles Stoudenmire, University of California, Irvine
Learning with Quantum-Inspired Tensor Networks

Theatre

12:30 – 2:30pm

Lunch

Bistro – 2nd Floor

2:30 – 3:00pm

James Steck, Wichita State University
Learning quantum annealing

Theatre

3:00 – 3:30pm

Rosemary Ke, MILA, University of Montreal
Deep Learning: An Overview

Theatre

 

Wednesday, August 10, 2016

Time

Event

Location

9:30 – 10:15am

Sergei Isakov, Google
TBA

Theatre

10:15 – 11:00am

Coffee Break

Bistro – 1st Floor

11:00 – 11:45am

Mohammad Amin, D-Wave Systems
Quantum Boltzmann Machine using a Quantum Annealer

Theatre

11:45 – 12:30pm

Alejandro Perdomo-Ortiz, NASA Ames Research Center
A quantum-assisted algorithm for sampling applications in machine learning

Theatre

12:00 – 2:00pm

Lunch

Bistro – 2nd Floor

2:00-3:30pm

Colloquium
Matthew Fisher, Kavli Institute for Theoretical Physics
Quantum Crystals, Quantum Computing and Quantum Cognition

Theatre

 

Thursday, August 11, 2016

Time

Event

Location

9:30 – 10:15am

Kieron Burke, University of California, Irvine
Finding density functionals with machine-learning

Theatre

10:15 – 11:00am

Coffee Break

Bistro – 1st Floor

11:00 – 11:45am

Andrew Millis, Columbia University
TBA

Theatre

11:45 – 12:30pm

Juan Carrasquilla, Perimeter Institute
Machine Learning Phases of Matter

Theatre

12:30 – 2:30pm

Lunch

Bistro – 2nd Floor

2:30 – 2:45pm

Conference Photo

TBA

2:45 – 3:15pm

Giacomo Torlai, University of Waterloo
Learning Thermodynamics with Boltzmann Machines

Theatre

3:15 – 3:45pm

Peter Broecker, University of Cologne
Machine learning quantum phases of matter beyond the fermion sign problem

Theatre

5:30pm

Pub Night

Bistro – 2nd Floor

 

Friday, August 12, 2016

Time

Event

Location

9:30 – 10:15am

Damian Steiger, ETH Zurich & Google
Racing in parallel: Quantum versus Classical

Theatre

10:15 – 11:00am

Coffee Break

Bistro – 1st Floor

11:00 – 11:45am

Seth Lloyd, Massachusetts Institute of Technology
Quantum algorithm for topological analysis of data

Theatre

12:00 – 2:30pm

Lunch

Bistro – 2nd Floor

2:30 – 5:00pm

Collaboration

Theatre

 

Mohammad Amin, D-Wave Systems

Quantum Boltzmann Machine using a Quantum Annealer

Machine learning is a rapidly growing field in computer science with applications in computer vision, voice recognition, medical diagnosis, spam filtering, search engines, and more. In this presentation, I will introduce a new machine learning approach based on the quantum Boltzmann distribution of a transverse-field Ising model. Due to the non-commutative nature of quantum mechanics, the training process of the Quantum Boltzmann Machine (QBM) can become nontrivial. I will show how to circumvent this problem by introducing bounds on the quantum probabilities, which allows the QBM to be trained efficiently by sampling. I will then show examples of QBM training with and without the bound, using exact diagonalization, and compare the results with classical Boltzmann training. Finally, after a brief introduction to D-Wave quantum annealing processors, I will discuss the possibility of using such processors for QBM training and applications.
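
The quantum Boltzmann distribution referred to above can be sketched for a toy system by exact diagonalization. The sketch below is an illustrative reconstruction, not the speaker's code: the system size and couplings are arbitrary. It builds a small transverse-field Ising Hamiltonian and reads off the Born probabilities of the computational basis states from the Gibbs state:

```python
import numpy as np
from scipy.linalg import expm

# Single-qubit operators
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(site, op, n):
    """Tensor `op` acting on `site` of an n-qubit register."""
    mats = [op if i == site else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def tfim_hamiltonian(n, gamma, b, w):
    """H = -gamma*sum_i X_i - sum_i b_i Z_i - sum_{i<j} w_ij Z_i Z_j."""
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        H -= gamma * op_on(i, X, n)
        H -= b[i] * op_on(i, Z, n)
        for j in range(i + 1, n):
            H -= w[i, j] * op_on(i, Z, n) @ op_on(j, Z, n)
    return H

n = 3
rng = np.random.default_rng(0)
b = rng.normal(size=n)            # illustrative biases
w = rng.normal(size=(n, n))       # illustrative couplings
H = tfim_hamiltonian(n, gamma=1.0, b=b, w=w)

rho = expm(-H)                    # unnormalised Gibbs state at beta = 1
rho /= np.trace(rho)
p = np.real(np.diag(rho))         # probabilities of computational basis states
```

For gamma = 0 the Hamiltonian is diagonal and `p` reduces to a classical Boltzmann distribution, the limit against which QBM training is compared.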

Peter Broecker, University of Cologne

Machine learning quantum phases of matter beyond the fermion sign problem

Kieron Burke, University of California, Irvine

Finding density functionals with machine-learning

Density functional theory (DFT) is an extremely popular approach to electronic structure problems in materials science, chemistry, and many other fields. Over the past several years, often in collaboration with Klaus Mueller at TU Berlin, we have explored using machine learning to find the density functionals that must be approximated in DFT calculations. I will summarize our results so far and report on two new works.
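
A standard tool in this line of work is kernel ridge regression from densities to energies. The sketch below is only a toy stand-in: the "densities" are Gaussian bumps on a grid and the "functional" is a simple integral, not an actual density functional from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 50)
dx = grid[1] - grid[0]

def density(center, width):
    """A normalised Gaussian bump standing in for a one-particle density."""
    n = np.exp(-(grid - center)**2 / (2 * width**2))
    return n / (n.sum() * dx)

# Training set: densities and the value of a toy "functional" F[n] = integral n^2
centers = rng.uniform(0.3, 0.7, 40)
widths = rng.uniform(0.05, 0.15, 40)
Xtrain = np.array([density(c, w) for c, w in zip(centers, widths)])
ytrain = np.array([(n**2).sum() * dx for n in Xtrain])

def kernel(A, B, sigma=3.0):
    """Gaussian kernel between discretised densities."""
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

lam = 1e-6                                     # ridge regulariser
K = kernel(Xtrain, Xtrain)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), ytrain)

# Predict the functional value for a held-out density
n_test = density(0.5, 0.1)
pred = (kernel(n_test[None, :], Xtrain) @ alpha)[0]
exact = (n_test**2).sum() * dx
```

The same recipe, with real densities and real energies, underlies the ML-DFT approach the abstract describes.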

Juan Carrasquilla, Perimeter Institute

Machine Learning Phases of Matter

Matthew Fisher, Kavli Institute for Theoretical Physics

Quantum Crystals, Quantum Computing and Quantum Cognition

Quantum mechanics is down to earth - quite literally - since the electrons within the tiny crystals found in a handful of dirt manifest a dizzying world of quantum motion. Each crystal has its own unique choreography, with the electrons entangled in a myriad of quantum dances. Quantum entanglement also holds the promise of futuristic Quantum Computers - which might be comprised of electron and nuclear spins inside diamond, or of atoms confined in traps, or of small superconducting grains, among a plethora of suggested platforms. In this talk I will describe ongoing efforts to elucidate the mysteries of Quantum Crystals and to design and assemble Quantum Computers, before ruminating about “Quantum Cognition” - the proposal that our brains are capable of quantum processing.
 
Christopher Granade, University of Sydney
 
Rejection and Particle Filtering for Hamiltonian Learning
 
Many tasks in quantum information rely on accurate knowledge of a system's Hamiltonian, including calibrating control, characterizing devices, and verifying quantum simulators. In this talk, we pose the problem of learning Hamiltonians as an instance of parameter estimation. We then solve this problem with Bayesian inference, and describe how rejection and particle filtering provide efficient numerical algorithms for learning Hamiltonians. Finally, we discuss how filtering can be combined with quantum resources to verify quantum systems beyond the reach of classical simulators.
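
A minimal particle filter for a single-parameter Hamiltonian can illustrate the approach, assuming a Rabi-type likelihood cos²(ωt/2). The experiment schedule, prior, and resampling kernel below are illustrative choices, not the speaker's:

```python
import numpy as np

rng = np.random.default_rng(2)
true_omega = 0.7                        # unknown Hamiltonian parameter

def likelihood(outcome, omega, t):
    """Born-rule likelihood for a Rabi-type experiment of duration t."""
    p0 = np.cos(omega * t / 2)**2
    return p0 if outcome == 0 else 1.0 - p0

n_particles = 2000
particles = rng.uniform(0.0, 2.0, n_particles)     # uniform prior over omega
weights = np.full(n_particles, 1.0 / n_particles)

for step in range(50):
    t = 1.0 + 0.5 * step                           # simple experiment schedule
    outcome = 0 if rng.random() < np.cos(true_omega * t / 2)**2 else 1
    weights *= likelihood(outcome, particles, t)   # Bayes update
    weights /= weights.sum()
    if 1.0 / (weights**2).sum() < n_particles / 2: # resample when ESS is low
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx] + rng.normal(0, 0.01, n_particles)
        weights = np.full(n_particles, 1.0 / n_particles)

estimate = float(np.sum(weights * particles))      # posterior mean of omega
```

The posterior mean converges towards the true ω as data accumulates; rejection filtering replaces the weight update with accept/reject steps but follows the same Bayesian logic.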
 
Sergei Isakov, Google
 
Towards Quantum Supremacy with Near-Term Devices
 
Can quantum computers outperform classical computers on any computational problem in the near future? We study the problem of sampling from the output distribution of random quantum circuits. Sampling from this distribution requires an exponential amount of classical computational resources. We argue that quantum supremacy can be achieved in the near future with approximately fifty superconducting qubits and without error correction, despite the fact that quantum random circuits are extremely sensitive to errors.
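
At small sizes the distribution in question can be computed by brute-force statevector simulation. The sketch below uses an illustrative gate set (Haar-random single-qubit gates plus CZ bricks, not the circuits of the talk); for sufficiently deep circuits the output probabilities approach Porter-Thomas statistics, for which N·Σp² → 2:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8                                    # qubits; statevector has 2^8 amplitudes
psi = np.zeros((2,) * n, dtype=complex)
psi[(0,) * n] = 1.0

def haar_1q():
    """Haar-random 2x2 unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(psi, u, k):
    """Apply a single-qubit gate u on qubit k."""
    psi = np.tensordot(u, psi, axes=([1], [k]))
    return np.moveaxis(psi, 0, k)

def apply_cz(psi, a, b):
    """Apply a controlled-Z between qubits a and b."""
    idx = [slice(None)] * n
    idx[a] = 1
    idx[b] = 1
    psi = psi.copy()
    psi[tuple(idx)] *= -1
    return psi

for layer in range(20):
    for k in range(n):
        psi = apply_1q(psi, haar_1q(), k)
    for a in range(layer % 2, n - 1, 2):   # alternating brick pattern of CZs
        psi = apply_cz(psi, a, a + 1)

p = np.abs(psi.reshape(-1))**2             # output distribution over bitstrings
collision = p.size * (p**2).sum()          # -> 2 for Porter-Thomas statistics
```

The exponential memory cost of `psi` is exactly what makes this approach infeasible at around fifty qubits.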
 
Ashish Kapoor, Microsoft Research
 
Comparing Classical and Quantum Methods for Supervised Machine Learning
 
Supervised machine learning is one of the key problems that arises in modern big-data tasks. In this talk, I will first describe several different classical algorithmic paradigms for classification and then contrast them with quantum algorithmic constructs. In particular, we will look at classical methods such as the nearest-neighbor rule, optimization-based algorithms (e.g. SVMs), and Bayesian inference techniques (e.g. the Bayes point machine), and provide a unifying framework that gives a deeper understanding of the quantum versions of these methods.
 
Rosemary Ke, MILA, University of Montreal
 
Deep Learning: An Overview
 
Seth Lloyd, Massachusetts Institute of Technology
 
Quantum algorithm for topological analysis of data
 
This talk presents a quantum algorithm for performing persistent homology, the identification of topological features of data sets such as connected components, holes and voids. Finding the full persistent homology of a data set over n points using classical algorithms takes time O(2^{2n}), while the quantum algorithm takes time O(n^2), an exponential improvement. The quantum algorithm does not require a quantum random access memory and is suitable for implementation on small quantum computers with a few hundred qubits.
 
Alejandro Perdomo-Ortiz, NASA Ames Research Center
 
A quantum-assisted algorithm for sampling applications in machine learning
 
An increase in the efficiency of sampling from Boltzmann distributions would have a significant impact on deep learning and other machine learning applications. Recently, quantum annealers have been proposed as a potential candidate to speed up this task, but several limitations still bar these state-of-the-art technologies from being used effectively. One of the main limitations is that, while the device may indeed sample from a Boltzmann-like distribution, quantum dynamical arguments suggest it will do so with an instance-dependent effective temperature, different from the physical temperature of the device. Unless this unknown temperature can be unveiled, it might not be possible to use a quantum annealer effectively for Boltzmann sampling. In this talk, we present a strategy to overcome this challenge with a simple effective-temperature estimation algorithm. We provide a systematic study assessing the impact of the effective temperature on the learning of a kind of restricted Boltzmann machine embedded on quantum hardware, which can serve as a building block for deep learning architectures. We also provide a comparison to k-step contrastive divergence (CD-k) with k up to 100. Although assuming a suitable fixed effective temperature also allows us to outperform one-step contrastive divergence (CD-1), only when using an instance-dependent effective temperature do we find a performance close to that of CD-100 for the case studied here. We discuss generalizations of the algorithm to other, more expressive generative models beyond restricted Boltzmann machines.
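
The effective-temperature idea can be illustrated classically: if a device samples from exp(-β_eff·E)/Z with unknown β_eff, regressing log-frequency against energy recovers it. The toy sketch below enumerates all states of a small random Ising model, which the actual algorithm must avoid; it only shows the estimation principle:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy energy landscape: all 2^6 spin configurations of a random Ising model
n = 6
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
states = np.array([[(s >> i) & 1 for i in range(n)]
                   for s in range(2**n)]) * 2 - 1          # spins +-1
E = -0.5 * np.einsum('si,ij,sj->s', states, J, states)

beta_true = 0.4                      # unknown "effective temperature"
p = np.exp(-beta_true * E)
p /= p.sum()

# Draw samples as the device would, then estimate beta by linear regression
# of log-frequency against energy: log p_s = -beta*E_s - log Z.
samples = rng.choice(len(E), size=200_000, p=p)
counts = np.bincount(samples, minlength=len(E))
mask = counts > 0
slope = np.polyfit(E[mask], np.log(counts[mask] / counts.sum()), 1)[0]
beta_est = -slope
```

With enough samples, `beta_est` matches `beta_true` closely, and the recovered temperature can then be fed into the Boltzmann machine's gradient updates.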
 
Barry Sanders, University of Calgary
 
Learning in Quantum Control: High-Dimensional Global Optimization for Noisy Quantum Dynamics
 
Quantum control is valuable for various quantum technologies such as high-fidelity gates for universal quantum computing, adaptive quantum-enhanced metrology, and ultra-cold atom manipulation. Although supervised machine learning and reinforcement learning are widely used for optimizing control parameters in classical systems, quantum control for parameter optimization is mainly pursued via gradient-based greedy algorithms. Although the quantum fitness landscape is often amenable to greedy algorithms, they sometimes yield poor results, especially for large-dimensional quantum systems. We employ differential evolution algorithms to circumvent the stagnation problem of non-convex optimization, and we average over the objective function to improve quantum control fidelity for noisy systems. To reduce computational cost, we introduce heuristics for early termination of runs and for adaptive selection of search subspaces. Our implementation is massively parallel and vectorized to reduce run time even further. We demonstrate our methods with two examples, namely quantum phase estimation and quantum gate design, for which we achieve fidelity and scalability superior to those obtained using greedy algorithms.
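
The strategy of combining a global evolutionary optimizer with noise-averaged objectives can be sketched with SciPy's differential evolution on a tiny gate-design problem. The three-pulse parameterisation and over-rotation noise model below are illustrative assumptions, not the speaker's setup:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(5)

def rz(t):
    """Rotation about Z by angle t."""
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def rx(t):
    """Rotation about X by angle t."""
    c, s = np.cos(t / 2), -1j * np.sin(t / 2)
    return np.array([[c, s], [s, c]])

target = rx(np.pi / 2)               # gate we would like to realise

def infidelity(params, n_noise=20):
    """Gate infidelity averaged over random over-rotation noise."""
    t1, t2, t3 = params
    total = 0.0
    for _ in range(n_noise):
        eps = rng.normal(0.0, 0.01, 3)          # noisy control amplitudes
        U = rz(t3 + eps[2]) @ rx(t2 + eps[1]) @ rz(t1 + eps[0])
        total += 1.0 - np.abs(np.trace(target.conj().T @ U))**2 / 4.0
    return total / n_noise

# Global search over pulse angles; polish=False avoids gradient refinement,
# which is unreliable on a noisy objective.
result = differential_evolution(infidelity, bounds=[(-np.pi, np.pi)] * 3,
                                seed=5, maxiter=100, tol=1e-8, polish=False)
```

The optimizer drives the noise-averaged infidelity down to the residual noise floor, which a greedy gradient method can struggle to reach when evaluations are stochastic.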
 
David Schwab, Northwestern University

Physical approaches to the extraction of relevant information

In the first part of this talk, I will focus on the physics of deep learning, a popular subfield of machine learning where recent performance on tasks such as visual object recognition rivals human performance. I present work relating greedy training of deep belief networks to a form of variational real-space renormalization. This connection may help explain how deep networks automatically learn relevant features from data and extract independent factors of variation. Next, I turn to the information bottleneck (IB), an information theoretic approach to clustering and compression of relevant information that has been suggested as a framework for deep learning. I present a new variant of IB called the Deterministic Information Bottleneck, arguing that it better captures the notion of compression while retaining relevant information. 

Maria Schuld, University of KwaZulu-Natal
 
Classification on a quantum computer: Linear regression and ensemble methods
 
Quantum machine learning algorithms usually translate a machine learning method into an algorithm that can exploit the advantages of quantum information processing. One approach is to tackle methods that rely on matrix inversion with the quantum linear-systems-of-equations routine. We give such a quantum algorithm based on unregularised linear regression. As opposed to closely related work by Wiebe, Braun and Lloyd [PRL 109 (2012)], our scheme focuses on a classification task and uses a different combination of core routines that allows us to process non-sparse inputs and significantly improves the dependence on the condition number. The second part of the talk presents an idea that transcends the reproduction of classical results. Instead of considering a single trained classifier, practitioners often use ensembles of models to make predictions more robust and accurate. Under certain conditions, having infinite ensembles can lead to good results. We introduce a quantum sampling scheme that uses the parallelism inherent to a quantum computer in order to sample from 'exponentially large' ensembles that are not explicitly trained.
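
The classical routine being accelerated, unregularised linear regression used as a classifier, fits in a few lines. This is a classical reference sketch on toy data, not the quantum algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two Gaussian classes in the plane, labelled -1 and +1
X0 = rng.normal([-1, -1], 0.5, size=(50, 2))
X1 = rng.normal([1, 1], 0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([-1.0] * 50 + [1.0] * 50)

# Unregularised least squares via the pseudoinverse (the matrix inversion
# that the quantum linear-systems routine targets), with a bias column.
Xb = np.hstack([X, np.ones((len(X), 1))])
w = np.linalg.pinv(Xb) @ y

pred = np.sign(Xb @ w)               # classify by the sign of the fit
acc = (pred == y).mean()
```

The pseudoinverse step is the bottleneck at scale; the quantum scheme replaces it while keeping the sign-of-the-fit classification rule.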

Cyril Stark, Massachusetts Institute of Technology

Physics-inspired techniques for association rule mining
 
Imagine you run a supermarket, and assume that for each customer “u” you record what “u” is buying. For instance, you may observe that u=1 typically buys bread and cheese and u=2 typically buys bread and salami. Studying your dataset you suspect that generally, customers who are likely to buy cheese are likely to buy bread as well. Rules of this kind are called association rules. Mining association rules is of significant practical importance in fields like market basket analysis and healthcare. In this talk I introduce a novel method for association rule mining which is inspired by ideas from classical statistical mechanics and quantum foundations. 
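
For concreteness, the two standard quantities in association rule mining, support and confidence, can be computed by plain counting on the supermarket example above. This is the classical baseline, not the physics-inspired method of the talk:

```python
# Toy market-basket data in the spirit of the abstract's example
transactions = [
    {"bread", "cheese"}, {"bread", "salami"}, {"bread", "cheese", "milk"},
    {"cheese", "milk"}, {"bread", "cheese"}, {"milk"}, {"bread", "salami"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent) over the transactions."""
    return support(antecedent | consequent) / support(antecedent)

# Rule "cheese -> bread": how often do cheese buyers also buy bread?
conf = confidence({"cheese"}, {"bread"})   # 3 of the 4 cheese baskets
```

Rules whose support and confidence exceed chosen thresholds are the "association rules" the abstract refers to.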
 
James Steck, Wichita State University
 
Learning quantum annealing

Damian Steiger, ETH Zurich & Google

Racing in parallel: Quantum versus Classical

In a fair comparison of the performance of a quantum algorithm to a classical one it is important to treat them on equal footing, both regarding resource usage and parallelism. We show how one may otherwise mistakenly attribute speedup due to parallelism as quantum speedup. As an illustration we will go through a few quantum machine learning algorithms, e.g. Quantum Page Rank, and show how a classical parallel computer can solve these problems faster with the same amount of resources.
Our classical parallelism considerations are especially important for quantum machine learning algorithms, which either use QRAM, allow for unbounded fanout, or require an all-to-all communication network.
 
Miles Stoudenmire, University of California, Irvine
 
Learning with Quantum-Inspired Tensor Networks
 
We propose a family of models with an exponential number of parameters which can nevertheless be approximated by a tensor network. Tensor networks are used to represent quantum wavefunctions, and powerful methods for optimizing them can be extended to machine learning applications as well. We use a matrix product state to classify images, and find that a surprisingly small bond dimension yields state-of-the-art results. Tensor networks offer many advantages for machine learning, such as better scaling than existing machine learning approaches and the ability to adapt hyperparameters during training. We will also propose a generative interpretation of the trained models.
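
The contraction that turns a matrix product state into class scores can be sketched as follows. The tensors here are random and untrained, and the sizes are arbitrary, so this only illustrates the contraction pattern, with the label index carried on a central tensor:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sites, bond, n_classes = 8, 4, 3   # illustrative sizes

# Random MPS weights; the central tensor carries the label (class) index.
left = [rng.normal(scale=0.5, size=(bond, 2, bond)) for _ in range(n_sites // 2)]
centre = rng.normal(scale=0.5, size=(bond, n_classes, bond))
right = [rng.normal(scale=0.5, size=(bond, 2, bond)) for _ in range(n_sites // 2)]

def feature(x):
    """Local feature map for a pixel value in [0, 1]."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def scores(pixels):
    """Contract the MPS against the per-pixel feature vectors."""
    v = np.ones(bond)                          # left boundary vector
    for A, x in zip(left, pixels[:n_sites // 2]):
        v = v @ np.einsum('aib,i->ab', A, feature(x))
    v = np.einsum('a,alb->lb', v, centre)      # pick up the label index
    for A, x in zip(right, pixels[n_sites // 2:]):
        v = np.einsum('lb,bc->lc', v, np.einsum('aib,i->ab', A, feature(x)))
    return v @ np.ones(bond)                   # close the right boundary

x = rng.uniform(0, 1, n_sites)                 # a "flattened image"
s = scores(x)                                  # one score per class
```

Training sweeps that update one or two tensors at a time, as in DMRG, are what make these exponentially large models tractable.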
 
Giacomo Torlai, University of Waterloo
 
Learning Thermodynamics with Boltzmann Machines
 
The introduction of neural networks with deep architectures has led to a revolution, giving rise to a new wave of technologies empowering our modern society. Although data science has been the main focus, the idea of generic algorithms that automatically extract features and representations from raw data is quite general and applicable in multiple scenarios. Motivated by the effectiveness of deep learning algorithms in revealing complex patterns and structures underlying data, we are interested in exploiting such tools in the context of many-body physics. I will first introduce the Boltzmann Machine, a stochastic neural network that has been extensively used in the layers of deep architectures. I will describe how such a network can be used to model thermodynamic observables for physical systems in thermal equilibrium, and show that it can faithfully reproduce observables for the two-dimensional Ising model. Finally, I will discuss how to adapt the same network to implement the classical computation required to perform quantum error correction in the 2D toric code.
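
A minimal restricted Boltzmann machine trained with one-step contrastive divergence (CD-1) shows the basic machinery. The sizes, learning rate, and toy bimodal data below (standing in for low-temperature Ising samples) are illustrative assumptions, not the talk's setup:

```python
import numpy as np

rng = np.random.default_rng(8)
n_vis, n_hid = 8, 4
W = rng.normal(0, 0.1, (n_vis, n_hid))
a = np.zeros(n_vis)                  # visible biases
b = np.zeros(n_hid)                  # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "low-temperature Ising" data: all-zeros or all-ones with 5% spin flips
data = np.vstack([np.zeros((100, n_vis)), np.ones((100, n_vis))])
data = np.abs(data - (rng.random(data.shape) < 0.05))

lr = 0.1
for step in range(500):
    v0 = data[rng.choice(len(data), 20)]           # mini-batch
    ph0 = sigmoid(v0 @ W + b)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + a)                    # one Gibbs step (CD-1)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)  # CD weight update
    a += lr * (v0 - v1).mean(0)
    b += lr * (ph0 - ph1).mean(0)

# Mean-field reconstruction error should drop well below the untrained ~0.25
recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
err = np.mean((data - recon)**2)
```

Once trained, expectation values over the model distribution stand in for thermodynamic observables; the same sampling machinery can be repurposed for decoding tasks such as toric-code error correction.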
 

 


Scientific Organizers:

  • Roger Melko, Perimeter Institute & University of Waterloo
  • Miles Stoudenmire, University of California, Irvine
  • Guifre Vidal, Perimeter Institute
  • Nathan Wiebe, Microsoft Research