Machine learning has been a focus of research and a point of interest for industry, and with the rise of quantum computing in recent years it was only natural to examine how machine learning could be adapted to quantum computers. Quantum machine learning (QML) strives to effectively solve certain learning tasks that are challenging for classical methods. In theoretical studies, some QML algorithms have already shown better runtime complexity than their classical counterparts. Another possible advantage of QML is better solutions or superior accuracy, even when the runtime complexity is identical or worse. Kernel methods are a potential candidate for achieving quantum advantage in data analysis. Like classical kernel methods, Quantum Kernel Methods (QKMs) map the input data onto a higher-dimensional feature space; in addition, QKMs can exploit the fact that n qubits span a Hilbert space whose dimension grows exponentially in n, far beyond what the same number of classical bits provides.

However, in the current era of quantum computing, known as the NISQ (Noisy Intermediate-Scale Quantum) era, the inherent hardware noise is a considerable obstacle to the quantum advantage QKMs could provide, as it is to all other applications of quantum computing. Noise on NISQ devices is one of the notable sources of a central challenge in training QKMs known as exponential concentration, a barrier analogous to the barren plateaus encountered in quantum neural networks. Exponential concentration describes how the values of a quantum kernel concentrate around a fixed value as the number of qubits grows, so that the number of shots required to resolve them to the desired accuracy scales exponentially; it has several sources, such as entanglement, noise, global measurements, and the expressivity of the data embedding.

Nevertheless, introducing a new hyperparameter called bandwidth (t), which controls the scaling of the input data, has been shown, both empirically and theoretically, to turn a kernel that provably cannot generalize into one that generalizes on well-aligned tasks. Moreover, a quantum error mitigation method called Bit-Flip Tolerance (BFT) has been proposed specifically for quantum kernels to target exponential concentration. BFT is based on the idea that minor deviations from the expected all-zero measurement outcome, e.g. bit flips from 0 to 1 on a small number of qubits, are due to noise and can therefore be counted as if the all-zero string had been measured. The number of qubits that are assumed to have suffered such a bit-flip error is controlled by a parameter called the number of allowed bit flips (d).
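To make the roles of the two parameters concrete, the following minimal sketch in plain Python shows where each one acts, assuming the common setting in which a fidelity-kernel entry is estimated as the relative frequency of the all-zero outcome over a number of shots; the function names, the counts format (bitstring to count), and the example numbers are illustrative assumptions rather than part of any specific implementation.

```python
import numpy as np

def bft_kernel_estimate(counts: dict[str, int], d: int) -> float:
    """Estimate one fidelity-kernel entry from measurement counts with
    Bit-Flip Tolerance: outcomes whose Hamming distance to the all-zero
    string is at most d are attributed to noise and counted as all-zero."""
    total = sum(counts.values())
    accepted = sum(c for bitstring, c in counts.items()
                   if bitstring.count("1") <= d)
    return accepted / total

def apply_bandwidth(X: np.ndarray, t: float) -> np.ndarray:
    """The bandwidth acts earlier, at the data-encoding stage: each input
    vector x is rescaled to t * x before it enters the embedding circuit."""
    return t * X

# Illustrative usage with made-up counts from a noisy 4-qubit device:
counts = {"0000": 812, "0001": 95, "0100": 61, "0011": 20, "1011": 12}
print(bft_kernel_estimate(counts, d=0))  # unmitigated estimate: 0.812
print(bft_kernel_estimate(counts, d=1))  # one allowed bit flip: 0.968
```

With d = 0 the estimate reduces to the plain all-zero frequency; each additional allowed flip attributes more outcomes to noise and pushes the estimate back up.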
The aim of this thesis is to uncover possible correlations between these two parameters, the bandwidth t and the number of allowed bit flips d in BFT, in order to minimize exponential concentration and thereby achieve the largest possible improvement in the quantum kernel's accuracy. The goal is to characterize the interplay between the two parameters and to determine whether certain conditions must hold for optimal kernel performance. For this purpose, multiple datasets will be investigated in an empirical study that varies the number of qubits employed for the kernel, with t and d as the control parameters.
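Schematically, the planned study amounts to a sweep over the control parameters along the lines of the sketch below; the dataset names, qubit counts, value ranges, and the evaluate_kernel_accuracy stub are hypothetical placeholders, not choices fixed by this thesis.

```python
import itertools

def evaluate_kernel_accuracy(dataset: str, n_qubits: int, t: float, d: int) -> float:
    """Hypothetical stub: build the bandwidth-scaled embedding on n_qubits,
    estimate the Gram matrix with BFT post-processing (allowed bit flips d),
    train a kernel classifier, and return its test accuracy."""
    return 0.0  # dummy value; the actual evaluation is part of the thesis work

datasets = ["dataset_A", "dataset_B"]      # placeholders for the real datasets
qubit_counts = [4, 8, 12]                  # example system sizes
bandwidths = [0.05, 0.1, 0.2, 0.5, 1.0]    # example values for t
allowed_flips = [0, 1, 2, 3]               # example values for d

results = {
    (name, n, t, d): evaluate_kernel_accuracy(name, n, t, d)
    for name, n, t, d in itertools.product(datasets, qubit_counts,
                                           bandwidths, allowed_flips)
}
```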
Thesis advisor:
Prof. Dr. D. Kranzlmüller
Duration:
Supervisor: