
QML Algorithms Explained (Beginner to Advanced)

Quantum Machine Learning encompasses a growing library of algorithms, from near-term variational methods that run on today's NISQ hardware to fault-tolerant algorithms requiring millions of error-corrected qubits. This guide walks through the most important QML algorithms in order of increasing quantum capability requirements, from what you can run today on a free IBM Quantum account to what hardware vendors' roadmaps project for the mid-2030s.


Level 1: NISQ-Ready Algorithms (Run Today)

Variational Quantum Classifier (VQC)

The most widely used QML algorithm. A parameterized quantum circuit acts as a classifier, trained with a classical optimizer; when gradient-based training is used, gradients are evaluated on hardware via the parameter-shift rule (the example below uses the gradient-free COBYLA instead).

from qiskit_machine_learning.algorithms import VQC
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_algorithms.optimizers import COBYLA
from qiskit_aer.primitives import Sampler

# Feature map: encodes classical data into quantum state
feature_map = ZZFeatureMap(feature_dimension=4, reps=2)

# Ansatz: trainable part of the circuit
ansatz = RealAmplitudes(num_qubits=4, reps=3)

# Train the classifier
vqc = VQC(
    sampler=Sampler(),
    feature_map=feature_map,
    ansatz=ansatz,
    optimizer=COBYLA(maxiter=300),
)

# X_train: classical features [n_samples, 4]
# y_train: binary labels [n_samples]
vqc.fit(X_train, y_train)
accuracy = vqc.score(X_test, y_test)
print(f"QML Test Accuracy: {accuracy:.3f}")
# Typical result on small datasets: 70-90% accuracy
# Classical MLP on same data: often similar or better
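The parameter-shift rule mentioned above can be seen end to end on a one-gate toy circuit, simulated in plain NumPy (a sketch for intuition, not Qiskit's implementation): for rotation gates like RY, the exact gradient comes from just two shifted circuit evaluations.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def expval_z(theta):
    """<Z> for the one-gate 'circuit' RY(theta)|0>; analytically this is cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return abs(psi[0]) ** 2 - abs(psi[1]) ** 2

def parameter_shift(theta):
    """Exact gradient from two evaluations: (f(t + pi/2) - f(t - pi/2)) / 2."""
    return (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2)) / 2

theta = 0.7
print(parameter_shift(theta), -np.sin(theta))  # agree to machine precision
```

Unlike finite differences, this is exact (not an approximation) and uses the same shot budget per evaluation, which is why it is the standard way to train VQCs on hardware.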

Quantum Kernel Methods

Classical SVMs use kernel functions to project data into higher-dimensional spaces where it becomes linearly separable. Quantum kernel methods use quantum circuits to compute kernels that may be exponentially expensive to compute classically.

from qiskit.circuit.library import ZZFeatureMap
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC

# Quantum kernel: compute inner products in quantum feature space
quantum_kernel = FidelityQuantumKernel(feature_map=ZZFeatureMap(4, reps=2))

# Quantum Support Vector Classifier
qsvc = QSVC(quantum_kernel=quantum_kernel)
qsvc.fit(X_train, y_train)

# Key advantage: if the quantum feature space is hard to classically simulate,
# this could provide a genuine computational advantage for certain datasets.
# Key limitation: kernel computation requires O(n²) circuit evaluations
# for n training samples -- slow for large datasets.
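To make concrete what a fidelity kernel estimates, here is a pure-NumPy sketch using a toy single-qubit angle-encoding feature map (an assumption for illustration, not the ZZFeatureMap above): each kernel entry is the squared state overlap |⟨φ(x_i)|φ(x_j)⟩|².

```python
import numpy as np

def phi(x):
    """Toy angle-encoding feature map: |phi(x)> = RY(2x)|0> = (cos x, sin x)."""
    return np.array([np.cos(x), np.sin(x)])

def fidelity_kernel(xs):
    """K[i, j] = |<phi(x_i)|phi(x_j)>|^2, the overlap a fidelity kernel estimates
    with O(n^2) circuit runs; for this map it reduces to cos^2(x_i - x_j)."""
    states = np.stack([phi(x) for x in xs])
    return (states @ states.T) ** 2

xs = np.array([0.1, 0.8, 1.5, 2.2])
K = fidelity_kernel(xs)
print(K)  # symmetric, ones on the diagonal, positive semidefinite
```

A precomputed matrix like this can be handed to a classical SVM (e.g., scikit-learn's SVC(kernel="precomputed")), which is essentially what QSVC does, with the entries estimated on quantum hardware instead.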

Level 2: Hybrid Quantum-Classical (Current Research)

Quantum Boltzmann Machines (QBM)

Classical Restricted Boltzmann Machines (RBMs) are generative models trained via contrastive divergence. QBMs replace the classical energy function with a transverse-field Ising Hamiltonian, using quantum effects such as tunneling to escape local minima that trap classical optimization. D-Wave's quantum annealers are the most practical near-term hardware for this.
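The classical starting point is the Ising energy function that a Boltzmann machine samples from: E(s) = -Σ J_ij s_i s_j - Σ h_i s_i over spins s_i ∈ {-1, +1}. A minimal NumPy sketch (brute force, purely for illustration; a real annealer or sampler replaces the exhaustive search):

```python
import numpy as np
from itertools import product

def ising_energy(s, J, h):
    """E(s) = -sum_{i<j} J[i,j] s_i s_j - sum_i h[i] s_i, spins in {-1, +1}.
    J is strictly upper-triangular so each coupling is counted once."""
    return -(s @ J @ s) - h @ s

def ground_state(J, h):
    """Brute-force minimum-energy spin configuration (fine for a few spins)."""
    best_s, best_e = None, np.inf
    for bits in product([-1, 1], repeat=len(h)):
        s = np.array(bits)
        e = ising_energy(s, J, h)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

# 3-spin ferromagnetic chain: positive couplings favour aligned spins
J = np.zeros((3, 3))
J[0, 1] = J[1, 2] = 1.0
h = np.zeros(3)
s_min, e_min = ground_state(J, h)
print(s_min, e_min)  # all spins aligned, energy -2.0
```

Training a (quantum) Boltzmann machine amounts to adjusting J and h so that low-energy configurations match the data distribution; the annealer's job is to draw samples from this energy landscape.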

Quantum Neural Networks (QNN) with Data Re-uploading

A key technique for making shallow quantum circuits more expressive: encode the input data multiple times at different layers of the circuit, interleaved with trainable gates.

import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def data_reuploading_circuit(x, weights, n_layers=3):
    """
    Data re-uploading: encode x at every layer.
    This makes shallow circuits more expressive without more qubits.
    Proven to be universal function approximators (analogous to universal NNs).
    """
    for layer in range(n_layers):
        # Re-encode input at each layer
        for i in range(2):
            qml.RX(x[0], wires=i)
            qml.RY(x[1], wires=i)
        # Trainable layer
        qml.CNOT(wires=[0, 1])
        qml.RY(weights[layer, 0], wires=0)
        qml.RY(weights[layer, 1], wires=1)

    return qml.expval(qml.PauliZ(0))
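A plain-NumPy single-qubit sketch (an illustrative reduction of the two-qubit PennyLane circuit above) shows why re-uploading helps: with all trainable weights at zero, L re-uploads of RX(x) compose to RX(L·x), so the model's output contains Fourier frequencies in x up to L. Each additional re-upload enlarges the function class the circuit can fit.

```python
import numpy as np

def rx(t):
    """Single-qubit RX rotation matrix."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(t):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def reuploading_expval(x, weights):
    """Each layer re-encodes x with RX(x), then applies a trainable RY(w)."""
    psi = np.array([1.0 + 0j, 0.0])
    for w in weights:
        psi = ry(w) @ (rx(x) @ psi)
    return float(abs(psi[0]) ** 2 - abs(psi[1]) ** 2)

# With all weights zero, three re-uploads collapse to RX(3x): <Z> = cos(3x)
x = 0.4
print(reuploading_expval(x, np.zeros(3)), np.cos(3 * x))
```

This frequency-spectrum picture is the intuition behind the universality result cited in the docstring above: re-uploading trades circuit depth for expressive power without adding qubits.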

Level 3: Fault-Tolerant Algorithms (2030+ Hardware)

HHL Algorithm (Quantum Linear Systems)

Harrow-Hassidim-Lloyd (HHL) solves the linear system Ax = b in time polylogarithmic in N under specific conditions (A sparse and well-conditioned; the runtime also grows with the condition number κ), compared to O(N³) for dense classical solvers. The ML applications: solving the normal equations for linear regression, computing gradients in linear models, and certain neural network layer computations.

# Note: the HHL implementation lived in qiskit.algorithms.linear_solvers;
# it was deprecated and later removed from Qiskit, so this snippet
# requires an older (pre-1.0) release.
from qiskit.algorithms.linear_solvers import HHL
from qiskit.algorithms.linear_solvers.matrices import TridiagonalToeplitz

# Solve Ax = b on a quantum computer
# Requirement: A must be sparse and well-conditioned
# Requirement: b must be loadable as a quantum state

matrix = TridiagonalToeplitz(2, 1, 0.5)  # 4x4 sparse matrix
b_vector = [1, 0, 0, 0]  # Right-hand side

hhl = HHL()
solution = hhl.solve(matrix, b_vector)

# Caveat: the "solution" is a quantum state.
# Reading out all N components requires N measurements.
# This eliminates the speedup unless you only need a scalar property of x.
# E.g., an overlap x·v with a known state v can be estimated efficiently
# (via a swap test, say) without reading out the full vector.
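As a sanity check on what the HHL output state encodes, the same 4x4 tridiagonal Toeplitz system can be solved classically in NumPy; v below is an arbitrary vector chosen only to illustrate the scalar-readout idea.

```python
import numpy as np

# The 4x4 tridiagonal Toeplitz system: main diagonal 1, off-diagonals 0.5
A = (np.eye(4)
     + np.diag(np.full(3, 0.5), 1)
     + np.diag(np.full(3, 0.5), -1))
b = np.array([1.0, 0.0, 0.0, 0.0])

# Classical solve: this full vector x is what HHL encodes as amplitudes
x = np.linalg.solve(A, b)

# The kind of scalar property HHL can extract efficiently: an overlap x . v
v = np.full(4, 0.5)
print(x @ v)
```

The point of the caveat above: the quantum speedup survives only if this single number (or another global property) is all you need, since reading out x component by component costs as much as the classical solve.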

Quantum Singular Value Decomposition (qSVD)

Classical SVD is O(min(mn², m²n)) for an m×n matrix. Quantum SVD provides exponential speedup for computing dominant singular values and vectors — core operations in PCA, matrix factorization, and the linear layers of neural networks. Requires fault-tolerant hardware and qRAM.
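For orientation, here is the classical operation qSVD targets: a truncated SVD recovering a low-rank matrix from its dominant singular triplets (plain NumPy, with a synthetic rank-2 matrix as an assumption for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
# Build an exactly rank-2 100x20 matrix, the easy case for low-rank methods
M = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 20))

U, s, Vt = np.linalg.svd(M, full_matrices=False)

k = 2
M_k = (U[:, :k] * s[:k]) @ Vt[:k]  # rank-k reconstruction from top-k triplets

# Only the first two singular values carry information; the rest vanish
print(s[:4])
```

PCA, low-rank matrix factorization, and compressed linear layers all reduce to exactly this truncation; the quantum claim is that the dominant triplets could be prepared exponentially faster, given fault-tolerant hardware and qRAM for loading M.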


Algorithm Selection Guide

| If you want to... | Use this algorithm | Hardware needed |
| --- | --- | --- |
| Classify small datasets with a quantum kernel | QSVC / Quantum Kernel SVM | NISQ (IBM/IonQ free tier) |
| Train a small generative model | Quantum Boltzmann Machine | D-Wave or NISQ simulator |
| Prove expressibility of shallow circuits | VQC with data re-uploading | NISQ |
| Solve a linear regression gradient exactly | HHL | Early fault-tolerant (2030+) |
| Run PCA exponentially faster | Quantum SVD | Mature fault-tolerant (2035+) |
| Train a transformer-style model quantumly | Quantum attention (research) | Full-scale QC (2040+) |

Conclusion

The QML algorithm landscape spans from immediately runnable (VQC, quantum kernels on IBM Quantum free tier) to foundational research targets (quantum transformers on 2040s hardware). The practical engineer's path: start with PennyLane or Qiskit, implement a VQC on a small dataset, understand the parameter shift rule, and gradually work toward the fault-tolerant algorithm layer. The algorithms are the interface between quantum physics and machine learning — understanding them is what enables you to identify which of your ML workloads will benefit as hardware matures.

Written by Vivek, AI Engineer