
What is Quantum Machine Learning (QML)?

Quantum Machine Learning sits at the intersection of two of the most rapidly evolving fields in computing. It's not a single algorithm or framework — it's a research discipline exploring how quantum computers can enhance, accelerate, or fundamentally reimagine the training and inference of machine learning models. Understanding QML requires clarity on what it actually includes, what it excludes, and where the genuine research frontier lies versus marketing hype.


The Four Sub-Fields of QML

| Sub-Field | Data Type | Model Type | Maturity |
|---|---|---|---|
| CC (Classical-Classical) | Classical | Classical ML | Production — this is standard ML |
| CQ (Classical-Quantum) | Classical | Quantum model (VQC) | NISQ research / early experiments |
| QC (Quantum-Classical) | Quantum | Classical ML interprets | Active use in quantum error correction |
| QQ (Quantum-Quantum) | Quantum | Quantum model | Theoretical / long-term research |

Most QML research today focuses on the CQ category — taking classical datasets (images, text, tabular data) and running them through parameterized quantum circuits (PQCs) that act as trainable models. This is the quantum analogue of a neural network: instead of learned weight matrices, you have learned rotation angles applied to qubit states.
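The "rotation angles instead of weight matrices" contrast can be sketched in a few lines of plain NumPy, with no quantum library involved (the 2x2 matrix below is the standard single-qubit RY rotation, and the toy numbers are illustrative only):

```python
import numpy as np

# Classical layer: output = W @ x, where W is a learned weight matrix.
W = np.array([[0.5, -0.2],
              [0.1,  0.8]])
x = np.array([1.0, 0.0])
classical_out = W @ x

# Quantum analogue: a learned rotation angle theta parameterizes the
# unitary RY(theta), which rotates the qubit state |0> = [1, 0].
theta = 0.7
RY = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])
state = RY @ np.array([1.0, 0.0])

# Measuring Pauli-Z on this state gives expectation <Z> = cos(theta).
expval_z = state[0] ** 2 - state[1] ** 2
print(round(expval_z, 4), round(np.cos(theta), 4))  # the two values agree
```

Training a PQC means adjusting `theta` (and its many siblings) to minimize a loss, exactly as gradient descent adjusts the entries of `W`.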


The Core Primitive: Variational Quantum Circuits

# Your first QML model using PennyLane
# pip install pennylane torch

import pennylane as qml
import torch
import torch.nn as nn

# Define quantum device (can be simulator or real hardware)
n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_layer(inputs, weights):
    """
    Parameterized Quantum Circuit (PQC) -- the core QML primitive.
    
    Steps:
    1. Encode: map classical input features onto qubit rotation angles
    2. Entangle: apply CNOT gates to create qubit correlations
    3. Variational: parameterized rotations learned during training
    4. Measure: read out expectation values as output features
    """
    # Step 1: Angle encoding of input data (one feature per qubit, via RY rotations)
    qml.AngleEmbedding(inputs, wires=range(n_qubits), rotation='Y')

    # Step 2 & 3: Strongly entangling layers (trainable)
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))

    # Step 4: Measure Pauli-Z expectation on each qubit
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]


class HybridQNN(nn.Module):
    """Hybrid quantum-classical classifier."""
    def __init__(self, n_classes=2):
        super().__init__()
        # Quantum layer weights: (layers, qubits, 3 rotation angles)
        weight_shapes = {"weights": (3, n_qubits, 3)}
        self.q_layer = qml.qnn.TorchLayer(quantum_layer, weight_shapes)
        # Classical post-processing
        self.fc = nn.Linear(n_qubits, n_classes)

    def forward(self, x):
        # x: [batch, n_qubits] after classical preprocessing
        q_out = self.q_layer(x)      # Quantum inference
        return self.fc(q_out)         # Classical classification head


# Training is identical to classical PyTorch
model = HybridQNN(n_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Dummy data: 4 features, 2 classes
X = torch.rand(32, n_qubits)
y = torch.randint(0, 2, (32,))

for epoch in range(50):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()   # Gradients flow through quantum circuit via parameter shift
    optimizer.step()
    if epoch % 10 == 0:
        print(f"Epoch {epoch}: loss={loss.item():.4f}")

How Quantum Circuits Learn: The Parameter Shift Rule

Classical neural networks use backpropagation to compute gradients. Quantum circuits can't use the same approach because quantum gates aren't differentiable in the classical sense — you can't differentiate with respect to a unitary matrix directly. Instead, QML uses the parameter shift rule: to compute the gradient of a quantum circuit with respect to a parameter θ, you evaluate the circuit at θ + π/2 and θ − π/2, then subtract:

# Parameter shift rule: quantum analogue of backpropagation

import numpy as np

def quantum_gradient(circuit, theta, shift=np.pi/2):
    """
    Compute gradient of quantum circuit output w.r.t. parameter theta.
    
    dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2
    
    This requires 2 circuit evaluations per parameter.
    A circuit with 100 parameters needs 200 evaluations for one gradient step.
    Compare: classical backprop costs roughly one extra backward pass
    (about the price of a forward pass), independent of the parameter count.
    This is why QML training is currently slower than classical training.
    """
    shifted_plus  = circuit(theta + shift)
    shifted_minus = circuit(theta - shift)
    return (shifted_plus - shifted_minus) / 2

# PennyLane handles this automatically:
# qml.grad(circuit)(theta)  # uses parameter shift under the hood
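The rule can be checked on a toy one-parameter "circuit" whose expectation value is known in closed form: a single qubit prepared with RY(theta) and measured in Z has E(theta) = cos(theta). A plain-Python sketch (no quantum library, function names are illustrative):

```python
import numpy as np

def circuit(theta):
    # Stand-in for a real circuit evaluation: a qubit prepared by RY(theta)
    # and measured in Pauli-Z has expectation exactly E(theta) = cos(theta).
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    # dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.3
grad = parameter_shift_grad(circuit, theta)
# For this circuit the analytic gradient is -sin(theta), and the shift rule
# reproduces it exactly (not just approximately, unlike finite differences).
print(grad, -np.sin(theta))
```

Exactness is the point: for gates generated by Pauli operators, the parameter shift rule is an identity, not a numerical approximation, which makes it robust to the finite-difference step-size problems that plague noisy hardware.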

QML vs Classical ML: The Key Differences

  • State space: N qubits represent 2^N dimensional Hilbert space — exponentially richer than N classical neurons
  • Entanglement as inductive bias: Quantum entanglement creates correlations between features that have no direct classical analogue — a potential source of expressiveness advantage
  • Training cost: Parameter shift rule requires 2 circuit evaluations per parameter per gradient step — more expensive than classical backprop at current hardware speeds
  • Output is probabilistic: QML models output expectation values averaged over many measurement shots, not deterministic predictions
  • Data encoding is hard: There's no established best practice for encoding classical data into quantum states — different encodings give wildly different results
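Two of these points, probabilistic outputs and data encoding, can be illustrated together: angle-encode a single feature x as RY(x), then estimate the expectation <Z> from a finite number of measurement shots. A minimal NumPy sketch under those assumptions (the shot counts and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

x = 0.9  # classical feature, angle-encoded via RY(x)
# After RY(x)|0>, P(outcome 0) = cos^2(x/2) and P(outcome 1) = sin^2(x/2),
# so the exact expectation value is <Z> = cos(x).
p0 = np.cos(x / 2) ** 2

for shots in (100, 10_000, 1_000_000):
    # Each shot yields +1 (outcome 0) or -1 (outcome 1); average to estimate <Z>.
    outcomes = rng.choice([1, -1], size=shots, p=[p0, 1 - p0])
    estimate = outcomes.mean()
    print(f"{shots:>9} shots: <Z> ~ {estimate:+.4f} (exact {np.cos(x):+.4f})")
```

The estimate's standard error shrinks only as 1/sqrt(shots), so every extra digit of output precision costs 100x more circuit executions. That sampling overhead is part of QML's real-world training bill.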

Conclusion

QML is a legitimate and active research field with genuine theoretical foundations, practical tooling (PennyLane, Qiskit Machine Learning, TensorFlow Quantum), and early experiments on real quantum hardware. It is not science fiction. It is also not ready to replace classical ML for any production workload in 2026. The honest position: QML is worth understanding deeply if you're a researcher or building long-term AI infrastructure strategy. For near-term production systems, classical ML on GPUs remains the right choice. The two may well converge into hybrid systems in the 2030s; understanding QML now positions you for that possibility.

Written by Vivek, AI Engineer

Full-stack AI engineer with 4+ years building LLM-powered products, autonomous agents, and RAG pipelines. I've shipped AI features to production for startups and worked hands-on with GPT-4o, LangChain, LlamaIndex, and the Vercel AI SDK. I started OpnCrafter to share everything I wish I had when learning — no fluff, just working code and real-world context.
