{ "cells": [ { "cell_type": "markdown", "id": "c7a956c2", "metadata": {}, "source": [ "# Hello World: Quantum Machine Learning with Merlin (Cloud)\n", "\n", "Welcome! This notebook demonstrates how to use Merlin to build a quantum reservoir algorithm to then run on a real QPU (or in this case, a simulator emulating it). We chose a reservoir algorithm since gradient is not propagated through quantum layers when using a processor.\n", "\n", "To use the processor, follow the ``TODO`` comments." ] }, { "cell_type": "markdown", "id": "34a189db", "metadata": {}, "source": [ "## 1. Install and Import Dependencies\n", "\n", "First, lets make sure all required packages are installed and import them. \n", "If you haven't installed Merlin yet, run: \n", "`pip install merlinquantum` in your terminal or `!pip install merlinquantum` in your notebook." ] }, { "cell_type": "code", "execution_count": null, "id": "2ed0f1a9", "metadata": {}, "outputs": [], "source": [ "import torch\n", "import torch.nn as nn\n", "import merlin as ML\n", "import perceval as pcvl\n", "from merlin.datasets import iris" ] }, { "cell_type": "markdown", "id": "19e0d5db", "metadata": {}, "source": [ "To run your experiments on a processor (real QPU or a simulator emulating it), we need to create a `MerlinProcessor` object." ] }, { "cell_type": "code", "execution_count": null, "id": "cbd7e971", "metadata": {}, "outputs": [], "source": [ "#(OPTIONAL) Save your token in a file called .env in the same directory as this notebook, with the following content:\n", "# CLOUD_TOKEN=your_token_here\n", "\n", "## TODO Uncomment to load the token\n", "# from dotenv import load_dotenv\n", "# load_dotenv()\n", "# import os\n", "# CLOUD_TOKEN = os.getenv(\"CLOUD_TOKEN\")" ] }, { "cell_type": "markdown", "id": "14b5b1ea", "metadata": {}, "source": [ "Lets first load a perceval `RemoteProcessor`. It is the original way of accessing Quandela's cloud. 
For Scaleway-hosted platforms and any future session-based providers, use `pcvl.providers.scaleway` instead of `pcvl.RemoteProcessor`.\n", "\n", "Here we use `sim:slos`, which is a noise-free simulator; use `sim:ascella` if you want a simulator that reproduces the noise of the `qpu:ascella` QPU." ] }, { "cell_type": "code", "execution_count": null, "id": "8fdd1982", "metadata": {}, "outputs": [], "source": [ "## TODO Uncomment to use the processor\n", "# pcvl.RemoteConfig.set_token(CLOUD_TOKEN)\n", "# rp = pcvl.RemoteProcessor(\"sim:slos\")" ] }, { "cell_type": "code", "execution_count": null, "id": "07ff2687", "metadata": {}, "outputs": [], "source": [ "## TODO Uncomment to use the processor\n", "# proc = ML.MerlinProcessor(\n", "# rp,\n", "# microbatch_size=32, # batch chunk size per cloud call\n", "# timeout=3600.0, # default wall-time per forward (seconds)\n", "# max_shots_per_call=None, # optional cap per cloud call\n", "# chunk_concurrency=1, # parallel chunk jobs within a quantum leaf\n", "# )" ] }, { "cell_type": "markdown", "id": "5c13b34f", "metadata": {}, "source": [ "## 2. Load and Prepare the Iris Dataset\n", "\n", "We'll use the classic Iris dataset, a simple and well-known benchmark for classification. \n", "Let's load the data and convert it to PyTorch tensors for training."
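,
"\n",
"If `merlin.datasets` is not available in your environment, a similar split can be built from scikit-learn's bundled Iris data (a hypothetical alternative, not used in the cells below):\n",
"\n",
"```python\n",
"from sklearn.datasets import load_iris\n",
"from sklearn.model_selection import train_test_split\n",
"import torch\n",
"\n",
"data = load_iris()\n",
"X_tr, X_te, y_tr, y_te = train_test_split(\n",
"    data.data, data.target, test_size=0.2, random_state=0, stratify=data.target\n",
")\n",
"# Same tensor types as the merlin.datasets version\n",
"X_train, y_train = torch.FloatTensor(X_tr), torch.LongTensor(y_tr)\n",
"X_test, y_test = torch.FloatTensor(X_te), torch.LongTensor(y_te)\n",
"print(X_train.shape)  # torch.Size([120, 4])\n",
"```"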
] }, { "cell_type": "code", "execution_count": null, "id": "c71851b3", "metadata": {}, "outputs": [], "source": [ "train_features, train_labels, train_metadata = iris.get_data_train()\n", "test_features, test_labels, test_metadata = iris.get_data_test()\n", "\n", "# Convert data to PyTorch tensors\n", "X_train = torch.FloatTensor(train_features)\n", "y_train = torch.LongTensor(train_labels)\n", "X_test = torch.FloatTensor(test_features)\n", "y_test = torch.LongTensor(test_labels)\n", "\n", "print(f\"Training samples: {X_train.shape[0]}\")\n", "print(f\"Test samples: {X_test.shape[0]}\")\n", "print(f\"Features: {X_train.shape[1]}\")\n", "print(f\"Classes: {len(torch.unique(y_train))}\")" ] }, { "cell_type": "markdown", "id": "74a16ed3", "metadata": {}, "source": [ "![iris](../_static/img/Iris_pipeline.png)" ] }, { "cell_type": "markdown", "id": "c80a7e7d", "metadata": {}, "source": [ "## 3. Define the Quantum reservoir Model\n", "\n", "The model can be split into two parts:\n", "- The `QuantumLayer` implements the quantum reservoir.\n", "- The `classical_out` is the classical model that takes the reservoir's output to classify the data.\n", "\n", "To make sure that the model runs on the processor, we will need to call the processor's `forward` method. We will also need to redefine the `parameters` method so that only the classical parameters are changed and not the reservoir's (MerLin's simple quantum layer creates trainable pytorch parameter by default)." 
] }, { "cell_type": "code", "execution_count": null, "id": "282b544b", "metadata": {}, "outputs": [], "source": [ "class HybridIrisClassifier(nn.Module):\n", " \"\"\"\n", " Hybrid model for Iris classification:\n", " - Quantum reservoir processes the 4 features\n", " - Classical output layer for 3-class classification\n", " \"\"\"\n", " def __init__(self):\n", " super(HybridIrisClassifier, self).__init__()\n", "\n", " # Quantum layer: processes the 4 features\n", " self.quantum = ML.QuantumLayer.simple(\n", " input_size=4,\n", " ).eval()\n", " # Classical output layer: quantum → 8 → 3\n", " self.classical_out = nn.Sequential(\n", " nn.Linear(self.quantum.output_size, 8),\n", " nn.ReLU(),\n", " nn.Dropout(0.1),\n", " nn.Linear(8, 3)\n", " )\n", " self.params=self.classical_out.parameters()\n", "\n", " def forward(self, x):\n", " #TODO Use the commented return if you want to use the reservoir algorithm with the processor.\n", " #return self.classical_out(proc.forward(self.quantum.eval(),x))\n", " return self.classical_out(self.quantum.eval()(x))\n", " \n", " def parameters(self):\n", " return self.params" ] }, { "cell_type": "markdown", "id": "4ace89e0", "metadata": {}, "source": [ "This could be ran easily with the processor, but it takes a lot of time. Since the reservoir always return the same output (it is not trained and receives the same input), we can calculate all of the outputs of the reservoir and then reuse them. The next two code cells implement the same reservoir as earlier but in a more resource-efficient way.\n", "\n", "Lets calculate all of the reservoir outputs and add them in a dictionary." 
] }, { "cell_type": "code", "execution_count": null, "id": "de945332", "metadata": {}, "outputs": [], "source": [ "reservoir=ML.QuantumLayer.simple(\n", " input_size=4,\n", " ).eval()\n", "\n", "output_size=reservoir.output_size\n", "\n", "## TODO Uncomment to use the processor\n", "# train_outputs= proc.forward(reservoir,X_train)\n", "# train_reservoir_map={tuple(x.tolist()):output for x,output in zip(X_train,train_outputs)}\n", "# test_outputs=proc.forward(reservoir,X_test)\n", "# test_reservoir_map={tuple(x.tolist()):output for x,output in zip(X_test,test_outputs)}\n", "\n", "## TODO Comment to use the processor\n", "reservoir.eval()\n", "with torch.no_grad():\n", " train_outputs= reservoir(X_train)\n", " train_reservoir_map={tuple(x.tolist()):output for x,output in zip(X_train,train_outputs)}\n", " test_outputs=reservoir(X_test)\n", " test_reservoir_map={tuple(x.tolist()):output for x,output in zip(X_test,test_outputs)}\n", "\n", "reservoir_map = {**train_reservoir_map, **test_reservoir_map}" ] }, { "cell_type": "code", "execution_count": null, "id": "b283ede6", "metadata": {}, "outputs": [], "source": [ "for i,(key,value) in enumerate(reservoir_map.items()):\n", " if (i+1)%10==0:\n", " print(f\"{key}: {value}\")" ] }, { "cell_type": "markdown", "id": "daad8d97", "metadata": {}, "source": [ "We can easily define the trainable classical model using this map." 
] }, { "cell_type": "code", "execution_count": null, "id": "6d4f64fb", "metadata": {}, "outputs": [], "source": [ "class HybridIrisClassifier(nn.Module):\n", " \"\"\"\n", " Hybrid model for Iris classification:\n", " - Quantum reservoir processes the 4 features\n", " - Classical output layer for 3-class classification\n", " \"\"\"\n", " def __init__(self,output_size:int=1):\n", " super(HybridIrisClassifier, self).__init__()\n", " self.output_size=output_size\n", "\n", " # Classical output layer: quantum → 8 → 3\n", " self.model = nn.Sequential(\n", " nn.Linear(output_size, 8),\n", " nn.ReLU(),\n", " nn.Dropout(0.1),\n", " nn.Linear(8, 3)\n", " )\n", " \n", "\n", " def forward(self, x:torch.Tensor):\n", " if x.dim()==1:\n", " x.unsqueeze(0)\n", " input_to_classical=torch.empty(x.shape[0],self.output_size)\n", " for i,input in enumerate(x):\n", " input_to_classical[i]=reservoir_map[tuple(input.tolist())]\n", "\n", " return self.model(input_to_classical)" ] }, { "cell_type": "markdown", "id": "aac69a97", "metadata": {}, "source": [ "## 4. Set the Training Parameters\n", "\n", "You can adjust these parameters to see how they affect training and model performance." ] }, { "cell_type": "code", "execution_count": null, "id": "ebdcce3a", "metadata": {}, "outputs": [], "source": [ "learning_rate = 0.01\n", "number_of_epochs = 200" ] }, { "cell_type": "markdown", "id": "0b96210f", "metadata": {}, "source": [ "## 5. Train the Hybrid Model\n", "\n", "Lets train the model like a normal pytorch module." 
] }, { "cell_type": "code", "execution_count": null, "id": "55159e90", "metadata": {}, "outputs": [], "source": [ "import random, numpy as np, torch\n", "def reset_seeds(s=0):\n", " random.seed(s); np.random.seed(s)\n", " torch.manual_seed(s); torch.cuda.manual_seed_all(s)\n", " torch.backends.cudnn.deterministic = True\n", " torch.backends.cudnn.benchmark = False\n", "\n", "reset_seeds(123)\n", "model = HybridIrisClassifier(output_size=output_size)\n", "optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\n", "criterion = nn.CrossEntropyLoss()\n", "model.train()\n", "\n", "#Training loop\n", "for epoch in range(number_of_epochs):\n", " optimizer.zero_grad()\n", " loss = criterion(model(X_train), y_train)\n", " loss.backward()\n", " optimizer.step()\n", " model.eval()\n", " with torch.no_grad():\n", " preds = model(X_test).argmax(dim=1)\n", " accuracy=(preds == y_test).float().mean().item()\n", " model.train()\n", " if (epoch+1)%20==0:\n", " print(f\"Epoch {epoch+1} had a loss of {loss.item()} and a test accuracy of {accuracy}\")" ] }, { "cell_type": "markdown", "id": "e4d0bc29", "metadata": {}, "source": [ "## 6. Evaluate the Model\n", "\n", "After training, let's evaluate our model on the test set and print the accuracy." 
] }, { "cell_type": "code", "execution_count": null, "id": "6020e4ef", "metadata": {}, "outputs": [], "source": [ "# Evaluate on test set\n", "model.eval()\n", "with torch.no_grad():\n", " test_outputs = model(X_test)\n", " predictions = torch.argmax(test_outputs, dim=1)\n", " accuracy = (predictions == y_test).float().mean().item()\n", " print(f\"Test accuracy: {accuracy:.4f}\")" ] }, { "cell_type": "code", "execution_count": null, "id": "f62ab77a", "metadata": {}, "outputs": [], "source": [ "number_of_runs = 10\n", "accuracies = []\n", "\n", "for i in range(number_of_runs):\n", " reset_seeds(i+123)\n", " model = HybridIrisClassifier(output_size=output_size)\n", " optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)\n", " criterion = nn.CrossEntropyLoss()\n", "\n", " #Training loop \n", " model.train()\n", " for epoch in range(number_of_epochs):\n", " optimizer.zero_grad()\n", " loss = criterion(model(X_train), y_train)\n", " loss.backward()\n", " optimizer.step()\n", "\n", " model.eval()\n", " with torch.no_grad():\n", " preds = model(X_test).argmax(dim=1)\n", " accuracy=(preds == y_test).float().mean().item()\n", " model.train()\n", " #print(f\"Epoch {epoch+1} had a loss of {loss.item()} and a test accuracy of {accuracy}\")\n", "\n", " #Final evaluation of the model\n", " model.eval()\n", " with torch.no_grad():\n", " preds = model(X_test).argmax(dim=1)\n", " accuracies.append((preds == y_test).float().mean().item())\n", "\n", "avg = torch.tensor(accuracies).mean().item()\n", "std = torch.tensor(accuracies).std(unbiased=True).item()\n", "print(f\"Average accuracy: {avg:.4f} ± {std:.4f}\")\n" ] }, { "cell_type": "markdown", "id": "7f6dd19d", "metadata": {}, "source": [ "# Conclusion\n", "\n", "Congratulations! You've trained and evaluated a hybrid quantum-classical neural network using Merlin. 
\n", "Feel free to experiment with the model architecture, quantum parameters, or try other datasets!\n", "\n", "Even though MerLin is built as a simulation-first package, it is still possible to run and optimize quantum layers with the processor. Although, a gradient-free optimizer such as COBYLA should be used.\n", "\n", "Also, if you want a better performing reservoir based on the litterature, you can try to reproduce the [Quantum optical reservoir computing powered by boson sampling](https://opg.optica.org/opticaq/fulltext.cfm?uri=opticaq-3-3-238&id=572317)'s resevoir. It is good exercice to familiarize yourself with MerLin and learn about PCA if you are not from a machine learning background." ] } ], "metadata": { "kernelspec": { "display_name": "venv (3.12.12)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.12" } }, "nbformat": 4, "nbformat_minor": 5 }