{ "cells": [ { "cell_type": "markdown", "source": [ "## Quantum-Enhanced Language Model Fine-Tuning with Merlin\n", "\n", "This notebook demonstrates how to fine-tune language models using quantum photonic circuits as classification heads. We compare classical approaches (Logistic Regression, SVM, MLP) with quantum photonic classifiers implemented using the Merlin framework.\n", "\n", "\n", " ## 1. Setup and Imports\n", "\n", " First, we'll import all necessary libraries and set up our environment. This includes:\n", " - PyTorch for neural network operations\n", " - SetFit for few-shot learning\n", " - Merlin for quantum photonic circuit simulation\n", " - Standard ML libraries for evaluation and data handling\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "import numpy as np\n", "from torch.utils.data import DataLoader, TensorDataset\n", "import torch\n", "import torch.nn as nn\n", "from tqdm import tqdm\n", "from sklearn.metrics import accuracy_score\n", "from sklearn.base import BaseEstimator, ClassifierMixin\n", "import merlin as ML # Using our Merlin framework\n", "import math\n", "import json\n", "import os\n", "from torch.utils.data import DataLoader, TensorDataset\n", "from datasets import load_dataset\n", "from setfit import SetFitModel, sample_dataset\n", "from sklearn.svm import SVC\n", "\n", "# Set random seeds for reproducibility\n", "torch.manual_seed(42)\n", "np.random.seed(42)\n", "\n", "# Check GPU availability\n", "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n", "print(f\"Using device: {device}\")\n" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T15:50:27.885549Z", "start_time": "2025-06-09T15:50:26.092264Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Using device: cpu\n" ] } ], "execution_count": 2 }, { "cell_type": "markdown", "source": [ "\n", " ## 2. Model Wrapper for Sentence Transformers\n", "\n", " The `ModelWrapper` class provides a unified interface for handling tokenization and forward passes with sentence transformer models. This abstraction allows us to work with different model architectures seamlessly.\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "class ModelWrapper(nn.Module):\n", " def __init__(self, model):\n", " super(ModelWrapper, self).__init__()\n", " self.model = model\n", "\n", " def tokenize(self, texts):\n", " \"\"\"\n", " Delegates tokenization to the underlying model.\n", "\n", " Args:\n", " texts (List[str]): List of text strings to tokenize\n", "\n", " Returns:\n", " Dict or Tensor: Tokenized inputs in the format expected by the model\n", " \"\"\"\n", " try:\n", " # Try to use the tokenize method of the underlying model\n", " return self.model.tokenize(texts)\n", " except AttributeError:\n", " # If the model doesn't have a tokenize method, try alternative approaches\n", " if hasattr(self.model, 'tokenizer'):\n", " return self.model.tokenizer(texts, return_tensors='pt', padding=True, truncation=True)\n", " elif hasattr(self.model, '_first_module') and hasattr(self.model._first_module, 'tokenizer'):\n", " return self.model._first_module.tokenizer(texts, return_tensors='pt', padding=True, truncation=True)\n", " else:\n", " raise ValueError(\n", " \"Unable to tokenize texts with this model. 
Please provide a model that has a tokenize or tokenizer method.\")\n", "\n", " def forward(self, inputs):\n", " \"\"\"\n", " Process inputs through the model to get embeddings.\n", "\n", " Args:\n", " inputs: Can be raw text strings or pre-tokenized inputs\n", "\n", " Returns:\n", " torch.Tensor: The sentence embeddings\n", " \"\"\"\n", " try:\n", " # Handle different input formats\n", " if isinstance(inputs, dict) and all(isinstance(v, torch.Tensor) for v in inputs.values()):\n", " outputs = self.model(inputs)\n", " elif isinstance(inputs, list) and all(isinstance(t, str) for t in inputs):\n", " tokenized = self.tokenize(inputs)\n", " device = next(self.model.parameters()).device\n", " tokenized = {k: v.to(device) for k, v in tokenized.items()}\n", " outputs = self.model(tokenized)\n", " else:\n", " outputs = self.model(inputs)\n", "\n", " # Extract embeddings from various output formats\n", " if isinstance(outputs, dict) and \"sentence_embedding\" in outputs:\n", " return outputs[\"sentence_embedding\"]\n", " elif isinstance(outputs, dict) and \"pooler_output\" in outputs:\n", " return outputs[\"pooler_output\"]\n", " elif isinstance(outputs, tuple) and len(outputs) > 0:\n", " return outputs[0]\n", " else:\n", " return outputs\n", " except Exception as e:\n", " raise ValueError(f\"Error during forward pass: {str(e)}\")\n" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T15:50:30.621509Z", "start_time": "2025-06-09T15:50:30.616178Z" } }, "outputs": [], "execution_count": 3 }, { "cell_type": "markdown", "source": [ "\n", " ## 3. Evaluation Function\n", "\n", " This function evaluates a SetFit model on given texts and labels, processing data in batches for efficiency.\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "def evaluate(model, texts, labels):\n", " \"\"\"\n", " Evaluate SetFit model on given texts and labels.\n", "\n", " Args:\n", " model: SetFit model with a trained classification head\n", " texts: List of text strings to classify\n", " labels: True labels for evaluation\n", "\n", " Returns:\n", " tuple: (accuracy, predictions)\n", " \"\"\"\n", " batch_size = 16\n", " num_samples = len(texts)\n", " num_batches = (num_samples + batch_size - 1) // batch_size\n", "\n", " all_embeddings = []\n", "\n", " with torch.no_grad():\n", " for batch_idx in range(num_batches):\n", " start_idx = batch_idx * batch_size\n", " end_idx = min(start_idx + batch_size, num_samples)\n", "\n", " batch_texts = texts[start_idx:end_idx]\n", "\n", " # Get embeddings\n", " batch_embeddings = model.model_body.encode(batch_texts, convert_to_tensor=True)\n", " batch_embeddings_cpu = batch_embeddings.detach().cpu().numpy()\n", "\n", " all_embeddings.extend(batch_embeddings_cpu)\n", "\n", " # Use the classification head to predict\n", " predictions = model.model_head.predict(np.array(all_embeddings))\n", " accuracy = accuracy_score(labels, predictions)\n", " return accuracy, predictions" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T15:50:32.270898Z", "start_time": "2025-06-09T15:50:32.267479Z" } }, "outputs": [], "execution_count": 4 }, { "cell_type": "markdown", "source": [ "\n", " ## 4. Classical Classification Heads\n", "\n", " ### 4.1 MLP Classifier\n", "\n", " We implement a 3-layer Multi-Layer Perceptron (MLP) as one of our classical baselines. 
The architecture includes:\n", " - Input layer matching the embedding dimension (768 for most transformers)\n", " - Two hidden layers with ReLU activation and dropout\n", " - Output layer for classification\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "class MLPClassifier(nn.Module):\n", " \"\"\"3-layer MLP classifier with dropout regularization\"\"\"\n", "\n", " def __init__(self, input_dim, hidden_dim=100, num_classes=2):\n", " super(MLPClassifier, self).__init__()\n", " self.layers = nn.Sequential(\n", " nn.Linear(input_dim, hidden_dim),\n", " nn.ReLU(),\n", " nn.Dropout(0.2),\n", " nn.Linear(hidden_dim, hidden_dim // 2),\n", " nn.ReLU(),\n", " nn.Dropout(0.2),\n", " nn.Linear(hidden_dim // 2, num_classes)\n", " )\n", "\n", " def forward(self, x):\n", " return self.layers(x)\n", "\n", "\n", "class MLPClassifierWrapper(BaseEstimator, ClassifierMixin):\n", " \"\"\"Scikit-learn compatible wrapper for the MLP classifier\"\"\"\n", "\n", " def __init__(self, input_dim=768, hidden_dim=100, num_classes=2,\n", " lr=0.001, epochs=100, batch_size=32, device=None):\n", " self.input_dim = input_dim\n", " self.hidden_dim = hidden_dim\n", " self.num_classes = num_classes\n", " self.lr = lr\n", " self.epochs = epochs\n", " self.batch_size = batch_size\n", " self.device = device if device else ('cuda' if torch.cuda.is_available() else 'cpu')\n", " self.model = None\n", " self.classes_ = None\n", "\n", " def fit(self, X, y):\n", " \"\"\"Train the MLP classifier\"\"\"\n", " # Convert numpy arrays to PyTorch tensors\n", " X = torch.tensor(X, dtype=torch.float32).to(self.device)\n", "\n", " # Store unique classes\n", " self.classes_ = np.unique(y)\n", " y_tensor = torch.tensor(y, dtype=torch.long).to(self.device)\n", "\n", " # Initialize the model\n", " self.model = MLPClassifier(\n", " input_dim=self.input_dim,\n", " hidden_dim=self.hidden_dim,\n", " num_classes=len(self.classes_)\n", " ).to(self.device)\n", "\n", " print(f\"Number of parameters in MLP head: {sum([p.numel() for p in self.model.parameters()])}\")\n", "\n", " # Define loss function and optimizer\n", " criterion = nn.CrossEntropyLoss()\n", " optimizer = torch.optim.Adam(self.model.parameters(), lr=self.lr)\n", "\n", " # Training loop\n", " self.model.train()\n", " for epoch in range(self.epochs):\n", " # Mini-batch training\n", " indices = torch.randperm(len(X))\n", " total_loss = 0\n", "\n", " for i in range(0, len(X), self.batch_size):\n", " batch_indices = indices[i:i + self.batch_size]\n", " batch_X = X[batch_indices]\n", " batch_y = y_tensor[batch_indices]\n", "\n", " # Forward pass\n", " outputs = self.model(batch_X)\n", " loss = criterion(outputs, batch_y)\n", "\n", " # Backward pass and optimize\n", " optimizer.zero_grad()\n", " loss.backward()\n", " optimizer.step()\n", "\n", " total_loss += loss.item()\n", "\n", " # Print progress\n", " if (epoch + 1) % 10 == 0:\n", " avg_loss = total_loss / (len(X) // self.batch_size + 1)\n", " print(f'Epoch [{epoch + 1}/{self.epochs}], Loss: {avg_loss:.4f}')\n", "\n", " return self\n", "\n", " def predict(self, X):\n", " \"\"\"Predict classes for samples\"\"\"\n", " X_tensor = torch.tensor(X, dtype=torch.float32).to(self.device)\n", "\n", " self.model.eval()\n", " with torch.no_grad():\n", " outputs = self.model(X_tensor)\n", " _, predicted = torch.max(outputs, 1)\n", " return self.classes_[predicted.cpu().numpy()]\n", "\n", " def predict_proba(self, X):\n", " \"\"\"Predict class probabilities\"\"\"\n", " X_tensor = torch.tensor(X, 
dtype=torch.float32).to(self.device)\n", "\n", " self.model.eval()\n", " with torch.no_grad():\n", " outputs = self.model(X_tensor)\n", " probabilities = torch.softmax(outputs, dim=1).cpu().numpy()\n", " return probabilities\n" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T15:50:38.502479Z", "start_time": "2025-06-09T15:50:38.495880Z" } }, "outputs": [], "execution_count": 5 }, { "cell_type": "markdown", "source": [ "\n", " ### 4.2 Helper Function to Replace SetFit Head\n", "\n", " This function allows us to easily swap the default classification head with our custom MLP.\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "def replace_setfit_head_with_mlp(model, input_dim=768, hidden_dim=100, num_classes=2, epochs=100):\n", " \"\"\"Replace the classification head of a SetFitModel with an MLP.\"\"\"\n", " # Get the device the model is on\n", " device = next(model.model_body.parameters()).device\n", "\n", " # Create new MLP head\n", " mlp_head = MLPClassifierWrapper(\n", " input_dim=input_dim,\n", " hidden_dim=hidden_dim,\n", " num_classes=num_classes,\n", " epochs=epochs,\n", " lr=0.001,\n", " device=device\n", " )\n", "\n", " # Replace the model head\n", " model.model_head = mlp_head\n", "\n", " return model" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T15:50:40.469419Z", "start_time": "2025-06-09T15:50:40.467067Z" } }, "outputs": [], "execution_count": 6 }, { "cell_type": "markdown", "source": [ "\n", " ## 5. Quantum Classification Head\n", "\n", " ### 5.1 Quantum Photonic Classifier\n", "\n", " The quantum classifier uses a photonic interferometer implemented with the Merlin framework. The architecture consists of:\n", "\n", " 1. **Downscaling layer**: Reduces the embedding dimension to match quantum circuit requirements\n", " 2. **Quantum photonic circuit**: Processes the downscaled features through quantum interference\n", " 3. 
**Output layer**: Maps quantum measurements to class predictions\n", "\n", " The quantum circuit parameters:\n", " - **Modes**: Number of optical modes in the interferometer\n", " - **Photons**: Number of photons in the input state\n", " - **Input state**: Distribution of photons across modes\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "class QuantumClassifier(nn.Module):\n", " def __init__(self, input_dim, hidden_dim=100, modes=10, num_classes=2, input_state=None):\n", " super(QuantumClassifier, self).__init__()\n", "\n", " # This layer downscales the inputs to fit in the QLayer\n", " self.downscaling_layer = nn.Linear(input_dim, hidden_dim)\n", "\n", " # Building the QLayer with Merlin\n", " experiment = ML.PhotonicBackend(\n", " circuit_type=ML.CircuitType.SERIES,\n", " n_modes=modes,\n", " n_photons=sum(input_state) if input_state else modes // 2,\n", " state_pattern=ML.StatePattern.PERIODIC\n", " )\n", "\n", " # Default input state\n", " if input_state is None:\n", " input_state = [(i + 1) % 2 for i in range(modes)]\n", "\n", " photons_count = sum(input_state)\n", " # PNR (Photon Number Resolving) output size\n", " #output_size_slos = math.comb(modes + photons_count - 1, photons_count)\n", "\n", " # Create ansatz for the quantum layer\n", " ansatz = ML.AnsatzFactory.create(\n", " PhotonicBackend=experiment,\n", " input_size=hidden_dim,\n", " # output_size=output_size_slos,\n", " output_mapping_strategy=ML.OutputMappingStrategy.NONE\n", " )\n", "\n", " # Build the QLayer using Merlin\n", " self.q_circuit = ML.QuantumLayer(input_size=hidden_dim, ansatz=ansatz)\n", "\n", " # Linear output layer as in the original paper\n", " self.output_layer = nn.Linear(self.q_circuit.output_size, num_classes)\n", "\n", " def forward(self, x):\n", " # Forward pass through the quantum-classical hybrid\n", " x = self.downscaling_layer(x)\n", " x = torch.sigmoid(x) # Normalize for quantum layer\n", " x = self.q_circuit(x)\n", " return self.output_layer(x)" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T15:50:41.880819Z", "start_time": "2025-06-09T15:50:41.877252Z" } }, "outputs": [], "execution_count": 7 }, { "cell_type": "markdown", "source": [ "\n", "\n", " ### 5.2 Quantum Layer Training Wrapper\n", "\n", " This wrapper provides scikit-learn compatible training for the quantum classifier, including proper initialization, training loops, and prediction methods.\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "class QLayerTraining(BaseEstimator, ClassifierMixin):\n", " def __init__(self, input_dim=768, hidden_dim=100, modes=10, num_classes=2,\n", " dropout_rate=0.2, lr=0.001, weight_decay=1e-5,\n", " epochs=100, batch_size=32, device=None, input_state=None):\n", " self.input_dim = input_dim\n", " self.hidden_dim = hidden_dim\n", " self.modes = modes\n", " self.input_state = input_state\n", " self.num_classes = num_classes\n", " self.dropout_rate = dropout_rate\n", " self.lr = lr\n", " self.weight_decay = weight_decay\n", " self.epochs = epochs\n", " self.batch_size = batch_size\n", " self.device = device if device else ('cuda' if torch.cuda.is_available() else 'cpu')\n", "\n", " # Initialize model\n", " self.model = None\n", " self.classes_ = None\n", " self.is_fitted_ = False\n", " # Training history\n", " self.history = {\n", " 'train_loss': [],\n", " 'val_loss': [],\n", " 'val_accuracy': []\n", " }\n", "\n", " def _initialize_model(self):\n", " \"\"\"Initialize or re-initialize the model.\"\"\"\n", " 
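# Build a fresh hybrid classifier; the number of output classes is taken from the labels seen in fit().\n",
 " 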
self.model = QuantumClassifier(\n", " input_dim=self.input_dim,\n", " hidden_dim=self.hidden_dim,\n", " modes=self.modes,\n", " num_classes=len(self.classes_),\n", " input_state=self.input_state,\n", " ).to(self.device)\n", "\n", " print(f\"Number of parameters in Quantum head: {sum([p.numel() for p in self.model.parameters()])}\")\n", "\n", " def _train_epoch(self, train_loader, criterion, optimizer):\n", " \"\"\"Train for one epoch.\"\"\"\n", " self.model.train()\n", " epoch_loss = 0\n", " for X_batch, y_batch in train_loader:\n", " X_batch, y_batch = X_batch.to(self.device), y_batch.to(self.device)\n", "\n", " # Forward pass\n", " outputs = self.model(X_batch)\n", " loss = criterion(outputs, y_batch)\n", "\n", " # Backward pass and optimizer step\n", " optimizer.zero_grad()\n", " loss.backward()\n", " optimizer.step()\n", "\n", " epoch_loss += loss.item()\n", "\n", " return epoch_loss / len(train_loader)\n", "\n", " def fit(self, X, y):\n", " \"\"\"Train the QLayer with a manual training loop.\"\"\"\n", " # Store classes\n", " self.classes_ = np.unique(y)\n", "\n", " # Initialize model\n", " self._initialize_model()\n", "\n", " # Convert to PyTorch tensors\n", " X_tensor = torch.tensor(X, dtype=torch.float32)\n", " y_tensor = torch.tensor(y, dtype=torch.long)\n", " train_dataset = TensorDataset(X_tensor, y_tensor)\n", " train_loader = DataLoader(train_dataset, batch_size=self.batch_size, shuffle=True)\n", "\n", " # Loss function and optimizer\n", " criterion = nn.CrossEntropyLoss()\n", " optimizer = torch.optim.Adam(\n", " self.model.parameters(),\n", " lr=self.lr,\n", " weight_decay=self.weight_decay\n", " )\n", "\n", " # Training loop\n", " for epoch in range(self.epochs):\n", " # Train for one epoch\n", " train_loss = self._train_epoch(train_loader, criterion, optimizer)\n", " self.history['train_loss'].append(train_loss)\n", "\n", " if (epoch + 1) % 50 == 0:\n", " print(f'Epoch {epoch + 1}/{self.epochs}, Train Loss: {train_loss:.4f}')\n", "\n", " self.is_fitted_ = True\n", " return self\n", "\n", " def predict(self, X):\n", " \"\"\"Predict class labels for samples in X.\"\"\"\n", " self._check_is_fitted()\n", " X_tensor = torch.tensor(X, dtype=torch.float32).to(self.device)\n", "\n", " self.model.eval()\n", " with torch.no_grad():\n", " outputs = self.model(X_tensor)\n", " _, predicted = torch.max(outputs, 1)\n", "\n", " return self.classes_[predicted.cpu().numpy()]\n", "\n", " def predict_proba(self, X):\n", " \"\"\"Predict class probabilities for samples in X.\"\"\"\n", " self._check_is_fitted()\n", " X_tensor = torch.tensor(X, dtype=torch.float32).to(self.device)\n", "\n", " self.model.eval()\n", " with torch.no_grad():\n", " outputs = self.model(X_tensor)\n", " probabilities = torch.softmax(outputs, dim=1).cpu().numpy()\n", "\n", " return probabilities\n", "\n", " def _check_is_fitted(self):\n", " \"\"\"Check if model is fitted.\"\"\"\n", " if not self.is_fitted_ or self.model is None:\n", " raise ValueError(\"This model has not been fitted yet. 
Call 'fit' before using this method.\")" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T15:50:43.118650Z", "start_time": "2025-06-09T15:50:43.113324Z" } }, "outputs": [], "execution_count": 8 }, { "cell_type": "markdown", "source": [ "\n", " ### 5.3 Helper Function for Quantum SetFit Models\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "def create_setfit_with_q_layer(model, input_dim=768, hidden_dim=100, modes=10,\n", " num_classes=2, epochs=100, input_state=None):\n", " \"\"\"\n", " Replace the classification head of a SetFit model with a quantum layer.\n", "\n", " Args:\n", " model: SetFit model to modify\n", " input_dim: Dimension of input embeddings\n", " hidden_dim: Dimension after downscaling\n", " modes: Number of modes in the quantum circuit\n", " num_classes: Number of output classes\n", " epochs: Training epochs for the quantum head\n", " input_state: Photon distribution across modes\n", "\n", " Returns:\n", " Modified SetFit model with quantum classification head\n", " \"\"\"\n", " # Replace model head with QLayer\n", " model.model_head = QLayerTraining(\n", " input_dim=input_dim,\n", " hidden_dim=hidden_dim,\n", " modes=modes,\n", " num_classes=num_classes,\n", " epochs=epochs,\n", " input_state=input_state\n", " )\n", "\n", " return model" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T15:51:01.989410Z", "start_time": "2025-06-09T15:51:01.985275Z" } }, "outputs": [], "execution_count": 10 }, { "cell_type": "markdown", "source": [ "\n", " ## 6. Utility Functions\n", "\n", " ### 6.1 Results Storage\n", "\n", " Function to save experimental results in JSON format for later analysis.\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "def save_experiment_results(results, filename='ft-qllm_exp.json'):\n", " \"\"\"\n", " Append experiment results to a JSON file.\n", "\n", " Args:\n", " results (dict): Dictionary containing experiment results\n", " filename (str): Path to the JSON file to store results\n", " \"\"\"\n", " filename = os.path.join(\"./results\", filename)\n", "\n", " # Create results directory if it doesn't exist\n", " os.makedirs(\"./results\", exist_ok=True)\n", "\n", " # Check if file exists and load existing data\n", " if os.path.exists(filename):\n", " try:\n", " with open(filename, 'r') as file:\n", " all_results = json.load(file)\n", " except json.JSONDecodeError:\n", " all_results = []\n", " else:\n", " all_results = []\n", "\n", " # Append new results\n", " all_results.append(results)\n", "\n", " # Write updated data back to file\n", " with open(filename, 'w') as file:\n", " json.dump(all_results, file, indent=4)\n", "\n", " print(f\"Results saved. Total experiments: {len(all_results)}\")\n", " return len(all_results)" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T15:51:03.391059Z", "start_time": "2025-06-09T15:51:03.388003Z" } }, "outputs": [], "execution_count": 11 }, { "cell_type": "markdown", "source": [ "\n", " ### 6.2 Contrastive Loss Implementation\n", "\n", " Simplified supervised contrastive loss for fine-tuning the sentence transformer body. 
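For reference, the supervised contrastive loss from Khosla et al. (2020) (the paper cited in the code below) takes the following form for an anchor $i$:\n",
 "\n",
 " $$\\mathcal{L}_i = \\frac{-1}{|P(i)|} \\sum_{p \\in P(i)} \\log \\frac{\\exp(z_i \\cdot z_p / \\tau)}{\\sum_{a \\in A(i)} \\exp(z_i \\cdot z_a / \\tau)}$$\n",
 "\n",
 " where $z$ are the L2-normalised sentence embeddings, $P(i)$ the other in-batch samples sharing the label of $i$, $A(i)$ all samples other than $i$, and $\\tau$ the temperature; the code scales this by `temperature / base_temperature` (equal to 1 with the defaults) and averages it over the batch. 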
In production, you would implement the full contrastive loss formula.\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "class SupConLoss(nn.Module):\n", " \"\"\"Supervised Contrastive Learning: https://arxiv.org/pdf/2004.11362.pdf.\"\"\"\n", " def __init__(self, model, temperature=0.07, contrast_mode=\"all\", base_temperature=0.07):\n", " super(SupConLoss, self).__init__()\n", " self.model = model\n", " self.temperature = temperature\n", " self.contrast_mode = contrast_mode\n", " self.base_temperature = base_temperature\n", "\n", " def forward(self, sentence_features, labels=None, mask=None):\n", " \"\"\"Computes loss for model.\"\"\"\n", " # Instead of using encode(), which can detach the computation graph,\n", " # call the model directly to generate the embeddings\n", " # Tokenize the inputs\n", " tokenized_inputs = self.model.tokenize(sentence_features[0])\n", "\n", " # If the model sits on a particular device, move the inputs to that device\n", " device = next(self.model.parameters()).device\n", " tokenized_inputs = {k: v.to(device) for k, v in tokenized_inputs.items()}\n", "\n", " # Forward pass through the model\n", " outputs = self.model(tokenized_inputs)\n", "\n", " # Extract the embeddings\n", " if isinstance(outputs, dict) and \"sentence_embedding\" in outputs:\n", " features = outputs[\"sentence_embedding\"]\n", " else:\n", " # If the model returns a different output format, adapt here\n", " features = outputs # Or another way of extracting the embeddings\n", "\n", " # Normalize embeddings\n", " features = torch.nn.functional.normalize(features, p=2, dim=1)\n", " # Add n_views dimension\n", " features = torch.unsqueeze(features, 1)\n", " device = features.device\n", "\n", " # The rest of the code is unchanged\n", " if len(features.shape) < 3:\n", " raise ValueError(\"`features` needs to be [bsz, n_views, ...],\" \"at least 3 dimensions are required\")\n", " if len(features.shape) > 3:\n", " features = features.view(features.shape[0], features.shape[1], -1)\n", "\n", " batch_size = features.shape[0]\n", " if labels is not None and mask is not None:\n", " raise ValueError(\"Cannot define both `labels` and `mask`\")\n", " elif labels is None and mask is None:\n", " mask = torch.eye(batch_size, dtype=torch.float32).to(device)\n", " elif labels is not None:\n", " labels = labels.contiguous().view(-1, 1)\n", " if labels.shape[0] != batch_size:\n", " raise ValueError(\"Num of labels does not match num of features\")\n", " mask = torch.eq(labels, labels.T).float().to(device)\n", " else:\n", " mask = mask.float().to(device)\n", "\n", " contrast_count = features.shape[1]\n", " contrast_feature = torch.cat(torch.unbind(features, dim=1), dim=0)\n", " if self.contrast_mode == \"one\":\n", " anchor_feature = features[:, 0]\n", " anchor_count = 1\n", " elif self.contrast_mode == \"all\":\n", " anchor_feature = contrast_feature\n", " anchor_count = contrast_count\n", " else:\n", " raise ValueError(\"Unknown mode: {}\".format(self.contrast_mode))\n", "\n", " # Compute logits\n", " anchor_dot_contrast = torch.div(torch.matmul(anchor_feature, contrast_feature.T), self.temperature)\n", " # For numerical stability\n", " logits_max, _ = torch.max(anchor_dot_contrast, dim=1, keepdim=True)\n", " logits = anchor_dot_contrast - logits_max.detach()\n", "\n", " # Tile mask\n", " mask = mask.repeat(anchor_count, contrast_count)\n", " # Mask-out self-contrast cases\n", " logits_mask = torch.scatter(\n", " torch.ones_like(mask),\n", " 1,\n", " torch.arange(batch_size 
* anchor_count).view(-1, 1).to(device),\n", " 0,\n", " )\n", " mask = mask * logits_mask\n", "\n", " # Compute log_prob\n", " exp_logits = torch.exp(logits) * logits_mask\n", " log_prob = logits - torch.log(exp_logits.sum(1, keepdim=True))\n", "\n", " # Compute mean of log-likelihood over positive\n", " mean_log_prob_pos = (mask * log_prob).sum(1) / mask.sum(1)\n", "\n", " # Loss\n", " loss = -(self.temperature / self.base_temperature) * mean_log_prob_pos\n", " loss = loss.view(anchor_count, batch_size).mean()\n", "\n", " return loss\n" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:08:22.838292Z", "start_time": "2025-06-09T16:08:22.830766Z" } }, "outputs": [], "execution_count": 23 }, { "cell_type": "markdown", "source": [ "\n", " ## 7. Main Training Pipeline\n", "\n", " Now we'll set up the complete training pipeline that:\n", " 1. Loads the SST-2 sentiment analysis dataset\n", " 2. Fine-tunes the sentence transformer with contrastive learning\n", " 3. Trains multiple classification heads (classical and quantum)\n", " 4. Evaluates and compares all approaches\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "SAMPLES_PER_CLASS = 8 # Few-shot setting\n", "BODY_EPOCHS = 20 # Epochs for sentence transformer fine-tuning\n", "HEAD_EPOCHS = 200 # Epochs for classification head training\n", "LEARNING_RATE = 1e-5 # Learning rate for body fine-tuning\n", "BATCH_SIZE = 16 # Batch size for evaluation\n", "\n", "print(f\"Configuration:\")\n", "print(f\"- Samples per class: {SAMPLES_PER_CLASS}\")\n", "print(f\"- Body training epochs: {BODY_EPOCHS}\")\n", "print(f\"- Head training epochs: {HEAD_EPOCHS}\")\n", "print(f\"- Learning rate: {LEARNING_RATE}\")\n" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:08:25.141288Z", "start_time": "2025-06-09T16:08:25.138320Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Configuration:\n", "- Samples per class: 8\n", "- Body training epochs: 20\n", "- Head training epochs: 200\n", "- Learning rate: 1e-05\n" ] } ], "execution_count": 24 }, { "cell_type": "markdown", "source": [ "\n", " ### 7.1 Load Dataset\n", "\n", " We use the Stanford Sentiment Treebank (SST-2) dataset for binary sentiment classification. 
In the few-shot setting, we sample only a small number of examples per class for training.\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "print(f\"\\nLoading dataset with {SAMPLES_PER_CLASS} samples per class...\")\n", "dataset = load_dataset(\"sst2\")\n", "\n", "# Simulate few-shot regime by sampling examples per class\n", "train_dataset = sample_dataset(dataset[\"train\"], label_column=\"label\", num_samples=SAMPLES_PER_CLASS)\n", "eval_dataset = dataset[\"validation\"].select(range(250))\n", "test_dataset = dataset[\"validation\"].select(range(250, len(dataset[\"validation\"])))\n", "\n", "# Extract texts and labels\n", "texts = [example[\"sentence\"] for example in train_dataset]\n", "features = [texts]\n", "labels = torch.tensor([example[\"label\"] for example in train_dataset])\n", "\n", "print(f\"Dataset sizes:\")\n", "print(f\"- Training: {len(train_dataset)} samples\")\n", "print(f\"- Validation: {len(eval_dataset)} samples\")\n", "print(f\"- Test: {len(test_dataset)} samples\")" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:08:28.146285Z", "start_time": "2025-06-09T16:08:26.192309Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "Loading dataset with 8 samples per class...\n", "Dataset sizes:\n", "- Training: 16 samples\n", "- Validation: 250 samples\n", "- Test: 622 samples\n" ] } ], "execution_count": 25 }, { "cell_type": "markdown", "source": [ "\n", " ### 7.2 Initialize Base Model\n", "\n", " We use a pre-trained sentence transformer as our base model. The SetFit framework provides an efficient way to perform few-shot learning.\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "print(\"\\nLoading pre-trained model...\")\n", "model = SetFitModel.from_pretrained(\"sentence-transformers/paraphrase-mpnet-base-v2\")\n", "sentence_transformer = model.model_body\n", "classification_head = model.model_head\n", "\n", "print(f\"Model loaded: {type(sentence_transformer).__name__}\")\n", "print(f\"Embedding dimension: 768\")" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:08:31.416926Z", "start_time": "2025-06-09T16:08:29.635244Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "Loading pre-trained model...\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "model_head.pkl not found on HuggingFace Hub, initialising classification head with random weights. 
You should TRAIN this model on a downstream task to use it for predictions and inference.\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Model loaded: SentenceTransformer\n", "Embedding dimension: 768\n" ] } ], "execution_count": 26 }, { "cell_type": "markdown", "source": [ "\n", " ### 7.3 Fine-tune Sentence Transformer Body\n", "\n", " We fine-tune the sentence transformer using contrastive learning to better adapt it to our specific task.\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "print(\"\\nTraining model body with contrastive learning...\")\n", "model_wrapped = ModelWrapper(sentence_transformer)\n", "criterion = SupConLoss(model=model_wrapped)\n", "\n", "# Enable gradients for fine-tuning\n", "for param in sentence_transformer.parameters():\n", " param.requires_grad = True\n", "\n", "optimizer = torch.optim.Adam(model_wrapped.parameters(), lr=LEARNING_RATE)\n", "model_wrapped.train()\n", "\n", "# Training loop\n", "for iteration in tqdm(range(BODY_EPOCHS), desc=\"Contrastive Learning\"):\n", " optimizer.zero_grad()\n", " loss = criterion(features, labels)\n", " loss.backward()\n", " optimizer.step()\n", "\n", " if (iteration + 1) % 5 == 0:\n", " print(f\"Iteration {iteration + 1}/{BODY_EPOCHS}, Loss: {loss.item():.6f}\")\n", "\n", "print(\"Model body fine-tuning completed!\")" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:08:38.008810Z", "start_time": "2025-06-09T16:08:31.900320Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "Training model body with contrastive learning...\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Contrastive Learning: 30%|███ | 6/20 [00:03<00:04, 3.15it/s]" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Iteration 5/20, Loss: 2.355021\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Contrastive Learning: 55%|█████▌ | 11/20 [00:04<00:01, 4.93it/s]" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Iteration 10/20, Loss: 2.112763\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Contrastive Learning: 80%|████████ | 16/20 [00:05<00:00, 5.45it/s]" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Iteration 15/20, Loss: 2.109605\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Contrastive Learning: 100%|██████████| 20/20 [00:06<00:00, 3.28it/s]" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Iteration 20/20, Loss: 2.044540\n", "Model body fine-tuning completed!\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\n" ] } ], "execution_count": 27 }, { "cell_type": "markdown", "source": [ "\n", "\n", " ### 7.4 Generate Embeddings\n", "\n", " After fine-tuning, we generate embeddings for all training samples. 
These embeddings will be used to train the various classification heads.\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "print(\"\\nGenerating embeddings for training data...\")\n", "sentence_transformer.eval()\n", "train_embeddings = []\n", "train_labels = []\n", "\n", "with torch.no_grad():\n", " num_batches = (len(train_dataset[\"sentence\"]) + BATCH_SIZE - 1) // BATCH_SIZE\n", "\n", " for batch_idx in tqdm(range(num_batches), desc=\"Encoding\"):\n", " start_idx = batch_idx * BATCH_SIZE\n", " end_idx = min(start_idx + BATCH_SIZE, len(train_dataset[\"sentence\"]))\n", "\n", " batch_texts = train_dataset[\"sentence\"][start_idx:end_idx]\n", " batch_labels = train_dataset[\"label\"][start_idx:end_idx]\n", "\n", " batch_embeddings = sentence_transformer.encode(batch_texts, convert_to_tensor=True)\n", " batch_embeddings_cpu = batch_embeddings.detach().cpu().numpy()\n", "\n", " for emb, lbl in zip(batch_embeddings_cpu, batch_labels):\n", " train_embeddings.append(emb)\n", " train_labels.append(lbl)\n", "\n", "train_embeddings = np.array(train_embeddings)\n", "train_labels = np.array(train_labels)\n", "\n", "print(f\"Embeddings shape: {train_embeddings.shape}\")\n", "print(f\"Labels shape: {train_labels.shape}\")" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:09:02.081794Z", "start_time": "2025-06-09T16:09:01.700677Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "Generating embeddings for training data...\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "Encoding: 100%|██████████| 1/1 [00:00<00:00, 2.81it/s]" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Embeddings shape: (16, 768)\n", "Labels shape: (16,)\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\n" ] } ], "execution_count": 28 }, { "cell_type": "markdown", "source": [ "\n", " ## 8. Train and Evaluate Classification Heads\n", "\n", " Now we'll train different classification heads and compare their performance:\n", " 1. **Logistic Regression**: Simple linear classifier (baseline)\n", " 2. **SVM**: Support Vector Machine with linear kernel\n", " 3. **MLP**: Multi-layer perceptron\n", " 4. **Quantum Layers**: Multiple configurations with different numbers of modes and photons" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "num_classes = len(set(train_dataset[\"label\"]))\n", "results = {\n", " \"training_samples\": SAMPLES_PER_CLASS,\n", " \"epochs\": BODY_EPOCHS,\n", " \"lr\": LEARNING_RATE\n", "}\n", "\n", "print(f\"\\nTraining classification heads for {num_classes}-class classification...\")\n" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:09:08.255523Z", "start_time": "2025-06-09T16:09:08.252854Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "Training classification heads for 2-class classification...\n" ] } ], "execution_count": 29 }, { "cell_type": "markdown", "source": [ "\n", " ### 8.1 Logistic Regression Head\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "print(\"\\n1. 
Training Logistic Regression head...\")\n", "# Reset to default logistic regression head\n", "model = SetFitModel.from_pretrained(\"sentence-transformers/paraphrase-mpnet-base-v2\")\n", "model.model_body = sentence_transformer # Use our fine-tuned body\n", "\n", "# Train\n", "model.model_head.fit(train_embeddings, train_labels)\n", "\n", "# Evaluate\n", "lg_val_accuracy, _ = evaluate(model, eval_dataset[\"sentence\"], eval_dataset[\"label\"])\n", "lg_test_accuracy, _ = evaluate(model, test_dataset[\"sentence\"], test_dataset[\"label\"])\n", "\n", "print(f\"Logistic Regression - Val: {lg_val_accuracy:.4f}, Test: {lg_test_accuracy:.4f}\")\n", "results[\"LogisticRegression\"] = [lg_val_accuracy, lg_test_accuracy]" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:09:19.043745Z", "start_time": "2025-06-09T16:09:10.638256Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "1. Training Logistic Regression head...\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "model_head.pkl not found on HuggingFace Hub, initialising classification head with random weights. You should TRAIN this model on a downstream task to use it for predictions and inference.\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/linear_model/_linear_loss.py:200: RuntimeWarning: divide by zero encountered in matmul\n", " raw_prediction = X @ weights + intercept\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/linear_model/_linear_loss.py:200: RuntimeWarning: overflow encountered in matmul\n", " raw_prediction = X @ weights + intercept\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/linear_model/_linear_loss.py:200: RuntimeWarning: invalid value encountered in matmul\n", " raw_prediction = X @ weights + intercept\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/linear_model/_linear_loss.py:330: RuntimeWarning: divide by zero encountered in matmul\n", " grad[:n_features] = X.T @ grad_pointwise + l2_reg_strength * weights\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/linear_model/_linear_loss.py:330: RuntimeWarning: overflow encountered in matmul\n", " grad[:n_features] = X.T @ grad_pointwise + l2_reg_strength * weights\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/linear_model/_linear_loss.py:330: RuntimeWarning: invalid value encountered in matmul\n", " grad[:n_features] = X.T @ grad_pointwise + l2_reg_strength * weights\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/utils/extmath.py:203: RuntimeWarning: divide by zero encountered in matmul\n", " ret = a @ b\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/utils/extmath.py:203: RuntimeWarning: overflow encountered in matmul\n", " ret = a @ b\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/utils/extmath.py:203: RuntimeWarning: invalid value encountered in matmul\n", " ret = a @ b\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Logistic Regression - Val: 0.8000, Test: 0.7669\n" ] }, { "name": "stderr", "output_type": 
"stream", "text": [ "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/utils/extmath.py:203: RuntimeWarning: divide by zero encountered in matmul\n", " ret = a @ b\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/utils/extmath.py:203: RuntimeWarning: overflow encountered in matmul\n", " ret = a @ b\n", "/Users/cassandrenotton/Documents/projects/CoreDev/venv-merlin-quandela/lib/python3.12/site-packages/sklearn/utils/extmath.py:203: RuntimeWarning: invalid value encountered in matmul\n", " ret = a @ b\n" ] } ], "execution_count": 30 }, { "cell_type": "markdown", "source": [ "\n", " ### 8.2 SVM Head\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "print(\"\\n2. Training SVM head...\")\n", "# Replace head with SVM\n", "model.model_head = SVC(C=1.0, kernel='linear', gamma='scale', probability=True)\n", "model.model_head.fit(train_embeddings, train_labels)\n", "\n", "# Evaluate\n", "svc_val_accuracy, _ = evaluate(model, eval_dataset[\"sentence\"], eval_dataset[\"label\"])\n", "svc_test_accuracy, _ = evaluate(model, test_dataset[\"sentence\"], test_dataset[\"label\"])\n", "\n", "print(f\"SVM - Val: {svc_val_accuracy:.4f}, Test: {svc_test_accuracy:.4f}\")\n", "results[\"SVC\"] = [svc_val_accuracy, svc_test_accuracy]" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:12:15.636156Z", "start_time": "2025-06-09T16:12:11.346850Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "2. Training SVM head...\n", "SVM - Val: 0.8080, Test: 0.7846\n" ] } ], "execution_count": 31 }, { "cell_type": "markdown", "source": [ " ### 8.3 MLP Head" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "print(\"\\n3. Training MLP head...\")\n", "# Replace head with MLP\n", "model = replace_setfit_head_with_mlp(\n", " model,\n", " input_dim=768,\n", " hidden_dim=100,\n", " num_classes=num_classes,\n", " epochs=HEAD_EPOCHS\n", ")\n", "model.model_head.fit(train_embeddings, train_labels)\n", "\n", "# Evaluate\n", "mlp_val_accuracy, _ = evaluate(model, eval_dataset[\"sentence\"], eval_dataset[\"label\"])\n", "mlp_test_accuracy, _ = evaluate(model, test_dataset[\"sentence\"], test_dataset[\"label\"])\n", "\n", "print(f\"MLP - Val: {mlp_val_accuracy:.4f}, Test: {mlp_test_accuracy:.4f}\")\n", "results[\"MLP\"] = [mlp_val_accuracy, mlp_test_accuracy]\n" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:12:32.002902Z", "start_time": "2025-06-09T16:12:26.110869Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "3. 
Training MLP head...\n", "Number of parameters in MLP head: 82052\n", "Epoch [10/200], Loss: 0.4457\n", "Epoch [20/200], Loss: 0.1034\n", "Epoch [30/200], Loss: 0.0177\n", "Epoch [40/200], Loss: 0.0014\n", "Epoch [50/200], Loss: 0.0009\n", "Epoch [60/200], Loss: 0.0002\n", "Epoch [70/200], Loss: 0.0006\n", "Epoch [80/200], Loss: 0.0002\n", "Epoch [90/200], Loss: 0.0001\n", "Epoch [100/200], Loss: 0.0001\n", "Epoch [110/200], Loss: 0.0004\n", "Epoch [120/200], Loss: 0.0002\n", "Epoch [130/200], Loss: 0.0001\n", "Epoch [140/200], Loss: 0.0001\n", "Epoch [150/200], Loss: 0.0002\n", "Epoch [160/200], Loss: 0.0001\n", "Epoch [170/200], Loss: 0.0001\n", "Epoch [180/200], Loss: 0.0001\n", "Epoch [190/200], Loss: 0.0008\n", "Epoch [200/200], Loss: 0.0001\n", "MLP - Val: 0.8040, Test: 0.7701\n" ] } ], "execution_count": 32 }, { "cell_type": "markdown", "source": [ "\n", "\n", " ### 8.4 Quantum Layer Heads\n", "\n", " We test multiple quantum configurations with varying numbers of modes and photons. Each configuration represents a different quantum circuit complexity and expressivity.\n", "\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "print(\"\\n4. Training Quantum Layer heads...\")\n", "modes_to_test = [ 2, 4, 6, 8]\n", "quantum_results = {}\n", "\n", "for mode in modes_to_test:\n", " photon_max = int(mode // 2)\n", "\n", " for k in range(1, photon_max + 1):\n", " # Create input state with k photons\n", " input_state = [0] * mode\n", " for p in range(k):\n", " input_state[2 * p] = 1\n", "\n", " print(f\"\\n Training Quantum Head: {mode} modes, {k} photons\")\n", " print(f\" Input state: {input_state}\")\n", "\n", " # Create quantum model\n", " model = create_setfit_with_q_layer(\n", " model,\n", " input_dim=768,\n", " hidden_dim=100,\n", " modes=mode,\n", " num_classes=num_classes,\n", " epochs=HEAD_EPOCHS,\n", " input_state=input_state\n", " )\n", "\n", " # Train the quantum head\n", " model.model_head.fit(train_embeddings, train_labels)\n", "\n", " # Evaluate\n", " q_val_predictions = model.model_head.predict(\n", " sentence_transformer.encode(eval_dataset[\"sentence\"], convert_to_tensor=True).cpu().numpy()\n", " )\n", " q_val_accuracy = accuracy_score(eval_dataset[\"label\"], q_val_predictions)\n", "\n", " q_test_predictions = model.model_head.predict(\n", " sentence_transformer.encode(test_dataset[\"sentence\"], convert_to_tensor=True).cpu().numpy()\n", " )\n", " q_test_accuracy = accuracy_score(test_dataset[\"label\"], q_test_predictions)\n", "\n", " print(f\" Quantum {mode}-{k} - Val: {q_val_accuracy:.4f}, Test: {q_test_accuracy:.4f}\")\n", " quantum_results[f\"{mode}-qlayer-{k}\"] = [q_val_accuracy, q_test_accuracy]\n", "\n", "results[\"Qlayer\"] = quantum_results" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:15:03.048557Z", "start_time": "2025-06-09T16:14:24.657523Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "4. 
Training Quantum Layer heads...\n", "\n", " Training Quantum Head: 2 modes, 1 photons\n", " Input state: [1, 0]\n", "Number of parameters in Quantum head: 76914\n", "Epoch 50/200, Train Loss: 0.6942\n", "Epoch 100/200, Train Loss: 0.6181\n", "Epoch 150/200, Train Loss: 0.5496\n", "Epoch 200/200, Train Loss: 0.4893\n", " Quantum 2-1 - Val: 0.5800, Test: 0.5691\n", "\n", " Training Quantum Head: 4 modes, 1 photons\n", " Input state: [1, 0, 0, 0]\n", "Number of parameters in Quantum head: 76942\n", "Epoch 50/200, Train Loss: 0.6326\n", "Epoch 100/200, Train Loss: 0.5825\n", "Epoch 150/200, Train Loss: 0.5257\n", "Epoch 200/200, Train Loss: 0.4681\n", " Quantum 4-1 - Val: 0.8120, Test: 0.7990\n", "\n", " Training Quantum Head: 4 modes, 2 photons\n", " Input state: [1, 0, 1, 0]\n", "Number of parameters in Quantum head: 76946\n", "Epoch 50/200, Train Loss: 0.6133\n", "Epoch 100/200, Train Loss: 0.5335\n", "Epoch 150/200, Train Loss: 0.4743\n", "Epoch 200/200, Train Loss: 0.4264\n", " Quantum 4-2 - Val: 0.7320, Test: 0.6961\n", "\n", " Training Quantum Head: 6 modes, 1 photons\n", " Input state: [1, 0, 0, 0, 0, 0]\n", "Number of parameters in Quantum head: 76986\n", "Epoch 50/200, Train Loss: 0.5370\n", "Epoch 100/200, Train Loss: 0.4549\n", "Epoch 150/200, Train Loss: 0.3864\n", "Epoch 200/200, Train Loss: 0.3370\n", " Quantum 6-1 - Val: 0.8360, Test: 0.8280\n", "\n", " Training Quantum Head: 6 modes, 2 photons\n", " Input state: [1, 0, 1, 0, 0, 0]\n", "Number of parameters in Quantum head: 77004\n", "Epoch 50/200, Train Loss: 0.6424\n", "Epoch 100/200, Train Loss: 0.5845\n", "Epoch 150/200, Train Loss: 0.5213\n", "Epoch 200/200, Train Loss: 0.4624\n", " Quantum 6-2 - Val: 0.8160, Test: 0.7781\n", "\n", " Training Quantum Head: 6 modes, 3 photons\n", " Input state: [1, 0, 1, 0, 1, 0]\n", "Number of parameters in Quantum head: 77014\n", "Epoch 50/200, Train Loss: 0.6078\n", "Epoch 100/200, Train Loss: 0.5386\n", "Epoch 150/200, Train Loss: 0.4767\n", "Epoch 200/200, Train Loss: 0.4192\n", " Quantum 6-3 - Val: 0.6720, Test: 0.6399\n", "\n", " Training Quantum Head: 8 modes, 1 photons\n", " Input state: [1, 0, 0, 0, 0, 0, 0, 0]\n", "Number of parameters in Quantum head: 77046\n", "Epoch 50/200, Train Loss: 0.5943\n", "Epoch 100/200, Train Loss: 0.5106\n", "Epoch 150/200, Train Loss: 0.4470\n", "Epoch 200/200, Train Loss: 0.3959\n", " Quantum 8-1 - Val: 0.7960, Test: 0.7572\n", "\n", " Training Quantum Head: 8 modes, 2 photons\n", " Input state: [1, 0, 1, 0, 0, 0, 0, 0]\n", "Number of parameters in Quantum head: 77086\n", "Epoch 50/200, Train Loss: 0.6094\n", "Epoch 100/200, Train Loss: 0.5392\n", "Epoch 150/200, Train Loss: 0.4776\n", "Epoch 200/200, Train Loss: 0.4242\n", " Quantum 8-2 - Val: 0.8480, Test: 0.8376\n", "\n", " Training Quantum Head: 8 modes, 3 photons\n", " Input state: [1, 0, 1, 0, 1, 0, 0, 0]\n", "Number of parameters in Quantum head: 77142\n", "Epoch 50/200, Train Loss: 0.6362\n", "Epoch 100/200, Train Loss: 0.5685\n", "Epoch 150/200, Train Loss: 0.5048\n", "Epoch 200/200, Train Loss: 0.4432\n", " Quantum 8-3 - Val: 0.7960, Test: 0.7781\n", "\n", " Training Quantum Head: 8 modes, 4 photons\n", " Input state: [1, 0, 1, 0, 1, 0, 1, 0]\n", "Number of parameters in Quantum head: 77170\n", "Epoch 50/200, Train Loss: 0.6389\n", "Epoch 100/200, Train Loss: 0.5645\n", "Epoch 150/200, Train Loss: 0.4995\n", "Epoch 200/200, Train Loss: 0.4413\n", " Quantum 8-4 - Val: 0.6600, Test: 0.6270\n" ] } ], "execution_count": 35 }, { "cell_type": "markdown", "source": [ "\n", " ## 9. 
Results Summary and Visualization\n", "\n", " Let's visualize and analyze the results from all classification heads.\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "import matplotlib.pyplot as plt\n", "%matplotlib inline\n", "# Extract results for visualization\n", "classical_methods = ['LogisticRegression', 'SVC', 'MLP']\n", "classical_val_accs = [results[method][0] for method in classical_methods]\n", "classical_test_accs = [results[method][1] for method in classical_methods]\n", "\n", "# Process quantum results\n", "quantum_configs = list(results['Qlayer'].keys())\n", "quantum_val_accs = [results['Qlayer'][config][0] for config in quantum_configs]\n", "quantum_test_accs = [results['Qlayer'][config][1] for config in quantum_configs]\n", "\n", "# Create comparison plot\n", "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n", "\n", "# Validation accuracies\n", "x_classical = range(len(classical_methods))\n", "x_quantum = range(len(classical_methods), len(classical_methods) + len(quantum_configs))\n", "x_labels_quantum = [f\"{c.split('-qlayer-')[0]}-{c.split('-qlayer-')[1]}\" for c in quantum_configs]\n", "\n", "ax1.bar(x_classical, classical_val_accs, color='skyblue', label='Classical')\n", "ax1.bar(x_quantum, quantum_val_accs, color='lightcoral', label='Quantum')\n", "ax1.set_xticks(list(x_classical) + list(x_quantum))\n", "ax1.set_xticklabels(classical_methods + x_labels_quantum, rotation=45, ha='right')\n", "ax1.set_ylabel('Validation Accuracy')\n", "ax1.set_title('Validation Performance Comparison')\n", "ax1.legend()\n", "ax1.grid(axis='y', alpha=0.3)\n", "\n", "# Test accuracies\n", "ax2.bar(x_classical, classical_test_accs, color='skyblue', label='Classical')\n", "ax2.bar(x_quantum, quantum_test_accs, color='lightcoral', label='Quantum')\n", "ax2.set_xticks(list(x_classical) + list(x_quantum))\n", "ax2.set_xticklabels(classical_methods + x_labels_quantum, rotation=45, ha='right')\n", "ax2.set_ylabel('Test Accuracy')\n", "ax2.set_title('Test Performance Comparison')\n", "ax2.legend()\n", "ax2.grid(axis='y', alpha=0.3)\n", "\n", "plt.tight_layout()\n", "plt.show()\n", "\n", "# Print summary statistics\n", "print(\"\\n=== RESULTS SUMMARY ===\")\n", "print(\"\\nClassical Methods:\")\n", "for i, method in enumerate(classical_methods):\n", " print(f\"{method:20s} - Val: {classical_val_accs[i]:.4f}, Test: {classical_test_accs[i]:.4f}\")\n", "\n", "print(\"\\nQuantum Methods (best per mode count):\")\n", "modes_processed = set()\n", "for config in quantum_configs:\n", " mode_count = config.split('-')[0]\n", " if mode_count not in modes_processed:\n", " # Find best accuracy for this mode count\n", " mode_configs = [c for c in quantum_configs if c.startswith(mode_count + '-')]\n", " best_val = max(results['Qlayer'][c][0] for c in mode_configs)\n", " best_test = max(results['Qlayer'][c][1] for c in mode_configs)\n", " print(f\"{mode_count + ' modes':20s} - Val: {best_val:.4f}, Test: {best_test:.4f}\")\n", " modes_processed.add(mode_count)\n" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:23:56.268795Z", "start_time": "2025-06-09T16:23:56.167534Z" } }, "outputs": [ { "data": { "text/plain": [ "
" ], "image/png": "iVBORw0KGgoAAAANSUhEUgAABccAAAJOCAYAAABycr+9AAAAOnRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjEwLjMsIGh0dHBzOi8vbWF0cGxvdGxpYi5vcmcvZiW1igAAAAlwSFlzAAAPYQAAD2EBqD+naQAAkVFJREFUeJzt3Qm8TPX/+PH35dr3fcsWSrJGZGsVpUKbpYWQikREiNykLIkQUUpayFKSUpYsbRQJqZB9v5bsxJU7/8f78/3N/c/duPeaO+d85ryej8c83DlzZuY9Z8693vOe9+fzifD5fD4BAAAAAAAAAMBDMjgdAAAAAAAAAAAAoUZxHAAAAAAAAADgORTHAQAAAAAAAACeQ3EcAAAAAAAAAOA5FMcBAAAAAAAAAJ5DcRwAAAAAAAAA4DkUxwEAAAAAAAAAnkNxHAAAAAAAAADgORTHAQAAAAAAAACeQ3EcQIrs2LFDIiIiZMqUKXHbXnrpJbMtJXQ/3T+Ybr75ZnMJdx999JFUrFhRMmXKJHnz5nU6HITh7zIAAEA4IG9GMJE3A95AcRwIQ82aNZPs2bPLyZMnk93n4YcflsyZM8s///wjbvbXX3+ZoromJm6xbNkykyT5L5p8X3nlldK2bVvZtm1bUJ9r48aN8thjj0m5cuVk0qRJ8s477wT18b1q7dq18sgjj0jJkiUlS5Yskj9/fmnUqJG8//77cuHCBafDAwAASLHAvPRiF81hL9eZM2dMbp7SxyJvth95M4BwF+l0AACCTwvfX375pXz++ecm8Uwqqf3iiy/kjjvukAIFCqT5eQYMGCB9+/aV9C6ODxo0yHSIlylTJt5tCxcuFCd169ZNrr/+ejl//rz89ttvJgGfN2+erF+/XooXLx6U59APFLGxsTJmzBgpX758UB7T695991156qmnpEiRIvLoo49KhQoVzBdJixcvlo4dO8r+/fvlhRdekHBVunRp+ffff82HUwAAEB7d0oE+/PBDWbRoUaLt11xzzWU/l36O0NxcpWYEJ3mzncibyZsBL6A4DoRp53iuXLlk2rRpSRbHtTB++vRpU0S/HJGRkebiFO18d1LDhg3lgQceMD+3b99errrqKpP4f/DBB9KvX7/Lemx9f3LkyCEHDx4014M5LFQ/1OjIAi/6+eefTYJft25d+frrr83vid+zzz4rv/76q/zxxx8Sjv777z/zgVF/b7Jmzep0OAAAIEi0qzdhvqPF8YTbnUTebB/yZvJmwCuYVgUIQ9myZZP77rvPfKPvTxIDadFckxstoh85ckR69eolVapUkZw5c0ru3LnlzjvvlHXr1l3yeZKac/zcuXPSo0cPKVSoUNxz7NmzJ9F9d+7cKV26dJGrr77axKsd7A8++GC86VN0bjfdpm655ZZEQ0KTmnNcX692MWh3gyYy1apVM0l3UnPHvf7666ZrRYde6hBB7WZZtWqVpNWtt95q/t2+fXvctm+++cZ8GNCEXY/HXXfdJX/++We8++nwTz32W7dulaZNm5r99IsL7ZSPiooy++jxTDhv+1tvvSXXXnutiV07bp5++mk5duxYvMfW41O5cmVZvXq13HjjjSa51+6OwGMwfvx4M7xVb2vcuLHs3r1bfD6fDB48WK644grz/jRv3tycKwm/ZNHXo8+tMehx1PskHF7pj0FHAej7qM9TokQJee211xIdw7Nnz5rXqB+Y9P0rVqyYOZf12Phpojp69Gjz2nUffa+ffPJJOXr06CXfI+100tc9derUeAm+X61atcz7Efhh67nnnosbRqrnqx4zPT6B9DG7du0qs2bNkkqVKpljph8ktBtKvf3226aDSePV45FwmqDA96levXrm/mXLlpWJEyfG2y8mJkYGDhwoNWvWlDx58pjzSs+vpUuXxtsv8P3VY+U/x/U9SGruxOjoaPNBVd9v3U+Pu77nCeNMzTmXkvcbAACERkrzJy14NmnSRAoWLBiXj3To0MHcpnmB5qSBOVVa1xUibyZvJm8mbwbcgs5xIExpkqhF4ZkzZ5rkw08TtQULFkibNm1MIqEJ55w5c0wRWpOKAwcOmITkpptuMv9Jp3aY4+OPPy4ff/yxPPTQQyZZWbJkiUkEE9Ii9PLly6V169YmsdBkYsKECSZB0OfVxECTUu0oGTt2rElM/UNBkxsSqkPe9P5btmwxr1lfjyZdmrRpItK9e/dEXxLosEBNEDXp0SREE0qd/zAtQ+f8iah/qhodytquXTvzAWP48OGm80RfY4MGDWTNmjXxponR7gTdT2/TxExfv8atw2J1ehy9n34QqFq1qtlfE2FNWHW+v86dO8umTZvMPnpcf/rpp3jx67zy+oWHHmvtINKk2E+TXU0cn3nmGXNu6DFo2bKl+cCiX0L06dPHHM8333zTfIkyefLkuPtqkqgx9ezZ0/yr77UmoCdOnJARI0bEOzaagOs0Pnp89fE//fRT89j6pYzGpvTDwd13322+1NFY9f3S90c7n7QrRRNVpe+XPrcmpXp+6IeqcePGmWOa8LUH0uOvj63nValSpS75fmoir1/uaAKtX7hUr17d/O707t1b9u7dK2+88Ua8/X/44QeZO3euSXzV0KFDzet5/vnnTXKsXwbpcdBjrB8y9XglPEb6IU+Pj/5+6u+uvrfaseL/UKrHVoe36u2dOnUyx+e9994z587KlStNjIF0Lkj94PTEE0/EzRGpH5ISuv/++83fAj0P9LzUL5n0uO/atSvuPE3NOZeS9xsAAIROSvIn/f9fC75aXNapE7UDW3P02bNnm8fQ7fp/v+YB9957r/l/Xvnz09QgbyZvJm9O+fsNIJ35AISl//77z1esWDFf3bp1422fOHGifnXvW7Bggbl+9uxZ34ULF+Lts337dl+WLFl8L7/8crxter/3338/bltUVJTZ5rd27VpzvUuXLvEe76GHHjLbdX+/M2fOJIp5xYoVZr8PP/wwbtusWbPMtqVLlyba/6abbjIXv9GjR5t9P/7447htMTEx5hjkzJnTd+LEiXivpUCBAr4jR47E7fvFF1+Y7V9++aXvYjQW3W/y5Mm+Q4cO+fbt2+ebN2+er0yZMr6IiAjfqlWrfCdPnvTlzZvX16lTp3j3jY6O9uXJkyfe9nbt2pnH69u3b6Ln8h9jfR6/gwcP+jJnzuxr3LhxvPdu3LhxcXEFHiPdpu97IP8xKFSokO/YsWNx2/v162e2V6tWzXf+/Pm47W3atDHPqefLxd7DJ5980pc9e/Z4+/ljCHxfz5075ytatKjv/vvvj9umcet+o0aNSvS4sbGx5t8ffvjB7DN16tR4t8+fPz/J7YHWrVtn9unevbsvJebMmWP2f+WVV+Jtf+CBB8z7vGXLlrhtup/+zuhx9Xv77bfNdn2d/nMv8BgH7us/RiNHjox3jKpXr+4rXLiwOY/9v9e6PdDRo0d9RYoU8XXo0CHR+5s7d25zvgRK+Lus99frI0aMSPZYpOWcu9T7DQAA0sfTTz8dL0dPaf70+eefm+uayyZHc9KEef3FkDf/D3kzeTN5
" }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "\n", "=== RESULTS SUMMARY ===\n", "\n", "Classical Methods:\n", "LogisticRegression - Val: 0.8000, Test: 0.7669\n", "SVC - Val: 0.8080, Test: 0.7846\n", "MLP - Val: 0.8040, Test: 0.7701\n", "\n", "Quantum Methods (best per mode count):\n", "2 modes - Val: 0.5800, Test: 0.5691\n", "4 modes - Val: 0.8120, Test: 0.7990\n", "6 modes - Val: 0.8360, Test: 0.8280\n", "8 modes - Val: 0.8480, Test: 0.8376\n" ] } ], "execution_count": 40 }, { "cell_type": "markdown", "source": [ "\n", " ### 9.1 Save Results\n" ], "metadata": { "collapsed": false } }, { "cell_type": "code", "source": [ "save_experiment_results(results, \"quantum_llm_finetuning_results.json\")\n", "\n", "print(\"\\nExperiment completed successfully!\")\n", "print(\"Results saved to ./results/quantum_llm_finetuning_results.json\")" ], "metadata": { "collapsed": false, "ExecuteTime": { "end_time": "2025-06-09T16:24:04.900244Z", "start_time": "2025-06-09T16:24:04.895690Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Results saved. Total experiments: 2\n", "\n", "Experiment completed successfully!\n", "Results saved to ./results/quantum_llm_finetuning_results.json\n" ] } ], "execution_count": 41 }, { "cell_type": "markdown", "source": [ "\n", " ## 10. Analysis and Conclusions\n", "\n", " Based on the experimental results, we can draw several insights:\n", "\n", " ### Performance Comparison\n", "\n", " 1. **Classical Baselines**:\n", " - Logistic Regression provides a strong baseline despite its simplicity\n", " - SVM often performs competitively in few-shot scenarios\n", " - MLP can capture non-linear patterns but may overfit with limited data\n", "\n", " 2. **Quantum Classifiers**:\n", " - Performance varies strongly with the number of modes and photons\n", " - The 2-mode circuit underperforms every baseline, while the 4-mode circuit is already on par with the classical heads\n", " - The 6- and 8-mode circuits achieve the best validation and test accuracy in this experiment, at the cost of heavier simulation; even larger circuits may become harder to optimize\n", "\n", " ### Key Observations\n", "\n", " - **Few-shot learning**: With only 8 samples per class, every head is prone to overfitting, so small accuracy differences should be interpreted with caution\n", " - **Quantum advantage**: The larger photonic heads outperform the classical baselines on this task, but they require careful hyperparameter tuning\n", " - **Computational trade-offs**: Quantum simulations are computationally intensive compared to classical methods\n", "\n", " ### Future Directions\n", "\n", " 1. **Scaling studies**: Test with more training samples to see if quantum models benefit from larger datasets\n", " 2. **Architecture exploration**: Try different quantum circuit designs and encoding strategies\n", " 3. **Hardware implementation**: Evaluate on real quantum photonic hardware when available\n", " 4. **Hybrid approaches**: Combine classical and quantum layers for potentially better performance\n", "\n", " The results demonstrate that quantum photonic classifiers can achieve competitive performance in NLP tasks, opening new avenues for quantum-enhanced machine learning in language processing." ], "metadata": { "collapsed": false } } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "2.7.6" } }, "nbformat": 4, "nbformat_minor": 0 }