merlin.pcvl_pytorch package

class merlin.pcvl_pytorch.CircuitConverter(circuit, input_specs=None, dtype=torch.complex64, device=torch.device('cpu'))

Bases: object

Convert a parameterized Perceval circuit into a differentiable PyTorch unitary matrix.

This class converts Perceval quantum circuits into PyTorch tensors that can be used in neural network training with automatic differentiation. It supports batch processing for efficient training and handles various quantum components like beam splitters, phase shifters, and unitary operations.

Supported Components:
  • PS (Phase Shifter)

  • BS (Beam Splitter)

  • PERM (Permutation)

  • Unitary (Generic unitary matrix)

  • Barrier (no-op, removed during compilation)

Attributes:

circuit: The Perceval circuit to convert

param_mapping: Maps parameter names to tensor indices

device: PyTorch device for tensor operations

tensor_cdtype: Complex tensor dtype

tensor_fdtype: Float tensor dtype

Example:

Basic usage with a single phase shifter:

>>> import torch
>>> import perceval as pcvl
>>> from merlin.pcvl_pytorch.locirc_to_tensor import CircuitConverter
>>>
>>> # Create a simple circuit with one phase shifter
>>> circuit = pcvl.Circuit(1) // pcvl.PS(pcvl.P("phi"))
>>>
>>> # Convert to PyTorch with gradient tracking
>>> converter = CircuitConverter(circuit, input_specs=["phi"])
>>> phi_params = torch.tensor([0.5], requires_grad=True)
>>> unitary = converter.to_tensor(phi_params)
>>> print(unitary.shape)  # torch.Size([1, 1])

Multiple parameters with grouping:

>>> # Circuit with multiple phase shifters
>>> circuit = (pcvl.Circuit(2)
...            // pcvl.PS(pcvl.P("theta1"))
...            // (1, pcvl.PS(pcvl.P("theta2"))))
>>>
>>> converter = CircuitConverter(circuit, input_specs=["theta"])
>>> theta_params = torch.tensor([0.1, 0.2], requires_grad=True)
>>> unitary = converter.to_tensor(theta_params)
>>> print(unitary.shape)  # torch.Size([2, 2])

Batch processing for training:

>>> # Reuse the single-mode "phi" circuit from the first example
>>> circuit = pcvl.Circuit(1) // pcvl.PS(pcvl.P("phi"))
>>> # Batch of parameter values: one row per sample
>>> batch_params = torch.tensor([[0.1], [0.2], [0.3]], requires_grad=True)
>>> converter = CircuitConverter(circuit, input_specs=["phi"])
>>> batch_unitary = converter.to_tensor(batch_params)
>>> print(batch_unitary.shape)  # torch.Size([3, 1, 1])

Training integration:

>>> # Training loop with beam splitter
>>> circuit = pcvl.Circuit(2) // pcvl.BS.Rx(pcvl.P("theta"))
>>> converter = CircuitConverter(circuit, ["theta"])
>>> theta = torch.tensor([0.5], requires_grad=True)
>>> optimizer = torch.optim.Adam([theta], lr=0.01)
>>>
>>> for step in range(10):
...     optimizer.zero_grad()
...     unitary = converter.to_tensor(theta)
...     loss = some_loss_function(unitary)
...     loss.backward()
...     optimizer.step()
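
Mixing component types (a minimal sketch; pcvl.PERM, pcvl.Unitary and pcvl.Matrix.random_unitary are standard Perceval constructors and are assumed here to correspond to the Supported Components listed above):

>>> # Circuit combining a phase shifter, a mode permutation and a fixed unitary
>>> circuit = (pcvl.Circuit(2)
...            // pcvl.PS(pcvl.P("phi"))
...            // pcvl.PERM([1, 0])
...            // pcvl.Unitary(pcvl.Matrix.random_unitary(2)))
>>> converter = CircuitConverter(circuit, input_specs=["phi"])
>>> unitary = converter.to_tensor(torch.tensor([0.3], requires_grad=True))
>>> print(unitary.shape)  # torch.Size([2, 2])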

set_dtype(dtype)

Set the tensor data types for float and complex operations.

Args:

dtype: Target dtype (float32/complex64 or float64/complex128)

Raises:

TypeError: If dtype is not supported
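
For instance, switching an existing converter to double precision (a minimal sketch, continuing the phase-shifter example above; parameter tensors are then expected in the matching float dtype):

>>> circuit = pcvl.Circuit(1) // pcvl.PS(pcvl.P("phi"))
>>> converter = CircuitConverter(circuit, input_specs=["phi"])
>>> converter.set_dtype(torch.float64)  # float64 parameters, complex128 unitary
>>> unitary = converter.to_tensor(torch.tensor([0.5], dtype=torch.float64))
>>> print(unitary.dtype)  # torch.complex128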

to(dtype, device)

Move the converter to a specific device and dtype.

Args:

dtype: Target tensor dtype (float32/complex64 or float64/complex128)

device: Target device (string or torch.device)

Returns:

Self for method chaining

Raises:

TypeError: If device type or dtype is not supported
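
A minimal sketch of the chaining pattern (continuing the same example; the device string "cpu" is accepted per the Args above):

>>> circuit = pcvl.Circuit(1) // pcvl.PS(pcvl.P("phi"))
>>> converter = CircuitConverter(circuit, input_specs=["phi"])
>>> converter = converter.to(torch.float64, "cpu")  # returns self, so calls can be chained
>>> unitary = converter.to_tensor(torch.tensor([0.5], dtype=torch.float64))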

to_tensor(*input_params, batch_size=None)

Convert the parameterized circuit to a PyTorch unitary tensor.

Return type:

Tensor

Args:

*input_params: Variable number of parameter tensors. Each tensor has shape (num_params,) or (batch_size, num_params), corresponding to input_specs order.

batch_size: Explicit batch size. If None, inferred from the input tensors.

Returns:

Complex unitary tensor of shape (circuit.m, circuit.m) for single samples, or (batch_size, circuit.m, circuit.m) for batched inputs.

Raises:

ValueError: If the wrong number of input tensors is provided.

TypeError: If input_params is not a list or tuple.
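
A hedged sketch with two parameter groups passed as separate tensors, one per entry of input_specs and in the same order (the grouping-by-name behavior is assumed to match the "theta" grouping example above):

>>> circuit = (pcvl.Circuit(2)
...            // pcvl.PS(pcvl.P("theta"))
...            // pcvl.BS.Rx(pcvl.P("phi")))
>>> converter = CircuitConverter(circuit, input_specs=["theta", "phi"])
>>> theta = torch.tensor([0.1], requires_grad=True)
>>> phi = torch.tensor([0.2], requires_grad=True)
>>> unitary = converter.to_tensor(theta, phi)  # one tensor per input_specs entry
>>> print(unitary.shape)  # torch.Size([2, 2])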

merlin.pcvl_pytorch.build_slos_distribution_computegraph(m, n_photons, output_map_func=None, computation_space=None, no_bunching=None, keep_keys=True, device=None, dtype=torch.float32, index_photons=None)

Construct a reusable SLOS computation graph.

Return type:

SLOSComputeGraph

Parameters

m : int

Number of modes in the circuit.

n_photons : int

Total number of photons injected in the circuit.

output_map_func : callable, optional

Mapping applied to each output Fock state, allowing post-processing.

computation_space : ComputationSpace, optional

keep_keys : bool, optional

Whether to keep the list of mapped Fock states.

device : torch.device, optional

Device on which tensors should be allocated.

dtype : torch.dtype, optional

Real dtype controlling numerical precision.

index_photons : list[tuple[int, ...]], optional

Bounds for each photon placement.

Returns

SLOSComputeGraph

Pre-built computation graph ready for repeated evaluations.
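
A usage sketch that pre-builds the graph once so it can be reused across evaluations (only the documented arguments are shown; the evaluation interface is provided by the returned SLOSComputeGraph):

>>> import torch
>>> from merlin.pcvl_pytorch import build_slos_distribution_computegraph
>>>
>>> # Build the SLOS graph once for 4 modes and 2 photons, single precision on CPU
>>> slos_graph = build_slos_distribution_computegraph(
...     m=4,
...     n_photons=2,
...     keep_keys=True,
...     device=torch.device("cpu"),
...     dtype=torch.float32,
... )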

Submodules