merlin.pcvl_pytorch.locirc_to_tensor module

class merlin.pcvl_pytorch.locirc_to_tensor.CircuitConverter(circuit, input_specs=None, dtype=torch.complex64, device=device(type='cpu'))

Bases: object

Convert a parameterized Perceval circuit into a differentiable PyTorch unitary matrix.

This class converts Perceval quantum circuits into PyTorch tensors that can be used in neural network training with automatic differentiation. It supports batch processing for efficient training and handles components such as beam splitters, phase shifters, permutations, and generic unitaries.

Parameters:
  • circuit (pcvl.Circuit) – Perceval circuit to convert.

  • input_specs (list[str] | None) – Parameter name prefixes used to group parameters into input tensors.

  • dtype (torch.dtype) – Target tensor dtype.

  • device (torch.device) – Device used for tensor operations.

Notes

Supported Components:
  • PS (Phase Shifter)

  • BS (Beam Splitter)

  • PERM (Permutation)

  • Unitary (Generic unitary matrix)

  • Barrier (no-op, removed during compilation)
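The two parameterized components above can be sketched as plain PyTorch matrices. The conventions assumed here (a phase shifter acting as [[exp(i*phi)]] on its mode, and BS.Rx built from cos/sin of theta/2) follow Perceval's usual definitions and are not taken from this module's source:

```python
import torch

def ps(phi: torch.Tensor) -> torch.Tensor:
    """1x1 phase-shifter unitary on a single mode (assumed convention)."""
    return torch.exp(1j * phi.to(torch.complex64)).reshape(1, 1)

def bs_rx(theta: torch.Tensor) -> torch.Tensor:
    """2x2 Rx beam-splitter unitary (assumed convention)."""
    half = theta.to(torch.complex64) / 2
    c, s = torch.cos(half), torch.sin(half)
    return torch.stack([torch.stack([c, 1j * s]),
                        torch.stack([1j * s, c])])
```

Both matrices are unitary for any real parameter value, which is what lets the converter compose them into a unitary for the full circuit.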

Example:

Basic usage with a single phase shifter:

>>> import torch
>>> import perceval as pcvl
>>> from merlin.pcvl_pytorch.locirc_to_tensor import CircuitConverter
>>>
>>> # Create a simple circuit with one phase shifter
>>> circuit = pcvl.Circuit(1) // pcvl.PS(pcvl.P("phi"))
>>>
>>> # Convert to PyTorch with gradient tracking
>>> converter = CircuitConverter(circuit, input_specs=["phi"])
>>> phi_params = torch.tensor([0.5], requires_grad=True)
>>> unitary = converter.to_tensor(phi_params)
>>> print(unitary.shape)  # torch.Size([1, 1])

Multiple parameters with grouping:

>>> # Circuit with multiple phase shifters
>>> circuit = (pcvl.Circuit(2)
...            // pcvl.PS(pcvl.P("theta1"))
...            // (1, pcvl.PS(pcvl.P("theta2"))))
>>>
>>> converter = CircuitConverter(circuit, input_specs=["theta"])
>>> theta_params = torch.tensor([0.1, 0.2], requires_grad=True)
>>> unitary = converter.to_tensor(theta_params)
>>> print(unitary.shape)  # torch.Size([2, 2])

Batch processing for training:

>>> # Reusing the single phase-shifter circuit from the first example
>>> circuit = pcvl.Circuit(1) // pcvl.PS(pcvl.P("phi"))
>>> batch_params = torch.tensor([[0.1], [0.2], [0.3]], requires_grad=True)
>>> converter = CircuitConverter(circuit, input_specs=["phi"])
>>> batch_unitary = converter.to_tensor(batch_params)
>>> print(batch_unitary.shape)  # torch.Size([3, 1, 1])

Training integration:

>>> # Training loop with beam splitter
>>> circuit = pcvl.Circuit(2) // pcvl.BS.Rx(pcvl.P("theta"))
>>> converter = CircuitConverter(circuit, ["theta"])
>>> theta = torch.tensor([0.5], requires_grad=True)
>>> optimizer = torch.optim.Adam([theta], lr=0.01)
>>>
>>> for step in range(10):
...     optimizer.zero_grad()
...     unitary = converter.to_tensor(theta)
...     loss = some_loss_function(unitary)  # user-defined scalar loss
...     loss.backward()
...     optimizer.step()

set_dtype(dtype)

Set the tensor data types for float and complex operations.

Parameters:

dtype (torch.dtype) – Target dtype (float32/complex64 or float64/complex128).

Raises:

TypeError – If dtype is not supported.
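The pairing between float and complex dtypes can be sketched as a small lookup. This mapping is an assumption mirroring the float32/complex64 and float64/complex128 pairs documented above, not code from the module:

```python
import torch

# Assumed pairing rule: each supported float dtype maps to the complex
# dtype used for the unitary entries; complex dtypes map to themselves.
_COMPLEX_FOR = {
    torch.float32: torch.complex64,
    torch.complex64: torch.complex64,
    torch.float64: torch.complex128,
    torch.complex128: torch.complex128,
}

def resolve_complex_dtype(dtype: torch.dtype) -> torch.dtype:
    """Return the complex dtype for the unitary; raise on unsupported input."""
    if dtype not in _COMPLEX_FOR:
        raise TypeError(f"unsupported dtype: {dtype}")
    return _COMPLEX_FOR[dtype]
```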

to(dtype, device)

Move the converter's tensors to the given dtype and device.

Parameters:
  • dtype (torch.dtype) – Target tensor dtype (float32/complex64 or float64/complex128).

  • device (str | torch.device) – Target device (string or torch.device).

Returns:

self for method chaining.

Return type:

CircuitConverter

Raises:

TypeError – If device type is not supported.

to_tensor(*input_params, batch_size=None)

Convert the parameterized circuit to a PyTorch unitary tensor.

Parameters:
  • input_params (torch.Tensor) – Variable number of parameter tensors. Each tensor has shape (num_params,) or (batch_size, num_params) in the order of input_specs.

  • batch_size (int | None) – Explicit batch size. If None, it is inferred from the input tensors.

Returns:

Complex unitary tensor of shape (circuit.m, circuit.m) for a single sample or (batch_size, circuit.m, circuit.m) for batched inputs.

Return type:

torch.Tensor

Raises:
  • ValueError – If the wrong number of input tensors is provided.

  • TypeError – If input_params is not a list or tuple.