merlin.pcvl_pytorch.slos_torchscript module
This module extends slos_torch.py with TorchScript-optimized computation graphs for photonic quantum circuit simulations. It separates the graph construction from the actual computation for improved performance.
The optimized implementation pre-builds the computation graph based on the input state configuration, which can then be reused for multiple unitary evaluations.
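For illustration, a minimal usage sketch of this build-once/evaluate-many pattern, using the factory and methods documented below (the random unitaries are generated via a QR decomposition purely for the example):
>>> import torch
>>> from merlin.pcvl_pytorch.slos_torchscript import build_slos_distribution_computegraph
>>> # Build the graph once for a 4-mode circuit carrying 2 photons
>>> graph = build_slos_distribution_computegraph(4, 2)
>>> input_state = [1, 1, 0, 0]
>>> # Reuse the same graph for several unitaries (complex dtype matching float32)
>>> for _ in range(3):
...     u, _ = torch.linalg.qr(torch.randn(4, 4, dtype=torch.cfloat))
...     keys, probs = graph.compute_probs(u, input_state)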
- class merlin.pcvl_pytorch.slos_torchscript.Callable
Bases: object
- enum merlin.pcvl_pytorch.slos_torchscript.ComputationSpace(value)
Bases: str, Enum
Enumeration of supported computational subspaces.
- Member Type:
str
Valid values are as follows:
- FOCK = <ComputationSpace.FOCK: 'fock'>
- UNBUNCHED = <ComputationSpace.UNBUNCHED: 'unbunched'>
- DUAL_RAIL = <ComputationSpace.DUAL_RAIL: 'dual_rail'>
The Enum and its members also have the following methods:
- classmethod default(*, no_bunching)
Derive the default computation space from the legacy no_bunching flag.
- Return type:
ComputationSpace
- classmethod coerce(value)
Normalize user-provided values (enum instances or case-insensitive strings).
- Return type:
ComputationSpace
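A short illustrative sketch of these helpers (the member returned by default() depends on the flag, so it is only assigned here, not asserted):
>>> from merlin.pcvl_pytorch.slos_torchscript import ComputationSpace
>>> ComputationSpace.coerce("FOCK") is ComputationSpace.FOCK  # case-insensitive strings
True
>>> ComputationSpace.coerce(ComputationSpace.DUAL_RAIL) is ComputationSpace.DUAL_RAIL
True
>>> space = ComputationSpace.default(no_bunching=True)  # legacy flag mapped to a member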
- class merlin.pcvl_pytorch.slos_torchscript.SLOSComputeGraph(m, n_photons, output_map_func=None, computation_space=ComputationSpace.UNBUNCHED, keep_keys=True, device=None, dtype=torch.float32, index_photons=None)
Bases: object
A class that builds and stores the computation graph for the SLOS algorithm.
This separates the graph construction (which depends only on input state, computation_space, and output_map_func) from the actual computation using the unitary matrix.
- compute(unitary, input_state)
Compute the amplitudes using the pre-built graph.
- Return type:
tuple[list[tuple[int,...]],Tensor]
- Args:
unitary (torch.Tensor): Single unitary matrix [m x m] or batch of unitaries [b x m x m]. The unitary should be provided in the complex dtype corresponding to the graph’s dtype: for torch.float32 use torch.cfloat, for torch.float64 use torch.cdouble.
input_state (list[int]): Input state of length self.m containing self.n_photons photons.
- Returns:
- Tuple[List[Tuple[int, …]], torch.Tensor]:
List of tuples representing output Fock state configurations
Amplitudes distribution tensor
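A brief single-unitary sketch (a hypothetical 2-mode Hong-Ou-Mandel configuration; the full Fock space is selected so that bunched outputs are included):
>>> import torch
>>> from merlin.pcvl_pytorch.slos_torchscript import ComputationSpace, SLOSComputeGraph
>>> graph = SLOSComputeGraph(2, 2, computation_space=ComputationSpace.FOCK)
>>> # The real dtype defaults to torch.float32, so the unitary is given as torch.cfloat
>>> bs = torch.tensor([[1, 1], [1, -1]], dtype=torch.cfloat) / 2 ** 0.5  # 50:50 beamsplitter
>>> keys, amplitudes = graph.compute(bs, [1, 1])
>>> probs = amplitudes.abs() ** 2  # probabilities from the amplitudes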
- compute_batch(unitary, input_states)
Compute the probability distribution using the pre-built graph.
- Return type:
tuple[list[tuple[int,...]],Tensor]
- Args:
unitary (torch.Tensor): Single unitary matrix [m x m] or batch of unitaries [b x m x m]. The unitary should be provided in the complex dtype corresponding to the graph’s dtype: for torch.float32 use torch.cfloat, for torch.float64 use torch.cdouble.
input_states: Input states to evaluate, each of length self.m containing self.n_photons photons.
- Returns:
- Tuple[List[Tuple[int, …]], torch.Tensor]:
List of tuples representing output Fock state configurations
Probability distribution tensor
- compute_pa_inc(unitary, input_state_prev, input_state, changed_unitary=False)
- Return type:
tuple[list[tuple[int,...]],Tensor]
- compute_probs(unitary, input_state)
Compute the probability distribution using the pre-built graph.
- Args:
unitary (torch.Tensor): Single unitary matrix [m x m] or batch of unitaries [b x m x m]. The unitary should be provided in the complex dtype corresponding to the graph’s dtype: for torch.float32 use torch.cfloat, for torch.float64 use torch.cdouble.
input_state (list[int]): Input state of length self.m containing self.n_photons photons.
- Returns:
- Tuple[List[Tuple[int, …]], torch.Tensor]:
List of tuples representing output Fock state configurations
Probability distribution tensor
- compute_probs_from_amplitudes(amplitudes)
- to(device)
Moves the converter to a specific device.
- Parameters:
dtype – The data type to use for the tensors; either a float or complex dtype can be specified. Supported dtypes are torch.float32 or torch.complex64, and torch.float64 or torch.complex128.
device (str | torch.device) – The device to move the converter to.
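For example, a graph built on the CPU can be moved before evaluation (an illustrative sketch; only the device argument from the rendered signature is used):
>>> import torch
>>> from merlin.pcvl_pytorch.slos_torchscript import SLOSComputeGraph
>>> graph = SLOSComputeGraph(4, 2)
>>> if torch.cuda.is_available():
...     graph.to("cuda")
...     u = torch.eye(4, dtype=torch.cfloat, device="cuda")
...     keys, amplitudes = graph.compute(u, [1, 1, 0, 0])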
- merlin.pcvl_pytorch.slos_torchscript.build_slos_distribution_computegraph(m, n_photons, output_map_func=None, computation_space=None, no_bunching=None, keep_keys=True, device=None, dtype=torch.float32, index_photons=None)
Construct a reusable SLOS computation graph.
- Return type:
SLOSComputeGraph
Parameters
- m : int
Number of modes in the circuit.
- n_photons : int
Total number of photons injected in the circuit.
- output_map_func : callable, optional
Mapping applied to each output Fock state, allowing post-processing.
- computation_space : ComputationSpace, optional
Computational subspace in which the simulation is performed.
- no_bunching : bool, optional
Legacy flag; when computation_space is not given, the default space is derived from it via ComputationSpace.default().
- keep_keys : bool, optional
Whether to keep the list of mapped Fock states.
- device : torch.device, optional
Device on which tensors should be allocated.
- dtype : torch.dtype, optional
Real dtype controlling numerical precision.
- index_photons : list[tuple[int, ...]], optional
Bounds for each photon placement.
Returns
- SLOSComputeGraph
Pre-built computation graph ready for repeated evaluations.
- merlin.pcvl_pytorch.slos_torchscript.compute_slos_distribution(unitary, input_state, output_map_func=None, computation_space=ComputationSpace.UNBUNCHED, keep_keys=True, index_photons=None)
TorchScript-optimized version of pytorch_slos_output_distribution.
This function builds the computation graph first, then uses it to compute the probabilities. For repeated calculations with the same input configuration but different unitaries, it’s more efficient to use build_slos_distribution_computegraph() directly.
- Return type:
tuple[list[tuple[int,...]],Tensor]
- Args:
unitary (torch.Tensor): Single unitary matrix [m x m] or batch of unitaries [b x m x m]
input_state (List[int]): Number of photons in every mode of the circuit
output_map_func (callable, optional): Function that maps output states
computation_space (ComputationSpace): Computational subspace in which the simulation is performed
keep_keys (bool): If True, output state keys are returned
index_photons: List of tuples (first_integer, second_integer). The first_integer is the lowest mode index the photon can occupy and the second_integer is the highest.
- Returns:
- Tuple[List[Tuple[int, …]], torch.Tensor]:
List of tuples representing output Fock state configurations
Probability distribution tensor
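A one-shot sketch of this convenience function (the random unitary is generated only for the example):
>>> import torch
>>> from merlin.pcvl_pytorch.slos_torchscript import compute_slos_distribution
>>> u, _ = torch.linalg.qr(torch.randn(3, 3, dtype=torch.cfloat))  # random 3x3 unitary
>>> keys, probs = compute_slos_distribution(u, [1, 1, 0])
>>> # keys lists the output Fock states, probs the corresponding probabilities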
- merlin.pcvl_pytorch.slos_torchscript.layer_compute_backward(unitary, sources, destinations, modes, m)
Compute amplitudes for a single layer using vectorized operations.
- Return type:
Tensor
- Args:
unitary: Batch of unitary matrices [batch_size, m, m]
prev_amplitudes: Previous layer amplitudes [batch_size, prev_size]
sources: Source indices for operations [num_ops]
destinations: Destination indices for operations [num_ops]
modes: Mode indices for operations [num_ops]
p: Photon index for this layer
- Returns:
Next layer amplitudes [batch_size, next_size]
- merlin.pcvl_pytorch.slos_torchscript.layer_compute_batch(unitary, prev_amplitudes, sources, destinations, modes, p)
Propagate a layer of the SLOS computation graph while evaluating several coherent input components in parallel.
The pre-computed sources, destinations and modes tensors encode the sparse transitions that must be applied to go from the amplitudes of the previous layer to the amplitudes of the current layer. Each transition picks a value from prev_amplitudes using sources (the index of the parent state), multiplies it by the relevant unitary element U[modes, p] for the photon that is currently being injected, and scatters the contribution into destinations (the index of the child state). When several input superposition components need to be evaluated, p provides the photon indices for every component and the computations are vectorised along the last axis.
- Return type:
Tensor
- Args:
- unitary (torch.Tensor): Batch of unitary matrices with shape [batch_size, m, m]. The unitary entries are looked up according to modes and the photon indices p, so the tensor can reside on either CPU or CUDA as long as it matches the device of prev_amplitudes.
- prev_amplitudes (torch.Tensor): Complex amplitudes produced by the previous layer with shape [batch_size, prev_size, num_inputs]. The third dimension indexes the different coherent input components.
- sources (torch.Tensor): Integer tensor of shape [num_ops] containing the index of the parent state for every sparse transition.
- destinations (torch.Tensor): Integer tensor of shape [num_ops] with the index within the current layer where each contribution must be accumulated.
- modes (torch.Tensor): Integer tensor of shape [num_ops] describing which output mode of the unitary matrix is involved in each transition.
- p (list[int]): Photon occupation indices for the layer, one entry per superposition component. The list length must match the third dimension of prev_amplitudes.
- Returns:
torch.Tensor: Tensor with shape [batch_size, next_size, num_inputs] that contains the amplitudes of the current layer after applying all transitions. next_size equals destinations.max() + 1, so the method adapts automatically to the sparsity structure.
- Notes:
The function is side-effect free: input tensors are never modified in place.
Zero operations (len(sources) == 0) short-circuit to the input in order to keep TorchScript graphs simple and avoid unnecessary tensor allocations.
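The gather/multiply/scatter step described above can be sketched for a single input component as follows (a simplified illustration, not the TorchScript implementation itself, which additionally vectorises over the components indexed by p):
>>> import torch
>>> def layer_step_single(unitary, prev_amplitudes, sources, destinations, modes, p):
...     # Gather parent amplitudes, weight them by U[mode, p] for the injected photon,
...     # and scatter-add each contribution into its child state.
...     next_size = int(destinations.max().item()) + 1
...     out = torch.zeros(prev_amplitudes.shape[0], next_size,
...                       dtype=prev_amplitudes.dtype, device=prev_amplitudes.device)
...     contributions = prev_amplitudes[:, sources] * unitary[:, modes, p]
...     out.index_add_(1, destinations, contributions)
...     return out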
- merlin.pcvl_pytorch.slos_torchscript.layer_compute_vectorized(unitary, prev_amplitudes, sources, destinations, modes, p)
Compute amplitudes for a single layer using vectorized operations.
- Return type:
Tensor
- Args:
unitary: Batch of unitary matrices [batch_size, m, m]
prev_amplitudes: Previous layer amplitudes [batch_size, prev_size]
sources: Source indices for operations [num_ops]
destinations: Destination indices for operations [num_ops]
modes: Mode indices for operations [num_ops]
p: Photon index for this layer
- Returns:
Next layer amplitudes [batch_size, next_size]
- merlin.pcvl_pytorch.slos_torchscript.load_slos_distribution_computegraph(path)
Load a previously saved SLOS distribution computation graph.
- Args:
path: Path to the saved computation graph
- Returns:
SLOSComputeGraph: Loaded computation graph ready for computations
- Example:
>>> # Save a computation graph (2 modes, 2 photons)
>>> graph = build_slos_distribution_computegraph(2, 2)
>>> graph.save("hom_graph.pt")
>>>
>>> # Later, load the saved graph
>>> loaded_graph = load_slos_distribution_computegraph("hom_graph.pt")
>>>
>>> # Use the loaded graph
>>> unitary = torch.tensor([[0.7071, 0.7071], [0.7071, -0.7071]], dtype=torch.cfloat)
>>> keys, amplitudes = loaded_graph.compute(unitary, [1, 1])
- merlin.pcvl_pytorch.slos_torchscript.prepare_vectorized_operations(operations_list, device=None)
Convert operations list to tensors for vectorized computation.
- Args:
operations_list: List of operations, each as [src_idx, dest_idx, mode_i]
device: Optional device to place tensors on (defaults to CPU if None)
- Returns:
Tuple of tensors: (sources, destinations, modes)
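An illustrative call with a hand-written operations list (the triples follow the [src_idx, dest_idx, mode_i] layout documented above):
>>> from merlin.pcvl_pytorch.slos_torchscript import prepare_vectorized_operations
>>> operations = [[0, 0, 0], [0, 1, 1], [1, 2, 1]]  # [src_idx, dest_idx, mode_i]
>>> sources, destinations, modes = prepare_vectorized_operations(operations)
>>> # Each returned tensor has one entry per operation in operations_list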
- merlin.pcvl_pytorch.slos_torchscript.resolve_float_complex(dtype)
Given a torch dtype representing either the float or complex side, return the matching pair.
- Return type:
tuple[dtype,dtype]
- Args:
dtype: torch float or complex dtype.
- Returns:
Tuple (float_dtype, complex_dtype) ensuring the pair is internally consistent.
- Raises:
TypeError: If the dtype is not one of the supported float/complex types.
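For example, assuming the float32/complex64 and float64/complex128 pairings listed for to() above:
>>> import torch
>>> from merlin.pcvl_pytorch.slos_torchscript import resolve_float_complex
>>> resolve_float_complex(torch.float32)
(torch.float32, torch.complex64)
>>> resolve_float_complex(torch.complex128)
(torch.float64, torch.complex128)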