merlin.pcvl_pytorch.slos_torchscript module
This module extends slos_torch.py with TorchScript-optimized computation graphs for photonic quantum circuit simulations. It separates the graph construction from the actual computation for improved performance.
The optimized implementation pre-builds the computation graph based on the input state configuration, which can then be reused for multiple unitary evaluations.
- class merlin.pcvl_pytorch.slos_torchscript.SLOSComputeGraph(m, n_photons, output_map_func=None, computation_space=ComputationSpace.UNBUNCHED, keep_keys=True, device=None, dtype=torch.float32, index_photons=None)
Bases: object
Build and store a reusable computation graph for the SLOS algorithm.
This separates graph construction, which depends only on the input configuration, from repeated evaluations with different unitary matrices.
- compute(unitary, input_state)
Compute amplitudes with the pre-built graph.
- Parameters:
unitary (torch.Tensor) – Single unitary matrix with shape (m, m) or batch of unitaries with shape (batch_size, m, m). The dtype must match the complex dtype associated with the graph dtype. For example, for torch.float32, use torch.cfloat; for torch.float64, use torch.cdouble.
input_state (list[int]) – Input occupation list of length m containing n_photons photons.
- Returns:
List of tuples representing output Fock state configurations
Amplitudes distribution tensor
- Return type:
tuple[list[tuple[int, …]] | None, torch.Tensor]
- Raises:
ValueError – If the input state, unitary shape, or unitary dtype is invalid for the current graph configuration.
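The real-to-complex dtype pairing described above can be expressed as a small lookup table. The following is an illustrative sketch; `complex_dtype_for` is a hypothetical helper, not part of this module:

```python
import torch

# Map a graph's real dtype to the complex dtype expected for unitaries.
_REAL_TO_COMPLEX = {
    torch.float32: torch.cfloat,   # a.k.a. torch.complex64
    torch.float64: torch.cdouble,  # a.k.a. torch.complex128
}

def complex_dtype_for(real_dtype: torch.dtype) -> torch.dtype:
    """Return the complex dtype matching a graph's real dtype."""
    try:
        return _REAL_TO_COMPLEX[real_dtype]
    except KeyError:
        raise ValueError(f"Unsupported graph dtype: {real_dtype}")

# A unitary built with the complex dtype matching a float32 graph:
unitary = torch.eye(3, dtype=complex_dtype_for(torch.float32))
```

Passing a unitary whose dtype does not follow this pairing is what triggers the ValueError documented above.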
- compute_batch(unitary, input_states)
Compute the probability distribution using the pre-built graph.
- Parameters:
unitary (torch.Tensor) – Single unitary matrix with shape (m, m) or batch of unitaries with shape (batch_size, m, m). The dtype must match the complex dtype associated with the graph dtype. For example, for torch.float32, use torch.cfloat; for torch.float64, use torch.cdouble.
input_states (list[list[int]]) – Collection of input occupation lists evaluated in parallel.
- Returns:
List of tuples representing output Fock state configurations
Probability distribution tensor
- Return type:
tuple[list[tuple[int, …]] | None, torch.Tensor]
- Raises:
ValueError – If the input states, unitary shape, or unitary dtype is invalid for the current graph configuration.
- compute_pa_inc(unitary, input_state_prev, input_state, changed_unitary=False)
Update amplitudes incrementally after a change in input occupation.
- Parameters:
unitary (torch.Tensor) – Single unitary matrix with shape (m, m) or batch of unitaries with shape (batch_size, m, m).
input_state_prev (list[int]) – Previously evaluated input occupation list.
input_state (list[int]) – New input occupation list to evaluate.
changed_unitary (bool) – Whether cached inverse tensors must be recomputed. Default is False.
- Returns:
Updated amplitude tensor.
- Return type:
torch.Tensor
- Raises:
RuntimeError – If no previous amplitudes are available.
ValueError – If the input state, unitary shape, or unitary dtype is invalid for the current graph configuration.
- compute_probs(unitary, input_state)
Compute probabilities with the pre-built graph.
- Parameters:
unitary (torch.Tensor) – Single unitary matrix with shape (m, m) or batch of unitaries with shape (batch_size, m, m). The dtype must match the complex dtype associated with the graph dtype: for torch.float32, use torch.cfloat; for torch.float64, use torch.cdouble.
input_state (list[int]) – Input occupation list of length m containing n_photons photons.
- Returns:
List of tuples representing output Fock state configurations
Probability distribution tensor
- Return type:
tuple[list[tuple[int, …]] | None, torch.Tensor] | torch.Tensor
- compute_probs_from_amplitudes(amplitudes)
Convert amplitudes into probabilities and apply output mapping.
- Parameters:
amplitudes (torch.Tensor) – Amplitude tensor with shape (n_states,) or (batch_size, n_states).
- Returns:
Output keys and probability tensor.
- Return type:
tuple[list[tuple[int, …]] | None, torch.Tensor]
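Absent an output mapping, the amplitude-to-probability conversion is just the squared modulus of each amplitude. A minimal self-contained torch sketch (names are illustrative, not the module's internals):

```python
import torch

def probs_from_amplitudes(amplitudes: torch.Tensor) -> torch.Tensor:
    """Squared modulus of complex amplitudes; works with or without a batch dim."""
    return amplitudes.abs() ** 2

# Balanced two-state superposition: each outcome has probability 0.5.
amps = torch.tensor([2**-0.5 + 0j, 2**-0.5 * 1j])
probs = probs_from_amplitudes(amps)
```

For a normalized amplitude vector, the resulting probabilities sum to 1; the actual method additionally merges outcomes according to `output_map_func`.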
- to(device)
Move graph tensors to a specific device.
- Parameters:
device (str | torch.device) – Target device.
- Returns:
The graph instance moved to device.
- Return type:
SLOSComputeGraph
- Raises:
TypeError – If device is neither a string nor a torch.device.
- merlin.pcvl_pytorch.slos_torchscript.prepare_vectorized_operations(operations_list, device=None)
Convert operations list to tensors for vectorized computation.
- Args:
operations_list: List of operations, each as [src_idx, dest_idx, mode_i]
device: Optional device to place tensors on (defaults to CPU if None)
- Returns:
Tuple of tensors: (sources, destinations, modes)
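Based on the description above, the conversion amounts to unzipping the operation triples into three integer index tensors. A hedged sketch under that assumption (`prepare_ops` is an illustrative stand-in, not the actual implementation):

```python
import torch

def prepare_ops(operations_list, device=None):
    """Split [src_idx, dest_idx, mode_i] triples into three long tensors."""
    if device is None:
        device = torch.device("cpu")
    ops = torch.tensor(operations_list, dtype=torch.long, device=device)
    if ops.numel() == 0:
        # No operations: return three empty index tensors.
        empty = torch.empty(0, dtype=torch.long, device=device)
        return empty, empty.clone(), empty.clone()
    return ops[:, 0], ops[:, 1], ops[:, 2]

sources, destinations, modes = prepare_ops([[0, 1, 2], [1, 3, 0]])
```

Keeping the three columns as separate contiguous tensors is what enables the vectorized gather/scatter in the layer functions below.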
- merlin.pcvl_pytorch.slos_torchscript.layer_compute_vectorized(unitary, prev_amplitudes, sources, destinations, modes, p)
Compute amplitudes for a single layer using vectorized operations.
- Return type:
torch.Tensor
- Args:
unitary: Batch of unitary matrices [batch_size, m, m]
prev_amplitudes: Previous layer amplitudes [batch_size, prev_size]
sources: Source indices for operations [num_ops]
destinations: Destination indices for operations [num_ops]
modes: Mode indices for operations [num_ops]
p: Photon index for this layer
- Returns:
Next layer amplitudes [batch_size, next_size]
- merlin.pcvl_pytorch.slos_torchscript.layer_compute_batch(unitary, prev_amplitudes, sources, destinations, modes, p)
Propagate a layer of the SLOS computation graph while evaluating several coherent input components in parallel.
The pre-computed sources, destinations and modes tensors encode the sparse transitions that must be applied to go from the amplitudes of the previous layer to the amplitudes of the current layer. Each transition picks a value from prev_amplitudes using sources (the index of the parent state), multiplies it by the relevant unitary element U[modes, p] for the photon that is currently being injected, and scatters the contribution into destinations (the index of the child state). When several input superposition components need to be evaluated, p provides the photon indices for every component and the computations are vectorised along the last axis.
- Return type:
torch.Tensor
- Args:
- unitary (torch.Tensor): Batch of unitary matrices with shape [batch_size, m, m]. The unitary entries are looked up according to modes and the photon indices p, so the tensor can reside on either CPU or CUDA as long as it matches the device of prev_amplitudes.
- prev_amplitudes (torch.Tensor): Complex amplitudes produced by the previous layer with shape [batch_size, prev_size, num_inputs]. The third dimension indexes the different coherent input components.
- sources (torch.Tensor): Integer tensor of shape [num_ops] containing the index of the parent state for every sparse transition.
- destinations (torch.Tensor): Integer tensor of shape [num_ops] with the index within the current layer where each contribution must be accumulated.
- modes (torch.Tensor): Integer tensor of shape [num_ops] describing which output mode of the unitary matrix is involved in each transition.
- p (list[int]): Photon occupation indices for the layer, one entry per superposition component. The list length must match the third dimension of prev_amplitudes.
- Returns:
torch.Tensor: Tensor with shape [batch_size, next_size, num_inputs] that contains the amplitudes of the current layer after applying all transitions. next_size equals destinations.max() + 1, so the method adapts automatically to the sparsity structure.
- Notes:
The function is side-effect free: input tensors are never modified in place.
Zero operations (len(sources) == 0) short-circuit to the input in order to keep TorchScript graphs simple and avoid unnecessary tensor allocations.
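The gather-multiply-scatter pattern described above can be reproduced with `index_select` and `index_add_` on a toy single-component layer (treating p as one scalar photon column index). This is a self-contained sketch of the mechanics, not the module's actual TorchScript code:

```python
import torch

def toy_layer_compute(unitary, prev_amplitudes, sources, destinations, modes, p):
    """Gather parent amplitudes, weight by U[mode, p], scatter-add into children."""
    batch_size = prev_amplitudes.shape[0]
    next_size = int(destinations.max().item()) + 1
    out = torch.zeros(batch_size, next_size, dtype=prev_amplitudes.dtype)
    # Gather the parent amplitude of each transition ...
    parents = prev_amplitudes.index_select(1, sources)   # [batch, num_ops]
    # ... look up the unitary entry for the injected photon ...
    weights = unitary[:, modes, p]                       # [batch, num_ops]
    # ... and accumulate each contribution into its child state.
    out.index_add_(1, destinations, parents * weights)
    return out

# Toy case: m = 2 modes, identity unitary, one parent state, photon enters mode 0.
U = torch.eye(2, dtype=torch.cfloat).unsqueeze(0)        # batch of 1
prev = torch.tensor([[1.0 + 0j]])                        # single parent amplitude
src = torch.tensor([0, 0])                               # both children from state 0
dst = torch.tensor([0, 1])
mds = torch.tensor([0, 1])
amps = toy_layer_compute(U, prev, src, dst, mds, p=0)
```

With the identity unitary, only the transition through mode 0 survives, so all amplitude lands in the first child state; `next_size` is derived from `destinations` exactly as the Returns section describes.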
- merlin.pcvl_pytorch.slos_torchscript.layer_compute_backward(unitary, sources, destinations, modes, m)
Compute amplitudes for a single layer using vectorized operations.
- Return type:
torch.Tensor
- Args:
unitary: Batch of unitary matrices [batch_size, m, m]
sources: Source indices for operations [num_ops]
destinations: Destination indices for operations [num_ops]
modes: Mode indices for operations [num_ops]
m: Number of modes in the circuit
- Returns:
Next layer amplitudes [batch_size, next_size]
- merlin.pcvl_pytorch.slos_torchscript.build_slos_distribution_computegraph(m, n_photons, output_map_func=None, computation_space=None, no_bunching=None, keep_keys=True, device=None, dtype=torch.float32, index_photons=None)
Construct a reusable SLOS computation graph.
- Parameters:
m (int) – Number of modes in the circuit.
n_photons (int) – Total number of photons injected in the circuit.
output_map_func (Callable[[tuple[int, ...]], tuple[int, ...] | None] | None) – Mapping applied to each output Fock state, allowing post-processing.
computation_space (ComputationSpace | None) – Logical computation subspace used to build the basis and transitions. When omitted, defaults to ComputationSpace.UNBUNCHED.
no_bunching (bool | None) – Deprecated legacy flag. Use computation_space instead.
keep_keys (bool) – Whether to keep the list of mapped Fock states. Default is True.
device (torch.device | str | None) – Device on which tensors should be allocated.
dtype (torch.dtype) – Real dtype controlling numerical precision. Default is torch.float32.
index_photons (list[tuple[int, ...]] | None) – Bounds for each photon placement.
- Returns:
Pre-built computation graph ready for repeated evaluations.
- Return type:
SLOSComputeGraph
- merlin.pcvl_pytorch.slos_torchscript.load_slos_distribution_computegraph(path)
Load a previously saved SLOS distribution computation graph.
- Parameters:
path (str | os.PathLike[str]) – Path to the saved computation graph.
- Returns:
Loaded computation graph ready for computations.
- Return type:
SLOSComputeGraph
Examples
>>> # Save a computation graph (2 modes, 2 photons)
>>> graph = build_slos_distribution_computegraph(2, 2)
>>> graph.save("hom_graph.pt")
>>>
>>> # Later, load the saved graph
>>> loaded_graph = load_slos_distribution_computegraph("hom_graph.pt")
>>>
>>> # Use the loaded graph
>>> unitary = torch.tensor([[0.7071, 0.7071], [0.7071, -0.7071]], dtype=torch.cfloat)
>>> keys, probs = loaded_graph.compute(unitary, [1, 1])
- merlin.pcvl_pytorch.slos_torchscript.compute_slos_distribution(unitary, input_state, output_map_func=None, computation_space=ComputationSpace.UNBUNCHED, keep_keys=True, index_photons=None)
Compute a SLOS output distribution with a TorchScript-optimized graph.
This function builds the computation graph first, then uses it to compute the probabilities. For repeated calculations with the same input configuration but different unitaries, it is more efficient to use build_slos_distribution_computegraph directly.
- Parameters:
unitary (torch.Tensor) – Single unitary matrix with shape (m, m) or batch of unitaries with shape (batch_size, m, m).
input_state (list[int]) – Number of photons in every mode of the circuit.
output_map_func (Callable[[tuple[int, ...]], tuple[int, ...] | None] | None) – Function that maps output states.
computation_space (ComputationSpace) – Computation subspace used to build the graph. Default is ComputationSpace.UNBUNCHED.
keep_keys (bool) – If True, output state keys are returned. Default is True.
index_photons (list[tuple[int, ...]] | None) – Bounds for each photon placement.
- Returns:
Output keys and probability tensor.
- Return type:
tuple[list[tuple[int, …]] | None, torch.Tensor]
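For intuition, the amplitude the SLOS graph computes for the two-photon, two-mode Hong–Ou–Mandel case can be checked by hand via the permanent of the relevant submatrix. A self-contained torch sketch, deliberately not using the module itself:

```python
import torch

# 50:50 beam splitter unitary.
bs = torch.tensor([[1, 1], [1, -1]], dtype=torch.cfloat) / 2**0.5

# Amplitude for input (1, 1) -> output (1, 1) is the permanent of the
# corresponding 2x2 submatrix: U[0,0]*U[1,1] + U[0,1]*U[1,0].
amp_11 = bs[0, 0] * bs[1, 1] + bs[0, 1] * bs[1, 0]

# Hong-Ou-Mandel interference: the (1, 1) coincidence probability vanishes.
prob_11 = abs(amp_11) ** 2
```

Note that the bunched outcomes (2, 0) and (0, 2), which absorb the missing probability, lie outside the default ComputationSpace.UNBUNCHED subspace.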