merlin.measurement.photon_loss module

Photon loss utilities let you attach Perceval NoiseModel instances directly to a perceval.Experiment. The resulting QuantumLayer automatically inserts a PhotonLossTransform before any detector mapping, expanding the classical basis with loss outcomes whilst keeping probability normalisation intact.
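A minimal end-to-end sketch is shown below. The Perceval calls (NoiseModel(transmittance=...), the Experiment constructor and its noise keyword) are assumptions about a recent Perceval release rather than part of this module's API; adapt them to your installed version.

>>> import perceval as pcvl
>>> # Assumed Perceval API: 85% transmittance acts as the per-mode survival probability.
>>> noise = pcvl.NoiseModel(transmittance=0.85)
>>> # Assumed constructor: attach the noise model when the experiment is built.
>>> experiment = pcvl.Experiment(pcvl.BS(), noise=noise)
>>> # A QuantumLayer built from this experiment resolves the noise model through
>>> # resolve_photon_loss() and inserts a PhotonLossTransform before any detector mapping.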

class merlin.measurement.photon_loss.PhotonLossTransform(simulation_keys, survival_probs, *, dtype=None, device=None)

Bases: Module

Linear map applying per-mode photon loss to a Fock probability vector.

Args:

simulation_keys: Iterable describing the raw Fock states produced by the simulator (as tuples or lists of integers).

survival_probs: One survival probability per optical mode.

dtype: Optional torch dtype for the transform matrix. Defaults to torch.float32.

device: Optional device used to stage the transform matrix.
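For illustration, a transform for two modes carrying a single photon can be built directly from the documented arguments; the survival probabilities below are made up, not library defaults.

>>> import torch
>>> from merlin.measurement.photon_loss import PhotonLossTransform
>>> # Raw Fock states produced by the simulator for 2 modes carrying 1 photon.
>>> simulation_keys = [(1, 0), (0, 1)]
>>> # Illustrative survival probabilities: 90% on mode 0, 80% on mode 1.
>>> transform = PhotonLossTransform(simulation_keys, [0.9, 0.8], dtype=torch.float32)
>>> transform.is_identity
False
>>> # transform.output_keys now also contains lost-photon outcomes such as (0, 0),
>>> # and transform.output_size counts the loss-expanded classical outcomes.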

forward(distribution)

Apply the photon loss transform to a Fock probability vector.

Return type:

Tensor

Args:

distribution: A Fock probability vector as a 1D torch tensor.

Returns:

A Fock probability vector after photon loss.
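Continuing the illustrative transform from above, forward() maps a normalised distribution over the simulation keys onto the loss-expanded basis; the total probability stays at one because the missing mass lands on lost-photon outcomes.

>>> import torch
>>> from merlin.measurement.photon_loss import PhotonLossTransform
>>> transform = PhotonLossTransform([(1, 0), (0, 1)], [0.9, 0.8])
>>> distribution = torch.tensor([0.5, 0.5])  # equal weight on (1, 0) and (0, 1)
>>> output = transform(distribution)
>>> # `output` is indexed by transform.output_keys and still sums to 1; here
>>> # 0.5 * 0.1 + 0.5 * 0.2 = 0.15 of the mass moves onto the lost-photon outcome.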

property is_identity: bool

Whether the transform corresponds to perfect transmission.

property output_keys: list[tuple[int, ...]]

Classical Fock keys after photon loss.

property output_size: int

Number of classical outcomes after photon loss.

to(*args, **kwargs)

Move and/or cast the parameters and buffers.

This can be called as

to(device=None, dtype=None, non_blocking=False)
to(dtype, non_blocking=False)
to(tensor, non_blocking=False)
to(memory_format=torch.channels_last)

Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.

See below for examples.

Note

This method modifies the module in-place.

Args:

device (torch.device): the desired device of the parameters and buffers in this module

dtype (torch.dtype): the desired floating point or complex dtype of the parameters and buffers in this module

tensor (torch.Tensor): Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module

memory_format (torch.memory_format): the desired memory format for 4D parameters and buffers in this module (keyword only argument)

Returns:

Module: self

Examples:

>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> # xdoctest: +REQUIRES(env:TORCH_DOCTEST_CUDA1)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)

>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
class merlin.measurement.photon_loss.Sequence

Bases: Reversible, Collection

All the operations on a read-only sequence.

Concrete subclasses must override __new__ or __init__, __getitem__, and __len__.

count(value) -> integer -- return number of occurrences of value

index(value[, start[, stop]]) -> integer -- return first index of value.
Raises ValueError if the value is not present.

Supporting start and stop arguments is optional, but recommended.

merlin.measurement.photon_loss.cast(typ, val)

Cast a value to a type.

This returns the value unchanged. To the type checker this signals that the return value has the designated type, but at runtime we intentionally don’t check anything (we want this to be as fast as possible).
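For example, mirroring the behaviour of typing.cast:

>>> from merlin.measurement.photon_loss import cast
>>> cast(tuple, (1, 0))  # returned unchanged at runtime; only the type checker is informed
(1, 0)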

merlin.measurement.photon_loss.resolve_photon_loss(experiment, n_modes)

Resolve photon loss from the experiment’s noise model.

Return type:

tuple[list[float], bool]

Args:

experiment: The quantum experiment carrying the noise model.

n_modes: Number of photonic modes to cover.

Returns:

Tuple containing the per-mode survival probabilities and a flag indicating whether an effective noise model was provided.
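Reusing the Perceval sketch from the top of this page (with the same caveat that the NoiseModel and Experiment signatures are assumptions about your Perceval version):

>>> import perceval as pcvl
>>> from merlin.measurement.photon_loss import resolve_photon_loss
>>> noise = pcvl.NoiseModel(transmittance=0.85)           # assumed Perceval signature
>>> experiment = pcvl.Experiment(pcvl.BS(), noise=noise)  # assumed Perceval signature
>>> survival_probs, has_noise = resolve_photon_loss(experiment, n_modes=2)
>>> # survival_probs is then a per-mode list such as [0.85, 0.85], and has_noise
>>> # is True because an effective noise model was provided.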