merlin.core.probability_distribution
- class merlin.core.probability_distribution.ProbabilityDistribution(tensor, n_modes, n_photons, computation_space=ComputationSpace.FOCK, logical_performance=None, _custom_basis=None)
Bases: object
Probability tensor bundled with Fock metadata and post-filter tracking.
- Parameters:
tensor (torch.Tensor) – Dense or sparse probabilities; leading dimensions are treated as batch axes.
n_modes (int) – Number of modes in the Fock space.
n_photons (int) – Total photon number represented by the distribution.
computation_space (ComputationSpace) – Basis enumeration used to order amplitudes (fock, unbunched, dual_rail).
logical_performance (torch.Tensor | None) – Optional per-batch scalar tracking kept/total probability after filtering.
Notes
Instances are normalized on construction; arithmetic-style temporary unnormalized states are not supported (unlike StateVector). Only shape, device, dtype, and requires_grad are delegated to the underlying torch.Tensor; tensor-like helpers to, clone, detach, and requires_grad_ mirror tensor semantics while keeping metadata and logical performance aligned. Layout-changing tensor operations should be done on the tensor directly, then wrapped again via from_tensor to maintain a consistent basis.
- property basis: Combinadics | FilteredBasis | tuple[tuple[int, ...], ...]
Basis enumeration associated with the current tensor.
- clone()
Return a cloned distribution with metadata and logical performance preserved.
- Returns:
Cloned probability distribution.
- Return type:
ProbabilityDistribution
- computation_space: ComputationSpace = 'fock'
- detach()
Return a detached ProbabilityDistribution sharing data without gradients.
- Returns:
Detached probability distribution.
- Return type:
ProbabilityDistribution
- filter(rule)
Apply post-selection filter and renormalize probabilities.
logical_performance records kept_mass / original_mass per batch.
- Parameters:
rule (Union[ComputationSpace, str, Callable[[tuple[int, ...]], bool], Iterable[Sequence[int]], tuple[Union[ComputationSpace, str], Callable[[tuple[int, ...]], bool]]]) – Computation space alias (fock, unbunched, dual_rail), a predicate, an explicit iterable of allowed states, or a tuple (space, predicate) to combine a computation-space constraint with an additional predicate.
- Returns:
A new, normalized distribution; may shrink its basis when filtering to unbunched or dual_rail in the dense case.
- Return type:
ProbabilityDistribution
- Raises:
ValueError – If dual_rail is selected with incompatible n_modes/n_photons or an unknown computation space is requested.
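For intuition on the dual_rail constraint, here is a hedged pure-Python sketch (an illustration, not Merlin's implementation) assuming the usual dual-rail encoding, where each qubit occupies a pair of modes holding exactly one photon, so a valid configuration needs n_modes == 2 * n_photons:

```python
def dual_rail_states(n_modes, n_photons):
    """Enumerate dual-rail states under the assumed encoding: one photon
    per pair of modes. Raises ValueError for incompatible sizes, mirroring
    the documented behaviour."""
    if n_modes != 2 * n_photons:
        raise ValueError("dual_rail needs n_modes == 2 * n_photons")
    states = [()]
    for _ in range(n_photons):  # append (1, 0) or (0, 1) for each mode pair
        states = [s + pair for s in states for pair in ((1, 0), (0, 1))]
    return states

print(dual_rail_states(4, 2))
# [(1, 0, 1, 0), (1, 0, 0, 1), (0, 1, 1, 0), (0, 1, 0, 1)]
```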
- classmethod from_perceval(distribution, *, dtype=None, device=None, sparse=None)
Construct from a Perceval BSDistribution.
Validates that all entries share the same photon number and mode count.
- Parameters:
distribution (pcvl.BSDistribution) – Input Perceval distribution.
dtype (torch.dtype | None) – Optional override for output tensor dtype.
device (torch.device | None) – Optional override for output tensor placement.
sparse (bool | None) – Force dense or sparse output; default auto-selects based on fill ratio.
- Raises:
ValueError – If the distribution is empty or inconsistent in shape/photon number.
- Return type:
ProbabilityDistribution
- classmethod from_state_vector(state_vector, *, dtype=None, device=None, computation_space=None)
Convert a StateVector to a probability distribution.
- Parameters:
state_vector (Any) – Source amplitudes; must expose to_dense, n_modes, and n_photons.
dtype (torch.dtype | None) – Optional override for output tensor dtype.
device (torch.device | None) – Optional override for output tensor placement.
computation_space (ComputationSpace | str | None) – Optional basis scheme; defaults to fock.
- Return type:
ProbabilityDistribution
- classmethod from_tensor(tensor, *, n_modes, n_photons, computation_space=None, dtype=None, device=None)
Build a distribution from an explicit probability tensor.
- Parameters:
tensor (torch.Tensor) – Dense or sparse probability tensor; last dimension must match the basis size.
n_modes (int) – Number of modes in the represented Fock space.
n_photons (int) – Metadata for basis construction.
computation_space (ComputationSpace | str | None) – Optional basis scheme; defaults to fock.
dtype (torch.dtype | None) – Optional override for output tensor dtype.
device (torch.device | None) – Optional override for output tensor placement.
- Raises:
ValueError – If the last dimension does not match the expected basis size.
- Return type:
ProbabilityDistribution
- property is_normalized: bool
Probability distributions are always normalized by construction.
- Type:
bool
- normalize()
In-place normalization; safe for zero-mass batches.
- Returns:
The same instance, normalized along the basis dimension.
- Return type:
ProbabilityDistribution
- probabilities()
Alias for to_dense() for readability.
- Return type:
torch.Tensor
- requires_grad_(requires_grad=True)
Set requires_grad on underlying tensors and return self.
- Parameters:
requires_grad (bool) – Whether gradients should be tracked.
- Returns:
The updated instance.
- Return type:
ProbabilityDistribution
- to(*args, **kwargs)
Return a new ProbabilityDistribution with tensor (and logical_performance) moved/cast via torch.Tensor.to.
- Parameters:
*args – Positional arguments forwarded to torch.Tensor.to().
**kwargs – Keyword arguments forwarded to torch.Tensor.to().
- Returns:
Converted probability distribution.
- Return type:
ProbabilityDistribution
- to_perceval()
Convert to Perceval BSDistribution (single) or a list for batches.
- Returns:
Single distribution for 1D tensors, or one distribution per batch entry for batched tensors.
- Return type:
pcvl.BSDistribution | list[pcvl.BSDistribution]
- merlin.core.probability_distribution.Basis = merlin.utils.combinadics.Combinadics | ForwardRef('FilteredBasis') | tuple[tuple[int, ...], ...]
Type alias covering the supported basis representations: a Combinadics enumerator, a FilteredBasis, or an explicit tuple of Fock states.
Notes and Examples
Constructors
from_tensor — wrap a probability tensor with Fock metadata. The last dimension must match the basis size for the chosen computation space. The distribution is normalized on construction:
import torch
from merlin.core.probability_distribution import ProbabilityDistribution
probs = torch.tensor([0.2, 0.3, 0.5])
pd = ProbabilityDistribution.from_tensor(probs, n_modes=2, n_photons=2)
The optional computation_space parameter selects the basis ordering
(defaults to FOCK):
from merlin.core.computation_space import ComputationSpace
pd = ProbabilityDistribution.from_tensor(
probs, n_modes=2, n_photons=2,
computation_space=ComputationSpace.FOCK,
)
Batched inputs are supported — leading dimensions are treated as batch axes:
batch = torch.rand(16, 3) # 16 samples, basis_size = 3
pd = ProbabilityDistribution.from_tensor(batch, n_modes=2, n_photons=2)
assert pd.shape == (16, 3)
from_state_vector — compute \(|a_i|^2\) from a
StateVector:
from merlin.core.state_vector import StateVector
sv = StateVector.from_basic_state([1, 0, 1, 0], sparse=False)
pd = ProbabilityDistribution.from_state_vector(sv)
assert pd.n_modes == 4 and pd.n_photons == 2
from_perceval — convert from a Perceval BSDistribution:
import perceval as pcvl
dist = pcvl.BSDistribution()
dist[pcvl.BasicState([1, 0])] = 0.8
dist[pcvl.BasicState([0, 1])] = 0.2
pd = ProbabilityDistribution.from_perceval(dist)
Properties and metadata
n_modes and n_photons are set at construction and immutable.
shape, device, dtype, and requires_grad are delegated to the
underlying tensor:
pd.n_modes # 2
pd.n_photons # 2
pd.shape # torch.Size([3])
pd.device # device(type='cpu')
basis returns the Fock ordering for the current computation space (or a
filtered basis after filter()). basis_size is len(basis):
pd.basis_size # 3 for (2 modes, 2 photons, FOCK)
list(pd.basis) # [(2, 0), (1, 1), (0, 2)]
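For reference, the ordering shown above can be reproduced with a small pure-Python enumerator (a sketch, not Merlin's Combinadics implementation):

```python
def fock_basis(n_modes, n_photons):
    """Enumerate Fock states of n_photons over n_modes, most photons in
    the first mode first -- matching the ordering listed above."""
    if n_modes == 1:
        return [(n_photons,)]
    states = []
    for k in range(n_photons, -1, -1):  # photons placed in the first mode
        for rest in fock_basis(n_modes - 1, n_photons - k):
            states.append((k,) + rest)
    return states

print(fock_basis(2, 2))  # [(2, 0), (1, 1), (0, 2)]
```

The basis size is the stars-and-bars count C(n_modes + n_photons - 1, n_photons), e.g. 3 for (2 modes, 2 photons).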
computation_space records which basis scheme was used:
pd.computation_space # ComputationSpace.FOCK
is_normalized is always True — distributions are normalized on
construction and after every filter call.
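The normalization rule itself is simple; a minimal pure-Python sketch of the divide-by-total behaviour, including the documented zero-mass safety, looks like:

```python
def normalize(probs):
    """Renormalize a probability vector; zero-mass input is returned
    unchanged rather than producing NaNs (mirroring the documented
    zero-mass safety of normalize())."""
    total = sum(probs)
    if total == 0:
        return list(probs)
    return [p / total for p in probs]

print(normalize([2.0, 3.0, 5.0]))  # [0.2, 0.3, 0.5]
```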
Accessing probabilities
probabilities() and to_dense() both return a dense, normalized tensor:
dense = pd.probabilities() # shape: (3,)
dense = pd.to_dense() # equivalent
Use bracket syntax for a single Fock state’s probability:
import perceval as pcvl
p = pd[[1, 1]] # scalar tensor
p = pd[pcvl.BasicState([1, 1])] # equivalent
For batched distributions, the returned tensor matches the batch shape:
batch_pd = ProbabilityDistribution.from_tensor(
torch.rand(8, 3), n_modes=2, n_photons=2,
)
p = batch_pd[[1, 1]] # shape: (8,)
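Conceptually, bracket access amounts to locating the state's index in the basis and selecting that column across the batch; a pure-Python sketch of the idea (not Merlin's actual lookup code):

```python
# Basis for 2 modes, 2 photons, and two batch entries of probabilities.
basis = [(2, 0), (1, 1), (0, 2)]
batch = [[0.5, 0.25, 0.25],
         [0.1, 0.6, 0.3]]

# pd[[1, 1]] boils down to: find the state's column, gather it per batch row.
idx = basis.index((1, 1))
column = [row[idx] for row in batch]
print(column)  # [0.25, 0.6]
```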
Filtering and post-selection
filter() applies post-selection and returns a new, renormalized
distribution. The logical_performance attribute records the fraction of
probability mass kept per batch element.
Filter by computation space:
from merlin.core.computation_space import ComputationSpace
pd = ProbabilityDistribution.from_tensor(
torch.tensor([0.5, 0.25, 0.25]), n_modes=2, n_photons=2,
)
filtered = pd.filter(ComputationSpace.UNBUNCHED)
filtered.basis_size # 1 — only |1,1⟩ survives
filtered.logical_performance # tensor(0.25) — 25% of mass kept
String aliases also work: "fock", "unbunched", "dual_rail".
Filter by predicate:
# Keep only states where mode 0 has at least 1 photon
filtered = pd.filter(lambda state: state[0] >= 1)
Filter by explicit allowed states:
filtered = pd.filter([(1, 1), (2, 0)])
Combined space + predicate — pass a tuple (space, predicate):
# Unbunched states where mode 0 is occupied
filtered = pd.filter((ComputationSpace.UNBUNCHED, lambda s: s[0] == 1))
logical_performance is None on unfiltered distributions and is set by
filter() to a tensor of kept / total mass per batch element.
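The kept/total bookkeeping can be illustrated with a small pure-Python sketch of the unbunched filter (an illustration of the arithmetic, not Merlin's implementation):

```python
def filter_unbunched(basis, probs):
    """Keep unbunched states (no mode holds more than one photon),
    renormalize the survivors, and report kept/total mass -- the quantity
    filter() stores as logical_performance."""
    keep = [i for i, state in enumerate(basis) if max(state) <= 1]
    kept_mass = sum(probs[i] for i in keep)
    total = sum(probs)
    filtered = [probs[i] / kept_mass for i in keep]
    return filtered, kept_mass / total

basis = [(2, 0), (1, 1), (0, 2)]
filtered, perf = filter_unbunched(basis, [0.5, 0.25, 0.25])
print(filtered, perf)  # [1.0] 0.25 -- matches the example above
```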
PyTorch-like helpers
to, clone, detach, and requires_grad_ mirror the standard
torch.Tensor API while preserving metadata and logical_performance:
pd_cuda = pd.to("cuda") # moves tensor + logical_performance
pd_copy = pd.clone() # independent copy
pd_det = pd.detach() # shares data, no gradient graph
pd.requires_grad_(True) # enable gradients in-place
QuantumLayer integration
With return_object=True and a probability measurement strategy, the layer
returns a ProbabilityDistribution instead of a bare tensor:
import merlin as ML
from merlin.core.computation_space import ComputationSpace
layer = ML.QuantumLayer(
builder=builder,
input_state=[1, 0, 1, 0],
n_photons=2,
measurement_strategy=ML.MeasurementStrategy.probs(ML.ComputationSpace.FOCK),
return_object=True,
)
pd = layer(x) # ProbabilityDistribution
pd.probabilities() # dense tensor
pd_ub = pd.filter(ComputationSpace.UNBUNCHED) # post-select
pd_ub.logical_performance # fraction of mass kept
Perceval interoperability
import perceval as pcvl
from merlin.core.probability_distribution import ProbabilityDistribution
# Perceval → Merlin
dist = pcvl.BSDistribution()
dist[pcvl.BasicState([1, 0])] = 0.8
dist[pcvl.BasicState([0, 1])] = 0.2
pd = ProbabilityDistribution.from_perceval(dist)
# Merlin → Perceval
pcvl_back = pd.to_perceval()
assert pcvl_back[pcvl.BasicState([1, 0])] == 0.8
For batched distributions, to_perceval() returns a list of
pcvl.BSDistribution objects.
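The batched conversion is essentially one state-to-probability mapping per batch row; a pure-Python sketch of that splitting (the real method builds pcvl.BSDistribution objects rather than dicts):

```python
def to_distributions(basis, batch):
    """Split a (batch, basis_size) table of probabilities into one
    {state: probability} mapping per batch row -- the shape of the
    batched to_perceval() result, with dicts standing in for
    pcvl.BSDistribution."""
    return [dict(zip(basis, row)) for row in batch]

basis = [(2, 0), (1, 1), (0, 2)]
dists = to_distributions(basis, [[0.5, 0.25, 0.25],
                                 [0.1, 0.6, 0.3]])
print(dists[1][(1, 1)])  # 0.6
```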