Feedforward Circuits
Feedforward is a key capability in photonic quantum circuits, where a partial measurement determines the configuration of the downstream circuit. This mechanism is comparable to dynamic circuits in the gate-based model of quantum computing (see IBM Dynamic Circuits).
The main difference is in the physical implementation:
Gate-based circuits: gates are applied consecutively, and adapting the circuit requires performing a measurement and determining follow-up gates within the coherence time of the qubits (typically ms–s).
Photonic circuits: feedforward involves measuring some modes while the remaining modes travel through a delay line. The delay must be short enough to avoid photon loss, while still allowing the photonic chip to be reconfigured. Measurement and reconfiguration must therefore happen on sub-microsecond timescales.
FeedForwardBlock in MerLin
Modern MerLin versions model feedforward circuits via the
FeedForwardBlock class. Instead of
describing the block procedurally, you simply provide a complete
perceval.Experiment containing:
The unitary layers between measurements.
Explicit detector declarations (PNR, threshold, …).
One or more perceval.components.feed_forward_configurator.FFCircuitProvider instances that describe how the circuit is reconfigured after the detectors fire.
FeedForwardBlock parses the experiment, creates the appropriate
QuantumLayer objects for every stage, and runs
them sequentially. Classical inputs (input_parameters) are only consumed by
the first stage; once the first measurement happens the remaining branches are
propagated in amplitude-encoding mode.
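The branching behaviour can be illustrated with a small self-contained sketch (plain Python, not the MerLin API): a single photon spread over two modes is partially measured on mode 0, and each detector outcome spawns a branch carrying its own classical probability.

```python
import math

# Toy sketch of partial-measurement branching (illustrative only, not the
# MerLin implementation). One photon in two modes, amplitudes over the
# Fock basis {(1, 0), (0, 1)}.
amplitudes = {(1, 0): 1 / math.sqrt(2), (0, 1): 1j / math.sqrt(2)}

# "Measure" mode 0: group basis states by the photon count the detector sees.
branches = {}
for state, amp in amplitudes.items():
    outcome = state[0]       # photons detected in mode 0
    remainder = state[1:]    # unmeasured modes continue through the circuit
    branches.setdefault(outcome, {})[remainder] = amp

# Each branch carries a classical probability; together they sum to one.
branch_probs = {
    outcome: sum(abs(a) ** 2 for a in states.values())
    for outcome, states in branches.items()
}
# Here both outcomes occur with probability 0.5.
```

In the real block, each surviving branch's residual amplitudes are then fed to the next QuantumLayer stage, which is why later stages operate in amplitude-encoding mode.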
Note
The current implementation expects noise-free experiments (NoiseModel()
or None). Adding detectors and feed-forward configurators to a noisy
experiment is rejected during construction.
Measurement strategy
measurement_strategy controls the classical view exposed by
forward():
- PROBABILITIES (default): returns a tensor of shape (batch_size, len(output_keys)). Each column corresponds to the fully specified Fock state listed in :pyattr:`~merlin.algorithms.feed_forward.FeedForwardBlock.output_keys`.
- MODE_EXPECTATIONS: returns a tensor of shape (batch_size, num_modes) containing the per-mode photon expectations aggregated across all measurement keys. The :pyattr:`~merlin.algorithms.feed_forward.FeedForwardBlock.output_keys` list is retained for metadata while :pyattr:`~merlin.algorithms.feed_forward.FeedForwardBlock.output_state_sizes` stores num_modes for each entry.
- AMPLITUDES: returns a list of tuples (measurement_key, branch_probability, remaining_photons, amplitudes) describing the mixed state produced after every partial measurement.
For tensor outputs the attribute
:pyattr:`~merlin.algorithms.feed_forward.FeedForwardBlock.output_keys` lists the
measurement tuple corresponding to each column. PROBABILITIES therefore
directly aligns with the dictionary keys, whereas MODE_EXPECTATIONS
retains the key ordering purely as metadata because the returned tensor is
already aggregated across all outcomes.
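The relationship between the two tensor strategies can be sketched in plain Python (the keys and probabilities below are made-up values for illustration, not MerLin output): a PROBABILITIES row is indexed by output_keys, and MODE_EXPECTATIONS aggregates the same data into per-mode photon expectations.

```python
# Hypothetical output_keys and one batch row of a PROBABILITIES tensor
# (made-up numbers, not produced by MerLin).
output_keys = [(2, 0, 0), (1, 1, 0), (0, 1, 1)]
probs = [0.5, 0.3, 0.2]

# MODE_EXPECTATIONS reports <n_m> = sum_k p_k * n_{k,m} for each mode m,
# i.e. the photon-number expectation per mode across all outcomes.
num_modes = len(output_keys[0])
expectations = [
    sum(p * key[m] for p, key in zip(probs, output_keys))
    for m in range(num_modes)
]
# expectations is approximately [1.3, 0.5, 0.2]
```

This is why MODE_EXPECTATIONS keeps output_keys only as metadata: once the aggregation above is done, the per-outcome columns are no longer recoverable from the returned tensor.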
API Reference
- class merlin.algorithms.feed_forward.FeedForwardBlock(experiment, *, input_state=None, trainable_parameters=None, input_parameters=None, computation_space=ComputationSpace.FOCK, measurement_strategy=MeasurementStrategy.PROBABILITIES, device=None, dtype=None)
Bases: Module

Feed-forward photonic block constructed directly from a Perceval experiment.

The block introspects the provided pcvl.Experiment, splits it into unitary / detector / FFCircuitProvider stages and turns each segment into one or more QuantumLayer instances. At run time the block executes every stage, branching on every partial measurement outcome and accumulating the classical probability for each branch.

Parameters
- experiment:
Perceval experiment containing the full feed-forward definition. The current implementation requires noise-free experiments (NoiseModel() or None).
- input_state:
Initial quantum state. May be provided as a Fock occupation list, pcvl.BasicState, pcvl.StateVector, or a tensor whose components represent amplitudes in the experiment's Fock basis (the tensor form is only required for amplitude-encoded inputs).
Optional list of Perceval parameter prefixes that should remain learnable across all stages.
- input_parameters:
Perceval parameter prefixes that receive classical inputs. They are consumed by the first stage only; once the first detection happens all branches switch to amplitude encoding and the classical tensor is ignored.
- computation_space:
Currently restricted to FOCK.
- measurement_strategy:
Controls how classical outputs are produced:
- MeasurementStrategy.PROBABILITIES (default) returns a tensor of shape (batch_size, num_output_keys) whose columns match the fully specified Fock states stored in :pyattr:`output_keys`.
- MeasurementStrategy.MODE_EXPECTATIONS collapses every branch into a single tensor of shape (batch_size, num_modes) that contains the per-mode photon expectations aggregated across all measurement keys. The :pyattr:`output_keys` attribute is retained for metadata while :pyattr:`output_state_sizes` reports num_modes for every key.
- MeasurementStrategy.AMPLITUDES yields a list of tuples (measurement_key, branch_probability, remaining_photons, amplitudes) so callers can reason about the mixed state left by each branch.
- describe()
Return a multi-line description of the feed-forward stages.
The summary lists, in order, the global modes that remain active at each step, the subset of measured modes, and the type of feed-forward configurator attached to the stage. It is primarily intended for debugging or for logging experiment structure.
- Return type:
str
- forward(x=None)
Execute the feed-forward experiment.
Parameters
- x:
Classical feature tensor. Only the first stage consumes classical inputs; subsequent stages operate purely in amplitude-encoding mode. When the experiment does not expose classical inputs this argument may be omitted (or
None), in which case an empty tensor is automatically supplied.
Returns
- torch.Tensor | list
- PROBABILITIES returns a tensor of shape (batch_size, len(output_keys)) aligned with the fully specified Fock states in :pyattr:`output_keys`.
- MODE_EXPECTATIONS produces a tensor of shape (batch_size, total_modes) whose columns encode the per-mode expectations aggregated across all measurement keys (:pyattr:`output_state_sizes` stores total_modes for every key).
- AMPLITUDES yields a list of tuples (measurement_key, branch_probability, remaining_photons, amplitudes) describing every branch of the resulting mixed state.
- property output_keys: list[tuple[int, ...]]
Return the measurement keys associated with the most recent classical forward pass.
The list is populated after forward() completes. For the PROBABILITIES strategy the list lines up with the tensor columns. For MODE_EXPECTATIONS it is retained for reference even though the returned tensor already aggregates all measurement outcomes. Calling the property before running the block raises RuntimeError.
- property output_state_sizes: dict[tuple[int, ...], int]
Return the number of remaining Fock states represented by each entry in output_keys.

Only available when measurement_strategy is PROBABILITIES or MODE_EXPECTATIONS. For PROBABILITIES the value is always 1 because each key denotes a fully specified Fock state, while for MODE_EXPECTATIONS it equals the total number of modes contributing to the expectation vector.
Example
import torch
import perceval as pcvl
from merlin.algorithms import FeedForwardBlock
from merlin.measurement.strategies import MeasurementStrategy
# Build an experiment with one detector stage and two branches
exp = pcvl.Experiment()
exp.add(0, pcvl.Circuit(3) // pcvl.BS())
exp.add(0, pcvl.Detector.pnr())
reflective = pcvl.Circuit(2) // pcvl.PERM([1, 0])
transmissive = pcvl.Circuit(2) // pcvl.BS()
provider = pcvl.FFCircuitProvider(1, 0, reflective)
provider.add_configuration([1], transmissive)
exp.add(0, provider)
block = FeedForwardBlock(
exp,
input_state=[2, 0, 0],
trainable_parameters=["theta"], # optional Perceval prefixes
input_parameters=["phi"], # classical inputs for the first unitary
measurement_strategy=MeasurementStrategy.PROBABILITIES,
)
x = torch.zeros((1, 1)) # only the first stage consumes features
outputs = block(x)  # tensor of shape (batch_size, len(output_keys))
for idx, key in enumerate(block.output_keys):
    prob = outputs[:, idx]  # probability of measuring this Fock state
When the experiment does not expose classical inputs you may call block()
without passing a tensor (an empty feature tensor is injected automatically).
Further Reading
internal_design
circuit_specific_optimizations
output_mappings