# Release v0.3.0
### New features
- The progress bar can now be switched on and off (it is on by default) from the settings via `settings.PROGRESSBAR = True/False`; see the sketch below. (#128)
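  A minimal sketch (it assumes the settings object is importable from the top-level `mrmustard` package):

  ```python
  # assumption: the global settings object lives at the top level of the package
  from mrmustard import settings

  settings.PROGRESSBAR = False  # silence the optimization progress bar
  settings.PROGRESSBAR = True   # switch it back on (the default)
  ```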
- States in Gaussian and Fock representation can now be concatenated.

  ```python
  from mrmustard.lab.states import Gaussian, Fock
  from mrmustard.lab.gates import Attenuator

  # concatenate pure states
  fock_state = Fock(4)
  gaussian_state = Gaussian(1)
  pure_state = fock_state & gaussian_state

  # mixed states can also be concatenated
  mixed1 = fock_state >> Attenuator(0.8)
  mixed2 = gaussian_state >> Attenuator(0.5)
  mixed_state = mixed1 & mixed2

  mixed_state.dm()
  ```
- Parameter passthrough allows one to use custom variables and/or functions as parameters. For example, one can reuse the parameters of other gates:

  ```python
  import numpy as np

  from mrmustard.lab.gates import Sgate, BSgate

  BS = BSgate(theta=np.pi/4, theta_trainable=True)[0, 1]
  S0 = Sgate(r=BS.theta)[0]
  S1 = Sgate(r=-BS.theta)[1]
  circ = S0 >> S1 >> BS
  ```

  Another possibility is to use functions:

  ```python
  def my_r(x):
      return x**2

  # `math` is MrMustard's math interface and `opt` an optimizer instance (imports not shown)
  x = math.new_variable(0.5, bounds=(None, None), name="x")

  def cost_fn():
      # note that my_r needs to be inside the cost function
      # in order to track the gradient
      S = Sgate(r=my_r(x), theta_trainable=True)[0, 1]
      return  # some function of S

  opt.Optimize(cost_fn, by_optimizing=[x])
  ```
- Adds the new trainable gate `RealInterferometer`: an interferometer that doesn't mix the q and p quadratures; a usage sketch follows. (#132)
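  A minimal usage sketch; the constructor argument shown (`num_modes`) is an assumption, so check the `RealInterferometer` docstring for the exact signature:

  ```python
  # hedged sketch: `num_modes` is an assumed constructor argument
  from mrmustard.lab.gates import RealInterferometer

  RI = RealInterferometer(num_modes=2)[0, 1]  # apply to modes 0 and 1
  ```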
- Marginals can now be iterated over:

  ```python
  # `state` is any multi-mode state
  for mode in state:
      print(mode.purity)
  ```
### Breaking changes
- The `Parametrized` and `Training` classes have been refactored: trainable tensors are now wrapped in an instance of the `Parameter` class. To define a set of parameters do

  ```python
  from mrmustard.training import Parametrized

  params = Parametrized(
      magnitude=10, magnitude_trainable=False, magnitude_bounds=None,
      angle=0.1, angle_trainable=True, angle_bounds=(-0.1, 0.1),
  )
  ```

  which will automatically define the properties `magnitude` and `angle` on the `params` object. To access the backend tensor defining the values of such parameters, use the `value` property:

  ```python
  params.angle.value
  params.angle.bounds

  params.magnitude.value
  ```

  Gates are automatically instances of the `Parametrized` class, for example:

  ```python
  from mrmustard.lab import BSgate

  bs = BSgate(theta=0.3, phi=0.0, theta_trainable=True)

  # access the gate parameters
  bs.theta.value
  bs.theta.bounds
  bs.phi.value
  ```
### Improvements
- The `Parametrized` and `Training` classes have been refactored. The new `training` module has been added and, with it, the new `Parameter` class: trainable tensors are now wrapped in an instance of `Parameter`. (#133); patched in (#144) and (#158).
- The string representations of the `Circuit` and `Transformation` objects have been improved: the `Circuit.__repr__` method now produces a string that can be used to generate a circuit in an identical state (same gates and parameters), and `Transformation.__str__` (and objects inheriting from it) now prints the name and memory location of the object as well as the modes of the circuit on which the transformation acts. The `_repr_markdown_` method has been implemented and, in a Jupyter notebook, produces a table with valuable information about the `Transformation` object; see the sketch below. (#141)
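  For example (a minimal sketch; the gate parameters are illustrative only):

  ```python
  from mrmustard.lab.gates import Sgate, BSgate

  bs = BSgate(theta=0.3)[0, 1]
  circ = Sgate(r=0.5)[0] >> bs

  repr(circ)  # a string that can be used to rebuild an identical circuit
  print(bs)   # name, memory location and the modes the gate acts on
  ```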
- Added the argument `modes` to the `Interferometer` operation to indicate which modes the interferometer is applied to; see the sketch below. (#121)
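  A hedged sketch; the first constructor argument is assumed here to be the number of modes, so check the `Interferometer` docstring for the exact signature:

  ```python
  # hedged sketch: the first argument is assumed to be the number of modes
  from mrmustard.lab.gates import Interferometer

  interf = Interferometer(2, modes=[0, 2])  # a 2-mode interferometer acting on modes 0 and 2
  ```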
### Bug fixes
- Fixed a bug in the `State.ket()` method: an attribute was called with a typo in its name. (#135)
- The `math.dagger` function, which applies the Hermitian conjugate to an operator, was incorrectly transposing the indices of the input tensor. Now `math.dagger` correctly calculates the Hermitian conjugate of an operator; the expected behavior is sketched below. (#156)
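  For reference, a NumPy illustration of the expected semantics (not MrMustard's implementation): the Hermitian conjugate of an N-mode operator conjugates the entries and swaps the output and input index groups as a whole.

  ```python
  import numpy as np

  # a 2-mode operator T with indices (out1, out2, in1, in2)
  T = np.random.rand(2, 2, 2, 2) + 1j * np.random.rand(2, 2, 2, 2)

  # Hermitian conjugate: conjugate the entries and swap the output/input index groups
  T_dag = np.conj(np.transpose(T, (2, 3, 0, 1)))

  # for a plain matrix this reduces to the usual conjugate transpose
  assert np.allclose(T_dag.reshape(4, 4), T.reshape(4, 4).conj().T)
  ```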
### Documentation
- The centralized Xanadu Sphinx Theme is now used to style the Sphinx documentation. (#126)
- The documentation now contains the `mm.training` section. The optimization examples in the README and in the Basic API Reference section have been updated to use the latest API. (#133)
### Contributors
This release contains contributions from (in alphabetical order):
Mikhail Andrenkov (@Mandrenkov), Sebastian Duque Mesa (@sduquemesa), Filippo Miatto (@ziofil), Yuan Yao (@sylviemonet)