Linearizing the vanilla algorithm #312
Conversation
Codecov Report
@@            Coverage Diff             @@
##           develop     #312      +/-  ##
==========================================
+ Coverage    83.29%   83.31%   +0.02%
==========================================
  Files           60       61       +1
  Lines         4441     4448       +7
==========================================
+ Hits          3699     3706       +7
  Misses         742      742
==========================================
Force-pushed from 9e709ab to 081404b
super!
### New features

* Added a new interface for backends, as well as a `numpy` backend (which is now the default). Users can run all the functions in the `utils`, `math`, `physics`, and `lab` modules with both backends, while `training` requires `tensorflow`. The `numpy` backend provides significant improvements in both import time and runtime. [(#301)](#301)
* Added the classes and methods to create, contract, and draw tensor networks with `mrmustard.math`. [(#284)](#284)
* Added functions in `physics.bargmann` to join and contract (A,b,c) triples. [(#295)](#295)
* Added an `Ansatz` abstract class and a `PolyExpAnsatz` concrete implementation, used in the Bargmann representation. [(#295)](#295)
* Added the `complex_gaussian_integral` and `real_gaussian_integral` methods. [(#295)](#295)
* Added the `Bargmann` representation (parametrized by Abc). It supports all algebraic operations and the (exact) CV inner product. [(#296)](#296)

### Breaking changes

* Removed circular dependencies by:
  * removing `graphics.py` and moving `ProgressBar` to `training` and `mikkel_plot` to `lab`;
  * moving `circuit_drawer` and `wigner` to `physics`;
  * moving `xptensor` to `math`. [(#289)](#289)
* Created a `settings.py` file to host `Settings`. [(#289)](#289)
* Moved `settings.py`, `logger.py`, and `typing.py` to `utils`. [(#289)](#289)
* Removed the `Math` class. To use the mathematical backend, replace `from mrmustard.math import Math; math = Math()` with `import mrmustard.math as math` in your scripts. [(#301)](#301)
* The `numpy` backend is now the default. To switch to the `tensorflow` backend, add the line `math.change_backend("tensorflow")` to your scripts, as in the sketch after this list. [(#301)](#301)
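A minimal sketch of the backend usage described in the last two items above. Only `mrmustard.math` and `change_backend` are quoted from these notes; the `astensor` call is an assumption used for illustration:

```python
# Post-#301 backend usage (sketch; `astensor` is assumed for illustration).
import mrmustard.math as math  # replaces: from mrmustard.math import Math; math = Math()

x = math.astensor([1.0, 2.0, 3.0])   # runs on the default numpy backend

math.change_backend("tensorflow")    # opt in to tensorflow (needed for `training`)
y = math.astensor([1.0, 2.0, 3.0])   # now backed by tensorflow
```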
### Improvements

* Calculating Fock representations and their gradients is now more numerically stable, i.e. the numerical blowups that result from repeatedly applying the recurrence relation are postponed to higher cutoff values. This holds for the "vanilla strategy" [(#274)](#274) as well as for the "diagonal strategy" and "single leftover mode strategy" [(#288)](#288). It is achieved by representing Fock amplitudes with a higher precision than complex128 (countering floating-point errors); we run Julia code via PyJulia (where Numba was used before) to keep the code fast. The precision is controlled by the setting `settings.PRECISION_BITS_HERMITE_POLY`. The default value is `128`, which uses the old Numba code; any higher value runs the new Julia code (a usage sketch follows this changelog).
* Replaced parameters in `training` with `Constant` and `Variable` classes. [(#298)](#298)
* Improved how states, transformations, and detectors deal with parameters by replacing the `Parametrized` class with `ParameterSet`. [(#298)](#298)
* Included the Julia dependencies in the Python packaging, for downstream installation reproducibility. Removed the dependency on tomli for loading pyproject.toml version info; importlib.metadata is used instead. [(#303)](#303) [(#304)](#304)
* Improved the algorithms implemented in `vanilla` and `vanilla_vjp` to achieve a speedup. Specifically, the improved algorithms work on flattened arrays (which are reshaped before being returned) as opposed to multi-dimensional arrays. [(#312)](#312) [(#318)](#318)
* Added the functions `hermite_renormalized_batch` and `hermite_renormalized_diagonal_batch` to speed up calculating Hermite polynomials over a batch of B vectors. [(#308)](#308)
* Added a suite to filter undesired warnings, and used it to filter tensorflow's `ComplexWarning`s. [(#332)](#332)

### Bug fixes

* Added the missing `shape` input parameters to all `U` methods in the `gates.py` file. [(#291)](#291)
* Fixed inconsistent use of `atol` in purity evaluation for Gaussian states. [(#294)](#294)
* Fixed the documentation of the `loss_XYd` and `amp_XYd` functions for Gaussian channels. [(#305)](#305)
* Replaced all instances of `np.empty` with `np.zeros` to fix instabilities. [(#309)](#309)

---------

Co-authored-by: Sebastián Duque Mesa <[email protected]>
Co-authored-by: JacobHast <[email protected]>
Co-authored-by: elib20 <[email protected]>
Co-authored-by: ziofil <[email protected]>
Co-authored-by: ziofil <[email protected]>
Co-authored-by: Luke Helt <[email protected]>
Co-authored-by: zeyueN <[email protected]>
Co-authored-by: Robbe De Prins <[email protected]>
Co-authored-by: Robbe De Prins (UGent-imec) <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: Ryk <[email protected]>
Co-authored-by: Gabriele Gullì <[email protected]>
Co-authored-by: Yuan Yao <[email protected]>
Co-authored-by: Yuan Yao <[email protected]>
Co-authored-by: heltluke <[email protected]>
Co-authored-by: Tanner Rogalsky <[email protected]>
Co-authored-by: Jan Provazník <[email protected]>
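The precision control mentioned in the Improvements section above, as a minimal sketch. `PRECISION_BITS_HERMITE_POLY` is quoted from these notes; the value `256` is an assumed example of a "higher value":

```python
# Sketch: opting in to higher-precision Fock recurrences (Julia code path).
from mrmustard import settings

print(settings.PRECISION_BITS_HERMITE_POLY)  # default 128 -> old Numba path

# Represent Fock amplitudes with more bits than complex128 to postpone
# numerical blowups at high cutoffs; 256 is an assumed example value.
settings.PRECISION_BITS_HERMITE_POLY = 256
```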
Description of the Change:

* The `vanilla` strategy works on a flattened array (which is reshaped before returning) as opposed to a multi-dimensional array.
* The `*_batch` methods are removed. The same functionality can be achieved by passing batched `B` vectors to the pre-existing functions.

Benefits:

* Speedup of the `vanilla` strategy.
* Speedup of the `vanilla_batch` strategy.

TODO
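To illustrate why filling a flattened buffer works for this kind of recurrence: the entry at multi-index k depends only on its lower neighbors k - e_i, and row-major order visits every lower neighbor before k, so the hot loop can run over one contiguous flat array (using precomputed strides) and reshape once at the end. Below is a toy sketch of that pattern with a placeholder recurrence; it is not MrMustard's actual `vanilla` kernel, and every name in it is illustrative:

```python
import numpy as np

def toy_vanilla_flat(b, shape):
    """Fill G[k] = sum_i b[i] * G[k - e_i] (with G[0,...,0] = 1) on a
    flattened buffer, then reshape once at the end.

    Toy recurrence with the same dependency pattern as the Fock-amplitude
    recursion: entry k needs only its lower neighbors k - e_i, all of
    which precede k in row-major (flat) order.
    """
    ndim = len(shape)
    # Row-major strides in elements: decrementing index i by 1 moves the
    # flat index down by strides[i].
    strides = [int(np.prod(shape[i + 1:])) for i in range(ndim)]
    G = np.zeros(int(np.prod(shape)), dtype=complex)  # flat buffer
    G[0] = 1.0
    index = [0] * ndim  # current multi-index, kept in sync with `flat`
    for flat in range(1, G.size):
        # Advance the multi-index by one step in row-major order.
        i = ndim - 1
        while index[i] == shape[i] - 1:
            index[i] = 0
            i -= 1
        index[i] += 1
        # Accumulate contributions from the lower neighbors via strides.
        val = 0.0 + 0.0j
        for i in range(ndim):
            if index[i] > 0:
                val += b[i] * G[flat - strides[i]]
        G[flat] = val
    return G.reshape(shape)  # reshape only once, at the end

# Example: with b = (1, 1) this fills Pascal-style counts, e.g. G[1, 1] == 2.
G = toy_vanilla_flat(b=(1.0, 1.0), shape=(3, 3))
print(G.real)
```

Working on the flat buffer keeps the hot loop on contiguous memory and replaces per-element multi-index arithmetic with a handful of stride offsets, which is consistent with the speedup claimed above.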