Bargmann method physics #295

Merged: ziofil merged 149 commits into develop from bargmann_method_physics on Jan 11, 2024

Conversation

ziofil (Collaborator) commented on Oct 20, 2023

Context:
We want the Bargmann representation to take center stage

Description of the Change:

  • Added a method to contract a Bargmann triple over arbitrary pairs of indices
  • Added a method to contract pairs of Bargmann triples
  • Added a method to reorder the indices of an Abc triple

Benefits:
Can be used for tensor-network (TN) CV contractions and to support everything through the Bargmann representation

Possible Drawbacks:
Care is needed with the ordering of the leftover indices after a contraction
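
A minimal numpy sketch of the idea behind these operations (joining two (A, b, c) triples and reordering the indices of the result), assuming the exponential-of-quadratic form F(z) = c * exp(z^T A z / 2 + b^T z); the helper names below are illustrative and are not the API added by this PR:

```python
import numpy as np
from scipy.linalg import block_diag

def join_Abc_sketch(Abc1, Abc2):
    """Combine two (A, b, c) triples into one over the union of their indices."""
    A1, b1, c1 = Abc1
    A2, b2, c2 = Abc2
    A = block_diag(A1, A2)        # quadratic parts act on disjoint index sets
    b = np.concatenate([b1, b2])  # linear parts are stacked
    c = c1 * c2                   # scalar prefactors multiply
    return A, b, c

def reorder_abc_sketch(Abc, order):
    """Permute the indices of an (A, b, c) triple; c is unaffected."""
    A, b, c = Abc
    order = list(order)
    return A[np.ix_(order, order)], b[order], c

# toy example: join two one-index triples, then swap the indices of the result
A1, b1, c1 = np.array([[0.1]]), np.array([0.2]), 1.0
A2, b2, c2 = np.array([[0.3]]), np.array([0.4]), 0.5
A, b, c = reorder_abc_sketch(join_Abc_sketch((A1, b1, c1), (A2, b2, c2)), [1, 0])
```

Contracting a pair of indices then corresponds to a Gaussian integral over the contracted variables, which is exactly where the drawback above comes from: the surviving indices keep whatever order the integral leaves them in, so they may need to be reordered afterwards.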

codecov bot commented on Oct 20, 2023

Codecov Report

Attention: 20 lines in your changes are missing coverage. Please review.

Comparison: base (552b8d8) 83.36% vs. head (52d7719) 83.97%.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop     #295      +/-   ##
===========================================
+ Coverage    83.36%   83.97%   +0.60%     
===========================================
  Files           61       64       +3     
  Lines         4448     4742     +294     
===========================================
+ Hits          3708     3982     +274     
- Misses         740      760      +20     
| Files | Coverage Δ |
|---|---|
| mrmustard/lab/abstract/state.py | 92.37% <100.00%> (ø) |
| mrmustard/lab/gates.py | 97.24% <ø> (ø) |
| mrmustard/lab/states.py | 100.00% <100.00%> (ø) |
| mrmustard/math/backend_manager.py | 98.08% <100.00%> (+0.06%) ⬆️ |
| mrmustard/math/backend_numpy.py | 100.00% <100.00%> (ø) |
| mrmustard/math/backend_tensorflow.py | 100.00% <100.00%> (ø) |
| mrmustard/math/parameter_set.py | 95.55% <100.00%> (-0.53%) ⬇️ |
| mrmustard/math/tensor_networks/tensors.py | 92.90% <ø> (ø) |
| mrmustard/physics/bargmann.py | 100.00% <100.00%> (ø) |
| mrmustard/physics/gaussian.py | 88.00% <100.00%> (ø) |

... and 4 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

ziofil and others added 8 commits October 27, 2023 15:32
**Context:**
Continue work to make Bargmann default

**Description of the Change:**
Pulls relevant code from MVP representation project (Data, MatVecData
and AbcData classes)

**Benefits:**
We have the Bargmann representation now :)
sylviemonet (Contributor) left a comment

Nice PR! I'm happy to have all these functions. I just left a few small questions about details. I'm also impressed by many of the tests, which are quite clever, though some of them still need docstrings.

sylviemonet (Contributor) left a comment

Approved with some small concerns!

ziofil merged commit 4cb1fd7 into develop on Jan 11, 2024
7 checks passed
ziofil deleted the bargmann_method_physics branch on January 11, 2024 at 17:39
SamFerracin mentioned this pull request on Feb 1, 2024
SamFerracin pushed a commit that referenced this pull request Feb 6, 2024
### New features
* Added a new interface for backends, as well as a `numpy` backend (which is now the default).
Users can run all the functions in the `utils`, `math`, `physics`, and `lab` modules with both
backends, while `training` requires the `tensorflow` backend. The `numpy` backend provides
significant improvements in both import time and runtime.
[(#301)](#301)

* Added the classes and methods to create, contract, and draw tensor
networks with `mrmustard.math`.
  [(#284)](#284)

* Added functions in `physics.bargmann` to join and contract `(A, b, c)` triples.
  [(#295)](#295)

* Added an `Ansatz` abstract class and a `PolyExpAnsatz` concrete implementation, used in the
Bargmann representation (see the sketch after this list).
  [(#295)](#295)

* Added `complex_gaussian_integral` and `real_gaussian_integral` methods.
  [(#295)](#295)

* Added `Bargmann` representation (parametrized by Abc). Supports all
algebraic operations and CV (exact) inner product.
  [(#296)](#296)
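
As a concrete illustration of the ansatz and Gaussian-integral entries above, here is a minimal numpy sketch (not the new mrmustard API) of a single exponential-of-quadratic term of the PolyExp ansatz, F(x) = c * exp(x^T A x / 2 + b^T x), and of the closed-form real Gaussian integral of such a term, valid when -A is positive definite:

```python
import numpy as np

def poly_exp_term(x, A, b, c):
    """Evaluate one exponential-of-quadratic term c * exp(x^T A x / 2 + b^T x)."""
    x = np.asarray(x)
    return c * np.exp(0.5 * x @ A @ x + b @ x)

def real_gaussian_integral_sketch(A, b, c):
    """Closed form of the integral of poly_exp_term over R^n (for -A positive definite):
    c * sqrt((2*pi)^n / det(-A)) * exp(-b^T A^{-1} b / 2)."""
    n = len(b)
    return c * np.sqrt((2 * np.pi) ** n / np.linalg.det(-A)) * np.exp(-0.5 * b @ np.linalg.solve(A, b))

# one-dimensional sanity check against numerical quadrature
A, b, c = np.array([[-2.0]]), np.array([0.3]), 1.5
xs = np.linspace(-10.0, 10.0, 20001)
numeric = np.trapz([poly_exp_term([x], A, b, c) for x in xs], xs)
print(numeric, real_gaussian_integral_sketch(A, b, c))  # should agree to ~1e-6
```

The `complex_gaussian_integral` counterpart follows the same pattern, with the integration running over pairs of conjugate variables.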

### Breaking changes
* Removed circular dependencies by:
  * Removing `graphics.py`: moved `ProgressBar` to `training` and `mikkel_plot` to `lab`.
  * Moving `circuit_drawer` and `wigner` to `physics`.
  * Moving `xptensor` to `math`.
  [(#289)](#289)

* Created `settings.py` file to host `Settings`.
  [(#289)](#289)

* Moved `settings.py`, `logger.py`, and `typing.py` to `utils`.
  [(#289)](#289)

* Removed the `Math` class. To use the mathematical backend, replace
`from mrmustard.math import Math ; math = Math()` with `import mrmustard.math as math`
in your scripts (see the sketch after this list).
  [(#301)](#301)

* The `numpy` backend is now the default. To switch to the `tensorflow` backend, add the line
`math.change_backend("tensorflow")` to your scripts.
  [(#301)](#301)
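
A minimal before/after usage sketch for the two entries above; the `math.change_backend("tensorflow")` call is quoted from the entry above, and no other API is assumed:

```python
# before (removed):
#   from mrmustard.math import Math
#   math = Math()

# after:
import mrmustard.math as math

# the numpy backend is active by default; switch backends explicitly when needed,
# e.g. before running optimizations with `training`
math.change_backend("tensorflow")
```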

### Improvements

* Calculating Fock representations and their gradients is now more numerically stable
(i.e. numerical blowups that result from repeatedly applying the recurrence relation are
postponed to higher cutoff values).
This holds for both the "vanilla strategy"
[(#274)](#274) and for the
"diagonal strategy" and "single leftover mode strategy"
[(#288)](#288).
This is done by representing Fock amplitudes with a higher precision than complex128
(countering floating-point errors).
We run Julia code via PyJulia (where Numba was used before) to keep the code fast.
The precision is controlled by the setting `settings.PRECISION_BITS_HERMITE_POLY`.
The default value is ``128``, which uses the old Numba code. When set to a higher value,
the new Julia code is run (see the sketch at the end of this section).

* Replaced parameters in `training` with `Constant` and `Variable`
classes.
  [(#298)](#298)

* Improved how states, transformations, and detectors deal with
parameters by replacing the `Parametrized` class with `ParameterSet`.
  [(#298)](#298)

* Included Julia dependencies in the Python packaging for downstream installation
reproducibility. Removed the dependency on `tomli` for loading `pyproject.toml` version
info, using `importlib.metadata` instead.
  [(#303)](#303)
  [(#304)](#304)

* Improved the algorithms implemented in `vanilla` and `vanilla_vjp` to achieve a speedup.
Specifically, the improved algorithms work on flattened arrays (which are reshaped before
being returned) as opposed to multi-dimensional arrays.
  [(#312)](#312)
  [(#318)](#318)

* Added functions `hermite_renormalized_batch` and `hermite_renormalized_diagonal_batch`
to speed up calculating Hermite polynomials over a batch of B vectors.
  [(#308)](#308)

* Added a suite to filter undesired warnings, and used it to filter tensorflow's
``ComplexWarning``s.
  [(#332)](#332)
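
A short sketch of how the precision switch described in the first entry of this section might be used; the `from mrmustard import settings` import path is an assumption, while the attribute name and its default of ``128`` are quoted from the entry above, and ``256`` is only an example of a higher value:

```python
from mrmustard import settings  # assumed location of the global settings object

# default: 128 bits, i.e. standard complex128 amplitudes computed by the Numba code path
settings.PRECISION_BITS_HERMITE_POLY = 128

# higher precision: Fock amplitudes are computed by the Julia (PyJulia) code path,
# postponing the numerical blowups of the recurrence relation to higher cutoffs
settings.PRECISION_BITS_HERMITE_POLY = 256
```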


### Bug fixes

* Added the missing `shape` input parameter to all `U` methods in the `gates.py` file.
[(#291)](#291)
* Fixed the inconsistent use of `atol` in purity evaluation for Gaussian states.
[(#294)](#294)
* Fixed the documentation of the `loss_XYd` and `amp_XYd` functions for Gaussian channels.
[(#305)](#305)
* Replaced all instances of `np.empty` with `np.zeros` to fix instabilities.
[(#309)](#309)
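
A tiny illustration of why the `np.empty` to `np.zeros` fix matters: `np.empty` returns an uninitialized buffer, so any code path that fills it only partially ends up reading arbitrary leftover memory, whereas `np.zeros` is deterministic.

```python
import numpy as np

buf = np.empty(4)   # contents are whatever happened to be in memory
buf[:2] = 1.0       # if only part of the buffer is ever written...
# ...the remaining entries are unpredictable and can poison later sums/products

safe = np.zeros(4)  # deterministic: unwritten entries are exactly 0.0
safe[:2] = 1.0
```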

---------

Co-authored-by: Sebastián Duque Mesa <[email protected]>
Co-authored-by: JacobHast <[email protected]>
Co-authored-by: elib20 <[email protected]>
Co-authored-by: ziofil <[email protected]>
Co-authored-by: ziofil <[email protected]>
Co-authored-by: Luke Helt <[email protected]>
Co-authored-by: zeyueN <[email protected]>
Co-authored-by: Robbe De Prins <[email protected]>
Co-authored-by: Robbe De Prins (UGent-imec) <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: Ryk <[email protected]>
Co-authored-by: Gabriele Gullì <[email protected]>
Co-authored-by: Yuan Yao <[email protected]>
Co-authored-by: Yuan Yao <[email protected]>
Co-authored-by: heltluke <[email protected]>
Co-authored-by: Tanner Rogalsky <[email protected]>
Co-authored-by: Jan Provazník <[email protected]>