Sampling speedup #349

Open · wants to merge 5 commits into master
Conversation

rachelchadwick
Collaborator

Context:
Some speedups for sampling from a GBS distribution.

Description of the Change:
The current generate_hafnian_sample function computes hafnians of matrices larger than (twice) the number of photons sampled in each mode. This can be avoided by first drawing a uniform random number, then computing the cumulative probability for increasing photon numbers and stopping as soon as the cumulative probability exceeds that random draw (sketched below). The loops have also been tightened so that -1 is returned sooner when sampling fails, and the final mode can use pure_state_amplitude when the covariance matrix is pure. Finally, the absolute-value call in the approximate case has been removed, so a non-positive matrix now raises an error instead of producing a sample from the wrong distribution.
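
As a rough illustration of the early-stopping idea (a minimal sketch, not the actual diff in this PR): the function name `sample_next_mode` and the callable `mode_probability` below are hypothetical stand-ins for the hafnian-based probability evaluated inside generate_hafnian_sample.

```python
import numpy as np

def sample_next_mode(mode_probability, previous_pattern, max_photons, rng=None):
    """Sample the photon number of the next mode by accumulating probabilities
    until the cumulative mass covers a single uniform random draw."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform()                      # one uniform draw per mode
    cumulative = 0.0
    for n in range(max_photons + 1):       # grow the photon number one step at a time
        cumulative += mode_probability(previous_pattern + [n])
        if cumulative > u:                 # stop as soon as the draw is covered,
            return n                       # so no larger probability is ever evaluated
    return -1                              # mass never covered the draw: signal failure early
```

Roughly speaking, each call to the probability of a candidate pattern involves a matrix whose size grows with the photon number, so stopping at the sampled photon number means no larger matrices are ever built (with pure_state_amplitude available for the final mode of a pure state).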

Benefits:
Faster sampling than the current method.

There is another pull request that replaces this function with the quadratically faster algorithm. For larger numbers of photons in the sample, that algorithm will be faster. For smaller numbers of photons, however, the method proposed here may be faster (depending on the cutoff supplied to the quadratic-speedup version), since it never computes matrices larger than the number of photons sampled, regardless of the cutoff chosen. This method is also exact up to a global cutoff, whereas the quadratic-speedup algorithm incurs some error from the cutoff in each mode. To sample exactly, the cutoff in each mode must equal the global cutoff, so the cutoff kwarg defaults to None, which becomes the total number of photons + 1 (see the sketch below).
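
A hedged sketch of that default behaviour (the helper name is illustrative; only the cutoff kwarg and the total-photons-plus-one default come from the description above):

```python
def resolve_cutoff(cutoff, total_photons):
    """If no cutoff is given, use the global cutoff (total photons + 1) in every
    mode, so the sample is exact up to the global cutoff."""
    return total_photons + 1 if cutoff is None else cutoff
```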

Possible Drawbacks:
None

Related GitHub Issues:
None

@codecov

codecov bot commented Aug 11, 2022

Codecov Report

Merging #349 (b732f97) into master (6e6caf3) will decrease coverage by 0.11%.
The diff coverage is 96.29%.

@@             Coverage Diff             @@
##            master     #349      +/-   ##
===========================================
- Coverage   100.00%   99.88%   -0.12%     
===========================================
  Files           24       24              
  Lines         1735     1738       +3     
===========================================
+ Hits          1735     1736       +1     
- Misses           0        2       +2     
Impacted Files           Coverage Δ
thewalrus/samples.py     99.44% <96.29%> (-0.56%) ⬇️
thewalrus/_hafnian.py    99.56% <0.00%> (-0.44%) ⬇️


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@sduquemesa self-requested a review August 15, 2022 20:40