
Abnormal memory behaviour with strawberryfields.fock backend #842

Closed
MichelNowak1 opened this issue Oct 9, 2020 · 4 comments

@MichelNowak1

Issue description

When running a simple circuit with the strawberryfields.fock backend, memory usage climbs to very high levels and is released suddenly at each expectation-value calculation (expval). Here is the code snippet that reproduces this behaviour:

import pennylane as qml
import numpy as np

n_wires = 8
num_categories = 4

# Fock-backend device with a cutoff of 4 Fock levels per mode
dev = qml.device("strawberryfields.fock", wires=n_wires, cutoff_dim=4)

@qml.qnode(dev, interface="autograd")
def circuit(weights, x=None):
    qml.templates.embeddings.DisplacementEmbedding(x, wires=range(n_wires))
    qml.templates.layers.CVNeuralNetLayers(*weights, wires=range(n_wires))
    # one X-quadrature expectation value per category
    return [qml.expval(qml.X(i)) for i in range(num_categories)]

weights = qml.init.cvqnn_layers_all(n_layers=3, n_wires=n_wires)
data = np.random.random(size=[n_wires])
circuit(weights, x=data)
  • Expected behavior:
    I would expect the memory usage to remain stable over time.

  • Actual behavior:

mprof run python3 circuit_test.py
mprof plot

[Screenshot 2020-10-06 08:25: mprof memory profile for the circuit above]

With 3 expectation values returned instead, I get:

[Screenshot 2020-10-06 07:59: mprof memory profile with 3 expectation values]

  • Reproduces how often:
    always

  • System information:

Name: PennyLane
Version: 0.11.0
Summary: PennyLane is a Python quantum machine learning library by Xanadu Inc.
Home-page: https://github.com/XanaduAI/pennylane
Author: None
Author-email: None
License: Apache License 2.0
Location: /home/michelnowak/.local/lib/python3.6/site-packages
Requires: autograd, numpy, appdirs, toml, semantic-version, networkx, scipy
Required-by: PennyLane-SF
Platform info: Linux-4.15.0-99-generic-x86_64-with-Ubuntu-18.04-bionic
Python version: 3.6.9
Numpy version: 1.19.1
Scipy version: 1.4.1
Installed devices:

  • default.gaussian (PennyLane-0.11.0)
  • default.qubit (PennyLane-0.11.0)
  • default.qubit.autograd (PennyLane-0.11.0)
  • default.qubit.tf (PennyLane-0.11.0)
  • default.tensor (PennyLane-0.11.0)
  • default.tensor.tf (PennyLane-0.11.0)
  • strawberryfields.fock (PennyLane-SF-0.11.0)
  • strawberryfields.gaussian (PennyLane-SF-0.11.0)
  • strawberryfields.remote (PennyLane-SF-0.11.0)

The Strawberry Fields version is 0.15.1.

Source code and tracebacks

The code completes successfully. But in order to find out what takes so much memory, I interrupted the calculation while memory usage was increasing. The keyboard interrupt was not handled immediately, but only at the end of the plateau at 65 GB of memory. When the interrupt takes effect, I get the following traceback:

^CTraceback (most recent call last):
  File "circuit_test.py", line 18, in <module>
    circuit(weights, x=data)
  File "/home/michelnowak/.local/lib/python3.6/site-packages/pennylane/interfaces/autograd.py", line 69, in __call__
    return self.evaluate(args, kwargs)
  File "/usr/local/lib/python3.6/dist-packages/autograd/tracer.py", line 48, in f_wrapped
    return f_raw(*args, **kwargs)
  File "/home/michelnowak/.local/lib/python3.6/site-packages/pennylane/qnodes/base.py", line 834, in evaluate
    return_native_type=temp,
  File "/home/michelnowak/.local/lib/python3.6/site-packages/pennylane/_device.py", line 274, in execute
    results.append(self.expval(obs.name, wires, obs.parameters))
  File "/home/michelnowak/.local/lib/python3.6/site-packages/pennylane_sf/simulator.py", line 140, in expval
    ex, var = self._observable_map[observable](self.state, device_wires, par)
  File "/home/michelnowak/.local/lib/python3.6/site-packages/pennylane_sf/expectations.py", line 188, in
    device_wires.labels[0], phi
  File "/home/michelnowak/.local/lib/python3.6/site-packages/strawberryfields/backends/states.py", line 794, in quad_expectation
    rho = self.reduced_dm(mode)
  File "/home/michelnowak/.local/lib/python3.6/site-packages/strawberryfields/backends/states.py", line 637, in reduced_dm
    return np.einsum(indStr, self.dm())
  File "/home/michelnowak/.local/lib/python3.6/site-packages/strawberryfields/backends/states.py", line 546, in dm
    rho = np.einsum(einstr, self.ket(), self.ket().conj())
  File "<__array_function__ internals>", line 6, in einsum
  File "/usr/local/lib/python3.6/dist-packages/numpy/core/einsumfunc.py", line 1350, in einsum
    return c_einsum(*operands, **kwargs)
KeyboardInterrupt

Hope this helps.

@co9olguy (Member) commented Oct 9, 2020

Thanks for bringing this to our attention @MichelNowak1. We will take a deeper look.

We haven't fully determined the cause, but based on the memory size, we're speculating that at some point a mixed state is being created numerically 🤔

@nquesada (Contributor)

@co9olguy is on point. As found by @agran2018, during the calculation SF constructs the reduced density matrix of some of the modes. To do this, it first constructs the total density matrix of the pure state, which for 8 modes and cutoff 4 has (4^8)^2 elements; this is precisely where the massive memory usage happens. As far as I can tell the memory usage is expected, although one could likely come up with a better implementation to reduce it.
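For scale, a quick back-of-the-envelope check of that density-matrix size (a sketch for illustration, not from the thread; it assumes complex128 entries):

# Rough size of the full density matrix built by dm() for this circuit
# (assumption: complex128 entries, 16 bytes each)
cutoff = 4
modes = 8

ket_size = cutoff ** modes      # 4**8 = 65,536 amplitudes in the pure-state ket
dm_elements = ket_size ** 2     # (4**8)**2 = 4,294,967,296 elements in rho
dm_bytes = dm_elements * 16     # complex128 -> 16 bytes per element

print(f"{dm_elements:,} elements ~ {dm_bytes / 2**30:.0f} GiB")
# 4,294,967,296 elements ~ 64 GiB, consistent with the ~65 GB plateau reported above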

@MichelNowak1 (Author)

Thanks for this explanation. I was actually curious about the release and reallocation of memory at each expectation calculation, so I looked into the code. From my understanding, the density matrix should not change between two subsequent expectation calculations, and I am missing the reason why it is recomputed each time, because the ramp-up in memory is clearly due to the call to:

rho = np.einsum(einstr, self.ket(), self.ket().conj())

I compared rho between two consecutive calls and the difference is 0 (with cutoff_dim=3 though, because I only have 128 GB of memory).

So we could store rho and reuse it at each expectation calculation. I did this quickly by adding it as an attribute initialised to None: if it is None, it is computed; if it is not None, the stored value is simply returned. The memory behaviour is now:

[Screenshot 2020-10-13 20:11: mprof memory profile with the cached density matrix]

It also saves a significant amount of simulation time (90 seconds instead of 300).

Of course this is okay for one pass through the circuit, and self.rho should be reinitialised at each circuit call.
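A minimal sketch of this caching idea (illustrative only, not the actual patch; it mimics the ket()/dm() pattern of strawberryfields/backends/states.py seen in the traceback, but the class name and the _dm_cache attribute are hypothetical):

import numpy as np

class CachedFockState:
    """Hypothetical Fock-state wrapper that builds the density matrix only once."""

    def __init__(self, ket, num_modes, cutoff):
        self._ket = np.reshape(np.asarray(ket), [cutoff] * num_modes)
        self._num_modes = num_modes
        self._dm_cache = None  # must be reset to None whenever a new circuit is run

    def ket(self):
        return self._ket

    def dm(self):
        # The expensive einsum now runs once per state instead of once per
        # expectation value returned by the circuit.
        if self._dm_cache is None:
            n = self._num_modes
            ket_idx = [2 * i for i in range(n)]       # even labels for ket axes
            conj_idx = [2 * i + 1 for i in range(n)]  # odd labels for conjugate axes
            out_idx = sorted(ket_idx + conj_idx)      # interleaved rho[i0, j0, i1, j1, ...]
            self._dm_cache = np.einsum(
                self._ket, ket_idx, self._ket.conj(), conj_idx, out_idx
            )
        return self._dm_cache

With this pattern the (4^8)^2-element einsum from the traceback runs once per circuit evaluation rather than once per returned expectation value, which is consistent with the speed-up reported above (90 seconds instead of 300).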

Could you please tell me if I am missing something important about the density matrix calculation? Or would it make sense to avoid recomputing the density matrix between expectation calculations?

Thanks

@co9olguy (Member)

Thanks @MichelNowak1! If it's ok with you, I'm going to transition this over to an issue on the Strawberry Fields GitHub, and we can continue the discussion there 😃
