Performance benchmark result of adjoint Jacobian in #562 #587

Open · rht opened this issue Jan 6, 2024 · 1 comment

rht commented Jan 6, 2024

Issue description

Here is a concrete benchmark result for #562, where I found the hybrid Python-C++ code in that PR (~40 LOC of Python) to be ~2x faster than the full C++ adjoint-Jacobian implementation. I think this means there is room for improvement in the C++ implementation.

I'm raising this issue because I had spent time figuring out how to speed up the adjoint code, and I think it would be a missed opportunity if this went unexamined. A review would be appreciated!

# First method (Hybrid Python-C++)
Elapsed prepare final version 0.2567005157470703
elapsed grad final version 0.2276167869567871
first last -6.487318906392218e-05 -3.165823172952863e-06
Elapsed python 0.5076043605804443

# Second method (Builtin C++ Adjoint Jacobian)
elapsed prepare diagonal observable 0.29276108741760254
Elapsed diagonal observable grad only 0.4546031951904297
first last -6.487318906392219e-05 -3.165823172952862e-06
Elapsed diagonal observable sparse 0.8442294597625732

The elapsed times to compare are "elapsed grad final version" (0.228 s) and "Elapsed diagonal observable grad only" (0.455 s): the hybrid version computes the gradient roughly 2x faster.

Code to reproduce the benchmark result (requires the L2Loss helper from #562):

import time

import pennylane as qml
from pennylane import numpy as np

# L2Loss is the hybrid Python-C++ loss helper introduced in PR #562.
from pennylane_lightning.lightning_gpu.lightning_gpu import L2Loss

np.random.seed(0)
num_wires = 22
complex_type = np.complex128
real_type = np.float64

target_probs = np.random.rand(2**num_wires).astype(real_type)
target_probs /= np.sum(target_probs)

params = np.random.rand(2 * num_wires, requires_grad=True).astype(real_type)

dev_gpu = qml.device("lightning.gpu", wires=num_wires, c_dtype=complex_type)

# First method (Hybrid Python-C++)
def grad_fn_python(params):
    # Queue the circuit on a tape by hand; L2Loss (from PR #562) plays the
    # role of the measurement.
    with qml.tape.QuantumTape() as tape:
        _ = [qml.RX(params[i], wires=i) for i in range(num_wires)]
        _ += [qml.CZ(wires=(i, i + 1)) for i in range(num_wires - 1)]
        _ += [qml.RX(params[i + num_wires], wires=i) for i in range(num_wires)]
        L2Loss(target_probs)
    return dev_gpu.adjoint_jacobian(tape)


tic = time.time()
g = grad_fn_python(params)
toc = time.time()
print("first last", g[0], g[-1])
print("Elapsed python", toc - tic)


# Second method (Builtin C++ Adjoint Jacobian)
def common_h(h):
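    # Wrap the same circuit in a QNode that measures the given observable h,
    # differentiated with the device's built-in C++ adjoint method.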
    @qml.qnode(dev_gpu, diff_method="adjoint")
    def c(params):
        _ = [qml.RX(params[i], wires=i) for i in range(num_wires)]
        _ += [qml.CZ(wires=(i, i + 1)) for i in range(num_wires - 1)]
        _ += [qml.RX(params[i + num_wires], wires=i) for i in range(num_wires)]
        return qml.expval(h)

    return c


def get_prob(x):
    # Born-rule probabilities from the statevector amplitudes.
    return np.abs(x) ** 2


dev_gpu_for_eval = qml.device("lightning.gpu", wires=num_wires, c_dtype=complex_type)


@qml.qnode(dev_gpu_for_eval, diff_method="finite-diff")
def circuit_gpu(params):
    # Same circuit as above; only the forward pass (qml.state()) is used,
    # so the diff_method choice is irrelevant here.
    _ = [qml.RX(params[i], wires=i) for i in range(num_wires)]
    _ += [qml.CZ(wires=(i, i + 1)) for i in range(num_wires - 1)]
    _ += [qml.RX(params[i + num_wires], wires=i) for i in range(num_wires)]
    return qml.state()


def adjoint_l2_sparse(params):
    from scipy.sparse import diags

    wires = range(num_wires)
    ket = circuit_gpu(params)
    tic = time.time()
    obs = -2 * (target_probs - get_prob(ket))
    hmat = diags(obs, format="csr")
    # Note: the effective observable for the L2 loss is diagonal in the
    # computational basis, so it can be expressed as a sparse Hamiltonian.
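    # Derivation: with L = sum_i (t_i - p_i)^2 and p_i = |<i|psi(theta)>|^2,
    # dL/dtheta_j = sum_i -2 (t_i - p_i) dp_i/dtheta_j, which is the gradient
    # of <psi|H|psi> with H = diag(-2 (t - p)) held fixed at the current p.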
    h = qml.SparseHamiltonian(hmat, wires)
    print("elapsed prepare diagonal observable", time.time() - tic)

    c = common_h(h)
    tic = time.time()
    out = qml.grad(c)(params)
    print("Elapsed diagonal observable grad only", time.time() - tic)
    return out


print()
tic = time.time()
g = adjoint_l2_sparse(params)
toc = time.time()
print("first last", g[0], g[-1])
print("Elapsed diagonal observable sparse", toc - tic)
trbromley (Contributor) commented Jan 8, 2024

Thanks @rht. This is useful info, and squeezing the most performance out of the lightning suite is a top priority!

For the 0.35 release (end of February), we're planning to have lightning.gpu tap into our new JVP/VJP pipeline by default. At that point, I'd be curious to compare these numbers again. We'll keep you posted as we make progress on this.
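
(For reference, a minimal sketch of the transform-level VJP API that exists in
PennyLane today, qml.gradients.vjp, with param_shift standing in for whichever
gradient transform the pipeline selects; the device-default adjoint entry
point may be wired up differently.)

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("lightning.qubit", wires=2)

# A tiny tape: one trainable parameter, one expectation value.
ops = [qml.RX(0.4, wires=0), qml.CNOT(wires=[0, 1])]
tape = qml.tape.QuantumTape(ops, [qml.expval(qml.PauliZ(1))])

dy = np.array(1.0)  # cotangent to pull back through the circuit
vjp_tapes, fn = qml.gradients.vjp(tape, dy, qml.gradients.param_shift)
vjp = fn(qml.execute(vjp_tapes, dev))
print(vjp)  # ~ -sin(0.4), i.e. d<Z_1>/dtheta contracted with dy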
