
Is this hybrid quantum-classical PINN code correct for solving the Lorenz system? #1847

Open
AHDMarwan opened this issue Sep 29, 2024 · 0 comments


AHDMarwan commented Sep 29, 2024

Hello,

I have written a hybrid quantum-classical physics-informed neural network (PINN) using TensorFlow, DeepXDE, and PennyLane to solve the Lorenz system. The code embeds a quantum layer inside a classical feed-forward neural network to model the system of differential equations. I would appreciate feedback on whether the approach is correct and whether there are any issues or possible improvements in the implementation.

Here’s an overview of the code:

- Lorenz system: a system of differential equations modeled with DeepXDE (see the equations just below this list).
- Neural network: combines classical feed-forward layers with a quantum layer implemented using PennyLane.
- Training: the model is trained with the Adam optimizer, followed by the L-BFGS optimizer.
- Goal: solve the Lorenz system and learn its parameters (C1, C2, C3).
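
For reference, this is my reading of the residuals in `Lorenz_system`, with C1, C2, C3 playing the roles of the usual Lorenz parameters σ, ρ, β:

$$
\begin{aligned}
\dot{y}_1 &= C_1\,(y_2 - y_1),\\
\dot{y}_2 &= y_1\,(C_2 - y_3) - y_2,\\
\dot{y}_3 &= y_1 y_2 - C_3\,y_3.
\end{aligned}
$$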
Code Snippet:
```python
import deepxde as dde
import numpy as np
import tensorflow as tf
import pennylane as qml

C1 = dde.Variable(1.0)
C2 = dde.Variable(1.0)
C3 = dde.Variable(1.0)

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

class QuantumLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(QuantumLayer, self).__init__(**kwargs)
        # Weight tensor read by BasicEntanglerLayers as (n_layers, n_qubits)
        self.weight_shape = (n_qubits, n_qubits)

    def build(self, input_shape):
        self.quantum_weights = self.add_weight(
            shape=self.weight_shape,
            initializer="random_normal",
            trainable=True,
            name="quantum_weights",
        )

    def call(self, inputs):
        @qml.qnode(dev, interface="tensorflow")
        def quantum_circuit(inputs, weights):
            # Encode each input feature as a rotation angle, then entangle the qubits
            qml.templates.AngleEmbedding(inputs, wires=range(n_qubits))
            qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
            return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

        output = tf.convert_to_tensor(quantum_circuit(inputs, self.quantum_weights))
        return tf.reshape(output, (-1, 2))

def Lorenz_system(x, y):
    y1, y2, y3 = y[:, 0:1], y[:, 1:2], y[:, 2:]
    dy1_x = dde.grad.jacobian(y, x, i=0)
    dy2_x = dde.grad.jacobian(y, x, i=1)
    dy3_x = dde.grad.jacobian(y, x, i=2)
    return [
        dy1_x - C1 * (y2 - y1),
        dy2_x - y1 * (C2 - y3) + y2,
        dy3_x - y1 * y2 + C3 * y3,
    ]

def boundary(_, on_initial):
    return on_initial

geom = dde.geometry.TimeDomain(0, 3)

ic1 = dde.icbc.IC(geom, lambda X: -8, boundary, component=0)
ic2 = dde.icbc.IC(geom, lambda X: 7, boundary, component=1)
ic3 = dde.icbc.IC(geom, lambda X: 27, boundary, component=2)

# Load the reference trajectory once and reuse it for the observation BCs
lorenz_data = np.load("/content/drive/MyDrive/Colab Notebooks/dataset/Lorenz.npz")
observe_t, ob_y = lorenz_data["t"], lorenz_data["y"]
observe_y0 = dde.icbc.PointSetBC(observe_t, ob_y[:, 0:1], component=0)
observe_y1 = dde.icbc.PointSetBC(observe_t, ob_y[:, 1:2], component=1)
observe_y2 = dde.icbc.PointSetBC(observe_t, ob_y[:, 2:3], component=2)

data = dde.data.PDE(
    geom,
    Lorenz_system,
    [ic1, ic2, ic3, observe_y0, observe_y1, observe_y2],
    num_domain=400,
    num_boundary=2,
    anchors=observe_t,
)

layer_sizes = [1, 40, 40, 40, 3]
activation = "tanh"
initializer = "Glorot uniform"

class PINNWithQuantumLayer(dde.nn.FNN):
    def __init__(self, layer_sizes, activation, initializer):
        super().__init__(layer_sizes, activation, initializer)
        self.quantum_layer = QuantumLayer()

    def call(self, inputs):
        # Classical feed-forward pass, then the quantum layer on top
        x = super().call(inputs)
        x = self.quantum_layer(x)
        return x

net = PINNWithQuantumLayer(layer_sizes, activation, initializer)

model = dde.Model(data, net)

external_trainable_variables = [C1, C2, C3]
variable = dde.callbacks.VariableValue(
    external_trainable_variables, period=600, filename="variables.dat"
)

model.compile(
    "adam", lr=0.001, external_trainable_variables=external_trainable_variables
)
losshistory, train_state = model.train(iterations=20000, callbacks=[variable])

model.compile("L-BFGS", external_trainable_variables=external_trainable_variables)
losshistory, train_state = model.train(callbacks=[variable])

dde.saveplot(losshistory, train_state, issave=True, isplot=True)
```
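
As context for question 1 below, here is a minimal, hypothetical standalone check (not part of the training run above) that I use to look at the quantum layer in isolation; `layer` and `dummy` are just placeholder names:

```python
import tensorflow as tf

# Hypothetical sanity check: run a small dummy batch through the quantum layer
# on its own, before it is wired into the DeepXDE network.
layer = QuantumLayer()
dummy = tf.random.normal((5, n_qubits))  # batch of 5 samples, one angle per qubit
out = layer(dummy)
print(out.shape)  # the intent is (batch, n_qubits), i.e. (5, 2)
```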

I would appreciate it if someone could verify:

1. Is the integration of the quantum layer with the classical network correct?
2. Does the Lorenz system PDE implementation match the expected formulation?
3. Are there any potential issues with training using both the Adam and L-BFGS optimizers?

Thank you!
