Quantum optimized Riemannian mean

The MDM algorithm computes a "mean" for each class during its training phase: the barycenter of all covariance matrices (which represent the input signal) belonging to that class. The idea here is to formulate the calculation of this mean as an optimization problem. A quantum-optimized mean may provide better classification performance, especially in cases where the classical computation fails.
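
As an illustration of this formulation, here is a minimal NumPy sketch, independent of pyRiemann-qiskit: with the Frobenius distance, the barycenter minimizing the sum of squared distances to the matrices of a class is simply their arithmetic mean.

import numpy as np

rng = np.random.default_rng(42)
A = rng.random((10, 3, 3))
covs = A @ A.transpose(0, 2, 1)  # 10 random symmetric positive matrices

barycenter = covs.mean(axis=0)  # closed-form Frobenius mean

# The sum of squared Frobenius distances is minimal at the barycenter
def cost(X):
    return sum(np.linalg.norm(X - C, ord='fro') ** 2 for C in covs)

assert cost(barycenter) <= cost(barycenter + 0.01)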

To calculate the mean we need to select a "distance" and provide an optimizer. For the distance, we provide a convex model based on the Frobenius distance (Python function fro_mean_convex). For the optimizer, pyRiemann-qiskit relies on Qiskit's implementation of QAOA (Quantum Approximate Optimization Algorithm). To test the convex model, we also provide a wrapper around a classical optimizer (CobylaOptimizer, also included within Qiskit).
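
For illustration, a direct call could look like the sketch below. The module paths and the optimizer keyword are assumptions based on the pyRiemann-qiskit API at the time of writing; refer to the API documentation for the authoritative signatures.

# Sketch only: module paths and the `optimizer` keyword are assumptions
import numpy as np
from pyriemann_qiskit.utils.mean import fro_mean_convex  # assumed location
from pyriemann_qiskit.utils.docplex import ClassicalOptimizer  # wraps CobylaOptimizer

rng = np.random.default_rng(0)
A = rng.random((5, 2, 2))
covs = A @ A.transpose(0, 2, 1)  # a small stack of covariance-like matrices

# Classical (COBYLA) optimization of the convex Frobenius-mean model
mean = fro_mean_convex(covs, optimizer=ClassicalOptimizer())
print(mean.shape)  # (2, 2)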

pyRiemann-qiskit uses a wrapper around the QAOA optimizer (class NaiveQAOAOptimizer) that rounds covariance matrices to a certain precision and converts each resulting integer to binary. The implementation is based on Qiskit's IntegerToBinary, a bounded-coefficient encoding method. This step is necessary because QAOA is limited to solving QUBO problems, that is, problems with unconstrained and binary variables only; binary variables mean that a matrix can contain only 0 and 1 as values.
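
The rounding and encoding steps can be illustrated with plain NumPy (the precision value is an arbitrary choice for this sketch):

import numpy as np

C = np.array([[1.234, 0.567],
              [0.567, 2.891]])

precision = 100  # keep two decimal places (arbitrary choice)
C_int = np.round(C * precision).astype(int)  # integer-valued matrix
# array([[123,  57],
#        [ 57, 289]])

# Each integer variable is then expanded into binary variables: e.g. 289
# needs ceil(log2(289 + 1)) = 9 bits, which is roughly what IntegerToBinary
# does (with bounded coefficients) before QAOA can treat the problem as a QUBO
n_bits = int(np.ceil(np.log2(C_int.max() + 1)))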

The complexity of the QAOA optimizer grows with the size of the covariance matrices and with the upper-bound coefficient. The size of the covariance matrices depends on the number of channels in the input time epochs, as well as on the dimension-reduction method in place. The upper-bound coefficient also affects the final size of the binarized matrices: if all variables inside a matrix are integers that can take only 4 values, each can be represented by just 2 bits, so the binary matrix is only twice as large. A high upper bound, however, requires more qubits to hold each variable, and the final size of the binary matrix grows accordingly.
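
As a back-of-the-envelope estimate (a sketch assuming one qubit per binary variable and one variable per entry of the upper triangle of a symmetric matrix; the helper name is hypothetical):

import math

def estimated_qubits(n_channels, upper_bound):
    # bits needed to encode one integer variable in [0, upper_bound]
    bits_per_var = math.ceil(math.log2(upper_bound + 1))
    # a symmetric n x n matrix has n * (n + 1) / 2 free entries
    n_vars = n_channels * (n_channels + 1) // 2
    return n_vars * bits_per_var

print(estimated_qubits(3, 3))    # 6 variables x 2 bits = 12 qubits
print(estimated_qubits(3, 255))  # 6 variables x 8 bits = 48 qubits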

Here is an example. The helpers get_covmats and get_labels are test utilities (as in the pyRiemann test suite) that generate random SPD matrices and matching labels:

import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from pyriemann.classification import MDM
from pyriemann.estimation import XdawnCovariances
from pyriemann.utils.distance import distance_methods

# Use the convex mean and distance in both phases of the MDM algorithm
metric = {
    'mean': "convex",
    'distance': "convex"
}

# Register the Frobenius distance under the "convex" key
distance_methods["convex"] = lambda A, B: np.linalg.norm(A - B, ord='fro')

clf = make_pipeline(XdawnCovariances(), MDM(metric=metric))
skf = StratifiedKFold(n_splits=5)

# Random SPD matrices and labels, via the test helpers mentioned above
n_matrices, n_channels, n_classes = 100, 3, 2
covset = get_covmats(n_matrices, n_channels)
labels = get_labels(n_matrices, n_classes)

score = cross_val_score(clf, covset, labels, cv=skf, scoring='roc_auc')
assert score.mean() > 0  # smoke test: the pipeline runs end to end

If the MDM method is supplied with the "convex" metric, it automatically uses the fro_mean_convex method to compute the mean. The default optimizer of fro_mean_convex is the Cobyla optimizer.
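
To run the same convex model on the quantum optimizer instead, the optimizer can be swapped when calling fro_mean_convex directly. This is a sketch only: the module paths, the optimizer keyword, and the upper_bound parameter of NaiveQAOAOptimizer are assumptions to be checked against the API documentation.

# Sketch only: module paths and parameters are assumptions
from pyriemann_qiskit.utils.docplex import NaiveQAOAOptimizer  # assumed location
from pyriemann_qiskit.utils.mean import fro_mean_convex  # assumed location

# QAOA-based optimization of the convex Frobenius-mean model; the upper
# bound drives the bounded-coefficient integer-to-binary encoding
mean = fro_mean_convex(covset, optimizer=NaiveQAOAOptimizer(upper_bound=7))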