
Optimizing the number of qubits

toncho11 edited this page Jun 20, 2023 · 7 revisions

Problem definition

In order for a pipeline to be executed on a real quantum computer, we need to optimize the number of qubits it uses. Below we explain:

  • how the number of qubits is calculated
  • how to change (usually reduce) the number of qubits needed

If we run QAOA in emulation mode, the simulator always provides the required number of qubits. On a real quantum computer, however, execution will fail if the machine has fewer qubits than the pipeline requests.

Example pipeline

Let's take the QuantumMDMWithRiemannianPipeline pipeline as an example:

QuantumMDMWithRiemannianPipeline(
    convex_metric="mean", quantum=True
)

This is equivalent to the pipeline below:

make_pipeline(
     XdawnCovariances(nfilter=1, estimator="scm", xdawn_estimator="lwf"),
     Whitening(dim_red={"n_components": 2}),
     MDM(metric={"mean": "convex", "distance": "euclid"})
)

The pipeline does the following:

  • XdawnCovariances pre-processes the input data into covariance matrices using XDawn, a spatial filtering method designed to improve the signal to signal-plus-noise ratio
  • Whitening performs unsupervised dimension reduction (similar to PCA, but for SPD matrices)
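The dimension reduction step can be illustrated with a minimal sketch (plain NumPy, not the actual pyriemann implementation): project a 4 x 4 SPD matrix onto its two principal eigenvectors to obtain a 2 x 2 SPD matrix.

```python
import numpy as np

# Illustration only: reduce a 4x4 SPD (covariance) matrix to 2x2 by
# projecting onto the eigenvectors with the largest eigenvalues.
# This mimics the idea of Whitening's dim_red step; the actual
# pyriemann implementation may differ.
rng = np.random.default_rng(42)
A = rng.standard_normal((4, 4))
cov = A @ A.T + 4 * np.eye(4)           # a 4x4 SPD matrix

eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
W = eigvecs[:, -2:]                     # keep the 2 principal directions
reduced = W.T @ cov @ W                 # a 2x2 SPD matrix

print(cov.shape, reduced.shape)  # (4, 4) (2, 2)
```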

Calculating required qubits

The number of qubits depends on 2 factors:

  • the size of the matrices being processed
  • the encoding of the matrices which is required for the QAOA

The size of the square matrices in the pipeline is:

  • 4 x 4 after XdawnCovariances: the matrix size is nfilter * 4, so 1 filter gives 4 x 4 matrices
  • 2 x 2 after Whitening, because n_components is 2. The number of components cannot exceed the number of rows (or columns) of the covariance matrices.
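These sizes can be checked with two one-line helpers (hypothetical names, simply restating the rules above):

```python
def covariance_size(nfilter):
    # Side length of the covariance matrices after XdawnCovariances
    # (the nfilter * 4 rule described above).
    return nfilter * 4

def whitened_size(n_components):
    # Whitening outputs n_components x n_components matrices.
    return n_components

print(covariance_size(1), whitened_size(2))  # 4 2
```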

When QAOA is selected as the optimizer (quantum simulation or real quantum hardware), the optimizer applies an additional transformation: each element of the matrix must be encoded before it can be used in QAOA. The number of qubits needed for this encoding depends on the upper_bound parameter passed to the NaiveQAOAOptimizer.

The number of elements of the final encoded matrix equals the number of qubits required. It is computed as follows:

  1. Take the side length of the matrix after Whitening as n.
  2. Convert upper_bound to binary. By default upper_bound is 7, that is, 111 in binary.
  3. Take the number of binary digits as m (111 = 3 digits).
  4. Multiply the number of elements in the matrix after Whitening (n * n) by m:

total_number_elements = n * n * m

So for nfilter = 1, n_components = 2 and upper_bound = 7:

number of qubits needed = 2 * 2 * 3 = 12
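The four steps above can be wrapped in a small helper (a sketch; required_qubits is a hypothetical name, not a library function):

```python
def required_qubits(n, upper_bound=7):
    """Qubits needed to encode an n x n matrix for QAOA.

    n: side length of the matrix after Whitening (n_components).
    upper_bound: as passed to NaiveQAOAOptimizer; its number of
    binary digits gives the number of qubits per matrix element.
    """
    m = upper_bound.bit_length()  # 7 -> '111' -> 3 digits
    return n * n * m

print(required_qubits(2, 7))  # 2 * 2 * 3 = 12
```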

Reducing the number of qubits required

There are two levers: dimension reduction, which decreases the size of the matrices used in the pipeline, and decreasing the upper_bound parameter. Both can lead to a loss of precision in the data and lower classification performance.

The table below gives some of the possible combinations.

Important: check the total number of elements before running on a real quantum backend. The number of qubits of the backend has to be greater than or equal to the number of elements.

| xdawn filter | size of covariance matrix | whitening components | size after whitening | upper_bound | binary representation | nb qubits per element | total nb of elements (qubits) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 4x4 | 2 | 2x2 | 1 | 1 | 1 | 4 |
| 1 | 4x4 | 2 | 2x2 | 2 | 10 | 2 | 8 |
| 1 | 4x4 | 2 | 2x2 | 7 | 111 | 3 | 12 |
| 1 | 4x4 | 4 | 4x4 | 1 | 1 | 1 | 16 |
| 1 | 4x4 | 4 | 4x4 | 2 | 10 | 2 | 32 |
| 1 | 4x4 | 4 | 4x4 | 7 | 111 | 3 | 48 |
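The combinations in the table can be reproduced from the formula total_number_elements = n * n * m, using Python's int.bit_length() for the number of binary digits:

```python
# Reproduce the table rows: (n_components, upper_bound, binary, m, total).
rows = []
for n_components in (2, 4):
    for upper_bound in (1, 2, 7):
        m = upper_bound.bit_length()     # binary digits, e.g. 7 -> '111' -> 3
        total = n_components ** 2 * m    # total_number_elements = n * n * m
        rows.append((n_components, upper_bound, bin(upper_bound)[2:], m, total))

for row in rows:
    print(row)
```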