I was surprised to find that the MNIST example takes in images (referred to as `x` in the `QFCModel` code) in a `[256, 16]` format, where 256 is the batch size and 16 is the downsampled, flattened MNIST image. Why does the quantum circuit take its examples in `[1, 16]` format? And how can the gates process this format if they are limited to four wires? Is every element of a given image vector fed in on the same wire, or is the image truncated to the first row of its pixels?

I tried to figure out exactly how the gates process these images, but when I tried to print any changes to the input `x` as it progressed through the encoding and the quantum layer, `torch.all(torch.eq(x, old_x))` always returned `True`. Is `x` not processed until we call the `measure` method? If not, how do I correctly evaluate changes between simulation steps?
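For what it's worth, my understanding (an assumption about how this style of encoder typically works, not a claim about TorchQuantum's exact internals) is that the 16 pixel values are consumed as rotation *angles*, spread over the 4 wires across several layers, and the evolving quantum state lives in a separate state tensor on the quantum device, never in `x` itself. That would explain why `x` compares equal before and after. Here is a minimal plain-PyTorch sketch of that idea; the encoding order (`i % n_wires`) and the use of RY gates are illustrative choices, not the example's actual circuit:

```python
import torch

def ry(theta):
    # Single-qubit RY rotation matrix for angle theta.
    c, s = torch.cos(theta / 2), torch.sin(theta / 2)
    return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

def apply_single_qubit(state, gate, wire, n_wires):
    # View the 2^n amplitude vector as an n-axis tensor, contract the
    # gate against the target wire's axis, then flatten back.
    state = state.reshape([2] * n_wires)
    state = torch.movedim(state, wire, 0)
    state = torch.tensordot(gate, state, dims=([1], [0]))
    state = torch.movedim(state, 0, wire)
    return state.reshape(-1)

torch.manual_seed(0)
n_wires = 4
x = torch.rand(16)            # one flattened 4x4 image; values become angles
old_x = x.clone()

state = torch.zeros(2 ** n_wires)
state[0] = 1.0                # start in |0000>

# 16 angles -> 4 layers of rotations over 4 wires (hypothetical ordering).
for i, angle in enumerate(x):
    state = apply_single_qubit(state, ry(angle), i % n_wires, n_wires)

print(torch.all(torch.eq(x, old_x)))   # tensor(True): x only supplies angles
print(state[:4])                       # the device state is what changed
```

So to watch the simulation evolve, the thing to print between steps is the device's state tensor (here `state`), not `x`.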
From my understanding, the algorithm works by using a randomly generated, trainable quantum circuit as a convolution through which groups of pixels are slightly perturbed. Effectively, the only quantum part is the convolution (hence "quanvolution"). There are certainly more fully quantum algorithms you could explore (e.g., QCNN)!
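The idea above can be sketched classically. In this toy version (my own illustration, not the example's actual circuit), each non-overlapping 2x2 patch of the image is fed through a "quantum kernel": the 4 pixel values become RY angles on 4 wires, and the per-wire Pauli-Z expectations become 4 output channels. Since RY(θ) applied to |0⟩ gives ⟨Z⟩ = cos(θ), the kernel collapses to a cosine here; a real quanvolution layer would use a random trainable circuit instead:

```python
import torch

def quantum_kernel(patch):
    # Toy stand-in for a random trainable circuit: encode the 4 pixels
    # of a 2x2 patch as RY angles on 4 wires, then measure <Z> per wire.
    # For RY(theta) on |0>, <Z> = cos(theta), so this is exact for this
    # trivial circuit (illustrative only).
    return torch.cos(patch)

def quanvolve(img):
    # Slide a non-overlapping 2x2 window; each patch yields 4 channels.
    h, w = img.shape
    out = torch.zeros(4, h // 2, w // 2)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            patch = img[i:i + 2, j:j + 2].reshape(-1)
            out[:, i // 2, j // 2] = quantum_kernel(patch)
    return out

img = torch.rand(4, 4)        # a tiny "image"
features = quanvolve(img)
print(features.shape)          # torch.Size([4, 2, 2])
```

The resulting feature maps are then handed to an ordinary classical network, which is why the quantum part is confined to the convolution step.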