
Fix for issue #213 #289

Merged
merged 3 commits into mit-han-lab:dev on Nov 19, 2024

Conversation


@ankhoa1212 commented Nov 18, 2024

To fix issue #213, it looks like the QuantumDevice needs to be initialized with the batch size (the batch size defaults to 1, which is why the state size was [1, 2] instead of [256, 2]).
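A minimal sketch of that kind of change, assuming the example constructs its device roughly like this; the `bsz` keyword is the torchquantum parameter involved as I understand it, and the surrounding names (`x`, `n_wires`) are illustrative rather than the exact code in the PR:

```python
import torch
import torchquantum as tq

x = torch.randn(256, 4)  # illustrative input batch of size 256
n_wires = 4

# Before: bsz defaults to 1, so the state tensor's leading dimension is 1.
# qdev = tq.QuantumDevice(n_wires=n_wires, device=x.device)

# After: pass the batch size explicitly so the state tensor's leading
# dimension matches the input batch (256 here).
qdev = tq.QuantumDevice(n_wires=n_wires, bsz=x.shape[0], device=x.device)
```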

After this change, there is a NameError for CliffordQuantizer:
[screenshot: NameError traceback for CliffordQuantizer]
which can be fixed by importing it in torchquantum/operator/op_types.py, the file that calls CliffordQuantizer.

After these changes, I believe examples/clifford_qnn/mnist_clifford_qnn.py can run out-of-the-box:
[screenshot: successful run of mnist_clifford_qnn.py]
@Hanrui-Wang @01110011011101010110010001101111

torchquantum/operator/op_types.py
@@ -7,6 +7,7 @@
 from ..macro import C_DTYPE, F_DTYPE
 from typing import Iterable, Union, List
 from enum import IntEnum
+from torchquantum.util.quantization.clifford_quantization import CliffordQuantizer

Is this import necessary here?

Oh, wait, I missed your initial comment.

Can you instead import this lazily, i.e. inside the conditional branch where we actually call CliffordQuantizer? Afterward, I will merge this in!

Collaborator

Thanks for figuring this out! Just a quick question on the extra import then I can merge this in!

@01110011011101010110010001101111 changed the base branch from main to dev on November 18, 2024, 21:10
@01110011011101010110010001101111
Collaborator

Also, don’t worry about the test failures; they are likely from the numpy 2.0 breaking changes. Once we merge, I think we should be good.

@ankhoa1212
Author

Alright, I moved the import into the conditional branch, so it should be good now.
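For reference, a minimal sketch of the lazy-import pattern being described, assuming the Clifford path in op_types.py is guarded by a flag; the flag name and the method called on CliffordQuantizer below are illustrative placeholders, not the exact code in the PR:

```python
def maybe_clifford_quantize(params, clifford_quantization: bool):
    """Apply Clifford quantization only when the flag is set."""
    if clifford_quantization:
        # Lazy import: executed only in the branch that actually uses
        # CliffordQuantizer, so op_types.py needs no module-level import.
        from torchquantum.util.quantization.clifford_quantization import (
            CliffordQuantizer,
        )
        params = CliffordQuantizer.quantize_sse(params)  # illustrative call
    return params
```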

@01110011011101010110010001101111 merged commit 6ff80a8 into mit-han-lab:dev on Nov 19, 2024
@01110011011101010110010001101111
Collaborator

Merged, thanks so much!
