GPFQ #666

Merged
12 commits merged into Xilinx:dev on Sep 27, 2023

Conversation

Giuseppe5 (Collaborator)

No description provided.

src/brevitas/graph/gpxq.py: two review threads (outdated, resolved)
@volcacius (Contributor) left a comment:

We should potentially account for updating float layers (e.g., the last layer left unquantized) based on the activation quantization error at their input.
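A minimal sketch of one way this suggestion could be realized, assuming a plain linear float layer; the function name and the least-squares compensation step are hypothetical illustrations, not part of this PR or of the Brevitas API:

```python
# Hypothetical sketch (not Brevitas code): measure the effect of activation
# quantization error at the input of a float layer and compensate the float
# weights so the layer gives similar outputs when fed quantized activations.
import torch

def compensate_float_linear(weight: torch.Tensor,
                            x_float: torch.Tensor,
                            x_quant: torch.Tensor) -> torch.Tensor:
    """weight: (out_features, in_features); x_float/x_quant: (num_samples, in_features)."""
    # Reference outputs computed with the original float activations.
    y_ref = x_float @ weight.t()                       # (num_samples, out_features)
    # Least-squares fit: find W_new such that x_quant @ W_new.T ~= y_ref.
    w_new_t = torch.linalg.lstsq(x_quant, y_ref).solution
    return w_new_t.t()                                 # (out_features, in_features)
```

The activation quantization error itself is simply `x_quant - x_float`; the least-squares step folds that error back into the float layer's weights.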

@volcacius (Contributor) left a comment:

For act_order, we could consider weight × activation magnitude as the ordering criterion.

@Giuseppe5 (Collaborator, Author) replied:

> For act_order, we could consider weight × activation magnitude as the ordering criterion.

We need to define an order in which to quantize the input channels. When multiplying weight × activation, the input channel is the inner dimension of the matmul, which means that we "lose" that per-channel information in the product.
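A small illustration (hypothetical code, not from this PR) of the point: the matmul contracts over the input-channel dimension, so any per-input-channel ordering metric, such as the suggested weight × activation magnitude, has to be computed from the factors before the product, e.g. from per-channel norms:

```python
# Hypothetical illustration, not Brevitas code.
import torch

num_samples, in_ch, out_ch = 128, 64, 32
x = torch.randn(num_samples, in_ch)           # activations, one row per sample
w = torch.randn(out_ch, in_ch)                # weight matrix

# The matmul sums over in_ch, so the output carries no per-input-channel signal.
y = x @ w.t()                                 # (num_samples, out_ch)

# A per-input-channel criterion must therefore be built from the factors directly,
# e.g. activation norm times weight norm per input channel.
act_norm = x.norm(dim=0)                      # (in_ch,)
weight_norm = w.norm(dim=0)                   # (in_ch,) norm taken over output channels
score = act_norm * weight_norm                # (in_ch,)
perm = torch.argsort(score, descending=True)  # candidate act_order permutation
```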

@Giuseppe5 merged commit 3b7b9c7 into Xilinx:dev on Sep 27, 2023
13 of 22 checks passed