
fix: full gpu hybrid model #963

Draft
andrei-stoian-zama wants to merge 4 commits into main from chore/optimize_mem_and_runtime_fhe_disable

Conversation

andrei-stoian-zama
Collaborator

No description provided.

@cla-bot cla-bot bot added the cla-signed label Dec 18, 2024
Base automatically changed from llama_fine_tuning to main December 19, 2024 15:33
@andrei-stoian-zama andrei-stoian-zama force-pushed the chore/optimize_mem_and_runtime_fhe_disable branch from 43601e1 to 2a326f3 Compare December 31, 2024 09:42

Coverage failed ❌

Coverage details

---------- coverage: platform linux, python 3.8.18-final-0 -----------
Name                                         Stmts   Miss  Cover   Missing
--------------------------------------------------------------------------
src/concrete/ml/quantization/quantizers.py     326      1    99%   780
src/concrete/ml/torch/hybrid_model.py          191      2    99%   223, 676
src/concrete/ml/torch/lora.py                  146     13    91%   356-394
--------------------------------------------------------------------------
TOTAL                                         8531     16    99%

60 files skipped due to complete coverage.
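
For context, the files flagged in the coverage report (hybrid_model.py, lora.py, quantizers.py) belong to Concrete ML's hybrid FHE model support, and the PR title points at running the hybrid model fully on GPU. The sketch below is a minimal, hypothetical usage example and not code from this PR: the TinyModel class, the n_bits value, and especially the device argument to compile_model are assumptions made for illustration only.

import torch
from concrete.ml.torch.hybrid_model import HybridFHEModel, HybridFHEMode


class TinyModel(torch.nn.Module):
    """Stand-in torch model with a single linear layer to offload."""

    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.linear(x))


model = TinyModel()

# Wrap the sub-module that the hybrid model should handle.
hybrid_model = HybridFHEModel(model, module_names=["linear"])

# Compile on representative data; the device argument is an assumption
# based on the PR title ("full gpu hybrid model"), not a confirmed API.
calibration_data = torch.randn(32, 16)
device = "cuda" if torch.cuda.is_available() else "cpu"
hybrid_model.compile_model(calibration_data, n_bits=8, device=device)

# Run with FHE disabled to sanity-check the wrapped model in the clear,
# which is the mode the touched branch name (fhe_disable) refers to.
hybrid_model.set_fhe_mode(HybridFHEMode.DISABLE)
output = hybrid_model.model(calibration_data)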
