Issues: bitsandbytes-foundation/bitsandbytes
Issues list
Paged optimizer resuming from checkpoint - AttributeError: 'int' object has no attribute 'cpu' (#1381, opened Oct 1, 2024 by shivam15s)
Model architecture is modified when I use BitsAndBytesConfig with default params (#1371, opened Sep 25, 2024 by yunhao-tech)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling cublasGemmEx (#1363, opened Sep 17, 2024 by LukeLIN-web)
quantize_4bit/dequantize_4bit gives wrong output on non-contiguous tensor (#1342, opened Aug 30, 2024 by chenqianfzh)
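The pitfall behind #1342 is that block-wise quantization kernels walk the raw buffer in memory order, so a non-contiguous view (e.g. a transpose) gets read in the wrong element order. A minimal sketch of the same layout issue, using numpy as a stand-in for the CUDA tensor:

```python
import numpy as np

# A transposed view shares the original buffer but strides through it,
# so it is NOT C-contiguous; a kernel walking raw memory in blocks
# would read its elements in the wrong order.
x = np.arange(6, dtype=np.float32).reshape(2, 3)
view = x.T                            # non-contiguous view, shape (3, 2)
assert not view.flags["C_CONTIGUOUS"]

# Workaround: copy into a fresh contiguous buffer before quantizing.
fixed = np.ascontiguousarray(view)
assert fixed.flags["C_CONTIGUOUS"]
assert (fixed == view).all()          # same values, safe memory layout
```

With bitsandbytes, the analogous workaround is calling `.contiguous()` on the PyTorch tensor before passing it to `quantize_4bit`.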
[bug] Linear8bitLt cannot be moved back to CPU (#1332, opened Aug 24, 2024 by Nerogar)
[likely not a BNB issue] Pretrained Causal LM cannot be loaded in 4bit/8bit (#1331, opened Aug 23, 2024 by adrienchaton)
[enhancement] Any plan to support block size 32? (#1329, opened Aug 20, 2024 by lllyasviel)
[question] 'nf4' compute datatype? (#1321, opened Aug 15, 2024 by dorsa-zeinali)
Where are the outliers stored in LLM.int8 quantization for inference using the transformers library on AMD GPU? (#1320, opened Aug 15, 2024 by vbayanag)
About fusion of the kdequantize kernel and a simple bf16/fp16 matmul (#1319, opened Aug 15, 2024 by Ther-nullptr)
[enhancement] [Low Risk: risk of bugs in transformers and other libraries] Communicate blocksize constraints to kernels that take blocksize as a runtime argument (#1317, opened Aug 14, 2024 by mm04926412)
[CUDA Setup] [waiting for info] Unable to override PyTorch CUDA version (#1315, opened Aug 13, 2024 by tinglvv)
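For #1315-style version mismatches, bitsandbytes supports overriding which prebuilt CUDA binary it loads via the `BNB_CUDA_VERSION` environment variable. A sketch; the value `122` (CUDA 12.2) is illustrative and must correspond to a `libbitsandbytes_cuda*` binary actually shipped with your install:

```shell
# Force bitsandbytes to load its CUDA 12.2 build instead of the version
# it autodetects from PyTorch. 122 is an example value.
export BNB_CUDA_VERSION=122
echo "BNB_CUDA_VERSION=$BNB_CUDA_VERSION"
```

Set this before importing bitsandbytes in Python; it only changes which bundled binary is loaded, not which CUDA runtime is installed.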
RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information: python -m bitsandbytes. Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues (#1307, opened Aug 6, 2024 by senzawapoi)
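A sketch of the diagnosis steps that error message asks for. The CUDA path below is an example; point `LD_LIBRARY_PATH` at wherever your CUDA runtime libraries actually live, and the diagnostic only runs if the package is importable:

```shell
# Make the CUDA runtime libraries findable (example path; adjust to
# your installation, e.g. the output of `ldconfig -p | grep cudart`).
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:${LD_LIBRARY_PATH}"

# Run the built-in diagnostic the error message refers to, if possible.
python -c "import bitsandbytes" 2>/dev/null \
  && python -m bitsandbytes \
  || echo "bitsandbytes not importable in this environment"
```

If the diagnostic still cannot locate CUDA, its output is what the maintainers ask to see in a new issue.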
bitsandbytes 8-bit quantized Llama 3.1 gets stuck sometimes when producing output (#1304, opened Aug 5, 2024 by Techbhatia)