[Feature][model] llama.cpp support new GGUF file format #567

Closed · fangyinc opened this issue Sep 8, 2023 · 2 comments · Fixed by #649
Labels: enhancement (New feature or request)

Comments

@fangyinc (Collaborator) commented Sep 8, 2023

Search before asking

  • I had searched the existing issues and found no similar feature request.

Description

See: GGUF, the new model file format introduced by llama.cpp to supersede GGML.
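
For context, loading a GGUF model through the llama-cpp-python bindings typically looks like the sketch below. The model path, prompt, and parameters are illustrative placeholders and are not taken from DB-GPT's actual loader code; a GGUF-capable llama-cpp-python release is assumed.

```python
# Minimal sketch of loading a GGUF model with llama-cpp-python.
# Paths and parameters are illustrative placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=2048,      # context window size
    n_gpu_layers=0,  # set > 0 to offload layers if built with GPU support
)

output = llm(
    "Q: What is the capital of France? A:",
    max_tokens=32,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])
```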

Use case

No response

Related issues

No response

Feature Priority

Medium

Are you willing to submit a PR?

  • Yes I am willing to submit a PR!
fangyinc added the enhancement (New feature or request) and Waiting for reply labels and removed the Waiting for reply label on Sep 8, 2023
@paul-yangmy commented

mark

@Ananderz commented Oct 3, 2023

Could be fixed by implementing ctransformers
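
As a reference for the suggestion above, a minimal sketch of how ctransformers loads a GGUF file is shown below. The Hugging Face repository and file names are placeholders for illustration, not a tested DB-GPT integration.

```python
# Minimal sketch of GGUF loading via ctransformers, as suggested above.
# The Hugging Face repo and file name are placeholders.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-Chat-GGUF",           # placeholder HF repo containing GGUF files
    model_file="llama-2-7b-chat.Q4_K_M.gguf",  # which quantized GGUF file to load
    model_type="llama",                        # architecture hint for ctransformers
)

print(llm("AI is going to"))
```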

Aries-ckt added a commit that referenced this issue Oct 7, 2023
Close #567 
Close #644
Close #563

**Other**
- Fix an exception raised when stopping DB-GPT