
Feature: Enable codellama on Intel GPUs #90

Status: Open. Wants to merge 2 commits into base: main.
Conversation

abhilash1910

Motivation:
Thanks for creating this repository. There is an ongoing effort at Intel to enable out-of-the-box runtime support for Code Llama on our XPU/GPU devices. There is also a parallel effort on llama-recipes under discussion between Intel and Meta, and we plan to provide consolidated support so that all Meta research frameworks/models can run on our graphics cards.
Related PR: meta-llama/llama-recipes#116 (llama-recipes support also in progress).
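As context for reviewers, the kind of change proposed here typically boils down to device-agnostic selection so the model can land on an Intel XPU when one is present. The sketch below is illustrative only (it is not the actual patch in this PR); it assumes the XPU backend is exposed as `torch.xpu`, which is what `intel_extension_for_pytorch` provides, and it falls back gracefully when neither PyTorch nor an accelerator is available.

```python
def pick_device() -> str:
    """Return the best available torch device string.

    Preference order: CUDA, then Intel XPU, then CPU. The "xpu"
    attribute on torch is only present once intel_extension_for_pytorch
    has been imported, so we probe for it with hasattr. If torch itself
    is not installed, we simply report "cpu".
    """
    try:
        import torch
    except ImportError:
        return "cpu"

    if torch.cuda.is_available():
        return "cuda"

    # Hypothetical XPU probe: requires intel_extension_for_pytorch,
    # which adds the torch.xpu namespace.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"

    return "cpu"


if __name__ == "__main__":
    # Model-loading code would then do: model.to(pick_device())
    print(pick_device())
```

A change along these lines lets the same launch script run unmodified on CUDA, XPU, or CPU hosts, which is presumably the "out-of-the-box" experience the PR aims for.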

@abhilash1910 (Author)

@brozi @syhw Requesting review. We would also like to discuss a future collaborative development plan for Intel GPU support; it would be great to have a discussion on this. Thanks.
