[WIP] Add support for OPEA LLMs in Llama-Index #16666
base: main
Conversation
Thanks for this PR. I am one of the contributors to OPEA. One comment: this code assumes that an OPEA microservice has already been deployed somewhere, either locally or in a cloud environment, so the user can pass the service IP to access it. I would suggest mentioning this prerequisite in the documentation.
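For illustration, a minimal sketch of what that documented prerequisite might look like. It assumes the PR exposes an OPEALLM class under llama_index.llms.opea with an api_base-style parameter for the deployed service endpoint; the class name, parameter names, and port are illustrative assumptions, not taken from this PR.

from llama_index.llms.opea import OPEALLM

# Prerequisite: an OPEA LLM microservice is already deployed and reachable,
# either locally or in a cloud environment. Replace host/port with your own.
llm = OPEALLM(
    model="<model_name>",
    api_base="http://<service-ip>:9009/v1",  # hypothetical endpoint of the deployed microservice
)

response = llm.complete("What is OPEA?")
print(response)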
from llama_index.embeddings.opea import OPEAEmbedding

embed_model = OPEAEmbedding(
    model="<model_name>",
This should be model_name=
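A minimal sketch of the corrected snippet, assuming OPEAEmbedding follows the model_name keyword used by llama-index embedding classes (the usage call below is illustrative and requires a running OPEA embedding microservice):

from llama_index.embeddings.opea import OPEAEmbedding

embed_model = OPEAEmbedding(
    model_name="<model_name>",
)

# hypothetical usage against a deployed OPEA embedding service
embedding = embed_model.get_text_embedding("hello world")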
Thanks @logan-markewich! I tested your PR with OPEA microservices. No issues with the LLM, but there is an issue with embeddings. If you want to try it yourself, you can launch the OPEA embedding microservice following one of the options in this README: https://github.com/opea-project/GenAIComps/tree/main/comps/embeddings/tei/llama_index

Here are some checks after launching the OPEA embedding microservice:
- test with TextDoc input
- test with EmbeddingRequest input: the input in EmbeddingRequest is a string

Using your code:
the input of the EmbeddingRequest ends up converting the string to a list. To make it work I had to remove the square brackets in the openai get_embedding function: Line 137 in 5b581a3 (see the sketch after this comment).
Please have a check!
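For reference, a sketch of the change described above in llama-index's OpenAI embedding helper. The file path and exact line may differ from the referenced permalink; the idea is to pass the raw string instead of wrapping it in a list, under the assumption that the OPEA service only accepts a single string as input.

# llama_index/embeddings/openai/base.py (around the referenced line)
def get_embedding(client, text: str, engine: str, **kwargs) -> list[float]:
    """Get one embedding from an OpenAI-compatible endpoint."""
    text = text.replace("\n", " ")
    return (
        client.embeddings.create(
            input=text,  # was: input=[text] -- brackets removed so the service receives a plain string
            model=engine,
            **kwargs,
        )
        .data[0]
        .embedding
    )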
Thanks for testing @rbrugaro -- taking a look
@rbrugaro it seems like the OPEA embeddings are not fully openai-compatible? Only being able to embed a single piece of text at a time is probably not ideal 😅 -- I think I raised a slightly related issue in the opea repo (mentioned above)
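For context, a fully OpenAI-compatible /v1/embeddings endpoint accepts either a single string or a list of strings as input, and llama-index relies on the list form for batched embedding calls. A rough sketch of the two request shapes, using the standard OpenAI client pointed at an illustrative service URL:

from openai import OpenAI

# Point the standard OpenAI client at the embedding service (URL is illustrative).
client = OpenAI(base_url="http://<service-ip>:6000/v1", api_key="not-needed")

# Single string input -- what the OPEA service reportedly accepts today.
single = client.embeddings.create(model="<model_name>", input="hello world")

# List-of-strings input -- what an OpenAI-compatible endpoint is expected to
# support, and what llama-index uses for batched get_text_embedding_batch calls.
batch = client.embeddings.create(model="<model_name>", input=["hello", "world"])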
Description
This PR adds a llama-index wrapper for LLMs hosted by OPEA's GenAIComps library
A few unknowns/notes on my part
New Package?
Did I fill in the tool.llamahub section in the pyproject.toml and provide a detailed README.md for my new integration or package?

Version Bump?
Did I bump the version in the pyproject.toml file of the package I am updating? (Except for the llama-index-core package)

Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Your pull request will likely not be merged unless it is covered by some form of impactful unit testing.