
When using Grok for chat and OpenAI for embeddings, some HTTP requests to openai.com/v1/embeddings work and some fail because the wrong API key is used #484

felixniemeyer opened this issue Dec 18, 2024 · 0 comments
I have basically followed the example in the README.

The graph creation seemed to work. I had to change the embedding size to 1536, for a reason I don't understand yet.
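For context, a quick way to see which dimension the embedding endpoint actually returns is a one-off call like the sketch below (1536 matches OpenAI's text-embedding-3-small / ada-002 models; which default model openai_embedding uses is an assumption on my part):

```python
import asyncio
import os

import numpy as np
from dotenv import load_dotenv

from lightrag.llm import openai_embedding

load_dotenv()

async def check_dim() -> None:
    # One-off embedding call just to inspect the returned dimension.
    vecs: np.ndarray = await openai_embedding(
        ["hello"], api_key=os.getenv("OPENAI_API_KEY")
    )
    print(vecs.shape)  # e.g. (1, 1536)

asyncio.run(check_dim())
```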

Afterwards, I'm trying to run this:

```python
"""
test using LightRAG 
"""
import os

import numpy as np
from dotenv import load_dotenv

from lightrag import LightRAG, QueryParam
from lightrag.llm import openai_complete_if_cache, openai_embedding
from lightrag.utils import EmbeddingFunc

load_dotenv()

WORKING_DIR = "./journal-fragment"

async def llm_model_func(
    prompt, system_prompt=None, history_messages=[], keyword_extraction=False, **kwargs
) -> str:
    return await openai_complete_if_cache(
        os.getenv("CHAT_MODEL"),
        prompt,
        system_prompt=system_prompt,
        history_messages=history_messages,
        api_key=os.getenv("CHAT_API_KEY"),
        base_url=os.getenv("CHAT_BASE_URL"),
        **kwargs
    )

async def embedding_func(texts: list[str]) -> np.ndarray:
    return await openai_embedding(
        texts,
        api_key=os.getenv("OPENAI_API_KEY"),
    )

rag = LightRAG(
    working_dir=WORKING_DIR,
    llm_model_func=llm_model_func,
    embedding_func=EmbeddingFunc(
        embedding_dim=1536,
        max_token_size=8192,
        func=embedding_func
    )
)

prompt = "What are possible ideas to self improve?"

# Perform naive search
print("----- naive search -----")
print(rag.query(prompt, param=QueryParam(mode="naive")))

# Perform local search
print("----- local search -----")
print(rag.query(prompt, param=QueryParam(mode="local")))

# Perform global search
print("----- global search -----")
print(rag.query(prompt, param=QueryParam(mode="global")))

# Perform hybrid search
print("----- hybrid search -----")
print(rag.query(prompt, param=QueryParam(mode="hybrid")))

Now, when trying to query, I sometimes get a 401 from the OpenAI embeddings endpoint, as you can see in the logs:

```
----- naive search -----
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
WARNING:lightrag:No valid chunks found after filtering
Sorry, I'm not able to provide an answer to that question.
----- local search -----
INFO:httpx:HTTP Request: POST https://api.x.ai/v1/chat/completions "HTTP/1.1 200 OK"
INFO:lightrag:kw_prompt result:
json
{
  "high_level_keywords": ["Self improvement", "Personal development", "Growth"],
  "low_level_keywords": ["Goal setting", "Habit formation", "Skill acquisition", "Mindfulness", "Time management"]
}
INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 401 Unauthorized"
Traceback (most recent call last):
...
```

Note that the first embeddings call to OpenAI returns fine (200), and the request to Grok also works. But the second request to OpenAI's embeddings endpoint fails with a 401.

```
...
Error code: 401 - {'error': {'message': 'Incorrect API key provided: xai-Yxfb************************************************************************nQZ1. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

```

It seems that it's trying to use the xAI API key for OpenAI, even though I specified an OpenAI key for the embedding call (see embedding_func in the code above).

Any idea what's going wrong?
Any hint on how to solve this?
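For what it's worth, one workaround I could try is pinning the base URL on the embedding call as well, so the xAI endpoint configured for the chat call can't leak into the embedding client. This is just a sketch, under the assumption that openai_embedding accepts a base_url parameter the way openai_complete_if_cache does:

```python
async def embedding_func(texts: list[str]) -> np.ndarray:
    return await openai_embedding(
        texts,
        api_key=os.getenv("OPENAI_API_KEY"),
        # Assumed parameter: pin the OpenAI endpoint explicitly so the client
        # can't fall back to the xAI base URL used by the chat model.
        base_url="https://api.openai.com/v1",
    )
```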
