
unlogged error while using groq #794

Open
elchananvol opened this issue Aug 22, 2024 · 4 comments
Comments

@elchananvol

elchananvol commented Aug 22, 2024

The problem:
I'm using Groq as an LLM provider. When running the code below, the following message is printed to the console:
"⚠️ Error in reading JSON, attempting to repair JSON."
In debug mode, I discovered that the real issue is: "Unable to import langchain-groq. Please install with `pip install -U langchain-groq`."
The code:

    from gpt_researcher import GPTResearcher

    async def get_report(query: str, report_type: str, tone) -> str:
        researcher = GPTResearcher(query, report_type, tone)
        research_result = await researcher.conduct_research()
        report = await researcher.write_report()
        return report

Suggestion: In the "choose_agent" function in "action.py", the exception message should be logged.
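The suggested fix could look roughly like the sketch below. The function and variable names are hypothetical (the real body of "choose_agent" differs); the key change is logging the caught exception before falling back to JSON repair, so that an underlying ImportError is no longer hidden:

```python
import json
import logging

logger = logging.getLogger(__name__)

def choose_agent_sketch(call_llm, query: str):
    """Hypothetical sketch of choose_agent's error handling."""
    try:
        # call_llm may raise ImportError when the provider package
        # (e.g. langchain-groq) is not installed.
        response = call_llm(query)
        return json.loads(response)
    except Exception as exc:
        # Log the real cause instead of swallowing it behind the
        # generic "Error in reading JSON" message.
        logger.error(
            "⚠️ Error in reading JSON (%s), attempting to repair JSON.", exc
        )
        return None  # placeholder for the existing JSON-repair path
```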

@danieldekay
Contributor

Did you not get a "Failed to get response from {llm_provider} API" somewhere? If the model cannot be loaded, the error should surface elsewhere, e.g. in create_chat_completion in llm.py, if not already in get_llm.

@elchananvol
Author

> did you not get a "Failed to get response from {llm_provider} API" somewhere? If the model cannot be loaded, the error should be elsewhere, e.g. create_chat_completion in llm.py, if not already in get_llm

Nope, I got the exception directly.

@ElishaKay
Collaborator

ElishaKay commented Aug 27, 2024

Thanks for the heads up @elchananvol

@danieldekay we need to think through how we implement this.

Some context: a week or two ago we sliced the main requirements.txt (which cut the GPTR Docker Image by 87%)

These were the dependencies before the official slicing:

https://github.com/assafelovic/gpt-researcher/blob/5dba221ddf93d2b1f1208e081c4e8aa3a7d2fe55/requirements.txt

Most of the default requirements that were sliced were related to custom LLMs (which were supported by custom langchain libraries) and custom retrievers.

Maybe we should run some logic based on the .env file when the server starts up?

For example, if the .env states that the retriever or LLM is anything other than the default, print an error message with the dependencies that the user should install globally? Or even better, install the required dependency at server startup based on the config in the .env file?
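A minimal sketch of that startup check might look like this. The provider names and the PROVIDER_PACKAGES mapping are assumptions for illustration, not the project's actual config handling:

```python
import importlib.util

# Hypothetical mapping from the provider named in .env to the
# (importable module, pip package) it requires.
PROVIDER_PACKAGES = {
    "groq": ("langchain_groq", "langchain-groq"),
    "anthropic": ("langchain_anthropic", "langchain-anthropic"),
}

def check_llm_dependencies(provider: str) -> None:
    """Fail loudly at startup if the configured provider's package is missing."""
    entry = PROVIDER_PACKAGES.get(provider)
    if entry is None:
        return  # default provider, nothing extra to install
    module_name, pip_name = entry
    if importlib.util.find_spec(module_name) is None:
        raise RuntimeError(
            f"LLM provider '{provider}' requires the '{pip_name}' package. "
            f"Install it with: pip install -U {pip_name}"
        )
```

The server entrypoint could call this once, right after reading the .env file, so users see one actionable error instead of a swallowed ImportError mid-run.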

Happy to hear your thoughts @danieldekay @elchananvol or even better, to see a pull request 🤠

@danieldekay
Contributor

Just an idea.

Poetry could have different groups for different custom models, and the package installation could parse the .env file for the activated model, and then include the corresponding group.
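That idea might look like the following pyproject.toml sketch; the group name and the unpinned version constraint are assumptions, not the project's actual configuration:

```toml
# Hypothetical optional Poetry group for the Groq provider.
[tool.poetry.group.groq]
optional = true

[tool.poetry.group.groq.dependencies]
langchain-groq = "*"
```

An install step could then parse the LLM provider out of the .env file and run, e.g., `poetry install --with groq`.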
