llm of DatasetGenerator #17355
Replies: 1 comment
-
Hey @xinshuaiiii! I'm here to help you with any bugs, questions, or contributions while you wait for a human maintainer. Let's tackle this together!

To apply the LLM from OpenAI to `DatasetGenerator`, you can wrap your OpenAI client in a class and pass an instance of it as the `llm` argument:
```python
from llama_index.core.llms.llm import LLM

# Assuming you have a function or class that wraps your OpenAI LLM
class OpenAILLM(LLM):
    def __init__(self, client):
        self.client = client

    def generate(self, prompt):
        response = self.client.chat.completions.create(
            model="xxxxx",
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
            max_tokens=800,
            top_p=0.95,
        )
        # The v1 OpenAI client returns an object, not a dict,
        # so use attribute access rather than response['choices'][...]
        return response.choices[0].message.content

# Initialize your OpenAI LLM
openai_llm = OpenAILLM(client)

# Create a DatasetGenerator instance with the OpenAI LLM
dataset_generator = DatasetGenerator.from_documents(
    [doc],
    question_gen_query=QUESTION_GEN_PROMPT,
    llm=openai_llm,  # Pass the OpenAI LLM instance
    show_progress=True,
    num_questions_per_chunk=1,
)
```

This example demonstrates how to wrap your OpenAI LLM in a class that can be used with `DatasetGenerator`.
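That said, LlamaIndex ships its own OpenAI wrapper that already implements the `LLM` interface `DatasetGenerator` expects, so a custom class is often unnecessary. A minimal sketch, assuming the `llama-index-llms-openai` package is installed and that `doc` and `QUESTION_GEN_PROMPT` are defined as in your snippet:

```python
from llama_index.core.evaluation import DatasetGenerator
from llama_index.llms.openai import OpenAI

# LlamaIndex's built-in wrapper handles the chat-completions calls itself.
llm = OpenAI(
    model="gpt-4o-mini",   # assumption: substitute your actual model name
    temperature=0.7,
    max_tokens=800,
    api_key="...",         # or set the OPENAI_API_KEY environment variable
    # api_base="...",      # uncomment for an OpenAI-compatible endpoint
)

dataset_generator = DatasetGenerator.from_documents(
    [doc],
    question_gen_query=QUESTION_GEN_PROMPT,
    llm=llm,
    show_progress=True,
    num_questions_per_chunk=1,
)
```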
-
```python
from openai import OpenAI

client = OpenAI(
    base_url="",
    api_key="",
)
message_text = [{"role": "user", "content": "what's 1+1? Answer in one word."}]
llm = client.chat.completions.create(
    model="xxxxx",
    messages=message_text,
    temperature=0.7,
    max_tokens=800,
    top_p=0.95,
)
dataset_generator = DatasetGenerator.from_documents(
    [doc],
    question_gen_query=QUESTION_GEN_PROMPT,
    llm=llm,
    show_progress=True,
    num_questions_per_chunk=1,
)
```
How do I apply the LLM from OpenAI to `DatasetGenerator`?