LLMs Deployed on AzureMLStudio #8066
---
🤖 Hi,

To integrate the llama_2 and gpt-3.5 models you deployed on Azure with LlamaIndex, create an `AzureOpenAI` instance for each deployment:

```python
from llama_index.llms.azure_openai import AzureOpenAI

# Create an instance of AzureOpenAI for the llama_2 model
llama_2 = AzureOpenAI(
    model="llama_2",
    engine="your_llama_2_deployment_name",
    api_key="your_api_key",
    api_base="https://YOUR_RESOURCE_NAME.openai.azure.com/",
    api_type="azure",
    api_version="2023-05-15",
)

# Create an instance of AzureOpenAI for the gpt-3.5 model
gpt_35 = AzureOpenAI(
    model="gpt-3.5",
    engine="your_gpt_3.5_deployment_name",
    api_key="your_api_key",
    api_base="https://YOUR_RESOURCE_NAME.openai.azure.com/",
    api_type="azure",
    api_version="2023-05-15",
)
```

Please replace `your_llama_2_deployment_name`, `your_gpt_3.5_deployment_name`, `your_api_key`, and `YOUR_RESOURCE_NAME` with the deployment names, API key, and resource name from your Azure setup. Remember to set the corresponding environment variables if you prefer not to pass credentials in code. You can find more information about the `AzureOpenAI` class in the LlamaIndex documentation.

Let me know if you have any other questions.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
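As a follow-up to the snippet above, here is a minimal sketch of how the resulting `llama_2` and `gpt_35` instances might be used with LlamaIndex. It assumes a LlamaIndex version contemporary with this discussion (where `ServiceContext` is still available) and a local `./data` directory containing documents; those details are illustrative assumptions, not taken from the original reply.

```python
from llama_index import ServiceContext, SimpleDirectoryReader, VectorStoreIndex

# Direct completion against the llama_2 deployment
response = llama_2.complete("Summarize what Azure ML Studio is in one sentence.")
print(response.text)

# Use the gpt-3.5 deployment as the default LLM for an index, then query it
service_context = ServiceContext.from_defaults(llm=gpt_35)
documents = SimpleDirectoryReader("./data").load_data()  # assumes ./data exists
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
query_engine = index.as_query_engine()
print(query_engine.query("What do these documents cover?"))
```

In newer LlamaIndex releases the same idea applies, but the default LLM is set through the global `Settings` object rather than `ServiceContext`.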
---
Hi,
I deployed the llama_2 and gpt-3.5 models on Azure ML Studio, and now I want to use those models with LlamaIndex.
With LangChain it was quite easy; can someone help me with how to use them with LlamaIndex?