Feature Request: Support for Multiple Simultaneous LLM AI API Endpoints for Self-Hosting and Model Selection #34
Comments
If using text-generation-webui locally, it'd be great to be able to switch between models without having to host multiple models simultaneously. The server lets you get models and load models, so the flow could be: get the loaded model; if the loaded model equals the listed model, continue; else load the desired model for the agent (see the sketch below).
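A minimal sketch of that check-then-load flow, assuming a text-generation-webui-style local HTTP API; the endpoint paths and payload fields here are assumptions for illustration, so check your server's API docs:

```python
# Check which model the local server currently has loaded and load the
# agent's desired model only if it differs. The endpoint paths and the
# {"action": "load", ...} payload shape are assumed, not confirmed.
import requests

API_BASE = "http://localhost:5000/api/v1"  # assumed local API address

def ensure_model_loaded(desired_model: str) -> None:
    # Ask the server which model is currently loaded (assumed endpoint).
    current = requests.get(f"{API_BASE}/model").json().get("result")
    if current == desired_model:
        return  # already loaded, continue
    # Otherwise ask the server to swap in the desired model for this agent.
    requests.post(f"{API_BASE}/model", json={"action": "load", "model_name": desired_model})

ensure_model_loaded("TheBloke/Llama-2-13B-GPTQ")  # example model name
```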
Wow, I deeply appreciate your work. I am looking for a way to implement something similar, such as connecting AutoGen to an LLM endpoint like Qwen-turbo's online service. May I join your team, or at least do something for you guys?
Same! Happy to help.
@taoyiran So am I! Supporting the Qwen online service is important to me. Feel free to contact me anytime if you need assistance.
@ImagineL Glad to see your message! I am now studying this project and trying to connect AutoGen to Qwen's online service. I will post my status and, if possible, my code here. Thanks, everyone!
@taoyiran I'm looking forward to your code! I analyzed the source code, and it seems hard to resolve without modifying it. Good luck!
While looking for how to do this, I found this thread, and I also found the answer: you can just set multiple configurations. If I understand the feature request correctly, this feature is already implemented (a sketch follows).
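For anyone landing here, this is roughly what "multiple configurations" looks like; the field names follow the 2023-era pyautogen OAI_CONFIG_LIST format (api_base), and the Qwen/local entries assume OpenAI-compatible endpoints, so treat the URLs, keys, and model names as placeholders:

```python
# Each dict in config_list points at a different model/endpoint; an agent
# gets whichever list you hand it, so different agents can use different
# endpoints. Keys, URLs, and model names below are placeholders.
import autogen

config_list = [
    {"model": "gpt-4", "api_key": "sk-..."},  # OpenAI-hosted
    {"model": "qwen-turbo", "api_key": "...", "api_base": "https://example-qwen-endpoint/v1"},  # assumed OpenAI-compatible
    {"model": "local-llama", "api_key": "none", "api_base": "http://localhost:5000/v1"},  # self-hosted server
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},  # entries are tried in order, falling back on failure
)
```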
@2good4hisowngood @taoyiran @Pakmandesign @ImagineL @weldonla you can do this using the LiteLLM Proxy Server. Here's the quick start. Docs: https://docs.litellm.ai/docs/simple_proxy#load-balancing---multiple-instances-of-1-model

Step 1: Create a config.yaml:

```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_version: "2023-05-15"
      api_key:
  - model_name: gpt-4
    litellm_params:
      model: azure/gpt-4
      api_key:
      api_base: https://openai-gpt-4-test-v-2.openai.azure.com/
  - model_name: gpt-4
    litellm_params:
      model: azure/gpt-4
      api_key:
      api_base: https://openai-gpt-4-test-v-2.openai.azure.com/
```

Step 2: Start the LiteLLM proxy:
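The exact command wasn't captured above; per the linked LiteLLM docs it's the `litellm` CLI pointed at the config, something like `litellm --config /path/to/config.yaml`, which then serves an OpenAI-compatible API on a local port (8000 in versions from this period; check your version's docs).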
Step 3: Make a request to the LiteLLM proxy:
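The request example also wasn't captured; here is a minimal sketch assuming the OpenAI Python client (v1 style) pointed at a locally running proxy, with the address and key as placeholder values:

```python
# Send a chat request through the LiteLLM proxy, which speaks the
# OpenAI-compatible API and load-balances across the "gpt-4" entries above.
from openai import OpenAI

client = OpenAI(
    base_url="http://0.0.0.0:8000",  # assumed proxy address; adjust the port to your setup
    api_key="anything",              # real provider keys live in config.yaml on the proxy side
)

response = client.chat.completions.create(
    model="gpt-4",  # must match a model_name from config.yaml
    messages=[{"role": "user", "content": "Hello, what model am I talking to?"}],
)
print(response.choices[0].message.content)
```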
This looks very reliable, thank you! I'm going to try it!
We are closing this issue due to inactivity; please reopen if the problem persists. |
Description:
We would like to propose the addition of a new feature to AutoGen that enables users to configure and utilize multiple Language Model (LLM) AI API endpoints for self-hosting and experimentation with different models. This feature would enhance the flexibility and versatility of AutoGen for developers and researchers working with LLMs.
Feature Details (a sketch of one possible endpoint entry follows this list):
- Endpoint configuration
- Custom endpoint names
- Chat parameters
- Model selection (if applicable)
- API key management (if applicable)
- Endpoint address
- Optional: endpoint tagging
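Purely as an illustration of the fields above, one configured endpoint entry might look like the following; this is a hypothetical shape, not an existing AutoGen schema:

```python
# Hypothetical endpoint entry covering the proposed fields; none of these
# keys are part of AutoGen today.
endpoint = {
    "name": "local-textgen",                # custom endpoint name
    "address": "http://localhost:5000/v1",  # endpoint address
    "api_key": None,                        # API key management (for hosted providers)
    "model": "llama-2-13b",                 # model selection, if the server hosts several
    "chat_params": {"temperature": 0.7, "max_tokens": 512},  # per-endpoint chat parameters
    "tags": ["self-hosted", "experimental"],  # optional endpoint tagging
}
```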
Expected Benefits:
This feature will benefit developers, researchers, and users who work with LLMs by offering a centralized and user-friendly interface for managing multiple AI API endpoints. It enhances the ability to experiment with various models, configurations, and providers while maintaining security and simplicity. It could allow different characters to leverage specific fine-tuned models rather than sharing the same model, and it could allow self-hosted users to experiment with expanding the number of repeated, looped calls without drastically increasing the bill.
Additional Notes:
Consider implementing an intuitive user interface for configuring and managing these endpoints within the project, making it accessible to both novice and experienced users.
References:
Include any relevant resources or references that support the need for this feature, such as the growing popularity of LLMs in various fields and the demand for flexible API management solutions.
Related Issues/Pull Requests:
Assignees:
If you permit this ticket to remain open, I will assemble some links and resources, and open another ticket for TextGenWebUI with relevant implementation links there. I can try implementing it and submitting a PR if someone else doesn't get to it first.
Thank you for considering this feature request. I believe that this enhancement will greatly benefit the AutoGen community and its users working with Language Model AI API endpoints.
edit: 9.28
Looking through the repo, it looks like there's a standardized JSON config; I'm going to look into this next as a method for supporting the features listed above. Page found while reading the documentation (note near the top how it loads the JSON, then references it further down): https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat_research.ipynb
Found it https://github.com/microsoft/autogen/blob/main/OAI_CONFIG_LIST_sample
Going to look into how it gets loaded.
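Based on the linked notebook, the loading appears to go through autogen's config_list_from_json helper, which resolves OAI_CONFIG_LIST as an environment variable or a file of that name and can filter entries by model; a minimal sketch:

```python
# Load the endpoint list the same way the groupchat notebook does:
# OAI_CONFIG_LIST is resolved as an env var or a file in the working dir.
import autogen

config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4", "gpt-4-32k"]},  # keep only matching entries
)
```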