
Allow user to pick AI model for request #76

Open
ga-it opened this issue Apr 21, 2024 · 0 comments
Labels
enhancement New feature or request

Comments


ga-it commented Apr 21, 2024

Describe the feature you'd like to request

Allow selection of the model in the AI assistant front end: for example, ChatGPT-4, ChatGPT-3.5, or Llama-70b, or call them by an alias (as Poe does), such as @chatgpt4.

The latter could extend to other parts of Nextcloud, such as Talk, where a model could be called as a bot.

Describe the solution you'd like

I use LiteLLM to proxy requests to a variety of GPT services. It could be integrated into Nextcloud in place of LocalAI, which gave me a lot of trouble.

Even when using a single provider, the OpenAI API spec lets you specify the model per request, as the sketch below illustrates.
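For illustration, here is a minimal sketch of that idea against a LiteLLM proxy. The proxy URL, API key, and the alias names ("chatgpt4", "llama-70b") are assumptions for the example, not actual configuration; the point is that any OpenAI-compatible client routes to a different backend just by changing the `model` field:

```python
# Minimal sketch, assuming a LiteLLM proxy is running at http://localhost:4000
# and exposes the aliases "chatgpt4" and "llama-70b" (names are illustrative).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # the LiteLLM proxy, not api.openai.com
    api_key="sk-anything",             # the proxy can enforce its own keys
)

# The same request shape works for every backend; the proxy routes on `model`.
response = client.chat.completions.create(
    model="chatgpt4",  # swap to "llama-70b" to route to a different backend
    messages=[{"role": "user", "content": "Summarize this document."}],
)
print(response.choices[0].message.content)
```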

Different models perform better at different tasks and have different cost implications.

Providing a drop-down in the Assistant would let the user pick the model for the task.

This could also tie into the Nextcloud permissions system, restricting particular models to particular user groups and tasks.

If the available models are aliased in the drop-down and callable as on Poe, there could be a "Callable" option where multiple models are invoked within a dialogue via @ aliases. Bracketing responses could allow nesting, where one model processes the output of another.

This could also allow "custom GPTs" (different stored prompts) to be kept in the Assistant; in a sense, the existing "headline" and "summary" buttons are already this. If these became a user-definable set with their own aliases, they could be combined with the model calls, as sketched below.
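As a rough illustration of combining the two, here is a sketch in plain Python. The registry names and the `build_request` helper are hypothetical, not an existing Nextcloud or LiteLLM API:

```python
# Illustrative sketch only: user-defined "stored prompts" combined with model
# aliases. The dictionaries and helper below are hypothetical, not a real API.
STORED_PROMPTS = {
    "headline": "Write a one-line headline for the following text:",
    "summary": "Summarize the following text in three sentences:",
}

MODEL_ALIASES = {
    "@chatgpt4": "gpt-4",
    "@llama70b": "llama-70b",
}

def build_request(alias: str, prompt_name: str, text: str) -> dict:
    """Combine a model alias with a stored prompt into one chat request."""
    return {
        "model": MODEL_ALIASES[alias],
        "messages": [
            {"role": "system", "content": STORED_PROMPTS[prompt_name]},
            {"role": "user", "content": text},
        ],
    }

print(build_request("@chatgpt4", "summary", "Long document text..."))
```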

The benefit of the above, together with proxying calls across AI services (using something like LiteLLM), is that you effectively get an abstraction layer across AI services, to which permissions, cost management, and the like can be applied.

Describe alternatives you've considered

I can do the above outside Nextcloud using LiteLLM and PrivateGPT, but that creates yet another user environment and loses the Nextcloud integration.
