Local Image Generation #5
Comments
Yes, that's on the roadmap (I started working on it), and the same feature is planned for Bavarder.
Cool, I was just about to open the same issue on Bavarder. Right now I'm relying on Distrobox to give me a separate environment that's good for these tasks, so having a Flatpak app that does the same would be really nice. I have two more questions then:
Since the app has been designed to be expandable and is based on a plugin system, I can add as many preferences for each provider as I want. The issue with local models is that I need to add a way to download them.
@issacdowling Local models will be available in the next release of Bavarder and will come to Imaginer soon. You can see the documentation here.
Is there any chance these projects would ever get the ability to actually run these models within the Flatpak? As in, bundling something like llama.cpp for Bavarder, and whatever the equivalent tool is for Stable Diffusion for Imaginer, rather than connecting to an API for something else running locally? Fair enough if not, as I can see how it would add a lot of complexity, but it would make getting set up with real local models much cleaner.
The issue is that if I bundle, for example, llama.cpp inside the Flatpak, the Flatpak will be much bigger for a feature that not everyone uses...
Speaking of that, what does it connect to now? A company that mines data? |
Hugging Face |
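For context, the Hugging Face Inference API is just an authenticated HTTP POST that returns raw image bytes. A minimal stdlib-only sketch is below; the model name, endpoint, and payload shape here are assumptions about the typical text-to-image endpoint, not Imaginer's actual code:

```python
# Hedged sketch of a Hugging Face Inference API text-to-image call.
# The model name below is an illustrative assumption, not what Imaginer uses.
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/"

def build_request(model: str, prompt: str, token: str) -> urllib.request.Request:
    """Build the POST request; only the prompt and auth token are sent."""
    return urllib.request.Request(
        API_URL + model,
        data=json.dumps({"inputs": prompt}).encode(),
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )

def generate(model: str, prompt: str, token: str) -> bytes:
    """Send the request and return the raw image bytes from the response."""
    with urllib.request.urlopen(build_request(model, prompt, token)) as resp:
        return resp.read()
```

Only the prompt and token leave the machine in this sketch, which is the relevant detail for the data-collection question above.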
Thanks! I wonder whether they collect data, and what they collect, via this Imaginer app.
Is your feature request related to a problem? Please describe.
I wish I could generate images without needing an internet connection or a separate service.
Describe the solution you'd like
I'd like to be able to run local models (since Stable Diffusion is open source anyway) using this app.
Describe alternatives you've considered
There are Web UIs (like InvokeAI), but I much prefer this native desktop app to something that runs in a browser.
Additional context
It would be nice if I could use my GPU to locally generate images, instead of relying on external services.
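For illustration, a local backend along these lines could be built on the `diffusers` library. This is a hypothetical sketch of what such a plugin might look like, not Imaginer's implementation; the model ID and helper names are assumptions, and the heavy imports are done lazily so the module still loads where `diffusers`/`torch` are not installed:

```python
# Hypothetical local Stable Diffusion backend using the `diffusers` library.
from pathlib import Path

def output_path(directory: str, prompt: str) -> Path:
    """Build a filesystem-safe output filename from the prompt."""
    safe = "".join(c if c.isalnum() else "_" for c in prompt)[:40]
    return Path(directory) / f"{safe}.png"

def generate_local(prompt: str, out_dir: str = ".") -> Path:
    """Generate an image on the local GPU (or CPU) and save it to disk."""
    # Imported lazily: these are large optional dependencies.
    import torch
    from diffusers import StableDiffusionPipeline  # pip install diffusers

    # Downloads several GB of weights on first run -- the size concern
    # raised above about bundling this into a Flatpak.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5"
    )
    pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")
    image = pipe(prompt).images[0]
    path = output_path(out_dir, prompt)
    image.save(path)
    return path
```

The lazy imports also illustrate one way around the Flatpak-size objection: the app could ship without the local backend and only fetch it when the user opts in.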