- Migrated to `aiogram==3.1.1`, `pydantic==2.4.0`; if you want to keep the old pydantic, use `aiogram==3.0.0b7`.
- WebUI made with Vue is available! Run it with `dashboard.py`
- Configuration preset switching in the WebUI; to change the path of the default .env file, pass `--env` to `dashboard.py`. Additional configuration files should be stored with the .env extension in the `env` directory.
- New configuration options: `sys_webui_host`, `sys_api_host`, `sys_request_timeout`, `sys_api_log_level` (see the example after this list)
- Fixed TTS initialization failing when threaded mode was off
- Fixed memory manager initialization
- For the model configuration UI, the `path_to_llama_cpp_weights_dir` key has been added to `llm_paths`
- Reply context (1-step memory) can now be toggled with the `llm_assistant_add_reply_context` option
- Configuration hints are available in the WebUI.
- Model manager in the WebUI can be used to download and set up models. A few types of models are initially supported.
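As a rough illustration, the new options live in the .env file; the values below (and the JSON shape of `llm_paths`) are assumptions for this sketch, not documented defaults:

```
# WebUI and API bind hosts (illustrative values)
sys_webui_host=127.0.0.1
sys_api_host=127.0.0.1
# request timeout (assumed to be seconds) and API log level
sys_request_timeout=120
sys_api_log_level=INFO

# toggles the 1-step reply context for the assistant (boolean format assumed)
llm_assistant_add_reply_context=True

# llm_paths is assumed to be a JSON object; the directory is a placeholder
llm_paths={"path_to_llama_cpp_weights_dir": "models/llama.cpp/"}
```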
- full TTS refactoring
- universal cross-platform OS TTS provider (pyttsx4) support
- threaded initialization (configurable via the `threaded_initialization` option)
- bugfixes
- The `tts_enable_so_vits_svc` config option has been removed in favor of `tts_enable_backends` (see the sketch after this list).
- The `tts_so_vits_svc_base_tts_provider` config option has been removed; you only need a base voice from now on.
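A minimal sketch of the affected .env entries; the JSON list format and the `so_vits_svc` backend name are assumptions used only for illustration:

```
# enable threaded initialization (boolean format assumed)
threaded_initialization=True

# replaces tts_enable_so_vits_svc; assumed to take a list of backend names
tts_enable_backends=["so_vits_svc"]

# removed options, delete them from existing .env files:
# tts_enable_so_vits_svc=...
# tts_so_vits_svc_base_tts_provider=...
```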
- so-vits-svc-4.1 support
- experimental memory manager (now supports all models except LLMs with pytorch backend)
- The `tts_so_vits_svc_code_path` config option has been renamed to `tts_so_vits_svc_4_0_code_path`, and a `tts_so_vits_svc_4_1_code_path` option was added to support so-vits-svc-4.1 models. To specify that a model is a 4.1 model, use `"v": 4.1` in `tts_so_vits_svc_voices` (see the sketch after this list).
- `llm_host` has been fixed in the .env.example file; `llm_ob_host` has been removed
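A sketch of the renamed code paths and a 4.1 voice entry; the paths, the voice name, and the overall shape of `tts_so_vits_svc_voices` are assumptions, only the `"v": 4.1` marker comes from the notes above:

```
# checkout paths for the two so-vits-svc generations (placeholder paths)
tts_so_vits_svc_4_0_code_path=repos/so-vits-svc-4.0/
tts_so_vits_svc_4_1_code_path=repos/so-vits-svc-4.1/

# voice entries are assumed to be JSON; "v": 4.1 marks a so-vits-svc-4.1 model
tts_so_vits_svc_voices=[{"voice": "example", "v": 4.1}]
```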
- Basic config state reactivity (runtime config changes in bot are reflected in .env)
- Fixed SD model info retrieval and reinitialization
- Global module accessibility milestone 2
- Experimental multi-line dialog answer support
- Speech-to-text via Whisper.cpp, Silero and Wav2Vec2
- Seamless dialog mode voice->text->llm->voice
- Text-to-audio via Audiocraft
- LLM module / provider refactoring
- Reply chronicler with single-item history
- Kobold.cpp and llama.cpp server support
- SD Lora API support
- Global module accessibility milestone 1
- `MinChatGPTChronicler` and `GPT4AllChronicler` were removed. The default assistant chronicler becomes `instruct`; the `alpaca` name is kept for a limited time for backwards compatibility
- `llm_ob_host` has been renamed to `llm_host`
- `llm_active_model_type` has been deprecated and replaced by two new keys: `llm_backend` and `llm_python_model_type` (used when the backend is pytorch); see the sketch after this list
- In the `llm_python_model_type` option, `cerebras_gpt` has been renamed to `auto_hf`
- `path_to_cerebras_weights` has been renamed to `path_to_autohf_weights`
- `sd_available_loras` has been deprecated; the lora API endpoint is used instead
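A before/after .env sketch of the renames above; the host and weight paths are placeholders, while `pytorch` and `auto_hf` are the values named in these notes:

```
# before (deprecated keys)
# llm_active_model_type=cerebras_gpt
# llm_ob_host=http://127.0.0.1:5000
# path_to_cerebras_weights=models/cerebras/

# after (illustrative values)
llm_backend=pytorch
llm_python_model_type=auto_hf
llm_host=http://127.0.0.1:5000
path_to_autohf_weights=models/cerebras/
```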
- initial version with incremental changes and full backwards compatibility with previous commits