You can't train a LoRA on a GGUF model — the llama.cpp loader doesn't support it.
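To expand on the reply: the webui's Training tab needs a Hugging Face Transformers model object with PyTorch weights that LoRA adapters can attach to, while a GGUF file loaded through llama.cpp is an opaque quantized blob. A minimal sketch of the kind of loader check involved (the function and loader names here are illustrative assumptions, not the webui's actual code):

```python
# Sketch only: LoRA training requires a loader that exposes PyTorch weights.
# GGUF models loaded via llama.cpp expose no such weights, so training fails.

TRAINABLE_LOADERS = {"Transformers"}  # assumption: the Transformers loader works

def can_train_lora(loader_name: str) -> bool:
    """Return True if the chosen loader yields a model LoRA can attach to."""
    return loader_name in TRAINABLE_LOADERS

print(can_train_lora("Transformers"))  # True
print(can_train_lora("llama.cpp"))     # False: GGUF weights are quantized blobs
```

In practice this means loading an unquantized (safetensors/PyTorch) version of the model with the Transformers loader before opening the Training tab, rather than a `.gguf` file.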
-
Hi,
I tried a LoRA training run at the most basic level, following the instructions here:
https://github.com/oobabooga/text-generation-webui/wiki/05-%E2%80%90-Training-Tab
So I loaded tinyllama-1.1b-chat-v0.3.Q4_K_M.gguf with the llama.cpp model loader, put a simple .txt file in the dataset folder, and entered a name for the LoRA file. I left all other settings at their defaults.
I get this error message:
15:20:45-429935 WARNING LoRA training has only currently been validated for LLaMA, OPT, GPT-J, and GPT-NeoX models.
(Found model type: LlamaCppModel)
15:20:50-432068 INFO Loading raw text file dataset
Traceback (most recent call last):
File "E:\SillyTavern\text-generation-webui\installer_files\env\Lib\site-packages\gradio\queueing.py", line 407, in call_prediction
output = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\SillyTavern\text-generation-webui\installer_files\env\Lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\SillyTavern\text-generation-webui\installer_files\env\Lib\site-packages\gradio\blocks.py", line 1550, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
I tried two different computers and lots of different models, but I always end up with this error. I'm sure it's my mistake, but I can't find it.
Can someone share a model (and also a text file) that will work?
Best, Matthias