What happened?
I tried to load Flux the way I was supposed to, including the HF token. I also downloaded the whole model file from HF manually and had the same issue.
What did you expect would happen?
For LoRA training to just start as it should.
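A minimal sketch (assuming the token is exported as HF_TOKEN; the allow_patterns filter is only there to keep the download tiny) of a quick check that the gated black-forest-labs/FLUX.1-dev repo is actually reachable with that token, outside of OneTrainer:

# Hypothetical access check, separate from OneTrainer: fetch only the model index
# from the gated repo to confirm the HF token grants access to FLUX.1-dev.
import os
from huggingface_hub import snapshot_download

snapshot_download(
    "black-forest-labs/FLUX.1-dev",
    allow_patterns=["model_index.json"],   # download a single small file
    token=os.environ.get("HF_TOKEN"),      # assumes the token is exported as HF_TOKEN
)

If this succeeds, token access is not the problem and the failure is inside the loader itself (see the log below).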
Relevant log output
Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all
TensorBoard 2.18.0 at http://localhost:6006/ (Press CTRL+C to quit)
tokenizer/tokenizer_config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 705/705 [00:00<00:00, 6.30MB/s]
tokenizer/vocab.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 2.76MB/s]
tokenizer/merges.txt: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 1.38MB/s]
tokenizer/special_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 588/588 [00:00<00:00, 6.76MB/s]
tokenizer_2/tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20.8k/20.8k [00:00<00:00, 141MB/s]
spiece.model: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 792k/792k [00:00<00:00, 53.0MB/s]
tokenizer_2/special_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.54k/2.54k [00:00<00:00, 33.0MB/s]
tokenizer_2/tokenizer.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.42M/2.42M [00:00<00:00, 3.82MB/s]
scheduler/scheduler_config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 273/273 [00:00<00:00, 3.87MB/s]
text_encoder/config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 613/613 [00:00<00:00, 7.81MB/s]
model.safetensors: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 246M/246M [00:05<00:00, 43.0MB/s]
text_encoder_2/config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 782/782 [00:00<00:00, 7.68MB/s]
/home/obfuscated/OneTrainer/venv/lib/python3.10/site-packages/transformers/modeling_utils.py:5055: FutureWarning: `_is_quantized_training_enabled` is going to be deprecated in transformers 4.39.0. Please use `model.hf_quantizer.is_trainable` instead
warnings.warn(
Traceback (most recent call last):
File "/home/obfuscated/OneTrainer/modules/modelLoader/flux/FluxModelLoader.py", line 220, in load
self.__load_internal(
File "/home/obfuscated/OneTrainer/modules/modelLoader/flux/FluxModelLoader.py", line 41, in __load_internal
raise Exception("not an internal model")
Exception: not an internal model
Traceback (most recent call last):
File "/home/obfuscated/OneTrainer/modules/modelLoader/flux/FluxModelLoader.py", line 229, in load
self.__load_diffusers(
File "/home/obfuscated/OneTrainer/modules/modelLoader/flux/FluxModelLoader.py", line 86, in __load_diffusers
text_encoder_2 = self._load_transformers_sub_module(
File "/home/obfuscated/OneTrainer/modules/modelLoader/mixin/HFModelLoaderMixin.py", line 176, in _load_transformers_sub_module
return self.__load_sub_module(
File "/home/obfuscated/OneTrainer/modules/modelLoader/mixin/HFModelLoaderMixin.py", line 42, in __load_sub_module
replace_linear_with_nf4_layers(sub_module, keep_in_fp32_modules, copy_parameters=False)
File "/home/obfuscated/OneTrainer/modules/util/quantization_util.py", line 130, in replace_linear_with_nf4_layers
__replace_linear_layers(
File "/home/obfuscated/OneTrainer/modules/util/quantization_util.py", line 115, in __replace_linear_layers
__replace_linear_layers(
File "/home/obfuscated/OneTrainer/modules/util/quantization_util.py", line 115, in __replace_linear_layers
__replace_linear_layers(
File "/home/obfuscated/OneTrainer/modules/util/quantization_util.py", line 96, in __replace_linear_layers
__replace_linear_layers(
File "/home/obfuscated/OneTrainer/modules/util/quantization_util.py", line 115, in __replace_linear_layers
__replace_linear_layers(
File "/home/obfuscated/OneTrainer/modules/util/quantization_util.py", line 96, in __replace_linear_layers
__replace_linear_layers(
File "/home/obfuscated/OneTrainer/modules/util/quantization_util.py", line 115, in __replace_linear_layers
__replace_linear_layers(
File "/home/obfuscated/OneTrainer/modules/util/quantization_util.py", line 111, in __replace_linear_layers
quant_linear = convert_fn(module, copy_parameters)
File "/home/obfuscated/OneTrainer/modules/util/quantization_util.py", line 23, in __create_nf4_linear_layer
quant_linear = LinearNf4(
TypeError: 'NoneType' object is not callable
Traceback (most recent call last):
File "/home/obfuscated/OneTrainer/modules/modelLoader/flux/FluxModelLoader.py", line 238, in load
self.__load_safetensors(
File "/home/obfuscated/OneTrainer/modules/modelLoader/flux/FluxModelLoader.py", line 153, in __load_safetensors
pipeline = FluxPipeline.from_single_file(
File "/home/obfuscated/OneTrainer/venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/home/obfuscated/OneTrainer/venv/src/diffusers/src/diffusers/loaders/single_file.py", line 378, in from_single_file
checkpoint = load_single_file_checkpoint(
File "/home/obfuscated/OneTrainer/venv/src/diffusers/src/diffusers/loaders/single_file_utils.py", line 386, in load_single_file_checkpoint
repo_id, weights_name = _extract_repo_id_and_weights_name(pretrained_model_link_or_path)
File "/home/obfuscated/OneTrainer/venv/src/diffusers/src/diffusers/loaders/single_file_utils.py", line 340, in _extract_repo_id_and_weights_name
raise ValueError("Invalid `pretrained_model_name_or_path` provided. Please set it to a valid URL.")
ValueError: Invalid `pretrained_model_name_or_path` provided. Please set it to a valid URL.
Traceback (most recent call last):
File "/home/obfuscated/OneTrainer/modules/ui/TrainUI.py", line 569, in __training_thread_function
trainer.start()
File "/home/obfuscated/OneTrainer/modules/trainer/GenericTrainer.py", line 116, in start
self.model = self.model_loader.load(
File "/home/obfuscated/OneTrainer/modules/modelLoader/FluxLoRAModelLoader.py", line 48, in load
base_model_loader.load(model, model_type, model_names, weight_dtypes)
File "/home/obfuscated/OneTrainer/modules/modelLoader/flux/FluxModelLoader.py", line 248, in load
raise Exception("could not load model: " + model_names.base_model)
Exception: could not load model: black-forest-labs/FLUX.1-dev
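One plausible reading of the tracebacks above (hedged, not confirmed against the OneTrainer code itself): the loader tries the internal format, then the diffusers format, then a single safetensors file. The diffusers path fails because LinearNf4 is None when __create_nf4_linear_layer tries to call it, which would happen if the module providing the NF4 linear layer failed to import; notably, the pip freeze below lists a ROCm build of torch and no bitsandbytes. A minimal check, assuming bitsandbytes is the relevant backend:

# Hypothetical check: does the NF4 quantization backend import on this ROCm install?
# An import failure here would be consistent with LinearNf4 resolving to None.
try:
    import bitsandbytes as bnb
    print("bitsandbytes", bnb.__version__)
except Exception as exc:
    print("bitsandbytes unavailable:", exc)

The final ValueError comes from the safetensors fallback: FluxPipeline.from_single_file receives the repo id black-forest-labs/FLUX.1-dev, but that code path expects a direct path or URL to a .safetensors file, so a bare repo id is rejected.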
dxqbYD changed the title on Jan 18, 2025: [Bug]: "Exception: not an internal model" for flux → [Bug]: 'NoneType' in LinearNF4 during loading diffusers Flux model
Output of pip freeze
absl-py==2.1.0
accelerate==1.0.1
aiodns==3.2.0
aiohappyeyeballs==2.4.4
aiohttp==3.11.11
aiohttp-retry==2.9.1
aiosignal==1.3.2
annotated-types==0.7.0
antlr4-python3-runtime==4.9.3
anyio==4.8.0
async-timeout==5.0.1
attrs==24.3.0
backoff==2.2.1
bcrypt==4.2.1
boto3==1.35.97
botocore==1.35.97
Brotli==1.1.0
certifi==2024.12.14
cffi==1.17.1
charset-normalizer==3.4.1
click==8.1.8
cloudpickle==3.1.0
colorama==0.4.6
coloredlogs==15.0.1
contourpy==1.3.1
cryptography==43.0.3
customtkinter==5.2.2
cycler==0.12.1
dadaptation==3.2
darkdetect==0.8.0
decorator==5.1.1
Deprecated==1.2.15
-e git+https://github.com/huggingface/diffusers.git@55ac1dbdf2e77dcc93b0fa87d638d074219922e4#egg=diffusers
dnspython==2.7.0
email_validator==2.2.0
exceptiongroup==1.2.2
fabric==3.2.2
fastapi==0.115.6
fastapi-cli==0.0.7
filelock==3.16.1
flatbuffers==24.12.23
fonttools==4.55.3
frozenlist==1.5.0
fsspec==2024.12.0
ftfy==6.3.1
grpcio==1.69.0
h11==0.14.0
httpcore==1.0.7
httptools==0.6.4
httpx==0.28.1
huggingface-hub==0.26.2
humanfriendly==10.0
idna==3.10
importlib_metadata==8.5.0
inquirerpy==0.3.4
invisible-watermark==0.2.0
invoke==2.2.0
itsdangerous==2.2.0
Jinja2==3.1.5
jmespath==1.0.1
kiwisolver==1.4.8
lightning-utilities==0.11.9
lion-pytorch==0.2.2
Markdown==3.7
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.9.2
mdurl==0.1.2
-e git+https://github.com/Nerogar/mgds.git@e6bd96b0cf0d127a8a721bdbf218e4e5aa6c16f8#egg=mgds
mpmath==1.3.0
multidict==6.1.0
networkx==3.4.2
numpy==1.26.4
nvidia-ml-py==12.560.30
omegaconf==2.3.0
onnxruntime-gpu==1.19.2
open_clip_torch==2.28.0
opencv-python==4.10.0.84
orjson==3.10.14
packaging==24.2
paramiko==3.5.0
pfzy==0.3.4
pillow==11.0.0
platformdirs==4.3.6
pooch==1.8.2
prettytable==3.12.0
prodigyopt==1.1.1
prompt_toolkit==3.0.48
propcache==0.2.1
protobuf==5.29.3
psutil==6.1.1
py-cpuinfo==9.0.0
pycares==4.5.0
pycparser==2.22
pydantic==2.10.5
pydantic-extra-types==2.10.1
pydantic-settings==2.7.1
pydantic_core==2.27.2
Pygments==2.19.1
PyNaCl==1.5.0
pyparsing==3.2.1
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-multipart==0.0.20
pytorch-lightning==2.4.0
pytorch-triton-rocm==3.1.0
pytorch_optimizer==3.3.0
PyWavelets==1.8.0
PyYAML==6.0.2
regex==2024.11.6
requests==2.32.3
rich==13.9.4
rich-toolkit==0.12.0
runpod==1.7.4
s3transfer==0.10.4
safetensors==0.4.5
scalene==1.5.45
schedulefree==1.3
scipy==1.14.1
sentencepiece==0.2.0
shellingham==1.5.4
six==1.17.0
sniffio==1.3.1
starlette==0.41.3
sympy==1.13.1
tensorboard==2.18.0
tensorboard-data-server==0.7.2
timm==1.0.13
tokenizers==0.21.0
tomli==2.2.1
tomlkit==0.13.2
torch==2.5.1+rocm6.2
torchmetrics==1.6.1
torchvision==0.20.1+rocm6.2
tqdm==4.66.6
tqdm-loggable==0.2
transformers==4.47.0
typer==0.15.1
typing_extensions==4.12.2
ujson==5.10.0
urllib3==2.3.0
uvicorn==0.34.0
uvloop==0.21.0
watchdog==6.0.0
watchfiles==1.0.4
wcwidth==0.2.13
websockets==14.1
Werkzeug==3.1.3
wrapt==1.17.1
yarl==1.18.3
zipp==3.21.0