It's a very simple model.
It can't distinguish photos from cartoons and performs poorly on NSFW content.
The JoyCaption model I mentioned is about ten times better overall.
Using interrogator device: cuda
Florence2LanguageForConditionalGeneration has generative capabilities, as prepare_inputs_for_generation is explicitly overwritten. However, it doesn't directly inherit from GenerationMixin. From 👉v4.50👈 onwards, PreTrainedModel will NOT inherit from GenerationMixin, and this model will lose the ability to call generate and other related functions.
If you are the owner of the model architecture code, please modify your model class such that it inherits from GenerationMixin (after PreTrainedModel, otherwise you'll get an exception).
If you are not the owner of the model architecture class, please contact the model code owner to update it.
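The fix the warning describes can be sketched with plain stand-in classes (the real `PreTrainedModel` and `GenerationMixin` live in `transformers`; the class names below are simplified stand-ins): the model class lists `GenerationMixin` as an additional base, after `PreTrainedModel`, so that `generate()` remains available once `PreTrainedModel` stops inheriting it.

```python
# Illustrative sketch only: plain classes stand in for
# transformers.PreTrainedModel and transformers.GenerationMixin.
class PreTrainedModel:
    """Stand-in for transformers.PreTrainedModel (post-v4.50: no generate())."""
    pass

class GenerationMixin:
    """Stand-in for transformers.GenerationMixin, which supplies generate()."""
    def generate(self):
        return "generated"

# The warning asks for GenerationMixin listed AFTER PreTrainedModel,
# so the model keeps its generate() method via the mixin.
class Florence2LanguageForConditionalGeneration(PreTrainedModel, GenerationMixin):
    pass

print(Florence2LanguageForConditionalGeneration().generate())  # "generated"
```

With this base-class order, Python's MRO resolves `generate()` from the mixin while `PreTrainedModel` still takes precedence for everything it defines.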
generation_config.json: 100%|███████████████████████████████████████████████████████████████| 51.0/51.0 [00:00<?, ?B/s]
preprocessor_config.json: 100%|███████████████████████████████████████████████████████████████| 806/806 [00:00<?, ?B/s]
processing_florence2.py: 100%|████████████████████████████████████████████████████| 46.4k/46.4k [00:00<00:00, 9.30MB/s]
A new version of the following files was downloaded from https://huggingface.co/microsoft/Florence-2-large-ft:
processing_florence2.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████| 34.0/34.0 [00:00<?, ?B/s]
vocab.json: 100%|█████████████████████████████████████████████████████████████████| 1.10M/1.10M [00:00<00:00, 5.24MB/s]
tokenizer.json: 100%|█████████████████████████████████████████████████████████████| 1.36M/1.36M [00:00<00:00, 4.65MB/s]
Exception processing item!
Exception string: 'Task token should be the only token in the text.'
Traceback (most recent call last):
File "d:\kohya_ss_Booru\interrogator_rpc\main.py", line 192, in InterrogateImage
tag_ret = interrogate_image(network_conf.interrogator_network, image_obj, paramDict, skip_online=request.skip_internet_requests)
File "d:\kohya_ss_Booru\interrogator_rpc\main.py", line 68, in interrogate_image
tags = intg.predict(image_obj)
File "d:\kohya_ss_Booru\interrogator_rpc\ext_kohya\captioning.py", line 114, in predict
res = self.interrogator.apply(image)
File "d:\kohya_ss_Booru\interrogator_rpc\ext_kohya\interrogators\florence2_captioning.py", line 65, in apply
inputs = self.processor(text=prompt, images=image, return_tensors="pt").to(devices.device,
File "C:\Users\kallemst\.cache\huggingface\modules\transformers_modules\microsoft\Florence-2-large-ft\bb44b80c15e943b1bf7cec6e076359cec6e40178\processing_florence2.py", line 266, in __call__
text = self._construct_prompts(text)
File "C:\Users\kallemst\.cache\huggingface\modules\transformers_modules\microsoft\Florence-2-large-ft\bb44b80c15e943b1bf7cec6e076359cec6e40178\processing_florence2.py", line 145, in _construct_prompts
assert _text == task_token, f"Task token {task_token} should be the only token in the text."
AssertionError: Task token should be the only token in the text.
It works only without a prompt.
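That matches the assertion in the traceback: when the text contains a Florence-2 task token, the processor's `_construct_prompts` requires it to be the *only* content, so appending a free-form prompt after it fails. A minimal sketch of that constraint (the task-token list below is an assumption for illustration; the real list lives in `processing_florence2.py`):

```python
# Hypothetical sketch of the check Florence-2's processor enforces in
# _construct_prompts. TASK_TOKENS is an assumed subset for illustration.
TASK_TOKENS = ["<CAPTION>", "<DETAILED_CAPTION>", "<MORE_DETAILED_CAPTION>", "<OD>"]

def is_valid_florence2_prompt(text: str) -> bool:
    for task_token in TASK_TOKENS:
        if task_token in text:
            # A task token must be the only token in the text --
            # this is exactly the AssertionError seen above.
            return text == task_token
    return True  # text without a task token is not affected by this check

print(is_valid_florence2_prompt("<MORE_DETAILED_CAPTION>"))                      # True
print(is_valid_florence2_prompt("<MORE_DETAILED_CAPTION> describe the scene"))   # False
```

So the caller in `florence2_captioning.py` should pass either a bare task token or no extra prompt text at all, which is why captioning works only when no custom prompt is supplied.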