This repository has been archived by the owner on Mar 29, 2024. It is now read-only.

Fix error with different prompt sizes #5

Open
antoinedelplace wants to merge 1 commit into main

Conversation

antoinedelplace

Proposed solution for fixing #4

@laksjdjf
Owner

Thanks!
I have two questions.

  1. Does the context repeat affect the generation?
  2. Doesn't maxi_prompt_size need to be the least common multiple of all the prompt sizes?

@antoinedelplace
Author

You're welcome! :)

  1. No, the repeat does not affect the generation when the context size is the same. It fixes the bug when the context sizes are different.
  2. Yes, maxi_prompt_size needs to be the least common multiple of all prompt sizes. In practice, CLIP encodes at most 77 tokens per chunk and outputs a multiple of 77 tokens depending on the prompt length, so every context size is a multiple of 77 (see the sketch below).
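
For concreteness, here is a minimal sketch of that idea (a hypothetical helper, not the PR's exact code), assuming each cond is shaped `(batch, tokens, dim)` with `tokens` a multiple of 77:

```python
# Minimal sketch: repeat each conditioning tensor along the token
# dimension up to the least common multiple of all prompt sizes, so
# torch.cat along dim=0 sees matching shapes.
import math
import torch

def pad_conds_to_lcm(conds):
    lcm_tokens = math.lcm(*(c.shape[1] for c in conds))  # Python 3.9+
    return torch.cat(
        [c.repeat(1, lcm_tokens // c.shape[1], 1) for c in conds], dim=0
    )

# Prompts encoded to 77, 154, and 231 tokens -> LCM is 462.
conds = [torch.randn(1, n, 768) for n in (77, 154, 231)]
print(pad_conds_to_lcm(conds).shape)  # torch.Size([3, 462, 768])
```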

@condac

condac commented Jan 10, 2024

Applying this fix did not fully solve the issue; I still get an error with inputs that are not that long.

  File "/opt/ComfyUI/custom_nodes/attention-couple-ComfyUI/attention_couple.py", line 123, in patch
    context_cond = torch.cat([cond.repeat(1, maxi_prompt_size_cond//cond.shape[1], 1) for cond in self.negative_positive_conds[1]], dim=0)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 231 but got size 154 for tensor number 2 in the list.
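
This matches question 2 above: if maxi_prompt_size_cond is taken as the maximum prompt size (231 = 3×77) instead of the least common multiple, the integer division rounds 231 // 154 down to 1, so the 154-token tensor is never expanded and the concatenation fails exactly as shown. A quick check of the arithmetic (assuming maxi_prompt_size_cond is computed as the max):

```python
import math

# 231 // 154 == 1, so repeat() leaves the 154-token tensor unchanged
# and torch.cat(dim=0) raises the size-mismatch error above.
print(231 // 154)          # 1
# Repeating both tensors up to the LCM would make the sizes match.
print(math.lcm(231, 154))  # 462
```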

@seanphan

Make sure the width/height is a multiple of 64 (512, 576, 640, ...).
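
If it helps to verify, a tiny sanity check (plain Python, hypothetical helper):

```python
# Hypothetical check: warn if width/height are not multiples of 64,
# suggesting the nearest valid sizes.
def check_size(width, height, step=64):
    for name, v in (("width", width), ("height", height)):
        if v % step:
            lo, hi = v // step * step, (v // step + 1) * step
            print(f"{name}={v} is not a multiple of {step}; try {lo} or {hi}")

check_size(576, 520)  # height=520 is not a multiple of 64; try 512 or 576
```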
