How did you calculate the attention mask? #228
tumurzakov started this conversation in General

I'm reading attention.py, specifically the promptsepcalc function, and I can't understand where the mask is actually calculated. How do you extract this mask?

I'm trying to use this technique with AnimateDiff. The mask introduces occlusions: they are not visible on a single image, but they become visible in video. I want to calculate the attention mask the way you did, but I can't work out how.

Replies: 1 comment
-
The attention maps are computed in main_forward in attention.py. There, the attention read from each layer is consolidated into a single map. I pre-count the tokens of the word I want to query and extract only the attention corresponding to those tokens. The raw attention is unusable as is, so I apply a certain degree of cutoff and similar post-processing. I had been thinking this technique might be usable for animation, and recently tried creating an animation with it.
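A minimal Python sketch of the procedure this reply describes (consolidate per-layer attention, keep only the queried word's token columns, apply a cutoff), assuming cross-attention tensors shaped (heads, height*width, num_tokens) as in typical Stable Diffusion UNets. The function name attention_mask_for_tokens, the tensor shapes, and the cutoff parameter are illustrative assumptions, not the actual code in attention.py:

```python
import torch
import torch.nn.functional as F

def attention_mask_for_tokens(attn_maps, token_indices, out_hw, cutoff=0.4):
    """Hypothetical sketch, not the repo's implementation.

    attn_maps: list of cross-attention tensors, one per layer, each shaped
               (heads, height*width, num_tokens).
    token_indices: positions of the target word's tokens in the tokenized
                   prompt (the "pre-counted" tokens the reply mentions).
    out_hw: (H, W) of the common resolution to resize every layer's map to.
    cutoff: relative threshold; attention below cutoff * max is discarded.
    """
    H, W = out_hw
    acc = torch.zeros(H, W)
    for attn in attn_maps:
        heads, hw, _ = attn.shape
        # Keep only the columns belonging to the queried word's tokens,
        # then average over those tokens and over attention heads.
        word_attn = attn[:, :, token_indices].float().mean(dim=-1)  # (heads, hw)
        word_attn = word_attn.mean(dim=0)                           # (hw,)
        # Each layer works at its own spatial resolution; resize to a
        # common one before consolidating into a single map.
        side = int(round(hw ** 0.5))
        word_attn = word_attn.reshape(1, 1, side, side)
        word_attn = F.interpolate(word_attn, size=(H, W),
                                  mode="bilinear", align_corners=False)
        acc += word_attn[0, 0]
    acc /= len(attn_maps)
    # The raw averaged attention is too noisy to use directly, so apply a
    # cutoff relative to the peak value to get a binary mask.
    return (acc >= cutoff * acc.max()).float()
```

Here token_indices would come from tokenizing the prompt and locating the target word's positions. Note that a mask computed this way is independent per image, which may be why it flickers as occlusions across video frames, as the question above describes; smoothing the mask over frames would be one possible remedy, though that goes beyond what this reply covers.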