Replies: 4 comments 3 replies
-
I'm still trying out different workflows, but it may simply be a case of different tools for different uses. I tend to toggle both this extension and HyperTile on and off. Sometimes it helps, sometimes it hinders.
-
I wanted to start a discussion on this extension. I tried it a lot with A1111 and got some great results about 40% of the time; lots of mutants otherwise. At 920x1660, yes, the images looked similar to each other, but the detail was amazing. This was with SD1.5.
-
I wrote this as a feature request, but I think it's more important than new features, imho. You do great work, but a lot of that work is wasted without proper documentation. You want to improve the webui, to make it better. I get that writing documentation is not as fun as adding new features, but instead of 20,000 people wasting their time trying to figure things out, it would be a better approach if someone who knows this stuff spent some time explaining it, in non-coder language. Excuse me if such documentation exists and I missed it.

This post shows the problem exactly: how do I start finding settings that work when I have no idea what the sliders do? In the hires fix example, the resulting images change so dramatically that I still have no idea what the sliders do, even after testing for two hours. I change the "block number" slider from 3 to 4 and get a totally different image; the same with downscaling. It's like using totally random seeds.

Please, please spend some time on proper feature documentation.
-
Try using the "Extra Settings" that give you a variance seed and variance amount, use the resize-seed width and height, and push the starting and ending percent up to taste. (PS: you may also want to keep your seed the same to get a better idea of what the settings are doing; it will at least try to make the same image. Then have the realtime progress preview showing and have it update as fast as you can stand the added wait time. Watching the result per step can help you intuit things as they happen. Unfortunately, describing the effect of a value on a car vs. a real-life waifu vs. an anime waifu is hard to do, as I try to explain below. Dunno how well...)

It's hard to explain machine-learning things even in a coding vernacular. Take how pseudo-random generation works: you may know the seed, but you likely won't know the outcome unless you understand how the generator takes that seed value and spits out the pattern it produces. So the problem with knowing how an outside extension or adaptation affects the base model probably starts at the hidden layers of our diffusion/transformer model. Those depend on how the model was trained, and if you are also mixing in adaptations or extra networks along with the prompt, it's hard to visualize in an intuitive, comprehensible way what result you are going to get each time.

So things are very experimental until someone develops a way to visualize or predict the output. If the extension has to inject non-linear algorithms or network layers of its own, it's pretty touch and go. In non-machine-learning terms: way too much is going on to predict what effect a given value you're injecting will have.
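A minimal sketch of that kind of controlled experiment, assuming the webui was launched with `--api` so the standard `/sdapi/v1/txt2img` endpoint is available (the prompt, resolution, and sweep values here are just placeholders):

```python
# Sweep one setting at a time with the seed held fixed, so differences in the
# output come from the setting being tested rather than from a new random seed.
# Assumes AUTOMATIC1111 webui is running locally with the --api flag.
import base64
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

base_payload = {
    "prompt": "a red vintage car, detailed, photorealistic",  # placeholder prompt
    "negative_prompt": "blurry, deformed",
    "seed": 1234567890,          # fixed seed keeps the base image comparable
    "width": 512,
    "height": 768,
    "steps": 25,
    "cfg_scale": 7,
    "sampler_name": "Euler a",
}

# Variation ("variance") seed controls: subseed plus subseed_strength.
for strength in (0.0, 0.15, 0.3, 0.5):
    payload = dict(base_payload, subseed=42, subseed_strength=strength)
    r = requests.post(URL, json=payload, timeout=600)
    r.raise_for_status()
    image_b64 = r.json()["images"][0]
    with open(f"subseed_strength_{strength}.png", "wb") as f:
        f.write(base64.b64decode(image_b64))
    print(f"saved subseed_strength={strength}")
```

The same pattern works for any other slider: change exactly one field per run and compare the saved images side by side.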
-
I tried to use this extension, but after playing around for a few days I found that it seemed to kill creativity. Yes, generation is noticeably faster, but the images come out nearly identical, and I couldn't find a combination of settings that fixes this. I used 1024x1536 resolution, and the results are noticeably less varied compared to generating at 512x768 and using Hires Fix at 2x. I think the problem is most apparent with photorealistic models, but I'm not sure.
Does anyone experience a similar issue?
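One way to make "less varied" concrete is to generate a batch from each workflow into two folders and compare how alike the images within each batch are. A rough sketch (the folder names are placeholders, and average pairwise pixel distance on downscaled grayscale thumbnails is only a crude diversity proxy, not a perceptual metric):

```python
# Crude batch-diversity check: lower score = images in the folder are more alike.
from pathlib import Path

import numpy as np
from PIL import Image


def batch_diversity(folder: str, thumb_size: int = 64) -> float:
    """Mean pairwise L2 distance between downscaled grayscale thumbnails."""
    thumbs = []
    for p in sorted(Path(folder).glob("*.png")):
        img = Image.open(p).convert("L").resize((thumb_size, thumb_size))
        thumbs.append(np.asarray(img, dtype=np.float32) / 255.0)
    dists = []
    for i in range(len(thumbs)):
        for j in range(i + 1, len(thumbs)):
            dists.append(np.linalg.norm(thumbs[i] - thumbs[j]))
    return float(np.mean(dists)) if dists else 0.0


# Placeholder folder names for the two workflows being compared.
print("direct 1024x1536:", batch_diversity("batch_direct"))
print("512x768 + hires 2x:", batch_diversity("batch_hiresfix"))
```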