TL;DR: Go to 'Text Encoder' in the settings and DISABLE 'Use line break as prompt segment marker'. If that still doesn't fix it, try switching the prompt attention type: go to 'User Interface' in settings, and under 'Quicksettings list', add `prompt_attention`. Restart SD.Next, go to your quick settings, and change 'Prompt attention parser' to `a1111`.
Hello everyone,
This is my first time using SD.Next. I've been using A1111/Forge/reForge for the most part until now, as well as experimenting with ComfyUI and Invoke, but I wanted to explore more of my options, so I decided to give SD.Next a try.
Unfortunately, up until now I'd been struggling to troubleshoot the very bad generations I was getting on SD.Next... not slightly 'bad' generations where details were a little off, I mean downright horrifying:
At first I thought it must have been an issue with my parameters: maybe my CFG was set weirdly, or CLIP skip was set incorrectly. I experimented with both, as well as many other parameters, and nothing helped. Even at default settings I got terrible results. I tried different models; unfortunately, same result.
I'd also been combing through the SD.Next docs, the GitHub issues and discussions, and Reddit to see if anyone had a solution. Unfortunately, everything I tried failed. I even did a full reinstall on a different Python version to rule out some weird combination of Python version and Torch version. I use CUDA, not ROCm or ZLUDA, so I doubted that was the problem either.
Well it turns out that one option in the settings, as well as the way I was formatting my prompts, caused the issue. It's this:
Here's how I found this option: I noticed an entry in the quick settings list that lets me change the 'prompt attention parser'; I think I discovered it in the SD.Next docs. I'm not too familiar with how exactly 'attention' works in stable diffusion, but I figured it could very well be the culprit behind the bad gens I was getting: if for whatever reason the attention the model received was broken, it would produce very bad results.
So I tried changing the prompt attention parser from 'native' to 'a1111', since I figured it probably worked similarly, if not identically, to A1111 (which I knew worked fine based on my previous experience with the same models and settings in the A1111 webUI), and suddenly all my gens were perfectly fine. Okay, so for some reason the native parser doesn't work, but a1111 does. Then I tried all the other prompt parsers and found that only the native one was broken. I could've stopped there, but I was curious as to why native was broken, so I started looking into the code (`sdnext/modules/prompt_parser.py`), and found this:
Interesting... so if a 'line break' option is enabled in settings, it replaces every newline with a `BREAK`? I'd used `BREAK` a bit on A1111 before, so I was somewhat familiar with what it did. And I tend to be quite generous with the number of newlines I use in my prompts, so that could be the culprit.
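For anyone curious, here's a minimal sketch of what that logic effectively does. This is my paraphrase, not the actual SD.Next source; the function name and the option flag are made up for illustration:

```python
# Hedged sketch of the 'Use line break as prompt segment marker' behavior
# (paraphrased, not the real SD.Next code). When the option is on, every
# newline in the prompt becomes an explicit BREAK, which splits the prompt
# into separate conditioning segments for the text encoder.

def apply_line_break_option(prompt: str, line_break_enabled: bool) -> str:
    """Replace newlines with the BREAK keyword when the option is enabled."""
    if line_break_enabled:
        # each newline becomes a segment marker
        prompt = prompt.replace('\n', ' BREAK ')
    return prompt

prompt = "masterpiece, best quality\n1girl, blue eyes\noutdoors, sunset"
print(apply_line_break_option(prompt, line_break_enabled=True))
# -> masterpiece, best quality BREAK 1girl, blue eyes BREAK outdoors, sunset
```

So a prompt I meant as one continuous description was silently being chopped into three independent segments, which explains why the results looked nothing like what I asked for.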
Lo and behold, I disabled that 'Use line break as prompt segment marker' in settings, and now the native prompt parser worked like a charm:
I also tried doing things the other way around: keep that option enabled in settings, and just remove ALL newlines from my prompt. That also works. But I like using newlines for organization, instead of having one huge wall of text.
So, in terms of user experience: I'm not sure what the history of this line break option is, or why it's even there. At the end of the day, this isn't a bug in the program, though I did have to go through all this troubleshooting and, on top of that, be somewhat familiar with how stable diffusion works to have a general idea of what the problem might be. I think this option is unique to SD.Next; at least, I haven't seen it in any other webUI. A noob wouldn't be able to figure this out. I'm not going to file a bug report, since I don't believe it's a bug, but I still wanted to make this post to give insight into a very bizarre issue and hopefully save future people the same headache.