First, confirm
What happened?
I started using ReActor Face Swap back in September 2025, mostly generating Wan 2.2 T2V videos and face-swapping a single subject. This worked really well for a while and still works on my older machine, which runs the September version of the code.
I then bought a new PC and did a fresh ComfyUI install, updating everything, including the latest pull from the ReActor GitHub page. Now, when I run the exact same Wan 2.2 T2V video generation with the same ReActor Face Swap settings, I frequently get face blur, even when the face is barely moving.
I have tried different face restore models, face detection models, visibility values, and weights; the blur still happens fairly consistently. Since I am well versed in Python, I diffed the relevant source code between the two versions. On a cursory look, the only differences I could see were related to the added reswapper/hyperswap capabilities; nothing else in the logic looked noticeably different.
The blur is most assuredly being introduced by ReActor, because in the pre-swap videos the face is not blurred at all.
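To back up the "blurred vs. not blurred" comparison with a number instead of eyeballing frames, here is a minimal sketch of a variance-of-the-Laplacian sharpness metric (a common focus measure), using only NumPy. The function name `blur_score` and the idea of applying it to cropped face regions from the pre-swap and post-swap frames are my own suggestion, not part of the ReActor code or this workflow.

```python
import numpy as np

def blur_score(gray: np.ndarray) -> float:
    """Variance of the Laplacian of a grayscale image.

    Lower scores indicate a blurrier image; comparing the score of the
    same face crop before and after the swap quantifies the degradation.
    """
    # 3x3 Laplacian kernel
    k = np.array([[0,  1, 0],
                  [1, -4, 1],
                  [0,  1, 0]], dtype=np.float64)
    h, w = gray.shape
    # Valid convolution done with shifted slices (kernel is symmetric,
    # so correlation and convolution coincide here)
    out = np.zeros((h - 2, w - 2), dtype=np.float64)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())
```

Running this on matching face crops from a pre-swap and a post-swap frame should show a clearly lower score on the blurred output, which would make the regression easier to demonstrate than side-by-side screenshots.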
The workflow attached below works in the September version of the code and produces the blur with the most recent version. The exact seed saved in the workflow did not reproduce the issue, but it definitely still happens.
Steps to reproduce the problem
wan22_simple_gguf_chris.json
Sysinfo
Windows 11 Pro, RTX 5080, AMD Ryzen 9 9900X3D, 64 GB DDR5 RAM
Relevant console log
Additional information
No response