Thank you for developing such an excellent technology.
I ran inference with RoMa and confirmed that it correctly recovers local geometric correspondences even when there are local misalignments between the two images.
I know the two images can be globally overlaid or visualized side by side after inference, but what I'm looking for is a way to visualize them after applying the local alignment that RoMa has implicitly estimated, i.e. warping one image onto the other through the dense correspondence field.
Do you know if there is already a method or a recommended post-processing step to produce this kind of locally aligned visualization after RoMa inference?
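
For reference, here is a rough sketch of the kind of post-processing I have in mind. It is only a sketch based on my understanding of the `(warp, certainty)` output of `roma_model.match` and the side-by-side warp layout I believe the demo scripts use; the image paths are placeholders, and the package name (`romatch`) and warp layout are my assumptions, so please correct me if this is not the intended usage.

```python
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from romatch import roma_outdoor  # assumption: may be `from roma import ...` in older versions

device = "cuda" if torch.cuda.is_available() else "cpu"
roma_model = roma_outdoor(device=device)

imA_path, imB_path = "imA.jpg", "imB.jpg"  # placeholder paths
warp, certainty = roma_model.match(imA_path, imB_path, device=device)

# Assumption: the warp is stacked along the width, so the first half of the
# columns corresponds to image A's pixel grid, and channels 2:4 hold the
# matching normalized (x, y) coordinates in image B.
H, W2, _ = warp.shape
W = W2 // 2
grid_A_to_B = warp[:, :W, 2:]   # (H, W, 2), values in [-1, 1]
cert_A = certainty[:, :W]       # per-pixel match certainty on image A's grid

# Resize the raw images to the warp resolution and convert to tensors
xA = torch.tensor(np.array(Image.open(imA_path).resize((W, H))) / 255.0,
                  dtype=torch.float32, device=device).permute(2, 0, 1)
xB = torch.tensor(np.array(Image.open(imB_path).resize((W, H))) / 255.0,
                  dtype=torch.float32, device=device).permute(2, 0, 1)

# Pull image B into image A's frame through the dense correspondence field
xB_warped_to_A = F.grid_sample(
    xB[None], grid_A_to_B[None], mode="bilinear", align_corners=False
)[0]

# Simple 50/50 blend of A and the locally aligned B, faded out where certainty is low
blend = 0.5 * xA + 0.5 * xB_warped_to_A
vis = cert_A[None] * blend + (1 - cert_A[None]) * torch.ones_like(blend)

Image.fromarray((vis.permute(1, 2, 0).cpu().numpy() * 255).astype(np.uint8)).save(
    "local_alignment_overlay.png"
)
```

Is something along these lines the recommended way to do it, or is there an existing utility in the repo I should use instead?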