Facial animation precision #12

@masterchabino

Description

Hi,

First of all, thanks for your amazing work on this technical path, @yxbian23.
It's exactly what was missing in motion diffusion models.

I managed to make MotionCraft work on my Windows computer directly in conda, without WSL, and I took the opportunity to update the code to mmcv-2.20, CUDA 11.8, and PyTorch 2.5.1 (if you're interested, I can open a pull request; I haven't tested the update on a Linux machine, but it should work).
I also updated the code to provide real inference for S2G, T2M, and M2D, so I can use my own text or audio via command-line parameters, and added a boolean parameter to end the process before the plot3d generation, which saves time when I only want to visualize the result in Blender.
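For anyone doing the same, a boolean "skip the plot3d stage" switch is easiest to express with argparse's `store_true` action. This is only a minimal sketch; the flag names (`--skip-plot3d`, `--text`, `--audio`) are hypothetical and not part of MotionCraft's actual CLI:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI sketch; flag names are illustrative, not MotionCraft's real API.
    parser = argparse.ArgumentParser(description="MotionCraft inference (sketch)")
    parser.add_argument("--text", default=None,
                        help="custom text prompt for T2M inference")
    parser.add_argument("--audio", default=None,
                        help="path to a custom audio file for S2G inference")
    # store_true yields a real boolean, avoiding fragile 'True'/'False' string parsing
    parser.add_argument("--skip-plot3d", action="store_true",
                        help="stop before plot3d generation; only export motion for Blender")
    return parser

# Example: parse a command line that skips the slow plot3d stage
args = build_parser().parse_args(["--audio", "speech.wav", "--skip-plot3d"])
```

With `store_true`, the flag defaults to `False` when omitted, so existing invocations keep their current behavior.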

But I have one final issue: the facial animation seems very static. For S2G, for example, which depends the most on facial animation, the mouth does not seem to move at all:
https://github.com/user-attachments/assets/2287b5da-325a-4e91-b782-32068e96fc51

Compared to the latest version of the PantoMatrix code:
https://github.com/user-attachments/assets/55d40303-29b9-4cf6-93a2-754ed63762ec

Do you have any tips to improve the quality of the generated facial animations?

Also, the checkpoints are very slow to load: compared to the latest version of the PantoMatrix code, loading is about 10 times slower. I think this is because your code is based on the previous version of PantoMatrix and EMAGE, before the 2025 code refactoring by "H-Liu1997".
Do you plan to update the code to the latest version of PantoMatrix?
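In case it helps narrow down the slowdown, a quick way to see which loading step dominates is to wrap each one in a timer. A generic sketch (the `time_stage` helper and the checkpoint name in the comment are hypothetical, not from either codebase):

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def time_stage(label: str, fn: Callable[[], T]) -> T:
    """Run one loading stage and print how long it took (hypothetical helper)."""
    start = time.perf_counter()
    result = fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return result

# Usage with a PyTorch checkpoint (illustrative; assumes standard torch.load):
# ckpt = time_stage("load checkpoint",
#                   lambda: torch.load("checkpoints/emage.pth", map_location="cpu"))
```

Timing the checkpoint load, model construction, and weight assignment separately would show whether the 10x gap comes from I/O or from the older model-building code.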

You can also email me if you want my version of the code.

Cheers,
