For this feature you mentioned:
"Deploy the voice model as an API for everyone to be able to use it locally(We can deploy on my vps i don't know, if the specs will even let us though, we'll see)"
I don't think it's necessary to run the model on the VPS itself. You can use your own local computer with a GPU as the main compute unit and treat the VPS only as a relay. The setup is straightforward, and there's no need for a static IP on your local machine: you can route the traffic through a private overlay network like ZeroTier or Tailscale, combined with a simple reverse proxy such as nginx on your VPS.
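To make the relay idea concrete, here is a minimal nginx sketch for the VPS side. The Tailscale address, port, and domain are all placeholders I made up for illustration; they'd need to be replaced with the real values once the API is running:

```nginx
# On the VPS: forward public traffic to the local GPU machine
# over the Tailscale/ZeroTier private network.
# 100.101.102.103 = hypothetical Tailscale IP of the GPU machine
# 8000            = hypothetical port the voice-model API listens on
server {
    listen 80;
    server_name voice.example.com;  # placeholder domain

    location / {
        proxy_pass http://100.101.102.103:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Inference can be slow, so allow long request times
        proxy_read_timeout 300s;
    }
}
```

With something like this, the VPS never runs the model; it just proxies requests over the private network, so the VPS specs barely matter.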
I don't know much about the project yet, since I only just found it through an old Reddit thread, but if I've understood it correctly, I can help with this task and we can test that it works.