Description
Is your feature request related to a problem? Please describe.
Meshroom uses only a fraction of my available CPU (about 1/3 on average) and RAM (about 1/4) when running a pipeline.
Describe the solution you'd like
I would like Meshroom to better utilize my available resources. Ideally this would be automatic, but for now being able to manually set the number of chunks / chunk size per node would already help. Right now FeatureExtraction is split into 200 chunks and FeatureMatching into 400. This results in very spiky CPU usage: parallelization is good at the start and end of a node's execution but drops very low in the middle. Using much larger chunks would likely improve CPU utilization considerably; it seems I could raise the chunk size 3-4x and expect much better throughput.
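For illustration, here is roughly the kind of override I have in mind. This is a minimal, untested sketch assuming the node descriptions expose their chunking via desc.Parallelization(blockSize=...) (my reading of meshroom/nodes/aliceVision/FeatureExtraction.py); the blockSize value and the monkey-patch approach are my assumptions, not a tested recipe.

```python
# Untested sketch: raise the per-chunk block size of FeatureExtraction so the
# same dataset is split into fewer, larger chunks. The desc.Parallelization
# attribute and blockSize keyword are assumptions based on my reading of
# meshroom/nodes/aliceVision/FeatureExtraction.py.
from meshroom.core import desc
from meshroom.nodes.aliceVision import FeatureExtraction as FE

# If the default blockSize yields ~200 chunks on my dataset, multiplying it by 4
# should yield ~50 larger chunks and (hopefully) steadier CPU utilization.
FE.FeatureExtraction.parallelization = desc.Parallelization(blockSize=160)
```

Ideally this would be a user-facing setting (or computed automatically from available cores and RAM) rather than something that requires editing or patching node descriptions.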
Describe alternatives you've considered
I tried setting MESHROOM_USE_MULTI_CHUNKS=false to avoid chunking entirely, but this used too much memory and the processes were OOM-killed.
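For reference, this is roughly how I launched that run; a minimal sketch where the meshroom_batch entry point and its --input/--output flags are from memory and may differ in other versions.

```python
# Sketch of launching a pipeline with multi-chunking disabled.
# MESHROOM_USE_MULTI_CHUNKS is the environment variable mentioned above;
# the meshroom_batch CLI flags are from memory and may need adjusting.
import os
import subprocess

env = dict(os.environ, MESHROOM_USE_MULTI_CHUNKS="false")
subprocess.run(
    ["meshroom_batch", "--input", "images/", "--output", "out/"],
    env=env,
    check=True,
)
```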
Being able to run multiple chunks in parallel would also mitigate this issue.