Description
We cache some information about the active orchestrators:
- at startup (cacheOrchestratorStake(), cacheDBOrchs())
- every 1h (cacheDBOrchs())
This caching process executes one HTTP request per active orchestrator, all in parallel (n orchestrators, n parallel requests). While this is not an issue for now, as we increase the number of active orchestrators we may end up with too many HTTP requests running in parallel.
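For illustration, a minimal sketch of the current pattern, assuming a per-orchestrator HTTP call; fetchOrchInfo and cacheAllOrchs are hypothetical names, not the actual go-livepeer functions:

```go
package main

import (
	"context"
	"net/http"
	"sync"
	"time"
)

// fetchOrchInfo stands in for the per-orchestrator HTTP call made while
// caching orchestrator info (e.g. inside cacheOrchestratorStake()/cacheDBOrchs()).
func fetchOrchInfo(ctx context.Context, client *http.Client, uri string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, uri, nil)
	if err != nil {
		return err
	}
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	// ... decode the response and update the cache ...
	return nil
}

// cacheAllOrchs launches one goroutine (and one HTTP request) per
// orchestrator, all at once -- the pattern described above.
func cacheAllOrchs(ctx context.Context, uris []string) {
	client := &http.Client{Timeout: 10 * time.Second}
	var wg sync.WaitGroup
	for _, uri := range uris {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			_ = fetchOrchInfo(ctx, client, u)
		}(uri)
	}
	wg.Wait()
}

func main() {
	cacheAllOrchs(context.Background(), []string{"https://orch1.example", "https://orch2.example"})
}
```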
We should:
- verify that we really need the information about all active orchestrators
- spread the fetching of this information over time to avoid sudden spikes in CPU usage or network congestion, for example by capping the number of parallel HTTP requests and executing batches sequentially (see the sketch after this list)
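A minimal sketch of the batching idea, reusing the hypothetical fetchOrchInfo helper from the sketch above; at most batchSize requests are in flight at any time, and each batch completes before the next one starts:

```go
// cacheOrchsInBatches processes the orchestrator list in fixed-size batches:
// requests within a batch run in parallel, batches run sequentially.
func cacheOrchsInBatches(ctx context.Context, uris []string, batchSize int) {
	client := &http.Client{Timeout: 10 * time.Second}
	for start := 0; start < len(uris); start += batchSize {
		end := start + batchSize
		if end > len(uris) {
			end = len(uris)
		}
		var wg sync.WaitGroup
		for _, uri := range uris[start:end] {
			wg.Add(1)
			go func(u string) { // at most batchSize requests in flight
				defer wg.Done()
				_ = fetchOrchInfo(ctx, client, u)
			}(uri)
		}
		wg.Wait() // finish the current batch before starting the next one
	}
}
```

An alternative to fixed batches would be a worker pool or semaphore (e.g. errgroup with SetLimit), which keeps a steady number of requests in flight instead of waiting for the slowest request in each batch.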