People (EDF, BCHydro, ...) have situations where abGRAbsolute is applied 100 times to an area source. In that case the contexts and mean_stds are equal for all source variations and only the occurrence rates change, yet we recompute them 100 times. It would be nice to avoid that. It is quite challenging to modify the engine to enable that optimization, since the distribution works by source model realization (i.e. each source variation ends up in a different task, making it impossible to reuse contexts and mean_stds). Before trying to change the distribution, which is a huge task, we should at least try to assess the magnitude of the possible speedup (see the timing sketch after the list below). A pessimistic view would say:
- performance is dominated by memory allocation
- the memory allocation in the RateMap would be the same
- => the performance would be the same, the work would be useless
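A quick way to get a first estimate of the possible speedup, before touching the distribution, is a standalone timing script comparing "recompute everything per variation" against "compute once, change only the rates", with a RateMap-like allocation in both loops to check whether the allocation really dominates. This is only a sketch: the shapes, the stand-in functions and the names are invented, not the engine's actual data structures.

```python
# Minimal timing sketch (not engine code): compare recomputing a
# mean_stds-like array for every variation vs computing it once, with an
# identical RateMap-like allocation in both cases. All names/shapes invented.
import time
import numpy

N_VARIATIONS = 100                    # e.g. 100 abGRAbsolute variations
N_SITES, N_MAGS, N_LEVELS = 2000, 30, 20

def compute_mean_stds():
    # stand-in for the expensive contexts + mean_stds computation
    mags = numpy.linspace(4.0, 7.0, N_MAGS)
    dists = numpy.random.uniform(1, 300, (N_SITES, N_MAGS))
    return 1.2 * mags - 0.01 * dists - numpy.log(dists)

def rate_map():
    # stand-in for the RateMap allocation, identical in both scenarios
    return numpy.zeros((N_SITES, N_LEVELS))

t0 = time.time()
for _ in range(N_VARIATIONS):         # current behaviour: recompute everything
    ms = compute_mean_stds()
    rmap = rate_map()
t_recompute = time.time() - t0

t0 = time.time()
ms = compute_mean_stds()              # possible optimization: compute once,
for _ in range(N_VARIATIONS):         # only the occurrence rates change
    rates = numpy.random.random(N_MAGS)
    rmap = rate_map()
t_reuse = time.time() - t0

print(f'recompute: {t_recompute:.3f}s, reuse: {t_reuse:.3f}s, '
      f'speedup: {t_recompute / t_reuse:.1f}x')
```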
This is VERY relevant for the POINT methodology (which will use a different distribution where all source variations will end up in the same task, thus making it possible to reuse contexts and mean_stds).
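The kind of reuse that such a distribution would allow can be sketched as a per-task cache keyed by the source geometry: the expensive part is computed once, each variation only supplies its own occurrence rates. The functions and data structures below are invented stand-ins, not the engine's API.

```python
# Sketch (not engine code) of the reuse enabled by a POINT-style distribution
# where all variations of a source end up in the same task. Names are invented.
import numpy

def expensive_mean_stds(geometry_id):
    # stand-in for building the contexts and computing mean_stds
    rng = numpy.random.default_rng(geometry_id)
    return rng.normal(size=(2000, 30))

def rates_to_curve(mean_stds, rates):
    # stand-in for combining mean_stds and occurrence rates into a result
    return mean_stds @ rates

def process_task(variations):
    cache = {}  # geometry_id -> mean_stds, shared by all variations in the task
    out = []
    for geometry_id, rates in variations:
        if geometry_id not in cache:          # computed once per geometry
            cache[geometry_id] = expensive_mean_stds(geometry_id)
        out.append(rates_to_curve(cache[geometry_id], rates))
    return out

# 100 abGRAbsolute variations of the same source: 1 expensive call, 100 cheap ones
variations = [(42, numpy.random.random(30)) for _ in range(100)]
curves = process_task(variations)
```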
Also, we should investigate which optimizations are possible with other kinds of uncertainty (like maxMagGRAbsolute).
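For maxMagGRAbsolute one possibility to investigate (pure speculation at this point) is that, if the mean_stds were stored per magnitude bin for the full magnitude range, a variation with a lower Mmax could just drop the bins above its Mmax instead of recomputing everything. Invented names and shapes below; the real rates would of course come from the MFD.

```python
# Hypothetical reuse for maxMagGRAbsolute: precompute mean_stds once for the
# full magnitude range, then mask out bins above each variation's Mmax.
import numpy

mags = numpy.arange(4.05, 8.05, 0.1)                     # bin centers, full range
full_mean_stds = numpy.random.normal(size=(2000, len(mags)))  # computed once

def curve_for_mmax(mmax, rates):
    ok = mags <= mmax                                    # keep bins below Mmax
    return full_mean_stds[:, ok] @ rates[ok]             # cheap per-variation step

rates = numpy.random.random(len(mags))                   # stand-in for MFD rates
curves = [curve_for_mmax(m, rates) for m in (6.5, 7.0, 7.5)]
```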