
See if we can optimize the calculation of hazard curves for source specific logic trees, at least for area sources #10906


Description

@micheles

People (EDF, BCHydro, ...) have situations where abGRAbsolute is applied 100 times to an area source. In that case the contexts and mean_stds are equal for all source variations and only the occurrence rates change, yet we recompute the contexts and mean_stds 100 times. It would be nice to avoid that. It is quite challenging to modify the engine to enable that optimization, since the distribution works by source model realization (i.e. each source variation ends up in a different task, making it impossible to reuse contexts and mean_stds). Before trying to change the distribution, which is a huge task, we should at least try to assess the magnitude of the possible speedup. A pessimistic view would say:

  1. performance is dominated by memory allocation
  2. the memory allocation in the RateMap would be the same
  3. therefore the performance would be the same and the optimization would be useless

This is VERY relevant for the POINT methodology (which will use a different distribution where all source variations will end up in the same task, thus making it possible to reuse contexts and mean_stds).
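
To give an idea of the work that could be shared, here is a minimal, self-contained sketch of the split between the expensive part (contexts and GMM mean/stddev values) and the cheap part (occurrence rates). The toy functions (gr_rates, toy_mean_stds) and all the numbers are illustrative stand-ins, not the engine API:

```python
import numpy as np
from scipy.stats import norm

def gr_rates(a, b, mags, bin_width=0.1):
    """Incremental rates of a Gutenberg-Richter MFD at the given
    magnitude bin centers (toy version)."""
    return (10. ** (a - b * (mags - bin_width / 2.))
            - 10. ** (a - b * (mags + bin_width / 2.)))

def toy_mean_stds(mags, dists):
    """Stand-in for the expensive GMM call: mean and stddev of the
    log ground motion for each (magnitude, distance) pair."""
    mean = -1. + 0.5 * mags[:, None] - np.log(dists[None, :])
    std = np.full_like(mean, 0.6)
    return mean, std

mags = np.arange(5.05, 7.0, 0.1)   # magnitude bin centers
dists = np.array([10., 30., 50.])  # toy site distances in km
iml = np.log(0.2)                  # log intensity level (~0.2 g)

# expensive part, computed ONCE: identical for every abGRAbsolute
# variation of the same area source
mean, std = toy_mean_stds(mags, dists)
pe = norm.sf(iml, loc=mean, scale=std)  # PoE given rupture, (n_mags, n_sites)

# cheap part, recomputed per variation: only the rates change
for a, b in [(4.0, 1.0), (4.2, 0.9), (3.8, 1.1)]:  # imagine 100 pairs
    rates = gr_rates(a, b, mags)                    # (n_mags,)
    # Poissonian hazard curve for 1 year of exposure
    curve = 1. - np.exp(-np.einsum('m,ms->s', rates, pe))
    print(a, b, curve)
```

In the toy the per-variation loop touches only small arrays, which is exactly the speedup this issue is after; whether that holds in the engine depends on point 1 of the pessimistic view above.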

Also, we should investigate which optimizations are possible with different uncertainties (like maxMagGRAbsolute).
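
For maxMagGRAbsolute a full reuse is not possible, since the set of magnitude bins changes across variations, but a partial reuse looks feasible: with fixed (a, b) values the incremental rates below each maximum magnitude are unchanged, so one could compute the mean_stds once for the largest mmax and slice. Continuing the toy sketch above (again illustrative names, not the engine API):

```python
# continuing the toy setup above: fixed (a, b), varying maximum magnitude
a, b = 4.0, 1.0
mmaxs = [6.5, 7.0, 7.5]

# expensive part, computed once for the largest mmax (union of all bins)
mags_all = np.arange(5.05, max(mmaxs), 0.1)
mean, std = toy_mean_stds(mags_all, dists)
pe_all = norm.sf(iml, loc=mean, scale=std)

# cheap part, per variation: truncate the shared arrays at this mmax
for mmax in mmaxs:
    n = np.searchsorted(mags_all, mmax)   # bins below this variation's mmax
    rates = gr_rates(a, b, mags_all[:n])  # same rates, tail cut at mmax
    curve = 1. - np.exp(-np.einsum('m,ms->s', rates, pe_all[:n]))
    print(mmax, curve)
```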
