Thank you for your interest in this topic! On your specific questions:

On the potential utility of adaptive or automated VR: in shielding problems, adaptive VR would be a significant win! In medical physics and industrial radiation-processing applications, however, VR methods tend to fall into a few generic classes, for example electron-beam bremsstrahlung generation, or dose scoring in small volumes within large phantoms. Our experience is that the currently implemented VR techniques have efficiency profiles with broad, smooth peaks around their optimal parameters. Because of this, we typically tune VR parameters by running a few short pilot simulations, scanning the parameter on a logarithmic scale to locate the optimum. An adaptive VR system would need to outperform this manual tuning to yield a practical advantage. That said, your proposed research direction is valuable: it would provide insight, and perhaps advances, in optimizing Monte Carlo radiation transport simulations.
Looking further ahead, I can imagine eliminating VR parameters entirely via a scheme I'll call retrospective splitting. The insight is that the simulation already knows when scoring events occur, so it should be able to concentrate effort in the relevant regions of phase space. In terms of integrating the Boltzmann transport equation, the simulation would automatically allocate more samples around tracks that contribute to the score. Visually: when a particle history reaches the scoring region, the simulation would "rewind" and enrich the upstream cascade to extract more information from that productive trajectory.

The challenge is to do this without introducing bias. For example, resampling the last interaction multiple times leads to oversampling with no obvious way to adjust the weights. The solution is to split particles upstream in the cascade: splitting a particle into N copies, each carrying 1/N of the original weight, conserves the expected score, so the estimator remains unbiased.

Implementing this would require storing the full cascade history (positions, directions, energies, interactions, perhaps even RNG states!) to enable "rewinding" to ancestor particles. That's a lot of bookkeeping, but it would not otherwise significantly complicate the current variance estimators, provided it is carried out per history. I wouldn't want to shoehorn this into the existing EGSnrc codebase! 😨 But I trust one could demonstrate the idea in a minimal egs++ application with a small purpose-built state machine.

Of course, this doesn't eliminate the optimization question. You'd still have parameters to choose, such as the recursion depth and the splitting ratios. But my hunch is that these would be more problem-agnostic than current VR parameters; a reasonable universal heuristic for "how far back to split" might emerge, for example. And since the simulation can track its own efficiency in real time, could it tune these on the fly, walking toward an optimum as it runs? Just thinking out loud! 😃
Hello all,
This issue is related to the following topic mentioned in "EGSnrc development project ideas #790":
I am currently working on the design of a portable, open-source C++ library for adaptive variance reduction techniques, extending some efficient techniques already implemented in a shielding Monte Carlo Fortran code that I maintain. The library aims to provide a concise API for collecting information about the Monte Carlo sampling: the host code sends current particle tracking and scoring data (the "learning" phase), and the library returns optimal parameters for the variance reduction techniques on request. I would like to investigate the possibility and relevance of linking it to the EGSnrc family of codes, and to adjust the method's design at an early stage if needed.
While I am familiar with the organizational principles of neutral-particle Monte Carlo transport codes and their nuclear reactor and radiation shielding applications, I am less familiar with the accelerator and medical physics areas where EGSnrc is applied. It would therefore be much appreciated if the principal developers could clarify several points:
1. EGSnrc is quite a comprehensive system, so I plan to dive into a specific code to investigate possible linking issues. From my study of the documentation, it seems this could be BEAMnrc, which implements splitting/Russian roulette and implicit capture techniques, but I may be wrong.
2. I am not experienced with Mortran, but since Mortran is just a preprocessor, it seems possible to call routines of a specially designed wrapper, built as part of EGSnrc, from specific tracking and scoring points.
3. It would also be helpful to know community members' thoughts on the potential utility of automated variance reduction in EGSnrc. For example, radiation shielding problems for nuclear power facilities and for spent nuclear fuel and radioactive waste transportation, as a rule, cannot be solved in analog Monte Carlo mode due to very high attenuation; these calculations are always performed with variance reduction techniques, which provide several orders of magnitude of computational gain. What is the status of variance reduction in treatment planning, accelerator physics, and the other problems for which EGSnrc is used?
Thanks for your time.
Best regards,
Vitaly Mogulian