Hi Enzyme Team,
I am building a simulation engine that works in two distinct calculation stages:
Calculation flow
Stage 1 (Common calculation):
Generates a 3D "Risk Factor Cube" (Factors $\times$ Times $\times$ Paths).
This cube is then re-used by a list of individual (entity) calculations.
Stage 2 (Entity - unique):
Each individual calculation entity reads from the 3D Cube to generate a 2D "Projection Matrix" of results (Times $\times$ Paths). This matrix is then aggregated into a single scalar result, which I need to differentiate.
Hence, I need to evaluate the derivatives of this "list" of dependent scalars with respect to the independent variables.
Constraint
Because I have thousands of individual entities to calculate, I cannot store all of the 2D "Projection Matrices" simultaneously. So I need to stage the results of each individual "entity" one at a time, to keep RAM usage constant.
My Proposed Loop:
Step A (Entity AD: from Stage 2 -> 1):
Run `__enzyme_autodiff` on the Entity Simulation result. I capture the gradient of the scalar with respect to the 3D Cube (from Stage 1) in a temporary shadow buffer (`d_Cube_Staging`).
Step B (Bridge AD: from Stage 1 -> 0):
Run `__enzyme_autodiff` on the Common Simulation. I seed it with the `d_Cube_Staging` from Step A to propagate the sensitivity back to my initial model parameters.
Step C (Clean up the staging buffer): `memset` the staging shadow buffer (`d_Cube_Staging`) to zero and repeat for the next entity.
Questions:
Is it supported to "seed" the second Enzyme call using the shadow output of the first call?
Is manually zeroing and reusing the 3D shadow buffer (`d_Cube_Staging`) inside a loop a safe and recommended pattern?
Since the 3D Cube is the same for every entity, is there a way to avoid re-running the adjoint of the Common Simulation every time, or is this sequential "Bridge" approach the standard for keeping memory usage flat?