
Commit e38933c

Author: Anthony David Gruber
Message: testing transparency
Parent: 855a0d8

File tree: 5 files changed (+4, -4 lines)

_pages/about.md (4 additions & 4 deletions)
@@ -83,7 +83,7 @@ Structure-Informed Model Reduction and Function Approximation
 {: .notice--info}
 
 ### Multifidelity Monte Carlo Estimation for Efficient Uncertainty Quantification in Climate-Related Modeling [Preprint](https://egusphere.copernicus.org/preprints/2022/egusphere-2022-797/){: .btn .btn--info .btn--small}{: .align-right}
-<img src="/images/ice_mfmc.png" style="max-height: 250px; max-width: 250px; margin-right: 16px; margin-bottom: 10px" align=left> **Abstract:** Uncertainties in an output of interest that depends on the solution of a complex system (e.g., of partial differential equations with random inputs) are often, if not nearly ubiquitously, determined in practice using Monte Carlo (MC) estimation. While simple to implement, MC estimation fails to provide reliable information about statistical quantities (such as the expected value of the output of interest) in application settings such as climate modeling for which obtaining a single realization of the output of interest is a costly endeavor. Specifically, the dilemma encountered is that many samples of the output of interest have to be collected in order to obtain an MC estimator having sufficient accuracy; so many, in fact, that the available computational budget is not large enough to effect the number of samples needed. To circumvent this dilemma, we consider using multifidelity Monte Carlo (MFMC) estimation which leverages the use of less costly and less accurate surrogate models (such as coarser grids, reduced-order models, simplified physics, interpolants, etc.) to achieve, for the same computational budget, higher accuracy compared to that obtained by an MC estimator or, looking at it another way, an MFMC estimator obtains the same accuracy as the MC estimator at lower computational cost. The key to the efficacy of MFMC estimation is the fact that most of the required computational budget is loaded onto the less costly surrogate models, so that very few samples are taken of the more expensive model of interest. We first provide a more detailed discussion about the need to consider an alternate to MC estimation for uncertainty quantification. Subsequently, we present a review, in an abstract setting, of the MFMC approach along with its application to three climate-related benchmark problems as a proof-of-concept exercise.
+<img src="/images/ice_mfmc.pdf" style="max-height: 250px; max-width: 250px; margin-right: 16px; margin-bottom: 10px" align=left> **Abstract:** Uncertainties in an output of interest that depends on the solution of a complex system (e.g., of partial differential equations with random inputs) are often, if not nearly ubiquitously, determined in practice using Monte Carlo (MC) estimation. While simple to implement, MC estimation fails to provide reliable information about statistical quantities (such as the expected value of the output of interest) in application settings such as climate modeling for which obtaining a single realization of the output of interest is a costly endeavor. Specifically, the dilemma encountered is that many samples of the output of interest have to be collected in order to obtain an MC estimator having sufficient accuracy; so many, in fact, that the available computational budget is not large enough to effect the number of samples needed. To circumvent this dilemma, we consider using multifidelity Monte Carlo (MFMC) estimation which leverages the use of less costly and less accurate surrogate models (such as coarser grids, reduced-order models, simplified physics, interpolants, etc.) to achieve, for the same computational budget, higher accuracy compared to that obtained by an MC estimator or, looking at it another way, an MFMC estimator obtains the same accuracy as the MC estimator at lower computational cost. The key to the efficacy of MFMC estimation is the fact that most of the required computational budget is loaded onto the less costly surrogate models, so that very few samples are taken of the more expensive model of interest. We first provide a more detailed discussion about the need to consider an alternate to MC estimation for uncertainty quantification. Subsequently, we present a review, in an abstract setting, of the MFMC approach along with its application to three climate-related benchmark problems as a proof-of-concept exercise.
 <br><br>
 (Joint with [Max Gunzburger](https://people.sc.fsu.edu/~mgunzburger/), [Lili Ju](https://people.math.sc.edu/ju/), [Rihui Lan](https://scholar.google.com/citations?user=qkMD9tsAAAAJ&hl=en), and [Zhu Wang](https://people.math.sc.edu/wangzhu/).)
 {: .notice--info}
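The MFMC strategy described in the abstract above can be sketched as a two-model control-variate estimator. The toy `f_hi`/`f_lo` functions and the sample split below are illustrative assumptions, not the paper's climate models or its optimal budget allocation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for an expensive model and a cheap surrogate.
def f_hi(z):                 # "expensive" high-fidelity model
    return np.sin(z) + 0.1 * z**2

def f_lo(z):                 # cheap, correlated low-fidelity surrogate
    return np.sin(z)

def mfmc_estimate(n_hi, n_lo, rng):
    """Two-model MFMC estimate of E[f_hi(Z)], Z ~ N(0, 1).

    Uses n_hi expensive samples and n_lo >= n_hi cheap ones; the
    surrogate acts as a control variate for the high-fidelity mean.
    """
    z = rng.standard_normal(n_lo)      # shared random inputs
    y_lo_all = f_lo(z)                 # surrogate on the full sample set
    y_lo = y_lo_all[:n_hi]             # surrogate on the shared subset
    y_hi = f_hi(z[:n_hi])              # expensive model on few samples

    # Control-variate weight alpha = Cov(hi, lo) / Var(lo),
    # estimated from the shared subset.
    cov = np.cov(y_hi, y_lo)
    alpha = cov[0, 1] / cov[1, 1]

    # Few expensive samples, corrected by the cheap large-sample mean.
    return y_hi.mean() + alpha * (y_lo_all.mean() - y_lo.mean())

est = mfmc_estimate(n_hi=50, n_lo=5000, rng=rng)
```

Loading most of the budget onto `f_lo` shrinks the variance of the correction term, while the few `f_hi` evaluations keep the estimator unbiased; here the true mean is E[sin(Z)] + 0.1·E[Z²] = 0.1.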
@@ -95,7 +95,7 @@ Structure-Informed Model Reduction and Function Approximation
 {: .notice--info}
 
 ### Energetically Consistent Model Reduction for Metriplectic Systems [Preprint](https://arxiv.org/abs/2204.08049#){: .btn .btn--info .btn--small}{: .align-right}
-<img src="/images/gas_containers_FOMs.png" style="max-height: 250px; max-width: 250px; margin-right: 16px; margin-bottom: 10px" align=left> **Abstract:** The metriplectic formalism is useful for describing complete dynamical systems which conserve energy and produce entropy. This creates challenges for model reduction, as the elimination of high-frequency information will generally not preserve the metriplectic structure which governs long-term stability of the system. Based on proper orthogonal decomposition, a provably convergent metriplectic reduced-order model is formulated which is guaranteed to maintain the algebraic structure necessary for energy conservation and entropy formation. Numerical results on benchmark problems show that the proposed method is remarkably stable, leading to improved accuracy over long time scales at a moderate increase in cost over naive methods.
+<img src="/images/gas_containers_FOMs.pdf" style="max-height: 250px; max-width: 250px; margin-right: 16px; margin-bottom: 10px" align=left> **Abstract:** The metriplectic formalism is useful for describing complete dynamical systems which conserve energy and produce entropy. This creates challenges for model reduction, as the elimination of high-frequency information will generally not preserve the metriplectic structure which governs long-term stability of the system. Based on proper orthogonal decomposition, a provably convergent metriplectic reduced-order model is formulated which is guaranteed to maintain the algebraic structure necessary for energy conservation and entropy formation. Numerical results on benchmark problems show that the proposed method is remarkably stable, leading to improved accuracy over long time scales at a moderate increase in cost over naive methods.
 <br>
 (Joint with [Max Gunzburger](https://people.sc.fsu.edu/~mgunzburger/), [Lili Ju](https://people.math.sc.edu/ju/), and [Zhu Wang](https://people.math.sc.edu/wangzhu/).)
 {: .notice--info}
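The structural claim in the abstract above (conserve energy, produce entropy) can be checked numerically on a small metriplectic system. The choices E = |z|²/2, S = b·z, and the matrices below are an illustrative construction satisfying the metriplectic degeneracy conditions, not the reduced-order model from the paper:

```python
import numpy as np

# Toy metriplectic system on R^3: dz/dt = L grad(E) + M(z) grad(S),
# with energy E = |z|^2 / 2 and entropy S = b . z.
b = np.array([0.0, 0.0, 1.0])
L = np.array([[0.0, -b[2], b[1]],    # antisymmetric; L @ grad(S) = b x b = 0
              [b[2], 0.0, -b[0]],
              [-b[1], b[0], 0.0]])

def rhs(z):
    # M(z) = I - z z^T / |z|^2 is symmetric PSD with M(z) @ grad(E) = 0,
    # so dE/dt = 0 exactly and dS/dt = |b|^2 - (b.z)^2/|z|^2 >= 0.
    grad_E, grad_S = z, b
    M_gradS = grad_S - z * (z @ grad_S) / (z @ z)
    return L @ grad_E + M_gradS

def rk4_step(z, dt):
    k1 = rhs(z); k2 = rhs(z + 0.5 * dt * k1)
    k3 = rhs(z + 0.5 * dt * k2); k4 = rhs(z + dt * k3)
    return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

z = np.array([1.0, 0.0, 0.0])
energy, entropy = [0.5 * z @ z], [b @ z]
for _ in range(500):                 # integrate to t = 5 with dt = 0.01
    z = rk4_step(z, 0.01)
    energy.append(0.5 * z @ z)
    entropy.append(b @ z)
```

Along the trajectory the energy stays constant up to integrator error while the entropy increases monotonically, which is exactly the algebraic structure the reduced-order model in the abstract is built to preserve.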
@@ -113,13 +113,13 @@ Structure-Informed Model Reduction and Function Approximation
 {: .notice--info}
 
 ### Pseudo-Reversible Neural Networks [Preprint](https://arxiv.org/abs/2112.01438#){: .btn .btn--info .btn--small}{: .align-right}
-<img src="/images/prnn.png" style="max-height: 250px; max-width: 250px; margin-right: 16px" align=left> **Abstract:** Due to the curse of dimensionality and limitations on training data, approximating high-dimensional functions is a very challenging task even for powerful deep neural networks. Inspired by the Nonlinear Level set Learning (NLL) method that uses the reversible residual network (RevNet), in this paper we propose a new method for function approximation called Dimension Reduction via Learning Level Sets (DRiLLS). Our method contains two major components: one is the pseudo-reversible neural network (PRNN) module that effectively transforms high-dimensional input variables to low-dimensional active variables, and the other is the synthesized regression module for approximating function values based on the transformed data in the low-dimensional space. Extensive experimental results demonstrate that DRiLLS outperforms both the NLL and Active Subspace methods, especially when the target function possesses critical points in the interior of its input domain.
+<img src="/images/prnn.pdf" style="max-height: 250px; max-width: 250px; margin-right: 16px" align=left> **Abstract:** Due to the curse of dimensionality and limitations on training data, approximating high-dimensional functions is a very challenging task even for powerful deep neural networks. Inspired by the Nonlinear Level set Learning (NLL) method that uses the reversible residual network (RevNet), in this paper we propose a new method for function approximation called Dimension Reduction via Learning Level Sets (DRiLLS). Our method contains two major components: one is the pseudo-reversible neural network (PRNN) module that effectively transforms high-dimensional input variables to low-dimensional active variables, and the other is the synthesized regression module for approximating function values based on the transformed data in the low-dimensional space. Extensive experimental results demonstrate that DRiLLS outperforms both the NLL and Active Subspace methods, especially when the target function possesses critical points in the interior of its input domain.
 <br><br>
 (Joint with [Lili Ju](https://people.math.sc.edu/ju/), [Yuankai Teng](https://slooowtyk.github.io/), [Zhu Wang](https://people.math.sc.edu/wangzhu/), and [Guannan Zhang](https://sites.google.com/view/guannan-zhang/home).)
 {: .notice--info}
 
 ### Active Manifolds: Geometric Data Analysis for Dimension Reduction [Here](http://proceedings.mlr.press/v97/bridges19a/bridges19a.pdf){: .btn .btn--info .btn--small}{: .align-right} [Read More.](/am/){: .btn .btn--info .btn--small}{: .align-right}
-<img src="/images/AMstuff.png" style="max-height: 300px; max-width: 300px; margin-right: 16px" align=left> **Abstract:** We present an approach to analyze $$C^1(\mathbb{R}^m)$$ functions that addresses limitations present in the Active Subspaces (AS) method of Constantine et al. Under appropriate hypotheses, our Active Manifolds (AM) method identifies a 1-D curve in the domain (the active manifold) on which nearly all values of the unknown function are attained, and which can be exploited for approximation or analysis, especially when $$m$$ is large (high-dimensional input space). We provide theorems justifying our AM technique and an algorithm permitting functional approximation and sensitivity analysis.
+<img src="/images/AMstuff.pdf" style="max-height: 300px; max-width: 300px; margin-right: 16px" align=left> **Abstract:** We present an approach to analyze $$C^1(\mathbb{R}^m)$$ functions that addresses limitations present in the Active Subspaces (AS) method of Constantine et al. Under appropriate hypotheses, our Active Manifolds (AM) method identifies a 1-D curve in the domain (the active manifold) on which nearly all values of the unknown function are attained, and which can be exploited for approximation or analysis, especially when $$m$$ is large (high-dimensional input space). We provide theorems justifying our AM technique and an algorithm permitting functional approximation and sensitivity analysis.
 Using accessible, low-dimensional functions as initial examples, we show AM reduces approximation error by an order of magnitude compared to AS, at the expense of more computation. Following this, we revisit the sensitivity analysis by Glaws et al. who apply AS to analyze a magnetohydrodynamic power generator model, and compare the performance of AM on the same data. Our analysis provides detailed information not captured by AS, exhibiting the influence of each parameter individually along an active manifold. Overall, AM represents a novel technique for analyzing functional models with benefits including: reducing $$m$$-dimensional analysis to a 1-D analogue, permitting more accurate regression than AS (at more computational expense), enabling more informative sensitivity analysis, and granting accessible visualizations (2-D plots) of parameter sensitivity along the AM.
 <br><br>
 (Joint with [Robert Bridges](https://sites.google.com/site/robertbridgeshomepage/), [Christopher Felder](https://www.math.wustl.edu/~cfelder/), and [Miki Verma](https://scholar.google.com/citations?user=1jUa6nwAAAAJ&hl=en).) <br>
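The 1-D curve that the Active Manifolds abstract describes can be illustrated by tracing the normalized gradient flow of a toy function: function values increase strictly along the curve, so it sweeps out a 1-D set of values of the target. The function `f`, its domain, and the Euler stepping below are assumptions for illustration, not the algorithm from the paper:

```python
import numpy as np

# Hypothetical smooth test function on [0, 1]^2 and its exact gradient.
def f(p):
    x, y = p
    return np.exp(0.7 * x + 0.3 * y)

def grad_f(p):
    x, y = p
    g = np.exp(0.7 * x + 0.3 * y)
    return np.array([0.7 * g, 0.3 * g])

def trace_active_manifold(p0, step=1e-2, n_steps=200):
    """Euler-integrate the normalized gradient field from p0,
    stopping at the domain boundary; f is strictly increasing
    along the resulting 1-D curve."""
    pts = [np.array(p0, dtype=float)]
    for _ in range(n_steps):
        g = grad_f(pts[-1])
        p_next = pts[-1] + step * g / np.linalg.norm(g)
        if np.any(p_next < 0.0) or np.any(p_next > 1.0):
            break                      # left the domain [0, 1]^2
        pts.append(p_next)
    return np.array(pts)

curve = trace_active_manifold([0.05, 0.05])
vals = np.array([f(p) for p in curve])
```

Regressing `vals` against arc length along `curve` is the 1-D analogue of the original 2-D approximation problem, which is the dimension-reduction benefit the abstract claims.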

images/AMstuff.pdf (77.4 KB): binary file not shown
images/gas_containers_FOMs.pdf (47.1 KB): binary file not shown
images/ice_mfmc.pdf (449 KB): binary file not shown
images/prnn.pdf (913 KB): binary file not shown

0 commit comments