
Commit ebc22eb

Author: Anthony David Gruber
Commit message: trying
1 parent e800473 commit ebc22eb

File tree: 1 file changed (+6 −6 lines)


_pages/about.md

Lines changed: 6 additions & 6 deletions
@@ -47,17 +47,17 @@ Broad research keywords which tend to interest me include: scientific machine le
 
 Structure-Informed Model Reduction and Function Approximation
 -----
-### Tensor Parametric Hamiltonian Operator Inference [Preprint](https://arxiv.org/abs/2502.10888#){: .btn .btn--info .btn--small}{: .align-right}
-<img src="/images/tensor_wave.pdf" style="max-height: 300px; max-width: 300px; margin-right: 16px; margin-bottom: 10px" align=left> **Abstract:** This work presents a tensor-based approach to constructing data-driven reduced-order models corresponding to semi-discrete partial differential equations with canonical Hamiltonian structure. By expressing parameter-varying operators with affine dependence as contractions of a generalized parameter vector against a constant tensor, this method leverages the operator inference framework to capture parametric dependence in the learned reduced-order model via the solution to a convex, least-squares optimization problem. This leads to a concise and straightforward implementation which compactifies previous parametric operator inference approaches and directly extends to learning parametric operators with symmetry constraints, a key feature required for constructing structure-preserving surrogates of Hamiltonian systems. The proposed approach is demonstrated on both a (non-Hamiltonian) heat equation with variable diffusion coefficient and a Hamiltonian wave equation with variable wave speed.
-<br><br>
-(Joint with [Arjun Vijaywargia](https://arjunveejay.notion.site/Arjun-Vijaywargiya-4f155526b32e4b0a97b7f5dad4c89dde) and [Shane A. McQuarrie](https://github.com/shanemcq18).)
-{: .notice--info}
-
 <img src="/images/GCNN_recon2.png" style="max-height: 275px; max-width: 325px; margin-right: 16px" align=left> Due to their high computational cost, scientific studies based on large-scale simulation frequently operate at a data deficit which creates problems inverse to the issues with "big data". Particularly, there is a need for efficient function approximation and model reduction strategies which can serve as cheap and reliable surrogates for the high-fidelity models used in practical applications. These projects develop such technology using invariances and other structural considerations as a starting point, allowing for informed surrogates with beneficial behavior.
 
 <details markdown="1"><summary><b>Projects</b></summary>
 {: .notice}
 
+### Tensor Parametric Hamiltonian Operator Inference [Preprint](https://arxiv.org/abs/2502.10888#){: .btn .btn--info .btn--small}{: .align-right}
+<img src="/images/tensor_wave.pdf" style="max-height: 250px; max-width: 250px; margin-right: 16px; margin-bottom: 10px" align=left> **Abstract:** This work presents a tensor-based approach to constructing data-driven reduced-order models corresponding to semi-discrete partial differential equations with canonical Hamiltonian structure. By expressing parameter-varying operators with affine dependence as contractions of a generalized parameter vector against a constant tensor, this method leverages the operator inference framework to capture parametric dependence in the learned reduced-order model via the solution to a convex, least-squares optimization problem. This leads to a concise and straightforward implementation which compactifies previous parametric operator inference approaches and directly extends to learning parametric operators with symmetry constraints, a key feature required for constructing structure-preserving surrogates of Hamiltonian systems. The proposed approach is demonstrated on both a (non-Hamiltonian) heat equation with variable diffusion coefficient and a Hamiltonian wave equation with variable wave speed.
+<br><br>
+(Joint with [Arjun Vijaywargia](https://arjunveejay.notion.site/Arjun-Vijaywargiya-4f155526b32e4b0a97b7f5dad4c89dde) and [Shane A. McQuarrie](https://github.com/shanemcq18).)
+{: .notice--info}
+
 ### Efficiently Parameterized Neural Metriplectic Systems [Preprint](https://arxiv.org/abs/2405.16305#){: .btn .btn--info .btn--small}{: .align-right}
 <img src="/images/metriplectic_diagram.pdf" style="max-height: 250px; max-width: 250px; margin-right: 16px; margin-bottom: 10px" align=left> **Abstract:** Metriplectic systems are learned from data in a way that scales quadratically in both the size of the state and the rank of the metriplectic data. Besides being provably energy conserving and entropy stable, the proposed approach comes with approximation results demonstrating its ability to accurately learn metriplectic dynamics from data as well as an error estimate indicating its potential for generalization to unseen timescales when approximation error is low. Examples are provided which illustrate performance in the presence of both full state information as well as when entropic variables are unknown, confirming that the proposed approach exhibits superior accuracy and scalability without compromising on model expressivity.
 <br><br>
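The core device in the tensor parametric operator inference abstract, writing an affine-parametric operator A(μ) = Σ_i θ_i(μ) A_i as a contraction of a generalized parameter vector against one constant tensor, can be sketched in a few lines of NumPy. This is a minimal illustration under assumed shapes and an assumed coefficient map θ(μ) = (1, μ, μ²); it is not code from the preprint.

```python
import numpy as np

# Hypothetical sizes: d affine terms, n-by-n reduced operators.
d, n = 3, 4
rng = np.random.default_rng(0)

# Constant 3-tensor stacking the affine component operators A_1, ..., A_d.
A_stack = rng.standard_normal((d, n, n))

def theta(mu):
    # Assumed affine coefficient map theta(mu) = (1, mu, mu^2); any
    # parameter-dependent coefficients work so long as A(mu) is affine in them.
    return np.array([1.0, mu, mu**2])

def A(mu):
    # Contract the generalized parameter vector against the constant tensor:
    # A(mu) = sum_i theta_i(mu) * A_i
    return np.einsum("i,ijk->jk", theta(mu), A_stack)

# Sanity check: the contraction matches the explicit affine sum.
mu = 0.7
explicit = sum(t * Ai for t, Ai in zip(theta(mu), A_stack))
assert np.allclose(A(mu), explicit)
```

Because the unknown is the single constant tensor `A_stack`, fitting it to snapshot data remains a linear least-squares problem, which is the convexity the abstract points to.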

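The metriplectic structure referenced in the second abstract can be made concrete with a small worked example (a damped oscillator with an entropy variable; the choices of E, S, L, and M below are illustrative assumptions, not the preprint's parameterization). The dynamics are ẋ = L∇E + M∇S with L skew-symmetric, M symmetric positive semidefinite, and the degeneracy conditions L∇S = 0 and M∇E = 0, which force exact energy conservation and entropy growth.

```python
import numpy as np

gamma = 0.5  # hypothetical friction strength

def grad_E(x):
    q, p, s = x
    return np.array([q, p, 1.0])      # E = q^2/2 + p^2/2 + s

def grad_S(x):
    return np.array([0.0, 0.0, 1.0])  # S = s

def L_op(x):
    # Constant skew-symmetric (Poisson-like) matrix; satisfies L @ grad_S == 0.
    return np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])

def M_op(x):
    # Symmetric PSD friction matrix built as an outer product of a vector
    # orthogonal to grad_E, so the degeneracy M @ grad_E == 0 holds exactly.
    q, p, s = x
    v = np.array([0.0, 1.0, -p])
    return gamma * np.outer(v, v)

def rhs(x):
    return L_op(x) @ grad_E(x) + M_op(x) @ grad_S(x)

def rk4_step(x, h):
    k1 = rhs(x); k2 = rhs(x + h/2*k1); k3 = rhs(x + h/2*k2); k4 = rhs(x + h*k3)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

# Integrate a damped oscillator: q' = p, p' = -q - gamma*p, s' = gamma*p^2.
x = np.array([1.0, 0.0, 0.0])
E0 = 0.5*(x[0]**2 + x[1]**2) + x[2]
for _ in range(2000):
    x = rk4_step(x, 1e-3)
E1 = 0.5*(x[0]**2 + x[1]**2) + x[2]
assert abs(E1 - E0) < 1e-6  # energy conserved up to integrator error
assert x[2] > 0.0           # entropy variable strictly increased
```

Learning such systems from data amounts to parameterizing L, M (and possibly E, S) so the symmetry and degeneracy conditions hold by construction; the abstract's contribution is doing this with cost quadratic in the state size and metriplectic rank.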