

Sam Foreman

👋 Hi, I’m Sam!

🧑🏻‍💻 About

[!TIP]

✏️ Last Updated

Updated: 2025-10-23 @ 08:39:43

[!TIP]

🎶 Now Playing

[!TIP]

spotify Now Playing

[!TIP]

last.fm

<script>
  /** Developed by Prashant Shrestha + https://prashant.me */
  var lastfmData = {
    baseURL:
      "https://ws.audioscrobbler.com/2.0/?method=user.getrecenttracks&user=",
    // Your Last.fm username
    user: "saforem2",
    // Your API key
    api_key: "1dbc15037c1fe71ce06acbb3f73adc75",
    additional: "&format=json&limit=1"
  };

  var getSetLastFM = function() {
    $.ajax({
      type: "GET",
      url:
        lastfmData.baseURL +
        lastfmData.user +
        "&api_key=" +
        lastfmData.api_key +
        lastfmData.additional,
      dataType: "json",
      success: function(resp) {
        var recentTrack = resp.recenttracks.track[0];
        var formatted = "🎶 " + recentTrack.name;
        $("a#tracktitle")
          .html(formatted)
          .attr("href", recentTrack.url)
          .attr("title", recentTrack.name + " by " + recentTrack.artist["#text"])
          .attr("target", "_blank");
        var artistFormatted = "🗣️ " + recentTrack.artist["#text"];
        $("a#trackartist")
          .html(artistFormatted)
          .attr("title", "Artist : " + recentTrack.artist["#text"]);
        $("img#trackart").attr("src", recentTrack.image[2]["#text"]);
      },
      error: function(resp) {
        // Fall back to a static placeholder if Last.fm is unreachable
        $("a#tracktitle").html("Silence!");
        $("img#trackart").attr("src", "🧑🏻‍💻");
        var artistFormatted = "Sam Foreman";
        $("a#trackartist")
          .html(artistFormatted)
          .attr("href", "https://samforeman.me");
      }
    });
  };

  // Fetch once on load, then refresh every 50 seconds.
  getSetLastFM();
  setInterval(getSetLastFM, 10 * 5000);
</script>

[!TIP]

➕ More

<iframe src="https://github.com/sponsors/saforem2/button" title="Sponsor saforem2" height="32" width="114" style="border: 0; border-radius: 6px;"> </iframe>

© Copyright Sam Foreman

📝 Work

[!NOTE]

You can find a full list of my publications on my Google Scholar profile.

  1. 🌎 AERIS: Argonne Earth Systems Model for Reliable and Skillful Predictions (Hatanpää et al. (2025))
  2. Aurora: Architecting Argonne’s First Exascale Supercomputer for Accelerated Scientific Discovery (Allen et al. (2025))
  3. HiPerRAG: High-Performance Retrieval Augmented Generation for Scientific Insights (Gokdemir et al. (2025))
  4. Automated Tuning for HMC Mass Ratios (Torsiello et al. (2025))
  5. MOFA: Discovering Materials for Carbon Capture with a GenAI and Simulation-Based Workflow (Yan et al. (2025))
  6. 🧪 MProt-DPO: Breaking the ExaFLOPS Barrier for Multimodal Protein Design with DPO (Dharuman et al. (2024))
  7. Intro to HPC Bootcamp: Engaging New Communities Through Energy Justice Projects (Leung et al. (2024))
  8. Thorough Characterization and Analysis of Large Transformer Model Training At-Scale (Cheng et al. (2024))
  9. MLMC: Machine Learning Monte Carlo for Lattice Gauge Theory (Sam Foreman, Jin, and Osborn (2023))
  10. Protein Generation via Genome-scale Language Models with Bio-physical Scoring (Dharuman et al. (2023))
  11. DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery (Song et al. (2023)) - 📰 DeepSpeed4Science.ai Blog Post - 🚂 Loooooooong Sequence Lengths
  12. Comprehensive Performance Study of LLMs on Novel AI Accelerators (Emani et al. (2023))
  13. Exploratory Analysis of Climate Data with ClimRR, Intro to HPC Bootcamp @ NERSC (Sam Foreman (2023))
  14. 🧬 GenSLMs: Genome-scale language models reveal SARS-Cov-2 evolutionary dynamics (Zvyagin et al. (2023))
  15. Lattice QCD and Particle Physics (Kronfeld et al. (2022))
  16. Applications of ML to Lattice QFT (Boyda et al. (2022))
  17. LeapFrogLayers: Trainable Framework for Effective Sampling (Sam Foreman et al. (2021))
  18. HMC with Normalizing Flows [slides] (Sam Foreman et al. (2021))
  19. Deep Learning Hamiltonian Monte Carlo [+ poster] (Sam Foreman, Jin, and Osborn (2021))
  20. Machine Learning and Neural Networks for Field Theory (Sam Foreman, Jin, and Osborn (2020))
  21. Examples of renormalization group transformations for image sets (Samuel Foreman et al. (2018))
  22. RG inspired Machine Learning for lattice field theory (Sam Foreman et al. (2018))
  23. Large Energy Density in Three-Plate Nanocapacitors due to Coulomb Blockade (Hubler et al. (2018))
  24. Superconductivity of In and Sn Samples (Deamont and Foreman (2014))

🦜 Talks

[!TIP]

[HTML ⇆ Reveal.js]

Convert any page from its HTML version to the Reveal.js slideshow version by appending /slides to the end of its URL.

📆 2025

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="https://samforeman.me/talks/2025/10/24/slides#" title="Training Foundation Models on Supercomputers" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

Training Foundation Models on Supercomputers @ Georgia Institute of Technology [10/2025]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="https://samforeman.me/talks/2025/10/15/slides#" title="Training Foundation Models on Supercomputers" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="https://samforeman.me/talks/2025/10/08/slides#" title="AERIS: Argonne Earth Systems Model" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="https://samforeman.me/talks/2025/09/24/slides#" title="Training Foundation Models on Supercomputers" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="https://samforeman.me/talks/openskai25/ai4science/slides#" title="Scientific AI at Scale: AI for Science" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="https://samforeman.me/talks/openskai25/training/slides.html" title="Scientific AI at Scale: Distributed Training" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="https://samforeman.me/talks/AuroraGPT-SIAM25/slides#" title="AuroraGPT: Large Scale Training on Diverse Accelerators" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="https://samforeman.me/talks/incite-hackathon-2025/AuroraGPT/slides#/section" title="LLMs on Aurora: AuroraGPT" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="https://samforeman.me/talks/incite-hackathon-2025/ezpz/slides#/section" title="🍋 ezpz on Aurora" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="/talks/aurora-gpt-fm-for-electric-grid/slides.html" title="AuroraGPT: Foundation Models for Science" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

📆 2024

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="/talks/ai-for-science-2024/slides.html" title="Parallel Training Methods" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="/talks/AuroraGPT/alcf-hpc-workshop-2024/slides.html" title="AuroraGPT" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://samforeman.me/talks/alcf-hpc-workshop-2024/slides#/section" title="Machine Learning and Foundation Models at Scale" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck reveal-full-page" loading="lazy" src="/talks/hpc-user-forum/slides.html" title="AuroraGPT" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="/talks/llms-at-scale/slides.html" title="Training LLMs at Scale" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="/talks/llms-on-polaris/slides.html" title="LLMs on Polaris" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/parallel-training-slides" title="Parallel Training Techniques" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/llm-workshop-talk" title="LLMs from Scratch" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

📆 2023

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/LLM-tutorial" title="Creating Small(-ish) LLMs" align="center" frameborder="0" webkitallowfullscreen allowfullscreen> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/oneapi-talk" title="Exascale Science on Aurora" align="center" frameborder="0" webkitallowfullscreen allowfullscreen> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/llm-lunch-talk/#/section" title="LLMs on Polaris" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/scaling4science/#/section" title="Scaling LLMs for Science and Ongoing Collaborations" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/lattice23/#/title-slide" title="MLMC: Machine Learning Monte Carlo" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/lqcd-pasc23/" title="Generative Modeling and Efficient Sampling" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/deep-fridays/" title="Efficient Sampling for LGT" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

📆 2022

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/ai4sci-large-scale-training/#" title="Large Scale Training" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/hparam-management-sdl2022" title="Hyperparameter Management" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/ATPESC-StatisticalLearning/#/" title="Statistical Learning" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/anl-job-talk" title="Scientific Data Science" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

Machine Learning in HEP @ UNC Greensboro [03/2022]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/physicsSeminar" title="Machine Learning in HEP" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="width:100%!important; "> </iframe>

📆 2021

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/l2hmc-dwq25/" title="Accelerated Sampling Methods for LGT" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://saforem2.github.io/l2hmc_talk_ect2021" title="Training Topological Samplers for LGT" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

[!TIP]

l2hmc-qcd @ MIT Lattice Group Seminar [2021]


[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://slides.com/samforeman/dlhmc/embed" title="Deep Learning HMC for Improved Gauge Generation" scrolling="no" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

📆 2020

[!TIP]

<iframe class="slide-deck" loading="lazy" src="https://slides.com/samforeman/l2hmc-qcd/embed" title="Machine Learning for Lattice QCD" align="center" frameborder="0" webkitallowfullscreen allowfullscreen style="aspect-ratio:1.3671875;"> </iframe>

📬 Posts

📦 Projects

[!TIP]

📊 GitHub Stats

GitHub Streak

GitHub Contributions

Even More!!

Wakatime

[!TIP]

🪖 Experience

🎓 Education

👔 Professional Experience

  • Assistant Computational Scientist
    • Argonne National Laboratory, Leadership Computing Facility (ALCF), Lemont, IL | 2022–Present
      • Research lead on scaling large language models (LLMs) and generative AI for science on supercomputers (Aurora, Frontier, LUMI, Leonardo, …).
        • Co-lead the Models and Pretraining team of the AuroraGPT project
      • Optimize large-scale training of foundation models and language models for scientific applications.
      • Collaborate with interdisciplinary teams to enhance simulation efficiency and scalability.
      • Focus on AI and HPC for scientific applications, including:
        • Training large language models on supercomputers
        • Genome-scale language models (GenSLMs) for studying SARS-CoV-2 evolutionary dynamics
        • Direct Preference Optimization (DPO) for multimodal protein design workflows
        • Climate modeling and weather forecasting using foundation models
        • Developing improved sampling algorithms for lattice quantum chromodynamics (QCD)
      • https://www.alcf.anl.gov/about/people/sam-foreman
  • Postdoctoral Researcher
    • Argonne National Laboratory, Leadership Computing Facility (ALCF), Lemont, IL | 2019–2022
      • Applied deep learning to lattice gauge theory and quantum field simulations.
      • Developed ML-enhanced Monte Carlo methods for QCD (l2hmc-qcd).
      • Engaged in AI-for-Science collaborations with national labs and university partners.
  • Graduate Researcher (DOE SCGSR Fellowship)
    • Argonne National Laboratory, Mathematics and Computer Science Division (MCS), Lemont, IL | 2018–2019
      • Developed l2hmc-qcd in collaboration with ALCF as part of my PhD thesis research

🏆 Awards and Honors

  • Nominated to serve on the US Coordinating Panel for Software and Computing by the Division of Particles and Fields of the American Physical Society (APS).

  • Finalist, ACM Gordon Bell Prize in Climate Modeling, 2025

    • Recognized for our work on 🌎 AERIS (Hatanpää et al. (2025)), the first billion-parameter pixel-level diffusion model for global weather and subseasonal-to-seasonal forecasting. Trained efficiently at scales from 1.3B to 80B parameters with our sequence-window parallelism (SWiPe) strategy, AERIS achieves a sustained mixed-precision performance of 10.21 ExaFLOPS and a peak of 11.21 ExaFLOPS, scaling to 10,080 nodes (120,960 GPUs) on the Aurora supercomputer.
  • Finalist, ACM Gordon Bell Prize, 2024

  • ACM Gordon Bell Special Prize for High Performance Computing-Based COVID-19 Research, 2022

  • DOE Office of Science Graduate Student Research Fellow, 2018

    • Awarded by the Department of Energy for outstanding research contributions during graduate studies.

🎪 Events

🎶 Music

<iframe loading="lazy" width="auto" src="https://descent.live/saforem2" style="width: 100%; border: none; height: min(800px, calc(0.8*100vh)); border-radius: 4pt;"> </iframe>


Allen, Benjamin S., James Anchell, Victor Anisimov, Thomas Applencourt, Abhishek Bagusetty, Ramesh Balakrishnan, Riccardo Balin, et al. 2025. “Aurora: Architecting Argonne’s First Exascale Supercomputer for Accelerated Scientific Discovery.” https://arxiv.org/abs/2509.08207.

Boyda, Denis, Salvatore Calı̀, Sam Foreman, Lena Funcke, Daniel C Hackett, Yin Lin, Gert Aarts, et al. 2022. “Applications of Machine Learning to Lattice Quantum Field Theory.” arXiv Preprint arXiv:2202.05838. https://arxiv.org/abs/2202.05838.

Cheng, Scott, Jun-Liang Lin, Murali Emani, Siddhisanket Raskar, Sam Foreman, Zhen Xie, Venkatram Vishwanath, and Mahmut Taylan Kandemir. 2024. “Thorough Characterization and Analysis of Large Transformer Model Training at-Scale.” Proc. ACM Meas. Anal. Comput. Syst. 8 (1). https://doi.org/10.1145/3639034.

Deamont, George, and Sam Foreman. 2014. “Superconductivity of In and Sn Samples.”

Dharuman, Gautham, Kyle Hippe, Alexander Brace, Sam Foreman, Väinö Hatanpää, Varuni K. Sastry, Huihuo Zheng, et al. 2024. “MProt-DPO: Breaking the ExaFLOPS Barrier for Multimodal Protein Design Workflows with Direct Preference Optimization.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis. SC ’24. Atlanta, GA, USA: IEEE Press. https://doi.org/10.1109/SC41406.2024.00013.

Dharuman, Gautham, Logan Ward, Heng Ma, Priyanka V Setty, Ozan Gokdemir, Sam Foreman, Murali Emani, et al. 2023. “Protein Generation via Genome-Scale Language Models with Bio-Physical Scoring.” In Proceedings of the SC’23 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, 95–101.

Emani, Murali, Sam Foreman, Varuni Sastry, Zhen Xie, Siddhisanket Raskar, William Arnold, Rajeev Thakur, Venkatram Vishwanath, and Michael E Papka. 2023. “A Comprehensive Performance Study of Large Language Models on Novel AI Accelerators.” arXiv Preprint arXiv:2310.04607. https://arxiv.org/abs/2310.04607.

Foreman, Sam. 2023. “Energy Justice Analysis of Climate Data with ClimRR.” August 7, 2023. https://saforem2.github.io/climate-analysis.

Foreman, Sam, Joel Giedt, Yannick Meurice, and Judah Unmuth-Yockey. 2018. “RG-inspired machine learning for lattice field theory.” In European Physical Journal Web of Conferences, 175:11025. European Physical Journal Web of Conferences. https://doi.org/10.1051/epjconf/201817511025.

Foreman, Sam, Taku Izubuchi, Luchang Jin, Xiao-Yong Jin, James C Osborn, and Akio Tomiya. 2021. “HMC with Normalizing Flows.” arXiv Preprint arXiv:2112.01586. https://arxiv.org/abs/2112.01586.

Foreman, Sam, Xiao-Yong Jin, and James C. Osborn. 2021. “Deep Learning Hamiltonian Monte Carlo.” https://arxiv.org/abs/2105.03418.

Foreman, Sam, Xiao-Yong Jin, and James C Osborn. 2020. “Machine Learning and Neural Networks for Field Theory.”

Foreman, Sam, Xiao-Yong Jin, and James C. Osborn. 2023. “MLMC: Machine Learning Monte Carlo for Lattice Gauge Theory.” https://arxiv.org/abs/2312.08936.

Foreman, Samuel, Joel Giedt, Yannick Meurice, and Judah Unmuth-Yockey. 2018. “Examples of Renormalization Group Transformations for Image Sets.” Physical Review E 98 (5): 052129.

Gokdemir, Ozan, Carlo Siebenschuh, Alexander Brace, Azton Wells, Brian Hsu, Kyle Hippe, Priyanka V. Setty, et al. 2025. “HiPerRAG: High-Performance Retrieval Augmented Generation for Scientific Insights.” https://arxiv.org/abs/2505.04846.

Hatanpää, Väinö, Eugene Ku, Jason Stock, Murali Emani, Sam Foreman, Chunyong Jung, Sandeep Madireddy, et al. 2025. “AERIS: Argonne Earth Systems Model for Reliable and Skillful Predictions.” https://arxiv.org/abs/2509.13523.

Hubler, A, S Foreman, J Liu, and L Wortsmann. 2018. “Large Energy Density in Three-Plate Nanocapacitors Due to Coulomb Blockade.” Journal of Applied Physics 123 (10).

Kronfeld, Andreas S, Tanmoy Bhattacharya, Thomas Blum, Norman H Christ, Carleton DeTar, William Detmold, Robert Edwards, et al. 2022. “Lattice QCD and Particle Physics.” arXiv Preprint arXiv:2207.07641. https://arxiv.org/abs/2207.07641.

Leung, Mary Ann, Katharine Cahill, Rebecca Hartman-Baker, Paige Kinsley, Lois Curfman McInnes, Suzanne Parete-Koon, Sreeranjani Ramprakash, et al. 2024. “Intro to HPC Bootcamp: Engaging New Communities Through Energy Justice Projects.” Journal of Computational Science Education 15 (1). https://doi.org/10.22369/issn.2153-4136/15/1/10.

Song, Shuaiwen Leon, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, et al. 2023. “DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery Through Sophisticated AI System Technologies.” arXiv Preprint arXiv:2310.04610. https://arxiv.org/abs/2310.04610.

Torsiello, J., G. T. Fleming, S. Foreman, X.-Y. Jin, and J. C. Osborn. 2025. “Automated Tuning for HMC Mass Ratios.” PoS. https://doi.org/10.22323/1.466.0052.

Yan, Xiaoli, Nathaniel Hudson, Hyun Park, Daniel Grzenda, J. Gregory Pauloski, Marcus Schwarting, Haochen Pan, et al. 2025. “MOFA: Discovering Materials for Carbon Capture with a GenAI- and Simulation-Based Workflow.” https://arxiv.org/abs/2501.10651.

Zvyagin, Maxim, Alexander Brace, Kyle Hippe, Yuntian Deng, Bin Zhang, Cindy Orozco Bohorquez, Austin Clyde, et al. 2023. “GenSLMs: Genome-Scale Language Models Reveal SARS-CoV-2 Evolutionary Dynamics.” The International Journal of High Performance Computing Applications 37 (6): 683–705.
