Sam Foreman

2026-02-04

👋 Hi, I’m Sam!

🧑🏻‍💻 About

I’m a Computational Scientist in the AI / ML group at the Argonne Leadership Computing Facility (ALCF).

I’m generally interested in large-scale distributed training of AI models for scientific applications, and I am co-lead of the Models / Pre-Training group for the AuroraGPT project.

Prior to this, I received my PhD in Physics from the University of Iowa in 2019, where I used ML to build better Markov Chain Monte Carlo sampling techniques for Lattice Quantum Chromodynamics (l2hmc-qcd).

✨ New!

🌎 AERIS: Argonne Earth Systems Model for Reliable and Skillful Predictions¹ (Hatanpää et al. 2025)

✏️ Last Updated

Updated: 2026-02-04 @ 18:17:04

➕ More
🔥 What I Work on

As a member of the AI / ML Group at ALCF, I work on the large-scale training of foundation models and generative AI for science on supercomputers (Aurora, Frontier, LUMI, Leonardo, …).

📍 How I got here

My current research focuses on using deep generative modeling to help build better sampling algorithms in lattice gauge theory. In particular, I’m interested in building gauge equivariant neural network architectures and using inductive priors to incorporate physical symmetries into machine learning models.


I received my PhD in Physics from the University of Iowa in 2019; my thesis was Learning Better Physics: A Machine Learning Approach to Lattice Gauge Theory.


Prior to this, I completed two bachelor’s degrees (Engineering Physics and Applied Mathematics, 2015) at the University of Illinois at Urbana-Champaign. My undergraduate dissertation, Energy Storage in Quantum Resonators, was supervised by Professor Alfred Hübler within the Center for Complex Systems Research at UIUC.

This work ultimately resulted in a patent!

💌 Contact
© Copyright 2025 Sam Foreman

📬 Posts

📊 Talks

Note

See talks for a live view!

[HTML ⇆ Reveal.js]

Convert a page from HTML to its slideshow version by appending /slides to the end of its URL.
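As a minimal sketch of that mapping (the helper name and the plain string-append are illustrative assumptions, not the site's actual routing code; the talk path comes from this site's navigation):

```python
def slides_url(page_url: str) -> str:
    """Map a page URL to its Reveal.js slideshow URL by appending /slides."""
    return page_url.rstrip("/") + "/slides"

# e.g. the "Training LLMs at Scale" talk:
print(slides_url("talks/llms-at-scale"))  # -> talks/llms-at-scale/slides
```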

📆 2025

AuroraGPT: Training Foundation Models on Supercomputers @ ANL [12/2025]
Training Foundation Models on Supercomputers @ UIUC [10/2025]
Training Foundation Models on Supercomputers @ Georgia Institute of Technology [10/2025]
AERIS: Argonne Earth Systems Model @ 2025 ALCF Hands On HPC Workshop [10/2025]
Training Foundation Models on Supercomputers @ 2025 ALCF Hands On HPC Workshop [09/2025]
Scientific AI at Scale: AI for Science @ Open SkAI 2025 [09/2025]
Scientific AI at Scale: Distributed Training @ Open SkAI 2025 [09/2025]
Large Scale Training on Diverse Accelerators @ Scalable Deep Learning, SIAM AN2025 [07/2025]
LLMs on Aurora: 🌌 AuroraGPT @ 2025 ALCF INCITE GPU Hackathon [05/2025]
LLMs on Aurora: 🍋 ezpz @ 2025 ALCF INCITE GPU Hackathon [05/2025]
AuroraGPT: Foundation Models for Science @ Foundation Models for the Electric Grid [02/2025]

📆 2024

Parallel Training Methods @ AI-for-Science on Supercomputers [11/2024]
AuroraGPT @ 2024 ALCF Hands-On HPC Workshop [10/2024]
Machine Learning and Foundation Models at Scale @ 2024 ALCF Hands-On HPC Workshop [10/2024]
AuroraGPT @ HPC User Forum, 2024 [09/2024]
Training LLMs at Scale @ ATPESC, 2024 [08/2024]
LLMs on Polaris @ Center for Scientific Foundation Models, Summer School ’24 [07/2024]
Parallel Training Techniques @ AI-4-Science Training Series [03/2024]
LLMs from Scratch @ LLM Tutorial Workshop [02/2024]

📆 2023

Creating Small(-ish) LLMs @ LLM Tutorial Workshop (1) [11/2023]
Exascale Science on Aurora @ Intel oneAPI Workshop @ UIC [10/2023]
LLM Lunch Talk @ ALCF Hands On HPC Workshop [10/2023]
Scaling LLMs for Science @ Data-Intensive Computing + AI/ML at Scale [08/2023]
MLMC: Machine Learning Monte Carlo @ Lattice 2023 [07/2023]
Generative Modeling and Efficient Sampling @ PASC23 [07/2023]
Efficient Sampling for LGT @ Deep Fridays @ U. Bologna [04/2023]

📆 2022

Large Scale Training @ AI4Science on Supercomputers (ALCF) [11/2022]
Hyperparameter Management @ ALCF SDL Workshop [10/2022]
Statistical Learning @ ATPESC 2022 [08/2022]

Scientific Data Science: An Emerging Symbiosis @ ANL [05/2022]

Machine Learning in HEP @ UNC Greensboro [03/2022]

📆 2021

Accelerated Sampling Methods for LGT @ DWQ @ 25 [BNL] [12/2021]

Training Topological Samplers for LGT @ ML4HEP, ECT* Trento [09/2021]
l2hmc-qcd @ MIT Lattice Group Seminar [2021]

Deep Learning HMC for Improved Gauge Generation @ ML in LQCD Workshop [2021]

📆 2020

Machine Learning for Lattice QCD @ U. Iowa [2020]

📝 Work

Note

You can find a full list of my publications on my Google Scholar profile.

  1. 🌎 AERIS: Argonne Earth Systems Model for Reliable and Skillful Predictions (Hatanpää et al. (2025))

  2. Aurora: Architecting Argonne’s First Exascale Supercomputer for Accelerated Scientific Discovery (Allen et al. (2025))

  3. HiPerRAG: High-Performance Retrieval Augmented Generation for Scientific Insights (Gokdemir et al. (2025))

  4. Automated Tuning for HMC Mass Ratios (Torsiello et al. (2025))

  5. MOFA: Discovering Materials for Carbon Capture with a GenAI and Simulation-Based Workflow (Yan et al. (2025))

  6. 🧪 MProt-DPO: Breaking the ExaFLOPS Barrier for Multimodal Protein Design with DPO (Dharuman et al. (2024))

  7. Intro to HPC Bootcamp: Engaging New Communities Through Energy Justice Projects (Leung et al. (2024))

  8. Thorough Characterization and Analysis of Large Transformer Model Training At-Scale (Cheng et al. (2024))

  9. MLMC: Machine Learning Monte Carlo for Lattice Gauge Theory (Foreman et al. (2023))

  10. Protein Generation via Genome-scale Language Models with Bio-physical Scoring (Dharuman et al. (2023))

  11. DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery (Song et al. (2023))

  12. Comprehensive Performance Study of LLMs on Novel AI Accelerators (Emani et al. (2023))

  13. Exploratory Analysis of Climate Data with ClimRR, Intro to HPC Bootcamp @ NERSC (Foreman (2023))

  14. 🧬 GenSLMs: Genome-scale language models reveal SARS-Cov-2 evolutionary dynamics (Zvyagin et al. (2023))

  15. Lattice QCD and Particle Physics (Kronfeld et al. (2022))

  16. Applications of ML to Lattice QFT (Boyda et al. (2022))

  17. LeapFrogLayers: Trainable Framework for Effective Sampling (Foreman, Izubuchi, et al. (2021))

  18. HMC with Normalizing Flows [slides] (Foreman, Izubuchi, et al. (2021))

  19. Deep Learning Hamiltonian Monte Carlo [+ poster] (Foreman, Jin, et al. (2021))

  20. Machine Learning and Neural Networks for Field Theory (Foreman et al. (2020))

  21. Examples of renormalization group transformations for image sets (Samuel Foreman et al. (2018))

  22. RG inspired Machine Learning for lattice field theory (Sam Foreman et al. (2018))

  23. Large Energy Density in Three-Plate Nanocapacitors due to Coulomb Blockade (Hubler et al. (2018))

  24. Superconductivity of In and Sn Samples (Deamont and Foreman (2014))

📓 References

Allen, Benjamin S., James Anchell, Victor Anisimov, et al. 2025. Aurora: Architecting Argonne’s First Exascale Supercomputer for Accelerated Scientific Discovery. https://arxiv.org/abs/2509.08207.

Boyda, Denis, Salvatore Calì, Sam Foreman, et al. 2022. “Applications of Machine Learning to Lattice Quantum Field Theory.” arXiv Preprint arXiv:2202.05838. https://arxiv.org/abs/2202.05838.

Cheng, Scott, Jun-Liang Lin, Murali Emani, et al. 2024. “Thorough Characterization and Analysis of Large Transformer Model Training at-Scale.” Proc. ACM Meas. Anal. Comput. Syst. (New York, NY, USA) 8 (1). https://doi.org/10.1145/3639034.

Deamont, George, and Sam Foreman. 2014. Superconductivity of In and Sn Samples.

Dharuman, Gautham, Kyle Hippe, Alexander Brace, et al. 2024. “MProt-DPO: Breaking the ExaFLOPS Barrier for Multimodal Protein Design Workflows with Direct Preference Optimization.” Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis (Atlanta, GA, USA), SC ’24. https://doi.org/10.1109/SC41406.2024.00013.

Dharuman, Gautham, Logan Ward, Heng Ma, et al. 2023. “Protein Generation via Genome-Scale Language Models with Bio-Physical Scoring.” Proceedings of the SC’23 Workshops of the International Conference on High Performance Computing, Network, Storage, and Analysis, 95–101.

Emani, Murali, Sam Foreman, Varuni Sastry, et al. 2023. “A Comprehensive Performance Study of Large Language Models on Novel AI Accelerators.” arXiv Preprint arXiv:2310.04607. https://arxiv.org/abs/2310.04607.

Foreman, Sam. 2023. “Energy Justice Analysis of Climate Data with ClimRR.” August 7. https://saforem2.github.io/climate-analysis.

Foreman, Sam, Joel Giedt, Yannick Meurice, and Judah Unmuth-Yockey. 2018. “RG-Inspired Machine Learning for Lattice Field Theory.” European Physical Journal Web of Conferences 175 (March): 11025. https://doi.org/10.1051/epjconf/201817511025.

Foreman, Sam, Taku Izubuchi, Luchang Jin, Xiao-Yong Jin, James C Osborn, and Akio Tomiya. 2021. “HMC with Normalizing Flows.” arXiv Preprint arXiv:2112.01586. https://arxiv.org/abs/2112.01586.

Foreman, Sam, Xiao-Yong Jin, and James C. Osborn. 2021. Deep Learning Hamiltonian Monte Carlo. https://arxiv.org/abs/2105.03418.

Foreman, Sam, Xiao-Yong Jin, and James C Osborn. 2020. Machine Learning and Neural Networks for Field Theory.

Foreman, Sam, Xiao-Yong Jin, and James C. Osborn. 2023. MLMC: Machine Learning Monte Carlo for Lattice Gauge Theory. https://arxiv.org/abs/2312.08936.

Foreman, Samuel, Joel Giedt, Yannick Meurice, and Judah Unmuth-Yockey. 2018. “Examples of Renormalization Group Transformations for Image Sets.” Physical Review E 98 (5): 052129.

Gokdemir, Ozan, Carlo Siebenschuh, Alexander Brace, et al. 2025. HiPerRAG: High-Performance Retrieval Augmented Generation for Scientific Insights. https://arxiv.org/abs/2505.04846.

Hatanpää, Väinö, Eugene Ku, Jason Stock, et al. 2025. AERIS: Argonne Earth Systems Model for Reliable and Skillful Predictions. https://arxiv.org/abs/2509.13523.

Hubler, A, S Foreman, J Liu, and L Wortsmann. 2018. “Large Energy Density in Three-Plate Nanocapacitors Due to Coulomb Blockade.” Journal of Applied Physics 123 (10).

Kronfeld, Andreas S, Tanmoy Bhattacharya, Thomas Blum, et al. 2022. “Lattice QCD and Particle Physics.” arXiv Preprint arXiv:2207.07641. https://arxiv.org/abs/2207.07641.

Leung, Mary Ann, Katharine Cahill, Rebecca Hartman-Baker, et al. 2024. “Intro to HPC Bootcamp: Engaging New Communities Through Energy Justice Projects.” Journal of Computational Science Education 15 (1). https://doi.org/10.22369/issn.2153-4136/15/1/10.

Song, Shuaiwen Leon, Bonnie Kruft, Minjia Zhang, et al. 2023. “DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery Through Sophisticated AI System Technologies.” arXiv Preprint arXiv:2310.04610. https://arxiv.org/abs/2310.04610.

Torsiello, J., G. T. Fleming, S. Foreman, X.-Y. Jin, and J. C. Osborn. 2025. “Automated Tuning for HMC Mass Ratios.” In PoS. Argonne, ALCF; Argonne National Laboratory (ANL), Argonne, IL (United States); Temple U.; Fermi National Accelerator Laboratory (FNAL), Batavia, IL (United States). https://doi.org/10.22323/1.466.0052.

Yan, Xiaoli, Nathaniel Hudson, Hyun Park, et al. 2025. MOFA: Discovering Materials for Carbon Capture with a GenAI- and Simulation-Based Workflow. https://arxiv.org/abs/2501.10651.

Zvyagin, Maxim, Alexander Brace, Kyle Hippe, et al. 2023. “GenSLMs: Genome-Scale Language Models Reveal SARS-CoV-2 Evolutionary Dynamics.” The International Journal of High Performance Computing Applications 37 (6): 683–705.

📂 Projects

saforem2’s GitHub Repositories

👔 Professional Experience

  • Assistant Computational Scientist
    • Argonne National Laboratory, Leadership Computing Facility (ALCF), Lemont, IL | 2022–Present
      • Research lead on scaling large language models (LLMs) and generative AI for science on supercomputers (Aurora, Frontier, LUMI, Leonardo, …).
        • Co-lead the Models and Pretraining team of the AuroraGPT project
      • Optimize large-scale training of foundation models and language models for scientific applications.
      • Collaborate with interdisciplinary teams to enhance simulation efficiency and scalability
      • Focus on AI and HPC for scientific applications, including:
        • Training large language models on supercomputers
        • Genome scale language models (GenSLMs) for studying SARS-CoV-2 evolutionary dynamics
        • Direct Preference Optimization (DPO) for multimodal protein design workflows
        • Climate modeling and weather forecasting using foundation models
        • Developing improved sampling algorithms for lattice quantum chromodynamics (QCD)
      • https://www.alcf.anl.gov/about/people/sam-foreman
  • Postdoctoral Researcher
    • Argonne National Laboratory, Leadership Computing Facility (ALCF), Lemont, IL | 2019–2022
      • Applied deep learning to lattice gauge theory and quantum field simulations.
      • Developed ML-enhanced Monte Carlo methods for QCD (l2hmc-qcd).
      • Engaged in AI-for-Science collaborations with national labs and university partners.
  • Graduate Researcher (DOE SCGSR Fellowship)
    • Argonne National Laboratory, Mathematics and Computer Science Division (MCS), Lemont, IL | 2018–2019
      • Developed l2hmc-qcd in collaboration with ALCF as part of my PhD thesis research

🏆 Awards and Honors

  • Member of the DeepSpeed Technical Steering Committee, 2025–Present

    • Contributing to the development and direction of the DeepSpeed library for large-scale model training.
  • Nominated to serve on the US Coordinating Panel for Software and Computing by the Division of Particles and Fields of the American Physical Society (APS).

  • Finalist, ACM Gordon Bell Prize in Climate Modeling, 2025

    • Recognized for our work on 🌎 AERIS (Hatanpää et al. (2025)): the first billion-parameter pixel-level diffusion model for global weather and subseasonal-to-seasonal forecasting. Trained at scales from 1.3B to 80B parameters with our sequence-window parallelism (SWiPe) strategy, it achieved sustained mixed-precision performance of 10.21 ExaFLOPS (peak 11.21 ExaFLOPS), scaling to 10,080 nodes (120,960 GPUs) on the Aurora supercomputer.
  • Finalist, ACM Gordon Bell Prize, 2024

  • ACM Gordon Bell Special Prize for High Performance Computing-Based COVID-19 Research, 2022

  • DOE Office of Science Graduate Student Research Fellow, 2018

    • Awarded by the Department of Energy for outstanding research contributions during graduate studies.


💭 Thoughts

💌 Guestbook

Temporarily disabled while guestbooks gets its Azure issues worked out :(

Footnotes

  1. 🏅 Finalist for the ACM Gordon Bell Prize in Climate Modeling at SC25!