🍋 ezpz

Sam Foreman 2026-01-10

In ancient times¹, back in ~2022–2023, virtually all (production) PyTorch code was designed to run on NVIDIA GPUs.
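To make that concrete, here is a minimal sketch (mine, not from the post) contrasting the hard-coded-CUDA habit of that era with a device-agnostic alternative. The helper name `pick_device` is made up; the `torch.cuda` / `torch.xpu` / `torch.backends.mps` availability checks are standard PyTorch API.

```python
import torch

# The NVIDIA-centric pattern of that era assumed CUDA exists:
#   model = model.cuda()   # crashes on Intel XPU or CPU-only machines
#
# A portable alternative (pick_device is a hypothetical helper name):
def pick_device() -> torch.device:
    if torch.cuda.is_available():
        # NVIDIA CUDA -- and AMD ROCm builds, which reuse the cuda namespace
        return torch.device("cuda")
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        # Intel GPUs, native in upstream PyTorch (>= 2.4)
        return torch.device("xpu")
    if getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
        return torch.device("mps")  # Apple Silicon
    return torch.device("cpu")

device = pick_device()
x = torch.ones(4, device=device)
```

The same script then runs unmodified on an NVIDIA, AMD, Intel, or CPU-only machine, which is exactly what the hard-coded `.cuda()` style prevented.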

In April 2023, AMD announced day-zero support for PyTorch 2.0 within the ROCm 6.0 ecosystem, leveraging new features like TorchDynamo for performance improvements.

```mermaid
gantt
    title AMD and Intel PyTorch Enablement Timeline
    dateFormat  YYYY
    axisFormat  %Y

    section AMD ROCm and PyTorch
      Torch7 era and early CUDA to HIP ports        :amd1, 2012, 2016
      ROCm 1.0 and HIPIFY tooling                   :amd2, 2016, 2020
      Official PyTorch ROCm Python packages         :amd3, 2021, 2022
      PyTorch Foundation governance participation   :amd4, 2022, 2023
      Triton ecosystem support                      :amd6, 2023, 2024
      MI300x PyTorch guidance                       :amd7, 2024, 2024

    section Intel and PyTorch
      Initial PyTorch contributions                :i2, 2018, 2019
      Intel Extension for PyTorch launch           :i3, 2020, 2024
      VTune ITT API integration in PyTorch         :i4, 2022, 2022
      PyTorch Foundation Premier membership        :i5, 2023, 2023
      Prototype native Intel GPU support           :i6, 2024, 2024
      Solid native Intel GPU support               :i7, 2025, 2025
      IPEX feature upstreaming completion          :i8, 2025, 2025
      Intel Extension for PyTorch end of life      :i9, 2026, 2026
```

```mermaid
gantt
    title PyTorch Vendor Integration Timeline AMD vs Intel
    dateFormat  YYYY-MM-DD
    axisFormat  %Y

    section AMD
      Installable PyTorch ROCm Python packages         :amd2, 2021-03-04,
      ROCm marked stable                               :amd3, 2022-06-28,

    section PyTorch Releases
      1.8                                             :milestone, crit, pt180, 2021-03-04,
      1.12                                            :pt1120, 2022-06-28,
      2.0                                             :milestone, crit, pt200, 2023-03-15,
      2.4                                             :pt24, 2024-07-24,
      2.5                                             :milestone, crit, pt250, 2024-10-17,
      2.6                                             :pt260, 2025-01-29,
      2.7                                             :pt270, 2025-04-23,
      2.8                                             :crit, pt280, 2025-08-06,
      2.9                                             :pt290, 2025-10-15,
      2.10                                            :pt210, 2026-01-15,

    section Intel
      Incremental Intel GPU improvements begin           :int2, 2024-07-24,
      Native Intel GPU support announced in PyTorch 2.5  :int3, 2024-10-17,
      Intel GPU eager and compile parity in PyTorch 2.7  :int4, 2025-04-23,
      Intel XCCL Backend introduced in PyTorch 2.8       :int5, 2025-08-06,
      IPEX discontinued                                  :int6, 2025-08-06, 2026-03-31
      IPEX end of life                                   :int7, 2026-03-31,
```
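The XCCL entry in the timeline matters for distributed code: each accelerator family is driven by its own `torch.distributed` backend. A small sketch of the mapping (the backend names are standard; the `backend_for` helper is my own):

```python
# Which torch.distributed backend drives which accelerator family:
#   NVIDIA/AMD GPUs -> "nccl" (ROCm's RCCL reuses the nccl name),
#   Intel GPUs      -> "xccl" (introduced in PyTorch 2.8),
#   CPU fallback    -> "gloo".
BACKENDS = {"cuda": "nccl", "xpu": "xccl", "cpu": "gloo"}

def backend_for(device_type: str) -> str:
    """Return the collective-communication backend for a device type."""
    return BACKENDS.get(device_type, "gloo")

# Usage (sketch): torch.distributed.init_process_group(backend=backend_for("xpu"))
```

Before PyTorch 2.8 there was no upstream `xccl`, which is one reason multi-node Intel GPU jobs needed vendor-specific glue.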

Intel: March 2026 (planned): IPEX end-of-life; users move to native PyTorch.

```mermaid
gantt
title AMD and Intel PyTorch Enablement Timeline
dateFormat  YYYY
axisFormat  %Y

section AMD
  Torch7 era and early CUDA to HIP ports        :2012, 2016
  ROCm 1.0 and HIPIFY tooling                   :2016, 2020
  Official PyTorch ROCm Python packages         :2021, 2022
  PyTorch Foundation governance participation   :2022, 2023
  ROCm                                          :vert, 2023, 2023
  PyTorch 2.0 day zero ROCm support             :milestone,crit, 2023, 2023
  Triton ecosystem support                      :2023, 2024
  MI300x PyTorch guidance                       :2024, 2024
  Torchtune on AMD GPUs guide                   :2024, 2024
  PyTorch on Windows public preview             :2025, 2025
  AMD PyTorch on Windows ROCm 7.1.1             :2025, 2025
  MI450X rack scale roadmap                     :2026, 2026
  MI500 series future roadmap                   :2027, 2028

section Intel
  Initial PyTorch contributions                :2018, 2019
  Intel Extension for PyTorch launch           :2020, 2024
  VTune ITT API integration in PyTorch         :2022, 2022
  PyTorch Foundation Premier membership        :2023, 2023
  Prototype native Intel GPU support           :2024, 2025
  Solid native Intel GPU support               :milestone,crit, 2025, 2025
  X{PU,CCL}                                    :vert, 2025, 2025
  IPEX feature upstreaming completion          :2025, 2025
  Intel Extension for PyTorch end of life      :2026, 2026
```

AMD Timeline

  • Pre-2021: Early Efforts and Torch7
    • 2012: Torch7, a precursor to PyTorch implemented in C/CUDA with a Lua scripting interface, was released.
    • ROCm 1.0: AMD demonstrated the ability to port CUDA code to HIP (AMD’s C++ dialect for GPU computing) using the HIPIFY tool, including ports of Caffe and Torch7.
  • 2021-2022: Official Support and Foundation
    • March 2021: PyTorch for the AMD ROCm platform became officially available as a Python package, simplifying installation on supported Linux systems.
    • September 2022: The PyTorch project joined the independent Linux Foundation, with AMD participating as a founding member of the PyTorch Foundation governing board.
  • 2023: PyTorch 2.0 Integration
    • April 2023: AMD announced day-zero support for PyTorch 2.0 within the ROCm 6.0 ecosystem, leveraging new features like TorchDynamo for performance improvements.
    • OpenAI Triton Support: The ecosystem grew to include support for OpenAI Triton, a key component for high-performance AI workloads.
  • 2024-2025: Expanding Accessibility (Windows & Consumer GPUs)
    • June 2024: AMD released guides and information on running PyTorch models on AMD MI300x systems, highlighting near drop-in compatibility with code written for Nvidia GPUs.
    • October 2024: AMD released a “how-to” guide for using Torchtune, a PyTorch library for fine-tuning LLMs, on AMD GPUs.
    • September 2025: AMD released a public preview of PyTorch on Windows, enabling native AI inference on select consumer Radeon RX 7000 and 9000 series GPUs and Ryzen AI APUs, without needing workarounds like WSL2.
    • November 2025: Release of AMD Software: PyTorch on Windows Edition 7.1.1, featuring an update to AMD ROCm 7.1.1.
  • Future/Upcoming
    • 2026: AMD is working on its next generation MI450X rack-scale solution, which aims to be competitive with NVIDIA’s high-end offerings by the second half of 2026.
    • Post-2026: The company has also detailed plans for future MI500 series data center GPUs, targeting a significant increase in AI performance.

Intel Timeline

  • 2018: Intel begins contributing to the open-source PyTorch framework.
  • 2020: The Intel® Extension for PyTorch* (IPEX) is launched as a separate package to provide optimized performance on Intel CPUs and GPUs.
  • October 2022²: PyTorch 1.13 is released with integrated support for Intel® VTune™ Profiler’s ITT APIs.
  • August 2023³: Intel joins the PyTorch Foundation as a Premier member, deepening its commitment to the ecosystem.
  • July 2024: PyTorch 2.4 debuts with initial (prototype) native support for Intel GPUs (client and data center).
  • April 2025: PyTorch 2.7 establishes a solid foundation for Intel GPU support in both eager and graph modes (torch.compile) on Windows and Linux.
  • August 2025: Active development of the separate Intel® Extension for PyTorch* ceases following the PyTorch 2.8 release, as most features are now upstreamed into the main PyTorch project.
  • End of March 2026 (Planned): The Intel® Extension for PyTorch* project will officially reach end-of-life. Users are strongly recommended to use native PyTorch directly.
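As a sketch of what the post-IPEX, native path looks like (my illustration, not Intel's migration guide: the `torch.xpu` checks and `torch.compile` call are upstream PyTorch API; the tiny model is a stand-in):

```python
import torch

# Before: import intel_extension_for_pytorch as ipex; model = ipex.optimize(model)
# After (PyTorch >= 2.5): upstream PyTorch handles Intel GPUs directly
# through the "xpu" device type, with graceful fallback to CPU elsewhere.
device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

model = torch.nn.Linear(8, 2).to(device)
compiled = torch.compile(model)  # graph mode; reaches XPU parity in PyTorch 2.7

x = torch.randn(4, 8, device=device)
with torch.no_grad():
    y = compiled(x)
```

The point is that no out-of-tree extension is needed anymore: the same two lines of device selection cover Intel GPUs and CPU-only machines.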

This made sense at the time: NVIDIA held the vast majority of the GPU market share, and CUDA was effectively the only mature software stack for deep learning.

This was before the advent of native, upstream support for non-NVIDIA accelerators in PyTorch; we were still in the early days of trying to run PyTorch on AMD and Intel GPUs.

I’ve been working on the 🍋 ezpz package for a while now.
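Illustrative only, and NOT ezpz’s actual API: a sketch of the kind of “works on any machine” setup such a package wraps. Job launchers (torchrun, mpiexec, srun) export rank information through environment variables, and the local rank decides which accelerator each process should own; `LOCAL_RANK` is the variable torchrun sets, the `setup_device` name is invented here.

```python
import os
import torch

def setup_device() -> torch.device:
    """Pick this process's accelerator from launcher-provided rank info.

    Hypothetical helper, sketching the problem ezpz addresses; not its API.
    """
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    if torch.cuda.is_available():                           # NVIDIA CUDA or AMD ROCm
        torch.cuda.set_device(local_rank)
        return torch.device(f"cuda:{local_rank}")
    if hasattr(torch, "xpu") and torch.xpu.is_available():  # Intel GPUs (e.g. Aurora)
        return torch.device(f"xpu:{local_rank}")
    return torch.device("cpu")                              # laptop / CI fallback

device = setup_device()
```

Wrapping this (plus distributed init, logging, and so on) behind one call is what lets the same training script move between NVIDIA, AMD, and Intel systems.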

Footnotes

  1. Even now, in 2026, a lot of code is still NVIDIA-centric and is rarely designed with multi-platform support in mind.

  2. PyTorch 1.13 release

  3. Intel Joins the PyTorch Foundation