HPC node architectures are trending toward large numbers of cores/CPUs, as well as accelerators such as GPUs. To make better use of shared resources within a node and to program accelerators, users have turned to hybrid programming that combines MPI with node-level and data-parallel programming models. The goal of this working group is to improve the programmability and performance of MPI+X usage models.

Goals

Investigate support for MPI communication involving accelerators

  • Hybrid programming of MPI + [CUDA, HIP, DPC++, ...]
  • Host-initiated communication with accelerator memory (see the sketch after this list)
  • Host-setup with accelerator triggering
  • Host-setup, enqueued on a stream or queue
  • Accelerator-initiated communication
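
As a concrete reference point for the host-initiated case above, here is a minimal sketch that passes a CUDA device pointer directly to MPI_Send. It assumes a CUDA-aware MPI library and exactly two ranks; the buffer size and tag are illustrative.

```c
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    double *dbuf;
    cudaMalloc((void **)&dbuf, n * sizeof(double));   /* device memory */

    if (rank == 0) {
        cudaMemset(dbuf, 0, n * sizeof(double));      /* "produce" data on the GPU */
        cudaDeviceSynchronize();                      /* data must be ready before the host posts the send */
        /* Host-initiated send of accelerator memory; requires a CUDA-aware MPI build */
        MPI_Send(dbuf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(dbuf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    cudaFree(dbuf);
    MPI_Finalize();
    return 0;
}
```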

Investigate improved compatibility and efficiency for multithreaded MPI communication (a minimal example follows the list below)

  • MPI + [Pthreads, OpenMP, C/C++ threading, TBB, ...]
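
The sketch below is a minimal example of the multithreaded case, assuming MPI_THREAD_MULTIPLE support and the same number of OpenMP threads on every rank; each thread performs its own ring exchange, using its thread id as the message tag. Ranks, counts, and the exchange pattern are illustrative.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int right = (rank + 1) % size;          /* ring neighbors */
    int left  = (rank - 1 + size) % size;

    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int sendval = rank * 100 + tid, recvval = -1;
        /* Each thread issues its own MPI call; the thread id is used as the tag
           so per-thread messages pair up (assumes equal thread counts on all ranks). */
        MPI_Sendrecv(&sendval, 1, MPI_INT, right, tid,
                     &recvval, 1, MPI_INT, left,  tid,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```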

Chairs

  • James Dinan -- jdinan (at) nvidia (dot) com

Mailing List

mpiwg-hybridpm (at) lists (dot) mpi-forum (dot) org -- Subscribe

Meeting Information

The HACC WG is currently sharing a meeting time with the Persistence WG.

We meet every two weeks on Wednesdays from 10:00 to 11:00 AM ET.

Meeting details and recordings are available here.

Active Topics

  1. Continuations proposal #6 (Joseph)
  2. Memory Allocation Kinds Side Document v2
    • OpenMP (Edgar, Maria)
    • Coherent Memory, std::par (Rohit)
    • Backlog: OpenCL
  3. Accelerator bindings for partitioned communication #4 (Ryan Grant et al.); a host-side sketch follows this list
    • Partitioned communication buffer preparation (shared with Persistence WG) #264
  4. File I/O from GPUs (Edgar; topic shared with the File I/O WG)
  5. Accelerator Synchronous MPI Operations #11 (Need someone to drive)
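
For context on topic 3, the sketch below uses only the host-side partitioned communication API that already exists in MPI 4.0 (MPI_Psend_init, MPI_Pready), with OpenMP threads marking their partitions ready; the accelerator bindings under discussion in #4 are not shown. The partition count, message size, and helper function name are illustrative, and concurrent MPI_Pready calls from threads assume MPI_THREAD_MULTIPLE.

```c
#include <mpi.h>
#include <omp.h>

#define PARTITIONS 8
#define COUNT_PER_PARTITION 1024

void partitioned_send(double *buf, int dest, MPI_Comm comm)
{
    MPI_Request req;

    /* One persistent partitioned send covering all partitions of buf */
    MPI_Psend_init(buf, PARTITIONS, COUNT_PER_PARTITION, MPI_DOUBLE,
                   dest, /*tag=*/0, comm, MPI_INFO_NULL, &req);
    MPI_Start(&req);

    /* Each thread fills its partition, then marks it ready for transfer */
    #pragma omp parallel for
    for (int p = 0; p < PARTITIONS; p++) {
        for (int i = 0; i < COUNT_PER_PARTITION; i++)
            buf[p * COUNT_PER_PARTITION + i] = p + 0.5;
        MPI_Pready(p, req);
    }

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    MPI_Request_free(&req);
}
```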

Backlog

  1. MPI Teams / Helper Threads (Joseph)
  2. Clarification of thread ordering rules #117 (MPI 4.1)
  3. Integration with accelerator programming models:
    1. Accelerator info keys follow-on
      • Memory allocation kind in MPI allocators (e.g., MPI_Win_allocate, MPI_Alloc_mem); a hedged sketch follows this list
  4. Asynchronous operations #585
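
To make the backlog item on memory allocation kinds in MPI allocators more concrete: MPI_Win_allocate and its info argument below are standard MPI, but the "mpi_memory_alloc_kind" key and the "cuda:device" value are hypothetical placeholders for how such a hint might be expressed, not an existing or agreed-upon binding.

```c
#include <mpi.h>

void allocate_device_window(MPI_Comm comm, MPI_Aint bytes,
                            void **base, MPI_Win *win)
{
    MPI_Info info;
    MPI_Info_create(&info);

    /* Hypothetical hint: ask the MPI library to back the window with
       accelerator (device) memory. Key and value are placeholders. */
    MPI_Info_set(info, "mpi_memory_alloc_kind", "cuda:device");

    MPI_Win_allocate(bytes, /*disp_unit=*/1, info, comm, base, win);

    MPI_Info_free(&info);
}
```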

2024 Meeting Schedule

  • 12/25 -- Canceled

  • 12/11 -- Open

  • 11/20 -- Open

  • 11/6 -- Open

  • 10/23 -- Open

  • 10/9 -- Open

  • 9/25 -- MPI Forum Meeting

  • 9/11 -- Canceled

  • 8/28 -- Continue Discussion on GPU Triggering APIs [Patrick and Tony]

  • 8/14 -- GPU Triggering APIs for MPI+X Communication [Patrick Bridges]

  • 7/31 -- Partitioned Communication [Ryan Grant]

  • 7/17 -- Canceled

  • 7/3 -- Canceled

  • 6/19 -- US Juneteenth Holiday

  • 6/5 -- Open

  • 3/27 -- Application use case for device-side continuations (Joachim Jenke)

  • 3/20 -- MPI Forum Meeting in Chicago

  • 3/13 -- Continuations (Joseph Schuchart)

  • 3/6 -- Continuations (Joseph Schuchart)

  • 2/28 -- Memory Alloc Kinds (Rohit Zambre)

  • 2/21 -- No Meeting

  • 2/14 -- Topic rescheduled to 3/27

  • 2/7 -- Canceled

  • 1/31 -- Canceled

  • 1/17 -- Planning Meeting