
Threads WG Meeting 08 07 2018

Manjunath Gorentla Venkata edited this page Oct 2, 2018

Agenda

  • OpenSHMEM and MPI interoperability (Min Si)
  • shmem_ctx_quiet_local (Jim Dinan)

Attendees

  • Note taker: Naveen
  • Attendees: Naveen and Bob (Cray), Manju, Swaroop, and Thomas Naughton (ORNL), Anshuman (NVIDIA), Jim, Jeff Hammond, Dave and Wasi (Intel), Khaled Hamidouche (AMD), Michael Raymond (HPE), Min Si (ANL), Tony (SBU) and Alex (Mellanox)

Notes

  • MPI and SHMEM interoperability

    • Min Si shared slides on interoperability with other programming models
    • Challenges:
      • Minimal changes on the user side to make MPI + SHMEM work together
      • Identify currently undefined behavior and make it explicit
    • Overview of existing OpenSHMEM runtime designs:
      • Standalone OpenSHMEM (e.g., SOS)
      • Runtime supporting both MPI and OpenSHMEM (e.g., Cray SHMEM, HPE SHMEM)
      • SHMEM over MPI (e.g., OSHMEM)
    • HPE runtime design model (retrieved from Mike's email)
      • only the MPI_COMM_WORLD rank is the same as the SHMEM PE number
      • do not allow OpenSHMEM communication between MPI worlds
      • memory model: allow the SHEAP to be used by MPI, and MPI_Alloc_mem() memory by SHMEM
      • completion semantics: does shmem_quiet() flush MPI operations? (open question)
      • ordering of initialization and finalization between SHMEM and MPI: do we have clear ordering requirements?
      • the same open questions apply to THREAD_SAFETY
    • SOS Model
      • future plan: have PMIx handle the ordering of init and finalize
      • future plan: use the MPI process manager as the SHMEM process manager
    • Resource cleanup: discussion of manual progress
    • Sharing runtime progress for resource cleanup
    • Triggering progress in the other runtime's progress engine as separate operations
    • Common keys that can be used in Open MPI: Alex developed a separate key-value library for OSHMEM in Open MPI
    • PMIx appears to have better features for this
    • Final thoughts:
      • MPI dynamic process - not supported
      • Atomicity, ordering, and completion of mixed MPI/SHMEM RMA and atomics is not guaranteed
    • Scope of the ticket:
      • No need for semantic interoperability between programming models
      • Just make the runtimes work separately
    • Unanswered questions
      • MPI_RANK_REORDERING
      • PE/Rank translation with the topology-aware mapping
    • Manju's plan: NVSHMEM and MPI, with NVSHMEM on-node and MPI across nodes
    • Min Si's next step: get an initial version working, without optimizations
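The init/finalize ordering and SHEAP-sharing questions above can be made concrete with a minimal hybrid sketch. This is a hedged example, not a specification: it assumes a runtime (e.g., Cray/HPE SHMEM or SOS with an MPI-aware launcher) where the MPI_COMM_WORLD rank equals the SHMEM PE number, and the nesting order shown (MPI outermost) is only one convention, since the group noted that ordering requirements are still an open question.

```c
/* Hedged sketch: hybrid MPI + OpenSHMEM program.
 * Assumes the MPI_COMM_WORLD rank equals the SHMEM PE number,
 * which only some runtimes (per the HPE model above) guarantee. */
#include <stdio.h>
#include <mpi.h>
#include <shmem.h>

int main(int argc, char **argv)
{
    /* Initialize MPI first and finalize it last; this reverse-nesting
     * convention is an assumption, not a standardized requirement. */
    MPI_Init(&argc, &argv);
    shmem_init();

    int rank, pe = shmem_my_pe();
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank != pe)
        fprintf(stderr, "rank/PE mismatch: %d vs %d\n", rank, pe);

    /* Symmetric-heap memory may be usable as an MPI buffer on some
     * runtimes (per the HPE model above), but this is not guaranteed. */
    long *buf = shmem_malloc(sizeof(long));
    *buf = pe;
    MPI_Allreduce(MPI_IN_PLACE, buf, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);

    shmem_free(buf);
    shmem_finalize();
    MPI_Finalize();
    return 0;
}
```

Mixing MPI RMA/atomics with SHMEM operations on the same buffer is deliberately avoided here, since the meeting concluded that atomicity, ordering, and completion of mixed operations are not guaranteed.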
  • Teams WG question

    • PE-accessible/PE-address PE-accessible
    • Team specific changes
  • shmem_ctx_quiet_local

    • Ticket #234
    • local completion of non-blocking operations
    • based on waiting on the merge request handle
    • Manju: local completion seems useful for small messages, not for large messages
    • InfiniBand: local vs. remote completion; does local completion allow buffer reuse? Not clear from Min Si's discussion
    • uGNI: local completion vs. global completion; are the semantics similar?
    • for get_nbi, local completion will be similar to shmem_quiet
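The local- vs. remote-completion distinction discussed above can be illustrated with a short sketch. Note that shmem_ctx_quiet_local is the routine proposed in ticket #234 and is not part of any ratified OpenSHMEM specification, so this will not compile against existing implementations; its name and semantics here are the proposal's as understood from the discussion, not established API.

```c
/* Hedged sketch of the proposed shmem_ctx_quiet_local (ticket #234).
 * shmem_ctx_long_put_nbi and shmem_ctx_quiet are standard (OpenSHMEM 1.4);
 * shmem_ctx_quiet_local is the PROPOSED routine and is hypothetical here. */
#include <stddef.h>
#include <shmem.h>

void send_chunk(shmem_ctx_t ctx, long *src, long *dst, size_t n, int pe)
{
    shmem_ctx_long_put_nbi(ctx, dst, src, n, pe);

    /* Proposed semantics: local completion only. After this returns,
     * src may be reused, but the data is not yet guaranteed to be
     * visible at the target PE. */
    shmem_ctx_quiet_local(ctx);
    src[0] = 0; /* source buffer reuse would be safe at this point */

    /* Remote completion still requires a full quiet, as today. */
    shmem_ctx_quiet(ctx);
}
```

For get_nbi the distinction collapses, which matches the last note above: a get is locally complete only once the data has arrived, so a local quiet behaves like shmem_quiet for gets.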