Welcome to the PMIx Standard wiki!

Standardization Process

The PMIx standardization process is defined in the PMIx Governance document.

Active Working Groups

Below is a list of the active working groups along with links to more information about their meeting schedules.

The teleconference information is shared via a private link. Contact someone in the working group for the current link.

Implementation Agnostic Document Working Group

This working group is reviewing the PMIx Standard and reworking text that assumes or requires a specific implementation, so that multiple implementations of the standard can more readily be explored. The group also seeks to better define the separation between the three dimensions of the standard interface, namely clients, tools, and servers.

Tools Working Group

The large scale and dynamic behaviour of current and post-exascale HPC installations require updates to existing tools and the creation of new ones. This working group will update and extend the tools API to assist in the development of highly-scalable tools. Tool portability will be enhanced. The tools interface will be extended with feedback and control mechanisms for both design time and online tools.

Dynamic Workflows Working Group

This working group focuses on defining APIs/attributes by which dynamic applications can interact with the system management stack to request allocation changes, spawn and terminate processes, and perform other actions associated with workflow control. Initial efforts are expected to target data analytics (e.g., Spark) paradigms and then extend to typical AI (e.g., TensorFlow) operations. Some capability (e.g., PMIx_Allocate_resources and PMIx_Job_control) already exists, so one of the WG's tasks will be to promote adoption and to identify impedance mismatches between the dynamic workflow community and the existing PMIx definitions (see the sketch below).

  • This working group has been combined and meets with the Tools working group (above)
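
As a rough illustration of the existing capability, the sketch below shows a client asking the system management stack to extend its current allocation by two nodes via the blocking allocation-request API defined in PMIx v4 (PMIx_Allocation_request). The directive (PMIX_ALLOC_EXTEND) and attribute (PMIX_ALLOC_NUM_NODES) are defined by the standard, but whether a given resource manager honors the request is environment-specific; error handling is abbreviated.

    /* Minimal sketch: a client asks the system management stack to extend
     * its current allocation by two nodes.  Assumes a PMIx v4-style
     * installation that provides the blocking PMIx_Allocation_request call. */
    #include <stdint.h>
    #include <stdio.h>
    #include <pmix.h>

    int main(void)
    {
        pmix_proc_t myproc;
        pmix_status_t rc = PMIx_Init(&myproc, NULL, 0);
        if (PMIX_SUCCESS != rc) {
            fprintf(stderr, "PMIx_Init failed: %s\n", PMIx_Error_string(rc));
            return 1;
        }

        /* Describe the request: two additional nodes. */
        uint64_t nnodes = 2;
        pmix_info_t *info;
        PMIX_INFO_CREATE(info, 1);
        PMIX_INFO_LOAD(&info[0], PMIX_ALLOC_NUM_NODES, &nnodes, PMIX_UINT64);

        /* Ask the resource manager to extend the existing allocation. */
        pmix_info_t *results = NULL;
        size_t nresults = 0;
        rc = PMIx_Allocation_request(PMIX_ALLOC_EXTEND, info, 1, &results, &nresults);
        fprintf(stderr, "Allocation request returned: %s\n", PMIx_Error_string(rc));

        PMIX_INFO_FREE(info, 1);
        if (NULL != results) {
            PMIX_INFO_FREE(results, nresults);
        }
        PMIx_Finalize(NULL, 0);
        return (PMIX_SUCCESS == rc) ? 0 : 1;
    }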

Working Groups on Hold

Storage Working Group

This working group will investigate APIs/attributes by which applications, schedulers, and other subsystems can interact with storage subsystems. This includes parallel file systems (e.g., Lustre and GPFS) and non-file-based storage (e.g., DAOS), as well as caching mechanisms (e.g., burst buffers and network-near NVRAM caches). Expected areas might include:

  • Given a list of files and a uid/gid (or credential), return their accessibility status
  • Queries on available storage, supported storage strategies, storage subsystem topology, etc.
  • Deleting files or moving them to specified locations
  • Specifying a storage policy for a given job
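
The storage-specific keys above are not yet defined in the standard, but any such capability would likely build on the existing PMIx query mechanism. The following minimal sketch shows that general pattern using the blocking PMIx_Query_info call from PMIx v4 with an already-defined key (PMIX_QUERY_NAMESPACES) purely as a stand-in; error handling is abbreviated.

    /* Minimal sketch of the general PMIx query pattern that future
     * storage attributes would likely follow.  PMIX_QUERY_NAMESPACES is
     * used only as a stand-in; storage-specific keys are not yet defined. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <pmix.h>

    int main(void)
    {
        pmix_proc_t myproc;
        if (PMIX_SUCCESS != PMIx_Init(&myproc, NULL, 0)) {
            return 1;
        }

        /* Build a single query with one key (argv-style, NULL-terminated). */
        pmix_query_t query;
        PMIX_QUERY_CONSTRUCT(&query);
        query.keys = (char **)calloc(2, sizeof(char *));
        query.keys[0] = strdup(PMIX_QUERY_NAMESPACES);

        pmix_info_t *results = NULL;
        size_t nresults = 0;
        pmix_status_t rc = PMIx_Query_info(&query, 1, &results, &nresults);
        if (PMIX_SUCCESS == rc) {
            for (size_t n = 0; n < nresults; n++) {
                fprintf(stderr, "Result key: %s\n", results[n].key);
            }
            PMIX_INFO_FREE(results, nresults);
        }

        PMIX_QUERY_DESTRUCT(&query);
        PMIx_Finalize(NULL, 0);
        return 0;
    }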

Slicing/Grouping of Functionality Working Group

Given the size and structure of the PMIx standard document, it can be difficult to find the PMIx components necessary for a given use-case. The goal of this working group is to provide a mechanism for focusing on the aspects of the standard that are of interest to a particular user/use-case.

Use Cases

Use cases help to drive the development of the PMIx API. Below is a list of use cases that the PMIx community has defined thus far.

  • Issue #191 : Business Card Exchange for Process-to-Process Wire-up (a rough sketch of this pattern appears after this list)
  • Issue #216 : Debugging with Parallel Debuggers
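
As a rough illustration of the business-card exchange use case (Issue #191), the sketch below maps it onto existing PMIx calls: each process posts its connection information with PMIx_Put, publishes it with PMIx_Commit, synchronizes with PMIx_Fence while requesting data collection, and retrieves a peer's information with PMIx_Get. The key name "my.endpoint" and the string payload are illustrative only, not defined by the standard.

    /* Rough illustration of the business-card exchange (Issue #191):
     * each process posts its connection information and then retrieves
     * the information posted by a peer. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <pmix.h>

    int main(void)
    {
        pmix_proc_t myproc;
        if (PMIX_SUCCESS != PMIx_Init(&myproc, NULL, 0)) {
            return 1;
        }

        /* Post this process's "business card" (e.g., a network address).
         * PMIx_Put copies the value, so a stack buffer is sufficient here. */
        char card[64];
        snprintf(card, sizeof(card), "endpoint-of-rank-%u", myproc.rank);
        pmix_value_t val;
        PMIX_VALUE_CONSTRUCT(&val);
        val.type = PMIX_STRING;
        val.data.string = card;
        PMIx_Put(PMIX_GLOBAL, "my.endpoint", &val);
        PMIx_Commit();

        /* Synchronize so peers' data is available, asking that data be collected. */
        bool collect = true;
        pmix_info_t info;
        PMIX_INFO_CONSTRUCT(&info);
        PMIX_INFO_LOAD(&info, PMIX_COLLECT_DATA, &collect, PMIX_BOOL);
        PMIx_Fence(NULL, 0, &info, 1);
        PMIX_INFO_DESTRUCT(&info);

        /* Retrieve the card posted by rank 0. */
        pmix_proc_t peer = myproc;
        peer.rank = 0;
        pmix_value_t *result = NULL;
        if (PMIX_SUCCESS == PMIx_Get(&peer, "my.endpoint", NULL, 0, &result)) {
            fprintf(stderr, "Rank %u got: %s\n", myproc.rank, result->data.string);
            PMIX_VALUE_RELEASE(result);
        }

        PMIx_Finalize(NULL, 0);
        return 0;
    }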

Note that this list is not yet comprehensive in its coverage of PMIx use cases. Please see the prior RFCs and papers for more examples.

To propose a new use case, please file an issue for discussion and select the "Use Case" template. You do not need to know the specific PMIx interfaces that might apply to your use case in order to open an issue; a general overview of the use case and its needs is enough to get the conversation going. The PMIx community can help identify existing interfaces/attributes that can be used, and collaborate with you on new interfaces/attributes where necessary.

Events

Meeting Information

Monthly PMIx Standard Meeting:

  • Regular Teleconference Meeting Notes

PMIx Administrative Steering Committee (ASC) Quarterly Meetings

  • Calendar [iCal Subscription]

  • 2024

    • 2024 - Q1 Meeting - Tues., Jan. 23 & Thurs., Jan. 25 (Virtual) 10am-1pm CT each day
    • 2024 - Q2 Meeting - Tues., May 7 & Thurs., May 9 (Virtual) 10am-1pm CT each day
    • 2024 - Q3 Meeting - Tues., July 16 & Thurs., July 18 (Virtual) 10am-1pm CT each day
    • 2024 - Q4 Meeting - Tues., Oct. 15 & Thurs., Oct. 17 (Virtual) 10am-1pm CT each day
  • 2023

    • 2023 - Q1 Meeting - Tues., Jan. 24 & Thurs., Jan. 26 (Virtual) 10am-1pm CT each day
    • 2023 - Q2 Meeting - Tues., May 9 & Thurs., May 11 (Virtual) 10am-1pm CT each day
    • 2023 - Q3 Meeting - Tues., July 18 & Thurs., July 20 (Virtual) 10am-1pm CT each day
    • 2023 - Q4 Meeting - Tues., Oct. 17 & Thurs., Oct. 19 (Virtual) 10am-1pm CT each day
  • 2022

    • 2022 - Q1 Meeting - Tues., Feb. 15 & Thurs., Feb. 17 (Virtual) 10am-1pm CT each day
    • 2022 - Q2 Meeting - Tues., May 10 & Thurs., May 12 (Virtual) 10am-1pm CT each day
    • 2022 - Q3 Meeting - Tues., Aug. 9 & Thurs., Aug. 11 (Virtual) 10am-1pm CT each day
    • 2022 - Q4 Meeting - Tues., Oct. 25 & Thurs., Oct. 27 (Virtual) 10am-1pm CT each day
  • 2021

    • 2021 - Q1 Meeting - Tues., Feb. 16 & Thurs., Feb. 18 (Virtual) 10am-1pm CT each day
    • 2021 - Q2 Meeting - Tues., May 11 & Thurs., May 13 (Virtual) 10am-1pm CT each day
    • 2021 - Q3 Meeting - Tues., July 20 & Thurs., July 22 (Virtual) 10am-1pm CT each day
    • 2021 - Q4 Meeting - Tues., Oct. 26 & Thurs., Oct. 28 (Virtual) 10am-1pm CT each day
  • 2020

    • 2020 - Q4 Meeting - Oct. 1, 2020 (Virtual)
      • This was originally planned as a face-to-face meeting co-located with the MPI Forum in Austin, TX (Texas Advanced Computing Center), but due to COVID-19 the Co-Chairs decided to hold it virtually.
    • 2020 - Q3 Meeting - July 22, 2020 (Virtual)
    • 2020 - Q2 Meeting - April 15, 2020 (Virtual)
    • 2020 - Q1 Meeting - Jan. 23, 2020 (Virtual)
  • 2019

    • 2019 - Q4 Meeting - Oct. 17, 2019 (Virtual)
