mOS for HPC v0.5 Readme
mOS for HPC is an operating systems research project at Intel, targeting extreme scale HPC systems. It aims to deliver a high performance computing environment with the scalability, low noise, and repeatability expected of lightweight kernels, while maintaining overall Linux compatibility that HPC applications need.
mOS for HPC remains under development at this time. These materials are made available to interested parties to explore, test drive, and provide feedback through the mailing list. Consider the quality level to be pre-alpha, provided as-is; it is not intended for production or business-critical use. Users are expected to have expert-level knowledge of Linux internals and operating system principles. Support is limited to the development team's ability to respond through the mailing list, which is the only mechanism for interacting with Intel on this project.
| Feature | Description | Learn More |
|---|---|---|
| Dynamic resource designation | mOS for HPC now supports dynamic CPU and memory designation without rebooting. Designating CPUs and memory for Linux and the mOS for HPC lightweight kernel (LWK) can seem impossible given the natural desire for each to "own" its resources once it has them. In this release, mOS for HPC leverages the CPU-offlining and memory hot-plug features of Linux to move resources between the Linux and LWK partitions without rebooting and without side effects such as memory fragmentation (see the first sketch after this table). | See the Administrator's Guide for dynamic configuration controls. |
| Memory interleaving | mOS for HPC now supports memory interleaving across NUMA domains. Applications running 1 or 2 ranks per node in a sub-NUMA cluster configuration such as SNC-4 can experience poor performance due to inefficient memory controller utilization when the OS prioritizes contiguous near-memory allocation over interleaving with far memory. This limited such applications to the Quadrant cluster options supported by Intel(R) Xeon Phi(TM) processors. By enabling interleaving, mOS for HPC expands the range of applications that can achieve peak performance in SNC-4 without rebooting into another configuration (see the second sketch after this table). | See the User's Guide for job launch options that support memory interleaving. |
| Utility Thread Application Programmer's Interface (UTI API) | mOS for HPC adds a reference implementation of the Utility Thread Application Programmer's Interface (UTI API). Some applications and runtimes, such as OpenMP and MPI, create utility threads to enable effective communication in terms of progress semantics and overall performance. Separating these utility threads from the compute threads can materially improve application performance. The UTI API gives runtimes and applications the ability to (1) identify which threads are utility threads and (2) give placement guidance, so the system can make intelligent placement and scheduling decisions (see the third sketch after this table). | See the User's Guide for job launch options that support optimized placement of utility threads. |
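Dynamic resource designation builds on standard Linux CPU-offlining and memory hot-plug mechanisms. The first sketch below exercises only the generic Linux sysfs CPU-hotplug control, not the mOS-specific partitioning commands (those are documented in the Administrator's Guide); the CPU number is an arbitrary example, and the program must run as root.

```c
/* Sketch: offline a CPU through the standard Linux hotplug interface
 * that mOS for HPC builds on for dynamic designation.
 * CPU 5 is an arbitrary example; run as root. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path = "/sys/devices/system/cpu/cpu5/online";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    /* Writing "0" offlines the CPU; writing "1" brings it back online. */
    if (fputs("0", f) == EOF) {
        perror("fputs");
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);
    printf("cpu5 offlined\n");
    return EXIT_SUCCESS;
}
```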
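The effect selected by the memory-interleaving job-launch options is the same round-robin page placement that libnuma exposes on stock Linux. The second sketch is a plain libnuma illustration of an interleaved allocation, not the mOS interface itself; the 1 GiB size is an arbitrary example (link with -lnuma).

```c
/* Sketch: allocate memory interleaved across all NUMA nodes with
 * libnuma, spreading pages round-robin instead of packing them into
 * the nearest node first. */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t len = 1UL << 30; /* 1 GiB, an arbitrary example size */

    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }
    void *buf = numa_alloc_interleaved(len);
    if (!buf) {
        fprintf(stderr, "interleaved allocation failed\n");
        return EXIT_FAILURE;
    }
    /* ... compute on buf ... */
    numa_free(buf, len);
    return EXIT_SUCCESS;
}
```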
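For a feel of the UTI API, the third sketch marks an MPI-style progress thread as a utility thread and hints that it should be placed away from the compute threads. The uti.h identifiers used here (uti_attr_t, uti_attr_init, the UTI_ATTR_* hint macros, uti_pthread_create) are assumptions based on the published UTI proposal, not confirmed mOS names; consult the User's Guide for the exact interface shipped with this release.

```c
/* Sketch: create a communication-progress thread flagged as a utility
 * thread. All uti.h names below are assumptions drawn from the UTI
 * proposal; verify against the mOS for HPC Users Guide. */
#include <pthread.h>
#include <stdio.h>
#include <uti.h>

static void *progress_loop(void *arg)
{
    /* Poll the network and advance asynchronous communication here. */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    uti_attr_t uti_attr;
    int rc;

    uti_attr_init(&uti_attr);
    /* Hints: this thread does real work, and it should land on a NUMA
     * domain other than the one hosting the compute threads. */
    UTI_ATTR_CPU_INTENSIVE(&uti_attr);
    UTI_ATTR_DIFFERENT_NUMA_DOMAIN(&uti_attr);

    /* Like pthread_create(), but passes the placement hints along. */
    rc = uti_pthread_create(&tid, NULL, progress_loop, NULL, &uti_attr);
    if (rc) {
        fprintf(stderr, "uti_pthread_create failed: %d\n", rc);
        return 1;
    }
    pthread_join(tid, NULL);
    return 0;
}
```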
The development platform for mOS for HPC v0.5 has been the Intel(R) Xeon Phi(TM) processor 7250 with 96 GiB of DRAM and 16 GiB of MCDRAM, booted in SNC-4 cluster mode and Flat memory mode. Functionality and performance may vary on other platforms and configurations:
- If you use the Intel(R) Xeon Phi(TM) processor 7230, then Quadrant cluster mode with Flat memory mode is recommended.
- If you want to make the full MCDRAM available to applications on Intel(R) Xeon Phi(TM) processors, you must verify that MCDRAM is hot-pluggable in the BIOS settings. Please see the Administrator's Guide for further reference.
- The development team has observed lower mOS for HPC performance when running in Cache memory mode on Intel(R) Xeon Phi(TM) processors; this is not necessarily attributable to the hardware.
- Limited testing has been performed on systems with the Intel(R) Xeon(R) processor E7-88xx and on systems with the Intel(R) Xeon(R) Platinum 8168 processor.
- Processors outside the x86_64 architecture designation in Linux are unsupported; the kernel code will not configure or build.
The Linux distribution used by the development team for building, installing, and testing mOS for HPC has been CentOS 7. Other distributions have had almost no testing and may require you to adapt the build and install instructions to your environment.
mOS for HPC has been tested with applications built using Intel(R) Parallel Studio XE 2017 Cluster Edition Update 4 for Linux*, which includes Intel MPI (2017.4.196). The development team plans to track the Intel(R) Software Tools 2018 programs and MPICH/MPICH4 updates as they become available. Almost no testing has been done with other compilers (e.g., gcc) or MPI runtimes (e.g., MPICH, MVAPICH, or OpenMPI).
The mOS for HPC source can be checked out from GitHub at https://github.com/intel/mOS. Please see the Administrator's Guide for further instructions.
Register for the mOS for HPC mailing list at https://groups.google.com/g/mos-devel/. Please submit feedback and follow discussions through this list.
*Other names and brands may be claimed as the property of others.