
mOS for HPC v0.6 Administrator's Guide


This document provides instructions to check out, build, install, boot, and validate mOS for HPC. All of the instructions below were validated on the following system configuration:

Component      Configuration
Processor      Intel(R) Xeon Phi(TM) processor 7250
Cluster mode   SNC-4
Memory mode    Flat
Memory         96 GB DDR, 16 GB MCDRAM
Distribution   CentOS
Boot loader    GRUB

You may need to modify the steps documented here if you have different hardware or software.  See the mOS for HPC v0.6 Readme for information about platform requirements.

Check out

The mOS for HPC source can be checked out from GitHub at https://github.com/intel/mOS:

$ git clone https://github.com/intel/mOS.git
Cloning into 'mOS'...
remote: Counting objects: 5099718, done.
remote: Compressing objects: 100% (10063/10063), done.
remote: Total 5099718 (delta 22076), reused 27221 (delta 21785), pack-reused 5067822
Receiving objects: 100% (5099718/5099718), 1.11 GiB | 8.70 MiB/s, done.
Resolving deltas: 100% (4193714/4193714), done.
Checking out files: 100% (56386/56386), done.
$ cd mOS
mOS $ git checkout 4.9.107_0.6.mos

Checking out files: 100% (3658/3658), done.
Branch 4.9.107_0.6.mos set up to track remote branch 4.9.107_0.6.mos from origin.
Switched to a new branch '4.9.107_0.6.mos'
mOS $

Configuration

The mOS for HPC source includes an example configuration file, config.mos, that should be used to configure the kernel. The settings below are needed to configure the source code.

Mandatory settings:

CONFIG_MOS_FOR_HPC=y
    Activate the mOS for HPC code in the Linux kernel.

CONFIG_MOS_MOVE_SYSCALLS=y
    Activate the mOS for HPC system call forwarding feature.

CONFIG_MOS_SCHEDULER=y
    Enable the mOS for HPC scheduler.

CONFIG_MOS_LWKMEM=y
    Enable the mOS for HPC memory management.

Strongly recommended settings:

CONFIG_NO_HZ_FULL=y
    Activate the tickless feature of Linux. In conjunction with the mOS for HPC scheduler, this limits noise on LWK CPUs.

CONFIG_NO_HZ_FULL_ALL=y
    Full dynticks system on all CPUs by default (except CPU 0).

CONFIG_RCU_NOCB_CPU=y
    Offload RCU callback processing from boot-selected CPUs. mOS for HPC uses this capability to reduce noise on LWK CPUs.

CONFIG_RCU_NOCB_CPU_ALL=y
    All CPUs are build-forced no-CBs CPUs.

In addition, there are several standard Linux kernel settings that mOS for HPC depends on, e.g., NUMA; see kernel/mOS/Kconfig for details. Although the build process appears to allow major mOS for HPC functions to be disabled individually, always use all four mandatory settings plus the strongly recommended ones. mOS for HPC has not been tested with other combinations of these settings; disabling any of them is intended for debugging only.
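
As a quick sanity check, you can confirm that the mandatory options are enabled in the provided configuration file before building (a minimal check; the order of the grep output follows the file, and the same check applies to the strongly recommended options):

mOS $ grep -E 'CONFIG_MOS_(FOR_HPC|MOVE_SYSCALLS|SCHEDULER|LWKMEM)=' config.mos
CONFIG_MOS_FOR_HPC=y
CONFIG_MOS_MOVE_SYSCALLS=y
CONFIG_MOS_SCHEDULER=y
CONFIG_MOS_LWKMEM=y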

Build

It is recommended that you build kernel RPMs for installation of mOS for HPC.  The minimum build system requirements can be found at https://www.kernel.org/doc/html/latest/process/changes.html.  A sample configuration file, config.mos, is provided.  Your specific compute node hardware may require a different configuration.  Please run the following commands from the directory where you checked out mOS for HPC:

mOS $ cp config.mos .config
mOS $ make -j 32 binrpm-pkg
HOSTCC scripts/basic/fixdep
HOSTCC scripts/kconfig/conf.o
SHIPPED scripts/kconfig/zconf.tab.c
SHIPPED scripts/kconfig/zconf.lex.c
SHIPPED scripts/kconfig/zconf.hash.c
HOSTCC scripts/kconfig/zconf.tab.o
HOSTLD scripts/kconfig/conf
scripts/kconfig/conf --silentoldconfig Kconfig
CHK include/config/kernel.release
UPD include/config/kernel.release
make KBUILD_SRC=
SYSTBL arch/x86/entry/syscalls/../../include/generated/asm/syscalls_32.h
SYSHDR arch/x86/entry/syscalls/../../include/generated/asm/unistd_32_ia32.h
SYSHDR arch/x86/entry/syscalls/../../include/generated/asm/unistd_64_x32.h
SYSTBL arch/x86/entry/syscalls/../../include/generated/asm/syscalls_64.h
SYSHDR arch/x86/entry/syscalls/../../include/generated/uapi/asm/unistd_32.h
SYSHDR arch/x86/entry/syscalls/../../include/generated/uapi/asm/unistd_64.h
SYSHDR arch/x86/entry/syscalls/../../include/generated/uapi/asm/unistd_x32.h
HOSTCC scripts/basic/bin2c
CHK include/config/kernel.release
WRAP arch/x86/include/generated/asm/clkdev.h
WRAP arch/x86/include/generated/asm/cputime.h
WRAP arch/x86/include/generated/asm/dma-contiguous.h
WRAP arch/x86/include/generated/asm/early_ioremap.h
WRAP arch/x86/include/generated/asm/mcs_spinlock.h
WRAP arch/x86/include/generated/asm/mm-arch-hooks.h
CHK include/generated/uapi/linux/version.h
UPD include/generated/uapi/linux/version.h
CHK include/generated/utsrelease.h
UPD include/generated/utsrelease.h
HOSTCC scripts/kallsyms
HOSTCC scripts/conmakehash
HOSTCC scripts/recordmcount
HOSTCC scripts/sortextable
HOSTCC scripts/asn1_compiler
HOSTCC scripts/extract-cert
HOSTCC scripts/genksyms/genksyms.o
SHIPPED scripts/genksyms/parse.tab.c
SHIPPED scripts/genksyms/lex.lex.c
SHIPPED scripts/genksyms/keywords.hash.c
CC scripts/mod/empty.o

.
.
.
Processing files: kernel-mOS-4.9.107_0.6.mos-2.x86_64
Provides: kernel-mOS = 4.9.107_0.6.mos-2 kernel-mOS(x86-64) = 4.9.107_0.6.mos-2
Requires(interp): /bin/sh
Requires(rpmlib): rpmlib(PayloadFilesHavePrefix) <= 4.0-1 rpmlib(CompressedFileNames) <= 3.0.4-1
Requires(post): /bin/sh
Checking for unpackaged file(s): /usr/lib/rpm/check-files /home/admin/rpmbuild/BUILDROOT/kernel-4.9.107_0.6.mos-2.x86_64
Wrote: /home/admin/rpmbuild/RPMS/x86_64/kernel-4.9.107_0.6.mos-2.x86_64.rpm
Wrote: /home/admin/rpmbuild/RPMS/x86_64/kernel-headers-4.9.107_0.6.mos-2.x86_64.rpm
Wrote: /home/admin/rpmbuild/RPMS/x86_64/kernel-mOS-4.9.107_0.6.mos-2.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.Uk8Wda
+ umask 022
+ cd .
+ rm -rf /home/admin/rpmbuild/BUILDROOT/kernel-4.9.107_0.6.mos-2.x86_64
+ exit 0
rm binkernel.spec
mOS $
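
The 'Wrote:' lines above show where the RPMs were placed. You can list them to confirm the build output (the path assumes the same build user as in the example):

$ ls /home/admin/rpmbuild/RPMS/x86_64/
kernel-4.9.107_0.6.mos-2.x86_64.rpm
kernel-headers-4.9.107_0.6.mos-2.x86_64.rpm
kernel-mOS-4.9.107_0.6.mos-2.x86_64.rpm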


Installation

1. Install RPMs

The RPMs built in the previous step need to be installed on the compute nodes or into the compute node image.

At a minimum, install the kernel-4.9.107_0.6.mos-2.x86_64 and kernel-mOS-4.9.107_0.6.mos-2.x86_64 RPMs into your compute node image. The exact RPM names may vary depending on the state of the code, whether a local version name is specified (e.g., via make menuconfig), and how many times the RPMs have been built. However, the 4.9.107_0.6.mos portion of the name should remain constant.

$ sudo rpm -ivh /home/admin/rpmbuild/RPMS/x86_64/kernel-4.9.107_0.6.mos-2.x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:kernel-4.9.107_0.6.mos-2 ################################# [100%]
$ sudo rpm -ivh --force /home/admin/rpmbuild/RPMS/x86_64/kernel-mOS-4.9.107_0.6.mos-2.x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:kernel-mOS-4.9.107_0.6.mos-2 ################################# [100%]
$
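
To verify that both packages are installed (a quick check; the exact names may vary as noted above):

$ rpm -qa | grep -i mos
kernel-4.9.107_0.6.mos-2.x86_64
kernel-mOS-4.9.107_0.6.mos-2.x86_64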


2. Update GRUB with the new kernel command line

After RPM installation, the new kernel needs to be added to the grub menu on the compute nodes. The kernel parameters are taken from the GRUB_CMDLINE_LINUX variable in /etc/default/grub. Please update or replace that variable as follows:

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8 selinux=0 rd.lvm.lv=centos/root rd.lvm.lv=centos/swap intel_pstate=disable nmi_watchdog=0 lwkcpus=1.52-67,256-271:69.120-135,188-203:137.2-17,206-221:205.70-85,138-153:19.20-35,224-239:87.88-103,156-171:155.36-51,240-255:223.104-119,172-187 kernelcore=16G movable_node lwkmem=0:16G,1:16G,2:16G,3:16G,4:3968M,5:3968M,6:3968M,7:3968M"
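
After editing, you can confirm that the variable is set as intended (assuming the standard CentOS location of the file):

$ grep GRUB_CMDLINE_LINUX /etc/default/grub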

Recommended Kernel Boot Parameters

The following parameters and values are recommended for mOS for HPC.  Not all combinations and variations of boot parameters have been validated and tested.  Boot failure is possible if, for example, lwkcpus and lwkmem are not properly set for your system.  Please refer to Documentation/kernel-parameters.txt in the mOS for HPC kernel source for further details.


nmi_watchdog (recommended value: 0)
    Disable the NMI watchdog interrupt to eliminate this additional source of noise on the CPUs. Alternatively, the watchdog can be turned off by writing a zero to /proc/sys/kernel/nmi_watchdog, which eliminates the need to set it here.

intel_pstate (recommended value: disable)
    Do not allow the system to dynamically adjust the frequency of the CPUs. When running HPC applications, a stable, consistent CPU frequency across the entire job is desired.

lwkcpus (recommended value: topology dependent)
    List of CPUs to be controlled by mOS. This includes the CPUs that will be exclusively owned by mOS (implicitly marked as 'isolated') as well as Linux CPUs that mOS will use to host utility threads and to execute migrated system calls. The lwkcpus argument designates CPU resources to the LWK. The format of the entries is:

lwkcpus=<syscall cpu1>.<lwkcpu set1>:<syscall cpu2>.<lwkcpu set2>...

For example:

lwkcpus=28.1-13,29-41:42.15-27,43-55

In this configuration, two Linux CPUs, 28 and 42, are designated to handle syscalls. CPU 28 will host syscalls for LWK CPUs 1-13 and 29-41, and CPU 42 will host syscalls for LWK CPUs 15-27 and 43-55. Note that this is a simplified example and may not be an optimal configuration.

lwkmem (recommended value: topology dependent)
    Designate memory for use by mOS. The amount of memory requested is specified in parse_mem format (K, M, G), or per NUMA domain. The LWK memory requested on the kernel command line can only come from the movable memory in the system. Use the 'kernelcore' argument, explained below, to control how much of the system's memory is non-movable versus movable.

Example #1: lwkmem=126G

This requests the kernel to designate a total of 126G of physical memory to the LWK. The memory requested will be allocated from all online NUMA nodes which have movable memory.

Example #2: lwkmem=0:58G,1:16G

This requests that the kernel designates a total of 58G of physical memory from NUMA node 0 and 16G of physical memory from NUMA node 1 to the LWK. If the full amount of requested memory can not be allocated on a specified NUMA node in the list, then the remainder of the request will be distributed uniformly among the requests on subsequent NUMA nodes in the request list. In this example, if the kernel could designate only 50G on NUMA node 0 then the remaining 8G of the request would be added to the 16G requested from NUMA node 1.

kernelcore (recommended value: 16G)
    This Linux boot argument sets the total amount of non-movable memory in the system. Non-movable memory is used only by the Linux kernel and cannot be designated to the LWK. The kernel treats the rest of the physical memory as movable memory, which can be dynamically provisioned between Linux and the LWK. The memory requested with the 'lwkmem' parameter described above can only come from movable memory. Adjust the 'kernelcore' value to your requirements.

In mOS for HPC,

    1. it is desirable to keep the total non-movable memory low, since it cannot be dynamically moved between Linux and the LWK; and
    2. it is desirable to confine the non-movable memory to DDR only, so that the entire MCDRAM is movable and can be given to the LWK.

On an Intel(R) Xeon Phi(TM) processor, point 2 can be accomplished by specifying the 'movable_node' kernel parameter (described below) along with the 'kernelcore' parameter. Please see the BIOS settings below for MCDRAM configuration.

Example: kernel command line parameters kernelcore=16G movable_node on a system with 96G DDR and 16G MCDRAM.

In this case,

  • There will be 16G of total non-movable memory in the system and it will be uniformly spread across only the DDR NUMA nodes.
  • The entire 16G MCDRAM memory will be movable memory which can be dedicated to the LWK, and the remaining 80G of DDR memory will be movable as well.
movable_node
    On systems with the Intel(R) Xeon Phi(TM) processor, this marks the MCDRAM NUMA nodes as movable nodes, provided the MCDRAM is configured as hot-pluggable memory in the BIOS. As a result, there are no kernel memory allocations in MCDRAM, and all of it can be used by applications (Linux or LWK). Please see the BIOS settings below for MCDRAM configuration.

 The last step is to update the grub configuration using the grub2-mkconfig command.  Please ensure that appropriate rd.lvm.lv settings are specified for your system.  The grub configuration file is grub.cfg.  The location of this file varies. The example below shows a system where it is located in /boot/efi/EFI/centos/grub.cfg.  Other systems might have it in /boot/grub2/grub.cfg.  You may want to save a backup copy of your grub.cfg file before the following step.
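
For example, to save a backup copy first (the path matches the EFI example below; adjust it for your system):

[~]$ sudo cp /boot/efi/EFI/centos/grub.cfg /boot/efi/EFI/centos/grub.cfg.bak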

[~]$ sudo grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg

Note: This command will add the kernel parameters in GRUB_CMDLINE_LINUX to every entry in the grub menu.  You should preserve and restore any existing kernel entries in grub.cfg after running grub2-mkconfig.

BIOS settings

It is recommended to treat MCDRAM as hot-pluggable memory.  This setting in conjunction with the 'movable_node' kernel parameter is necessary for maximum MCDRAM availability for applications (either Linux or LWK).  The following BIOS menu is used to configure MCDRAM:

EDKII Menu -> Advanced -> Uncore Configuration -> Treat MCDRAM as Hot-Pluggable Memory ==> <Yes>

Booting

If mOS for HPC has been properly installed and configured then the grub boot menu should have an entry for mOS.  Please select the 4.9.107_0.6.mos entry during boot.

CentOS Linux (4.9.107_0.6.mos) 7 (Core)
CentOS Linux (3.10.0-327.36.3.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-71e25674024146aaa3ff5de0e403b11d) 7 (Core)

Use the ^ and v keys to change the selection.

Press 'e' to edit the selected item, or 'c' for a command prompt.
The selected entry will be started automatically in 0s.

Validate operational state

In order to validate a successful installation, perform the following steps on the compute nodes where mOS for HPC is installed.

To test that yod is functional, launch a simple application using yod:

$ yod /bin/echo hello

hello

If LWK memory is active then you should be able to see some LWK entries in the process mapping of an LWK process:

$ yod cat /proc/self/maps | grep LWK
0060b000-0060d000 rw-p 00000000 00:00 0                     LWK
00800000-00a00000 rw-p 00000000 00:00 0                     [heap] LWK
2aaaaaad0000-2aaaaaad1000 rw-p 00000000 00:00 0     LWK
2aaaaaadb000-2aaaaaadc000 rw-p 00000000 00:00 0     LWK

The above example runs the cat program as an mOS process and reserves CPU and memory resources for it.

Alternatively, you can use the lwkctl utility to view the mOS version and the current LWK configuration:

$ lwkctl -s
mOS version : 0.6
Linux CPU(s): 0-1,18-19,68-69,86-87,136-137,154-155,204-205,222-223
LWK CPU(s): 2-17,20-67,70-85,88-135,138-153,156-203,206-221,224-271
Syscall CPU(s): 1,19,69,87,137,155,205,223
LWK Memory(KB): 16777216 16777216 16777216 16777216 4063232 4063232 4063232 4063232
$

Check the dmesg log for mOS entries:

$ sudo dmesg | grep mOS

[ 7.444889] mOS-lwkctl: Creating default memory partition: lwkmem=0:16G,1:16G,2:16G,3:16G,4:3968M,5:3968M,6:3968M,7:3968M
[ 7.466945] mOS-mem: Initializing memory management
[ 7.545020] mOS-mem: Node 0: va 0xffff880148000000 pa 0x148000000 pfn 1343488-5537791 : 4194304
[ 7.564714] mOS-mem: Node 0: offlining va 0xffff880148000000 pa 0x148000000 pfn 1343488-5537791:4194304
[ 12.271329] mOS-mem: Node 0: Requested 16384 MB Allocated 16384 MB
[ 12.351064] mOS-mem: Node 1: va 0xffff880840000000 pa 0x840000000 pfn 8650752-12845055 : 4194304
[ 12.371139] mOS-mem: Node 1: offlining va 0xffff880840000000 pa 0x840000000 pfn 8650752-12845055:4194304
[ 15.363621] mOS-mem: Node 1: Requested 16384 MB Allocated 16384 MB
[ 15.450082] mOS-mem: Node 2: va 0xffff880f40000000 pa 0xf40000000 pfn 15990784-20185087 : 4194304
[ 15.470369] mOS-mem: Node 2: offlining va 0xffff880f40000000 pa 0xf40000000 pfn 15990784-20185087:4194304
[ 18.574863] mOS-mem: Node 2: Requested 16384 MB Allocated 16384 MB
[ 18.651289] mOS-mem: Node 3: va 0xffff881640000000 pa 0x1640000000 pfn 23330816-27525119 : 4194304
[ 18.672008] mOS-mem: Node 3: offlining va 0xffff881640000000 pa 0x1640000000 pfn 23330816-27525119:4194304
[ 21.709917] mOS-mem: Node 3: Requested 16384 MB Allocated 16384 MB
[ 21.744488] mOS-mem: Node 4: va 0xffff880640000000 pa 0x640000000 pfn 6553600-7569407 : 1015808
[ 21.765313] mOS-mem: Node 4: offlining va 0xffff880640000000 pa 0x640000000 pfn 6553600-7569407:1015808
[ 22.648293] mOS-mem: Node 4: Requested 3968 MB Allocated 3968 MB
[ 22.682239] mOS-mem: Node 5: va 0xffff880d40000000 pa 0xd40000000 pfn 13893632-14909439 : 1015808
[ 22.703642] mOS-mem: Node 5: offlining va 0xffff880d40000000 pa 0xd40000000 pfn 13893632-14909439:1015808
[ 23.434153] mOS-mem: Node 5: Requested 3968 MB Allocated 3968 MB
[ 23.467701] mOS-mem: Node 6: va 0xffff881440000000 pa 0x1440000000 pfn 21233664-22249471 : 1015808
[ 23.489516] mOS-mem: Node 6: offlining va 0xffff881440000000 pa 0x1440000000 pfn 21233664-22249471:1015808
[ 24.236935] mOS-mem: Node 6: Requested 3968 MB Allocated 3968 MB
[ 24.270712] mOS-mem: Node 7: va 0xffff881b40000000 pa 0x1b40000000 pfn 28573696-29589503 : 1015808
[ 24.292703] mOS-mem: Node 7: offlining va 0xffff881b40000000 pa 0x1b40000000 pfn 28573696-29589503:1015808
[ 25.061618] mOS-mem: Node 7: Requested 3968 MB Allocated 3968 MB
[ 25.080233] mOS-mem: Requested 81408 MB Allocated 81408 MB
[ 25.098209] mOS-lwkctl: LWK creating default LWKMEM partition..Done
[ 25.117199] mOS-lwkctl: Creating default CPU partition:
[ 25.162105] mOS-lwkctl: lwkcpu_profile=normal
[ 25.179630] mOS: LWK CPUs 52-67,256-271 will ship syscalls to Linux CPU 1
[ 25.199930] mOS: LWK CPUs 120-135,188-203 will ship syscalls to Linux CPU 69
[ 25.220570] mOS: LWK CPUs 2-17,206-221 will ship syscalls to Linux CPU 137
[ 25.241017] mOS: LWK CPUs 70-85,138-153 will ship syscalls to Linux CPU 205
[ 25.261665] mOS: LWK CPUs 20-35,224-239 will ship syscalls to Linux CPU 19
[ 25.282123] mOS: LWK CPUs 88-103,156-171 will ship syscalls to Linux CPU 87
[ 25.302691] mOS: LWK CPUs 36-51,240-255 will ship syscalls to Linux CPU 155
[ 25.323312] mOS: LWK CPUs 104-119,172-187 will ship syscalls to Linux CPU 223
[ 25.344314] mOS: Configured LWK CPUs: 2-17,20-67,70-85,88-135,138-153,156-203,206-221,224-271
[ 25.366816] mOS: LWK CPU profile set to: normal
[ 25.384948] mOS-sched: set unbound workqueue cpumask to 0-1,18-19,68-69,86-87,136-137,154-155,204-205,222-223,272-279
[ 25.409831] mOS-sched: IDLE MWAIT enabled. Hints min/max=80000000/c0000010. CPUID_MWAIT substates=00000110
[ 34.653162] mOS-lwkctl: mOS: LWK creating default partition.. Done

Check that yod is using all of the specified LWK CPUs:

$ [ $(yod cat /sys/kernel/mOS/lwkcpus_reserved) == $(cat /sys/kernel/mOS/lwkcpus) ] && echo "mOS for HPC is operational" || echo "mOS for HPC not operational"
mOS for HPC is operational

Additional things to know 

When mOS is booted and managing resources, a natural question is what common system tools report about the machine state. The list below summarizes the behavior of several of them.

top, htop
    Behave as expected, showing CPU utilization and process placement across CPUs.

/proc/meminfo, free
    By default, these show memory usage statistics for both Linux and the LWK. In addition, the mosview tool can be used to see only LWK-side usage or only Linux-side usage.

dmesg
    The mOS kernel writes information to the syslog, making it a good place to check for operational health.

debugging and profiling tools
    mOS maintains compatibility with Linux, so tools such as ptrace, strace, and gdb continue to work as expected. In addition, Intel Parallel Studio XE tools such as Intel(R) VTune(TM) Amplifier and Intel(R) Advisor also work as designed.

Dynamic LWK partitioning

In mOS for HPC, resources (CPUs and memory) can be dynamically partitioned between Linux and the LWK. An LWK partition can be created after the kernel boots using the user-space utility 'lwkctl'. A default LWK partition can also be created during kernel boot by specifying the needed LWK resources on the kernel command line via the 'lwkcpus=' and 'lwkmem=' parameters:

lwkcpus=<syscall cpu1>.<lwkcpu set1>:<syscall cpu2>.<lwkcpu set2>...

lwkmem=<n1>:<size1>,<n2>:<size2>,...

where n1, n2, ... are NUMA node numbers and size1, size2, ... are the sizes of the LWK memory requests on the corresponding NUMA nodes.

Based on system needs, this default LWK partition can be deleted after boot and a new LWK partition created using the lwkctl command, as sketched below.
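
For example (the full specification syntax is covered in the Quick Reference below):

sudo lwkctl -d
sudo lwkctl -c 'lwkcpus=<lwkcpu_spec> lwkmem=<lwkmem_spec>'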


Note: The Linux interfaces to hotplug CPU and memory cannot be used while an LWK partition is in place.  The LWK partition must be deleted before those Linux interfaces can be used.  Please see the lwkctl command man page.

Utility - lwkctl

This command line utility offlines resources on Linux and hands them over (designates them) to the LWK, and vice versa. Once this partitioning is complete, further partitioning of resources among LWK processes (reservation) is done using the mOS job launch utility yod. The lwkctl command requires root privileges to create or delete an LWK partition. Both an LWK CPU and an LWK memory specification must be provided when creating an LWK partition; deleting an LWK partition removes both the LWK CPU and LWK memory designations. The command can also be used to view the current LWK partition.

Quick Reference:

  • Creating LWK partition:

    sudo lwkctl -c 'lwkcpus=<lwkcpu_spec> lwkmem=<lwkmem_spec>'

    Example:

    sudo lwkctl -c 'lwkcpus=1.52-67,256-271:69.120-135,188-203:137.2-17,206-221:205.70-85,138-153:19.20-35,224-239:87.88-103,156-171:155.36-51,240-255:223.104-119,172-187 lwkmem=0:16G,1:16G,2:16G,3:16G,4:3968M,5:3968M,6:3968M,7:3968M'

    Note that the entire specification must be enclosed in single quotes (' ').

  • Deleting LWK partition:
    sudo lwkctl -d
  • Viewing the existing LWK partition:

    To display it in human-readable format:

    lwkctl -s

    To display it in raw format:

    lwkctl -s -r

For further usage details, refer to the lwkctl man page on a compute node where mOS for HPC is installed.
