Releases: mlcommons/ck
cm-v1.0.5
Stable release from the MLCommons taskforce on education and reproducibility to test the MLPerf inference benchmark automation for the Student Cluster Competition at SC'22.
MLCommons CM v1.0.1 - the next generation of the MLCommons Collective Knowledge framework
This is the stable release of the MLCommons Collective Mind framework (CM) v1.0.1 with reusable and portable MLOps components. CM is the next generation of the MLCommons Collective Knowledge framework, developed to modularize AI/ML systems and automate their benchmarking, optimization and design space exploration based on the mature MLPerf methodology.
After donating the CK framework to MLCommons, we have been developing this portable workflow automation technology as a community effort within the open education workgroup to modularize MLPerf and make it easier to plug in real-world tasks, models, data sets, software and hardware, from the cloud to the edge.
We are very glad to see that more than 80% of all performance results and more than 95% of all power results were automated by the MLCommons CK v2.6.1 in the latest MLPerf inference round thanks to submissions from Qualcomm, Krai, Dell, HPE and Lenovo!
We invite you to join our public workgroup to continue developing this portable workflow framework and reusable automation for MLOps and DevOps as a community effort to:
- develop an open-source educational toolkit to make it easier to plug any real-world ML & AI tasks, models, data sets, software and hardware into the MLPerf benchmarking infrastructure;
- automate design space exploration of diverse ML/SW/HW stacks to trade off performance, accuracy, energy, size and costs;
- help end-users reproduce MLPerf results and deploy the most suitable ML/SW/HW stacks in production;
- support collaborative and reproducible research.
Copyright (C) MLCommons 2022
MLCommons CM toolkit v0.7.24 - the first stable release to modularize and automate MLPerf inference v2.1
Stable release of the MLCommons CM toolkit - the next generation of the CK framework developed in the open workgroup to modularize and automate MLPerf benchmarks.
A fix to support Python 3.9+
This release includes a fix for issue #184.
Stable release for MLCommons CK v2.6.0
This is a stable release of the MLCommons CK framework with a few minor fixes to automate MLPerf inference benchmark v2.0+ submissions.
Several improvements, including the ability to skip arbitrary dependencies in CK workflows
- fixed copyright note in the License file
- improved problem reporting in module:program
- important fix in "module:program", detected while preparing out-of-the-box MLPerf inference benchmarking: clean the tmp directory when running a CK workflow that has no compilation step!
- added --remove_deps flag to module:program and module:env, as suggested by CK users, to remove some dependencies from CK program workflows and thus use natively installed compilers, tools, libraries and other components - useful for debugging and testing. This flag takes a comma-separated list of dependency keys from a given program workflow (see the example after this list).
- added the latest contributors to the CK project.
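A hypothetical example of the new flag (the program name my-program and the dependency keys compiler and lib-opencv are placeholders - check the dependency keys in the meta of your own workflow):
$ ck run program:my-program --remove_deps=compiler,lib-opencv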
Extra improvements for MLPerf inference
- added 'ck_html_end_note' key to customize CK result dashboard
- fixed Pareto frontier filter
- added "ck filter_2d math.frontier" for MLPerf inference
Community extensions
This release includes multiple CK extensions based on user feedback:
- added --j flag to "ck install package" to update the CK_HOST_CPU_NUMBER_OF_PROCESSORS env variable and force the number of processes in "make -j" if used by a given package (see the example after this list)
- added ck.cfg key "pip_user": if set to "yes", CK adds --user to pip when installing CK repositories. Turn it on as follows:
$ ck set kernel var.pip_user=yes
- print the path to the CK kernel when invoking "ck" (see ticket #157)
- extended "result" module to use meta.json as a base for all sub configurations (to simplify configuration for multiple dashboards - useful for MLPerf)
Support for the new CK dashboards and other improvements
- added result.cfg to configure CK dashboards
- improved module:result to push results to CK dashboards
- improved module:dashboard to work with new CK dashboards
- improved module:wfe to work with new CK dashboards
- added module:ck-platform to work with cKnowledge.io platform (moved cbench functionality from the CK incubator to the CK module to work with cKnowledge.io dashboards directly)
- provided better support for working with CK modules as standard Python packages (see module:ck-platform as an example)
Regular extensions based on user feedback
- added support to automatically add simple packages from Git:
$ ck add package:my-git-repo --git={URL} --tags=my-repo --env_ext=MY_GIT_REPO
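The new package can then be installed in the usual way, for example (reusing the package name from the command above):
$ ck install package:my-git-repo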
- added module:mlperf.result to abstract official MLPerf results
- added key "skip_global_deps" to program meta to skip global dependencies
for a given command line (for example, only to install python deps
for a given program) - improved handling of a situation when CK environment has changed
and make it possible to continue running a workflow at user risk
(useful for debugging)