In this project we focused on leveraging the Docker engine to adapt the CPU shares of different containers according to each container's CPU usage. Our new Docker command allows modifications to the Observe-Decide-Act (ODA) state machine by changing parameters such as the observation frequency, the adaptation frequency, and the decision policy and its parameters.
So far we have not created any explicit dynamic adaptation, but we have studied a static approach with a basic policy. This policy consists of creating coarse-grained time slots, on the order of seconds, in which a single container holds the majority of the CPU shares; it takes effect only if the CPU shares are oversubscribed. The policy is based on the fact that, under oversubscription, there is contention in the cache and in the OS scheduler (citation needed). The effects of this macro-scheduling policy were confirmed by running several containers executing Linpack, a benchmark commonly used in the HPC community that performs LU decomposition. Linpack depends heavily on cache memory and tends to use 100% of the available CPU. A minimal sketch of the time-slot idea appears below.
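As an illustration only, the following sketch approximates the time-slot policy from outside our modified engine, using the stock `docker update` command available in recent Docker releases; the container names, share values, and slot length are assumptions, not the values used in our experiments.

```bash
#!/bin/sh
# Sketch of the coarse-grained time-slot policy (not our actual implementation).
# Assumes three running containers named c1, c2, c3 that oversubscribe the CPU.
CONTAINERS="c1 c2 c3"
SLOT_SECONDS=2            # slot size, on the order of seconds

while true; do
  for favored in $CONTAINERS; do
    for c in $CONTAINERS; do
      if [ "$c" = "$favored" ]; then
        docker update --cpu-shares 1024 "$c"   # majority of the shares
      else
        docker update --cpu-shares 64 "$c"     # minority share
      fi
    done
    sleep "$SLOT_SECONDS"                      # hold the slot
  done
done
```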
Although this work is still incomplete, we hope it will encourage the Docker community to implement better policies, and that it helps highlight the importance of better ODA policies and dynamic adaptation for different types of loads.
So far we use the following flags to change the Observe-Decide-Act parameters (a usage sketch follows the list).
- -fd, --file_dump=fileName If defined, the output is written to a CSV file; otherwise it goes to standard output.
- --help Print usage
- -ob, --observe Do not adapt; only observe and dump the information. Use -obv to change the observation interval.
- -obv, --observe_value=value Set the observation interval in ms (default: 2000 ms). Implies --observe.
- -rt, --reaction_time Measure the time from the moment the action to modify the cgroups is sent to the moment it actually appears in the cgroups. (Incompatible with any other policy.)
- -ts, --time_slots Create time slots; see the description of this policy above. Use -tsv=val_ms to set the time slot size.
- -tsv, --time_slots_value=size Time slot size in ms, e.g. -tsv=1000. Implies --time_slots.
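For example, assuming the new subcommand is exposed as `docker adapt` (an assumed name; the actual command in our prototype may differ), typical invocations would look like this:

```bash
# Hypothetical invocations; `docker adapt` is an assumed name for our command.

# Observe only, sampling every 500 ms, dumping measurements to a CSV file:
docker adapt --observe --observe_value=500 --file_dump=observations.csv

# Apply the time-slot policy with 1000 ms slots:
docker adapt --time_slots --time_slots_value=1000
```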
Stephen Herbein, Ayush Dusia, Aaron Landwehr, Sean McDaniel, Jose Monsalve, Yang Yang, Seetharami R. Seelam, and Michela Taufer. Resource Management for Running HPC Applications in Container Clouds. In Proceedings of the 31st International Supercomputing Conference (ISC), Leipzig, Germany, June 2016.
@inproceedings{Docker-ISC,
  author    = {Stephen Herbein and Ayush Dusia and Aaron Landwehr and Sean McDaniel and Jose Monsalve and Yang Yang and Seetharami R. Seelam and Michela Taufer},
  title     = {Resource Management for Running HPC Applications in Container Clouds},
  booktitle = {Proceedings of the 31st International Supercomputing Conference},
  series    = {ISC},
  year      = {2016},
  month     = {June},
  address   = {Leipzig, Germany},
}
Docker is an open source project to pack, ship and run any application as a lightweight container.
Docker containers are both hardware-agnostic and platform-agnostic. This means they can run anywhere, from your laptop to the largest EC2 compute instance and everything in between - and they don't require you to use a particular language, framework or packaging system. That makes them great building blocks for deploying and scaling web apps, databases, and backend services without depending on a particular stack or provider.
Docker began as an open-source implementation of the deployment engine which powers dotCloud, a popular Platform-as-a-Service. It benefits directly from the experience accumulated over several years of large-scale operation and support of hundreds of thousands of applications and databases.
Security is very important to us. If you have any issue regarding security, please disclose the information responsibly by sending an email to [email protected] and not by creating a GitHub issue.
A common method for distributing applications and sandboxing their execution is to use virtual machines, or VMs. Typical VM formats are VMware's vmdk, Oracle VirtualBox's vdi, and Amazon EC2's ami. In theory these formats should allow every developer to automatically package their application into a "machine" for easy distribution and deployment. In practice, that almost never happens, for a few reasons:
- Size: VMs are very large which makes them impractical to store and transfer.
- Performance: running VMs consumes significant CPU and memory, which makes them impractical in many scenarios, for example local development of multi-tier applications, and large-scale deployment of cpu and memory-intensive applications on large numbers of machines.
- Portability: competing VM environments don't play well with each other. Although conversion tools do exist, they are limited and add even more overhead.
- Hardware-centric: VMs were designed with machine operators in mind, not software developers. As a result, they offer very limited tooling for what developers need most: building, testing and running their software. For example, VMs offer no facilities for application versioning, monitoring, configuration, logging or service discovery.
By contrast, Docker relies on a different sandboxing method known as containerization. Unlike traditional virtualization, containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary for containerization, including Linux with OpenVZ, VServer and more recently LXC, Solaris with Zones, and FreeBSD with Jails.
Docker builds on top of these low-level primitives to offer developers a portable format and runtime environment that solves all four problems. Docker containers are small (and their transfer can be optimized with layers), they have basically zero memory and cpu overhead, they are completely portable, and are designed from the ground up with an application-centric design.
Perhaps best of all, because Docker operates at the OS level, it can still be run inside a VM!
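To make the kernel-level point concrete, here is a tiny demonstration of one such primitive, independent of Docker; it uses util-linux's `unshare` and requires root on a reasonably recent Linux kernel:

```bash
# Not Docker itself: a direct use of the kernel namespace primitive.
# The command runs in a fresh PID namespace, so `ps` sees itself as PID 1.
sudo unshare --pid --fork --mount-proc ps aux
```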
Docker does not require you to buy into a particular programming language, framework, packaging system, or configuration language.
Is your application a Unix process? Does it use files, tcp connections, environment variables, standard Unix streams and command-line arguments as inputs and outputs? Then Docker can run it.
Can your application's build be expressed as a sequence of such commands? Then Docker can build it.
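For instance (the image and command here are purely illustrative):

```bash
# A containerized Unix process wired up through standard streams,
# environment variables and command-line arguments:
echo "hello docker" | docker run -i -e GREETING=hi ubuntu:12.04 wc -c
```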
A common problem for developers is the difficulty of managing all their application's dependencies in a simple and automated way.
This is usually difficult for several reasons:
- Cross-platform dependencies. Modern applications often depend on a combination of system libraries and binaries, language-specific packages, framework-specific modules, internal components developed for another project, etc. These dependencies live in different "worlds" and require different tools - these tools typically don't work well with each other, requiring awkward custom integrations.
- Conflicting dependencies. Different applications may depend on different versions of the same dependency. Packaging tools handle these situations with various degrees of ease - but they all handle them in different and incompatible ways, which again forces the developer to do extra work.
- Custom dependencies. A developer may need to prepare a custom version of their application's dependency. Some packaging systems can handle custom versions of a dependency, others can't - and all of them handle it differently.
Docker solves the problem of dependency hell by giving the developer a simple way to express all their application's dependencies in one place, while streamlining the process of assembling them. If this makes you think of XKCD 927, don't worry. Docker doesn't replace your favorite packaging systems. It simply orchestrates their use in a simple and repeatable way. How does it do that? With layers.
Docker defines a build as running a sequence of Unix commands, one after the other, in the same container. Build commands modify the contents of the container (usually by installing new files on the filesystem), the next command modifies it some more, etc. Since each build command inherits the result of the previous commands, the order in which the commands are executed expresses dependencies.
Here's a typical Docker build process:
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y python python-pip curl
RUN curl -sSL https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xzv
RUN cd helloflask-master && pip install -r requirements.txt
Note that Docker doesn't care how dependencies are built - as long as they can be built by running a Unix command in a container.
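Assuming the four lines above are saved as a Dockerfile, building and trying out the resulting image might look like this (the helloflask tag is our own choice):

```bash
docker build -t helloflask .           # replays each command as a new layer
docker run -it helloflask /bin/bash    # open a shell in the resulting image
```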
Docker can be installed on your local machine as well as servers - both bare metal and virtualized. It is available as a binary on most modern Linux systems, or as a VM on Windows, Mac and other systems.
We also offer an interactive tutorial for quickly learning the basics of using Docker.
For up-to-date install instructions, see the Docs.
Docker can be used to run short-lived commands, long-running daemons (app servers, databases etc.), interactive shell sessions, etc.
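A few illustrative invocations of each mode:

```bash
docker run ubuntu:12.04 echo "hello"    # short-lived command: runs and exits
docker run -d redis                     # long-running daemon, detached
docker run -it ubuntu:12.04 /bin/bash   # interactive shell session
```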
You can find a list of real-world examples in the documentation.
Under the hood, Docker is built on the following components:
- The cgroup and namespacing capabilities of the Linux kernel;
- The Go programming language;
- The [Docker Image Specification](https://github.com/docker/docker/blob/master/image/spec/v1.md);
- The [Libcontainer Specification](https://github.com/docker/libcontainer/blob/master/SPEC.md).
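To see two of these building blocks from the host's perspective, you can inspect a running container; the paths below assume the cgroup v1 hierarchy mounted at /sys/fs/cgroup and the default cgroupfs driver:

```bash
CID=$(docker run -d ubuntu:12.04 sleep 600)           # start a test container
PID=$(docker inspect --format '{{.State.Pid}}' $CID)  # its init process on the host
ls /proc/$PID/ns                                      # the namespaces it lives in
cat /sys/fs/cgroup/cpu/docker/$CID/cpu.shares         # its CPU weight in the cgroup
```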
Want to hack on Docker? Awesome! We have instructions to help you get started contributing code or documentation.
These instructions are probably not perfect, please let us know if anything feels wrong or incomplete. Better yet, submit a PR and improve them yourself.
Want to run Docker from a master build? You can download master builds at master.dockerproject.com. They are updated with each commit merged into the master branch.
Don't know how to use that super cool new feature in the master build? Check out the master docs at docs.master.dockerproject.com.
Docker is a very, very active project. If you want to learn more about how it is run, or want to get more involved, the best place to start is the project directory.
We are always open to suggestions on process improvements, and are always looking for more maintainers.
Brought to you courtesy of our legal counsel. For more context, please see the "NOTICE" document in this repo.
Use and transfer of Docker may be subject to certain restrictions by the United States and other governments. It is your responsibility to ensure that your use and/or transfer does not violate applicable laws. For more information, please see http://www.bis.doc.gov
Docker is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.
There are a number of projects under development that are based on Docker's core technology. These projects expand the tooling built around the Docker platform to broaden its application and utility.
If you know of another project underway that should be listed here, please help us keep this list up-to-date by submitting a PR.
- Docker Registry: Registry server for Docker (hosting/delivery of repositories and images)
- Docker Machine: Machine management for a container-centric world
- Docker Swarm: A Docker-native clustering system
- Docker Compose (formerly Fig): Define and run multi-container apps