- Docker Playground
wget -qO- https://get.docker.com/ | sh
Then add your user to the "docker" group so you can run Docker without sudo:
sudo usermod -aG docker <your-user-name>
Operating System, Container, Isolation, Control Groups, Namespaces
- Control Groups (cgroups; Job Objects on Windows): group processes and set resource limits (see the quick demo after this list)
- Namespaces: isolation
- A container gets its own set of the following namespaces:
- Process ID (pid)
- Network (net)
- Filesystem/mount (mnt)
- Inter-proc comms (ipc)
- UTS (uts)
- User (user)
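A quick way to see cgroups and namespaces in action (a sketch; the container name and limit values are my own, not from the course):

# cgroups: cap the container at half a CPU and 256 MB of RAM
docker container run -d --name limited --cpus 0.5 --memory 256m alpine sleep 1d
# watch the enforced limits and current usage
docker stats --no-stream limited
# pid namespace: inside the container, "sleep" is PID 1, not the host's init
docker exec limited ps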
CLI -> API -> daemon -> containerd -> OCI
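You can also hit the API layer in that chain directly (assuming a default Linux install with the daemon socket at /var/run/docker.sock; the versioned path prefix is optional):

# same data as "docker container ls", straight from the daemon's REST API
curl --unix-socket /var/run/docker.sock http://localhost/containers/json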
There are two types of Windows containers:
| Aspect | Native Windows containers | Hyper-V containers |
|---|---|---|
| OS tech | Namespace isolation | A lightweight VM, with far less overhead than a full-blown VM |
| Kernel/OS | Uses the host's kernel | Does not use the host's kernel; has its own OS |
| Linux | Can't run Linux containers here | Can run Linux containers, since it brings its own kernel and OS |
| Run | `docker container run ...` | `docker container run --isolation=hyperv ...` |
| Image | Container |
|---|---|
| A read-only template used to create application containers | A running image |
| Build time | Run time |
Image: a bunch of independent layers that are loosely connected through a manifest file (or config file).
To play with the manifest command, you need to enable Docker's experimental features on your machine:
- Create a file at `~/.docker/config.json`
- Paste the following into the file:
{
  "experimental": "enabled",
  "debug": true
}
- Do you want to know more about a manifest file? Run the following command:
docker manifest inspect <imagename>
- What does the `docker pull <imagename>` command do?
  - Get the manifest file
    - Fat manifest: lists the supported architectures and a manifest for each of them. Docker first gets the fat manifest and matches it against my system's architecture (see the `--platform` sketch after this list).
    - Image manifest: the actual manifest for my machine. Docker then pulls the manifest that matches my machine's architecture.
  - Pull the layers
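A related sketch (the `--platform` flag needs a reasonably recent Docker and was originally experimental): you can ask for another architecture's manifest and layers explicitly, which makes the fat-manifest lookup visible.

# pull the arm64 variant of alpine even on an amd64 host
docker image pull --platform linux/arm64 alpine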
- List images along with their content digests:
docker image ls --digests
- To see the Docker system info:
docker system info
- The file system inside a container will be the image's base-layer file system, not the host machine's file system.
- Do you want to see the layer history of a Docker image?
docker history <imagename>
- Do you want to know the details of a Docker image?
docker image inspect <imagename>
- And finally, you may want to free up space by deleting any unused images:
docker image rm <imagename>
Images live in registries. Within the registries we have repos. Within repos, we have images (tags).
registry/repo/image(tag)
Along with the many public registries other than Docker Hub, we can have our own private Docker registry.
repo:latest
A `latest`-tagged repo does not always mean it is the latest version of the repo. To make an image `latest`, we have to explicitly tag it while creating the image.
mehdihasan/pokemon
You may need to add the name of the user or the organization when dealing with a repo, because such repos are not official images.
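For example (a sketch; it assumes a local image called `pokemon` exists and that you are logged in via `docker login`), pushing under a user namespace looks like this:

# retag the local image under the user/org namespace, then push it
docker image tag pokemon mehdihasan/pokemon:1.0
docker image push mehdihasan/pokemon:1.0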
distribution hash / content hash
- Use official images
- Keep the images small
- Explicitly reference the image version, not just `latest`
Dockerfile: where developers describe their apps and how they work, and where ops can read the file to understand them.
- Instructions for building images (see the example Dockerfile after this list)
- CAPITALIZE instructions
- `<INSTRUCTION> <value>`
- FROM = base image
- Good practice to list maintainer
- RUN = execute command and create layer
- COPY = copy code into image as new layer
- Some instructions add metadata instead of layers
- ENTRYPOINT = default app for image/container
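A minimal example Dockerfile tying these instructions together (a sketch; the Node.js base image, `server.js`, and the maintainer value are my own assumptions, not from the course):

# base image
FROM node:18-alpine
# good practice: record the maintainer (adds metadata, not a layer)
LABEL maintainer="you@example.com"
WORKDIR /app
# copy code into the image as a new layer
COPY . .
# execute a command and create a layer
RUN npm install --production
# default app for the image/container
ENTRYPOINT ["node", "server.js"]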
- Create a `Dockerfile`
- Build your image:
docker image build -t <myAppNameTag> .
- Run the image:
docker image ls
docker container run -d --name <nameYourContainer> -p <host-port>:<container-port> <myAppNameTag>
Build context: the location of my code on my machine.
It is possible to build from a Git repo as well! Run the following command to build straight from a Git repo:
docker image build -t thesaurus https://github.com/mehdihasan/springboot-thesaurus-app.git
Very interesting topic! The concept is to build my expected image stage by stage. The last stage, maybe the production build, takes only the output artifacts from the earlier stages and builds the final image out of those! Do you know why they put the Dockerfile in the app directory rather than the root directory?
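A sketch of such a multi-stage Dockerfile for a Spring Boot app like the thesaurus example above (image tags and paths are my assumptions):

# stage 1: build the jar with the full Maven/JDK toolchain
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /src
COPY . .
RUN mvn package -DskipTests
# stage 2: the production image copies only the built jar and leaves the build tools behind
FROM eclipse-temurin:17-jre
COPY --from=build /src/target/*.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]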
The most atomic unit in Docker is the container; in K8s, it is the pod.
- Here `-it` stands for interactive terminal:
docker container run -it alpine sh
- Here, what are `-d` and `sleep 1d` doing? (`-d` runs the container detached in the background; `sleep 1d` keeps it alive for a day.)
docker container run -d alpine sleep 1d
- You might want to run a single command inside a container, and you are lazy enough to avoid getting inside the container with `-it`:
docker container exec <CONTAINER-ID> ls -al
docker container exec <CONTAINER-ID> cat newFile
- Do you want to remove all your containers?
docker container rm $(docker container ls -aq) -f
Linux
- systemd: `journalctl -u docker.service`
- non-systemd: try `/var/log/messages`
Windows: `~/AppData/Local/Docker`
- Containers write their logs to STDOUT and STDERR.
- It is possible to collect the Docker logs and forward them to an existing logging solution like Syslog, Gelf, or Splunk.
- We can set the default logging driver in `daemon.json` (example after this list).
- It is possible to override per container with `--log-driver` and `--log-opt`.
- Inspect logs with:
docker logs <container>
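For example (a sketch; the syslog endpoint is a made-up placeholder), the default driver is set in /etc/docker/daemon.json:

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://logs.example.com:514"
  }
}

and a single container can still be switched back to the local json-file driver:

docker container run -d --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 alpine sleep 1d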
He just showed how to create a swarm and then add some nodes. I was not sure what he meant by nodes; probably different machines on the same network.
- Check if Docker Swarm is enabled:
docker system info
- Initialize Docker Swarm with the following command:
docker swarm init
- Check the command to add a manager to the swarm (see the worker join sketch after this list):
docker swarm join-token manager
- List all Docker nodes:
docker node ls
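Joining the other machines usually looks like this (a sketch; the token and IP are placeholders that the join-token command prints for you):

# on the manager: print the join command for workers
docker swarm join-token worker
# on each other machine: paste the printed command
docker swarm join --token <worker-token> <manager-ip>:2377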
Must read item for docker networking: Docker networking Grand Design DNA
- CNM (Container Network Model)
- Libnetwork
- Drivers
You should know about CNI as well; it is what K8s uses for networking.
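To see which drivers back the networks already present on a host (the built-in bridge, host, and none networks should show up):

docker network ls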
AGENDA
We are going to create a user-defined bridge network on the Docker host.
Let's create a new network:
docker network create -d bridge --subnet 10.0.1.0/24 ps-bridge
Now I want to inspect the bridge. Let's install the testing toolkit first:
sudo apt install bridge-utils
After the package is installed, run the following commands:
brctl show
ip link show
Now, let's run 2 containers on the bridge:
docker run -dt --name c1 --network ps-bridge alpine sleep 1d
docker run -dt --name c2 --network ps-bridge alpine sleep 1d
Now, inspect the network:
docker network inspect ps-bridge
Now, if you run the following command, you will see that there are two interfaces connected to the bridge:
brctl show
Now, let's get inside one of our containers and ping the other one:
docker exec -it c1 sh
Now you can check your own IP and ping the other container:
# ip a
# ping c2
So, we have created a network bridge to which 2 containers are connected.
Now, how can anything outside of this network access any of these containers?
We need to publish a container's service on a host port.
To show a demo, let's run another container that listens on port 8080 inside the container and map that to port 5000 on the host machine:
docker run -d --name web1 \
--network ps-bridge \
-p 5000:8080 \
nigelpoulton/pluralsight-docker-ci
DONE! Now you can access the web1 container by going to http://localhost:5000
Using Docker volumes, we can take advantage of persistent data.
To know more about the command, run the following:
docker volume
To create a container with persistent data, run the following command:
docker container run -dit --name voltest \
--mount source=ubervol,target=/vol alpine:latest
You can see the created volume on the host machine in the following directory:
sudo ls -al /var/lib/docker/volumes/
Do you want to test whether a file written in the container really ends up in the host machine's location? TRY IT YOURSELF.
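One way to check (a sketch; the file name is mine, and the path assumes the default local volume driver):

# write a file through the container's /vol mount
docker exec voltest sh -c 'echo hello > /vol/test.txt'
# read it back from the host's copy of the volume
sudo cat /var/lib/docker/volumes/ubervol/_data/test.txt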