- A way to package an application with all the necessary dependencies and configuration
- Portable artifact, easily shared and moved around between dev teams
- Makes dev and deployment more efficient
- Container repository
- Private repository
- Public repository (hub.docker.com)
- Install on each operating system one by one
- Installation process is different
- Many steps
- Own isolated environment
- Packaged all needed config
- One command to install the app
- Run same app with 2 different versions
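For example, a minimal sketch (the redis tags are assumptions; the -d detached flag is covered further down):

```sh
# two versions of the same app run side by side, each in its own isolated environment
docker run -d redis:5.0
docker run -d redis:6.0
docker ps   # both containers show up as running
```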
- Produce artifacts
- Configuration on server is needed
- Dependency version conflicts
- Misunderstandings between dev and operations teams
- Dev and Operations work together to package application
- No environmental config needed on server - except docker runtime on server to run containers
- Layers of stacked images (mostly a Linux base image at the bottom, e.g. alpine:3.10)
- Application image on top
- Public repo (without login)
# pulls the image and runs it (you can use this to run multiple versions on the same machine)
docker run postgres:9.6
# see all running containers and their images
docker ps
- Image: the actual package; the artifact that can be moved around
- Container: running mode; actually starts the application and creates the container environment
- Docker operates on the OS level
- An OS has a kernel layer with an applications layer on top
- Docker virtualizes only the applications layer (images are much smaller)
- Faster boot-up speed
- A virtual machine virtualizes the complete operating system, kernel included (GBs large)
- Docker Toolbox (legacy option for older systems that cannot run Docker natively)
- Different level of abstractions
Difference between image and container
- Container is the running environment for image
- Port bound (to talk to the application inside)
- virtual file system
All artifacts on Docker Hub are images
docker pull [image] # pull image from hub
docker run [image] # will create a new container
docker ps # list running containers
docker run -d [image] # detached mode (to reuse same terminal)
docker stop [ID_OF_CONTAINER] # stop container
docker start [ID_OF_CONTAINER] # start container (retains all attributes)
docker ps -a # list all containers (running and stopped)
docker images # list all images
## Debugging
docker logs [CONTAINER_ID|NAME] # get logs
docker run -d -p 6000:6379 --name [CONTAINER_NAME] [IMAGE] # run with a custom container name
docker logs [CONTAINER_ID] | tail # get latest
docker logs [CONTAINER_ID] -f # stream logs
docker exec -it [CONTAINER_ID|NAME] /bin/bash # open an interactive terminal inside the container (use /bin/sh if bash is not available)
exit # exit terminal
- Multiple containers running on host machine
- Create a binding between a host (laptop) port and a docker container port
docker run -d -p 6000:6379 [IMAGE] # host port:container port
-d # detached mode: Docker starts your container the same as before but detaches from it and returns you to the terminal prompt, so you can use the terminal again
Develop -> commit to Git -> Jenkins CI -> Artifact -> Build and create docker image -> Pushed to private docker repo
Dev server pulls both images (the app image and the other images it uses, e.g. the database)
- Pull images from docker hub
- Docker network - isolated network where containers are running
- App will connect to this network
docker network ls # shows all network available
docker network create [NETWORK-NAME] # creates new network
# -e flags set environment variables inside the container
docker run -d \
  -p [HOST_PORT]:[CONTAINER_PORT] \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  --name mongo-db \
  --net [NETWORK-NAME] \
  [IMAGE]
- Use a mongo client to connect to the db (use the host port)
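For example, a sketch using mongosh (assumes the host port from the run command above is published as 27017 and the admin/password credentials set via -e):

```sh
# connect from the host machine through the published port
mongosh "mongodb://admin:password@localhost:27017"
```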
- Maps docker run commands into a file
- A structured way to contain common docker commands
version: '3'
services:
  [CONTAINER_NAME]:
    image: [IMAGE_NAME]
    ports:
      - 27017:27017
    environment:
      - ENV_VARIABLES_HERE
PS: Docker Compose takes care of creating a common network
docker-compose -f [FILE_NAME] up # start containers and creates network
PS: Keep in mind there is no data persistence in containers, so once you restart a container everything is lost. Volumes to the rescue (used for data persistence)
docker-compose -f [FILE_NAME] down # stops all services and network is gone
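A sketch of two services sharing the compose-created network; the mongo-express image and its settings are assumptions beyond the notes above. Containers reach each other by service name:

```yaml
version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_SERVER=mongodb   # the service name doubles as the hostname on the shared network
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
```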
- Blueprint for building docker images
- To deploy, the app needs to be packaged into its own docker image
- build docker image and deploy to env
See the sample Dockerfile below
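A minimal sketch of such a Dockerfile (the Node base image, app directory, and entry file are assumptions):

```dockerfile
# every Dockerfile starts from a base image
FROM node:13-alpine
# environment variables (often better set in docker-compose instead)
ENV MONGO_INITDB_ROOT_USERNAME=admin
# RUN executes commands inside the container
RUN mkdir -p /home/app
# COPY executes on the host, copying files into the image
COPY . /home/app
# entry point command when the container starts
CMD ["node", "/home/app/server.js"]
```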
docker build -t my-app:1.0 . # -t is the tag; the 2nd parameter is the location of the Dockerfile / build context (usually the current directory)
Jenkins builds a docker image based on the Dockerfile
Whenever the Dockerfile is adjusted, we must rebuild the image
docker rm [CONTAINER_ID] # delete container
docker rmi [IMAGE_ID] # delete image
env # list environment variables inside the interactive terminal (when using the exec command)
- Go to AWS and find the service named ECR (Elastic Container Registry)
- Create a repository per image (this per-image setup is specific to AWS ECR)
You need AWS CLI and credentials
docker login # login
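For ECR the login typically goes through the AWS CLI; a sketch, with region and account ID as placeholders:

```sh
# fetch a temporary registry password from AWS and pipe it into docker login
aws ecr get-login-password --region [REGION] | \
  docker login --username AWS --password-stdin [AWS_ACCOUNT_ID].dkr.ecr.[REGION].amazonaws.com
```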
Full image name: registryDomain/imageName:tag; in Docker Hub you can use a shorthand
docker pull mongo:4.2 # is shorthand for
docker pull docker.io/library/mongo:4.2
in AWS ECR: `docker pull [registryName]/my-app:1.0`
docker tag my-app:1.0 [registryName]/my-app:1.0
- takes the image and creates an identical copy under a different repository name
docker build -t my-app:1.1 .
. is the path to the Dockerfile (build context)
docker images # check the images
docker tag my-app:1.1 [registryName]/my-app:1.1
docker push [registryName]/my-app:1.1
- push to repo
NOTE: One repository with different image versions
- Need all containers
- used for data persistence
- Plug the host's physical file system into the container (mounted)
docker run -v [HOST_DIRECTORY]:[CONTAINER_DIRECTORY] # host volumes
docker run -v [CONTAINER_DIRECTORY] # anonymous volumes
docker run -v name:[CONTAINER_DIRECTORY] # named volumes (preferred use)
(How to add in docker-compose) check the docker-compose.yml file; a sketch follows below
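A sketch of a named volume in docker-compose (the mongodb service is an assumption; /data/db is Mongo's default data directory):

```yaml
version: '3'
services:
  mongodb:
    image: mongo
    volumes:
      - mongo-data:/data/db   # named volume : container directory
volumes:
  mongo-data:                 # named volumes are declared at the top level so services can share them
```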
- Windows C:\ProgramData\docker\volumes
- Linux /var/lib/docker/volumes
- Mac /var/lib/docker/volumes
Docker creates a Linux virtual machine on Mac (the volumes path above lives inside that VM)
screen ~/Library/Containers/com.docker.docker/Data/com.docker/driver.amd64-linux/tty
Ctrl a + k = kill screen session