Docker Tutorial


May 25, 2018
Welcome to the Docker Tutorial!

These are the items we're looking at in this tutorial:
  1. What is docker?
  2. How to install docker
  3. Running your first docker image in a container
  4. Building your own docker image
  1. Docker is a great way to keep your services consistent across multiple machines, running just like the production environment would. It doesn't matter that you are on a Linux machine, your friend is on Windows, and the production environment runs on a Linux kernel of some sort. Docker solves this by running built images in a container environment; you can compare it to a more lightweight Hyper-V / VirtualBox, even though it's completely different under the hood. This ensures there is no difference between the environments, and therefore you don't have to worry about your application behaving differently across them: Docker makes sure it works the same everywhere.

    What's also great about Docker is how simple it is: you write a simple-syntax configuration file describing the parts that your docker container should contain. Let's say ours will contain NGINX; to create the simplest NGINX webserver, you only have to write a config file of about 4 lines.

    Load balancing can also be done with Docker, and by load balancing I'm not talking about network load, only about CPU/memory/other IO metrics. You can specify which ports your application should expose and start multiple docker containers in less than 10 seconds to balance the load; this can even be done automatically when you attach Kubernetes/Minikube, which is another tutorial.
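To make the load-balancing idea concrete, here is a hypothetical NGINX reverse-proxy snippet (the ports and upstream name are made up for illustration) that spreads requests across three containers started from the same image:

```nginx
# hypothetical: three containers from the same image were started with
# -p 8081:80, -p 8082:80 and -p 8083:80 on this host
upstream app_pool {
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
    server 127.0.0.1:8083;
}

server {
    listen 80;

    location / {
        # round-robin (the NGINX default) across the three containers
        proxy_pass http://app_pool;
    }
}
```

With this in place, killing one container only removes a third of your capacity instead of taking the whole site down.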
  2. Docker can be downloaded from the Docker website, and the installation should just be a walk in the park. When you're done installing, head over to Docker Hub and register an account; this is important if you want to push your own images and save them in the cloud.

    Now we should start Docker on our machine, so go ahead and do that. When you are done, open your terminal and run the command
    docker
    If a list of commands for helping you out is visible, you know Docker is up and running.

    Let's go ahead and log in to the Docker Hub account you made earlier on your local docker service. Run the following command:
    docker login
    And fill in the prompted credentials.
    Congratulations, setup is completed!
  3. Now we are going to run our first docker image. I've pre-set up a simple website running on NGINX for this, and you can go ahead and pull this image from Docker Hub.

    As you can see, Docker has a command for fetching images; it's called a pull. Head back to your terminal again and run the following command:
    docker pull theovster/hello-devbest
    Now you've got it. Great, let's run it!

    Run the following command in the terminal
    docker run -p 8080:80 --name devbest theovster/hello-devbest
    Your application should be starting now.

    There is an interesting piece in this command: the -p flag, which specifies the port mapping. The syntax is host_port:container_port, which means we expose port 8080 on our host and Docker routes it to port 80 inside the container, which NGINX runs on.
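As a toy illustration of how that host_port:container_port string splits (this is plain shell string handling to show the syntax, not Docker itself):

```shell
# not docker: just splitting a "-p" style mapping string to show the syntax
mapping="8080:80"
host_port="${mapping%%:*}"        # text before the colon -> port exposed on the host
container_port="${mapping##*:}"   # text after the colon  -> port inside the container
echo "host=$host_port container=$container_port"   # → host=8080 container=80
```

So a request to http://localhost:8080 hits the host side of the mapping, and Docker forwards it to port 80 inside the container.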

    You can stop this container by running either
    docker stop devbest
    or
    docker kill devbest
    The preferred way is the stop command, as you can then start the container again by using
    docker start devbest

    Congratulations! Now head over to http://localhost:8080, the port the docker image is running on, and you should see a very simple website.
  4. Building your own docker image is very easy: in your code project you just have to add a file called Dockerfile, which will contain the configuration for building your image.

    We are going to use the same configuration that my theovster/hello-devbest image was built from:
    FROM nginx
    RUN mkdir /etc/nginx/logs && touch /etc/nginx/logs/static.log
    ADD ./nginx.conf /etc/nginx/conf.d/default.conf
    ADD /build /www
    The FROM nginx specifies that we build on top of the official nginx image.
    The RUN mkdir /etc/nginx/logs && touch /etc/nginx/logs/static.log is necessary for NGINX to start: this is where NGINX will keep its logs, and it won't create the directory itself.
    The ADD ./nginx.conf /etc/nginx/conf.d/default.conf adds the nginx.conf from the root of your project to the container's configuration for running NGINX.
    The ADD /build /www copies your built website to the NGINX webroot, /www.
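As a side note, for plain local files Docker's best-practices guidance prefers COPY over ADD (ADD has extra behaviors such as unpacking local tar archives and fetching URLs). An equivalent sketch of the same Dockerfile using COPY:

```dockerfile
# same image as above, using COPY instead of ADD for plain local files
FROM nginx
RUN mkdir /etc/nginx/logs && touch /etc/nginx/logs/static.log
COPY nginx.conf /etc/nginx/conf.d/default.conf
COPY build /www
```

The resulting image is the same; COPY just makes the intent (copy files as-is) explicit.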

    This is the only thing required for running your website on an NGINX server. Below is my nginx.conf, which is a basic NGINX configuration:

    server {
        root /www;

        location / {
          autoindex on;
        }

        # feel free to change these settings as much as you like

        # cache.appcache, your document html and data
        location ~* \.(?:manifest|appcache|html?|xml|json)$ {
          expires -1;
          access_log logs/static.log;
        }

        # Feed
        location ~* \.(?:rss|atom)$ {
          expires 1h;
          add_header Cache-Control "public";
        }

        # Media: images, icons, video, audio, HTC
        location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
          expires 1M;
          access_log off;
          add_header Cache-Control "public";
        }

        # CSS and Javascript
        location ~* \.(?:css|js)$ {
          expires 1y;
          access_log off;
          add_header Cache-Control "public";
        }
    }

    Now let's build the image
    Navigate your terminal to the root of your web project, where your Dockerfile is located. Here we are going to build the image by running the following command:
    docker build -t dockerhubusername/my-project .
    Wait until it finishes building.

    When it's built, go ahead and run it using the run command I described earlier. You should now be able to navigate to your localhost on the exposed port you chose.

    Congratulations, it works!
    You may now go ahead and deploy this docker image to all your servers; no setup needed, just run it!

    Go to Docker Hub and create a repository for this image with the same name as your image: on the Docker Hub webpage, hit create and select create repository.

    To push our image to Docker Hub, we can use the following command:
    docker push dockerhubusername/my-project:latest

    On your servers/production/other environments you can now do a docker pull and run the image like nothing ever happened!
You've made it this far, great!
I really hope that you found this tutorial useful; please comment if you need more directions on how to do things. Remember, you can use Docker in far more advanced ways than this, but this is a great place to start. Leave a comment below if you need any help along the way.



May 25, 2018
compose is part of docker core and should be utilized regardless of whether it's used for one container or multiple

No it's not; it's clearly stated that Docker Compose is separate, running on Docker Engine (core). Just because it has Docker in its name doesn't make it part of the core, which is the engine. I go through the simplest way of running the Docker Engine (core). You can read all of that in the documentation; it is an additional tool for Docker.


Nov 25, 2012
No it's not; it's clearly stated that Docker Compose is separate, running on Docker Engine (core). Just because it has Docker in its name doesn't make it part of the core, which is the engine. I go through the simplest way of running the Docker Engine (core). You can read all of that in the documentation; it is an additional tool for Docker.
where does it state that docker compose is separate? it is no different than having the arguments passed in your example (below) vs a yaml file
docker run -p 8080:80 --name devbest theovster/hello-devbest
even developers at Docker say it should be used for any long-running containers, since it offers better management and flexibility. there's a reason init.d and systemd exist: it is 100x easier to configure and manage a service/application than manually passing args/params on the cli.
