Tuesday, May 15, 2018

Understanding Layered Architecture of Docker Container Images


“Building Docker Image From Scratch” showed an easy way to build a container image. Let’s go a bit deeper and understand what is happening at the low level. In Linux, everything is treated as a file: the whole operating system is nothing but a collection of files and folders.

In the previous post, we saw that the “Dockerfile” contained two steps, and each step was captured in the container’s layered filesystem. The result was a container image, which is nothing but a stack of filesystem layers.

Container images are templates from which containers are created. These images are not just one monolithic block, but are composed of many layers. The first layer in the image is also called the base layer:



Each layer maps to one command from the Dockerfile, and the files that command produces are stacked into the image. All layers of a container image are immutable, or read-only: once created, a layer cannot be changed, although the image as a whole can be deleted. If one layer needs content from another layer, it must copy that content into itself and work on the copy. Each layer therefore contains only the delta of changes relative to the layers below it. The content of each layer is mapped to a special folder on the host system, usually a subfolder of "/var/lib/docker/".

When the Docker engine creates a container from such an image, it adds a writable layer on top of the immutable, read-only layers, as shown in the image below:



By doing this, the same immutable image can be shared across many containers, each one getting only its own thin writable layer.

As I have already mentioned, the image layers are immutable, and to reuse existing files and folders Docker uses a copy-on-write strategy. With this strategy, if a layer needs a file or folder that is available in one of the lower layers, it simply uses it. If, on the other hand, a layer wants to modify, say, a file from a lower layer, it first copies that file up into itself and then modifies the copy. Below is a snapshot of the copy-on-write strategy:



As per the above image, the second layer wants to modify File 1, which lives in the base layer, so it copies File 1 up from the base layer and modifies the copy. The top layer then sees File 1 from the second layer and File 2 from the base layer.
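The copy-up behaviour can be sketched with plain directories, no Docker required. This is a toy model of the idea, not how overlay filesystems are actually implemented; the function names read_file and write_file are my own.

```shell
# Toy model of copy-on-write: "base" plays the read-only layer,
# "top" plays the writable layer stacked above it.
mkdir -p base top
echo "original" > base/file1

# Reading: serve the file from the top layer if a copy exists there,
# otherwise fall through to the base layer.
read_file() {
  if [ -f "top/$1" ]; then cat "top/$1"; else cat "base/$1"; fi
}

# Writing: copy the file up into the writable layer first (copy-on-write),
# then modify only the copy; the base layer is never touched.
write_file() {
  [ -f "top/$1" ] || cp "base/$1" "top/$1"
  printf '%s\n' "$2" > "top/$1"
}

read_file file1              # falls through to the base layer
write_file file1 "changed"
read_file file1              # now served from the top layer
cat base/file1               # base layer still holds "original"
```

Deleting base/file1 afterwards would not affect the top layer, which is exactly why lower image layers can be shared safely between containers.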


Sunday, May 13, 2018

Fun with cURL


I have been working on Linux-based machines since I was 15 years old. Yeah, I once wiped Windows 98 from my own desktop and tried to install Red Hat Linux (back in the late 90s). Installing Linux was a bit of a challenge at that time, and we used to hold competitions among school friends to see who could install it.

Well, Linux has come a long way, and it can now be installed very easily and in many forms: VM, Docker container, bare-metal install, cloud, and so on. Let’s leave installation aside and move on to the fun part. As we grew with Linux, we started learning command-line tools: ‘pwd’ prints the present working directory, ‘cat’ prints the contents of a file, ‘df -h’ shows storage details. Combined together, or written into a script, they can do wonders.

One such command is ‘curl (cURL)’, which can be read as ‘Client URL’. cURL is essentially a tool for transferring data over various protocols such as HTTP, HTTPS, FTP, FTPS, SCP, SFTP, TFTP, LDAP, LDAPS, DICT, TELNET, FILE, IMAP, POP3, SMTP and RTSP.

Let’s see what fun cURL brings to us.

1) Get the Weather Report

If we ever need to check the weather from a terminal window, cURL comes in handy. Let’s check Singapore’s weather on the terminal.

'curl wttr.in/singapore'
The command is 'curl wttr.in/location'
Replace location with the city name of your choice. cURL fetches the forecast from its web frontend 'wttr.in'; all it needs is the location for which you want the forecast.

Another fun feature is to check moon phases 'curl wttr.in/Moon'

2) Download files

Usually we download files using a browser, but what if we don’t have access to one and still need to download a file? Although cURL isn’t the popular choice for simultaneous downloads (wget is usually recommended there), we can still download files with it by combining its powerful options (switches). All we need is a direct link to the file. In this example, we will download an Ubuntu cloud image, whose direct link is (https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img)

       

[root@seed-srv01 ~]# curl -O -C - https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  277M  100  277M    0     0  3775k      0  0:01:15  0:01:15 --:--:-- 4048k
[root@seed-srv01 ~]#

The uppercase -O switch makes cURL save the file under the same filename as in the link. See below.

       

[root@seed-srv01 ~]# ll
total 301380
-rw-------. 1 root root      1682 May  9 17:22 anaconda-ks.cfg
-rw-r--r--. 1 root root   1684382 May 11 14:53 junos-openconfig-x86-32-0.0.0.9.tgz
-rw-r--r--. 1 root root 291438592 May 13 21:08 xenial-server-cloudimg-amd64-disk1.img
[root@seed-srv01 ~]#
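A quick offline way to see which name -O will pick: for a simple URL like this one (no query string), it is the last path segment, the same thing basename prints.

```shell
# -O saves under the last segment of the URL path; basename shows it.
url="https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img"
basename "$url"
```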


If we use the lowercase ‘o’ switch, we can give the downloaded file a custom filename. See below.

       

[root@seed-srv01 ~]# curl -o xenial.img -C - https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  277M  100  277M    0     0   161k      0  0:29:24  0:29:24 --:--:--  172k
[root@seed-srv01 ~]#

3) Check for a website's Availability

Imagine a website you need to visit suddenly stops working. What would you do? You might google it and keep trying again. Or you could just fire up the terminal and run cURL.

'curl -Is https://www.website.com -L'

The uppercase I switch (-I) fetches only the HTTP headers of a web page, and the -L (location) option makes cURL follow redirections. This means you don’t have to type the full final URL; just give the site’s short address and cURL will take care of the rest thanks to -L. If there are any redirections, each will be displayed with its own HTTP status.

       

[root@seed-srv01 ~]# curl -Is http://www.catchoftheday.com -L
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Sun, 13 May 2018 15:55:52 GMT
Content-Type: text/html; charset=iso-8859-1
Connection: keep-alive
Location: https://www.catchoftheday.com.au/
X-Powered-By: PleskLin

HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://www.catch.com.au/
Connection: keep-alive

HTTP/1.1 200 OK
Date: Sun, 13 May 2018 15:55:55 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Server: Apache
X-Frame-Options: SAMEORIGIN
Set-Cookie: PHPSESSID=nh63e07ko1k87rvmbn838n3456; expires=Sun, 27-May-2018 15:55:55 GMT; Max-Age=1209600; path=/; domain=www.catch.com.au; HttpOnly
X-Frame-Options: SAMEORIGIN
Set-Cookie: cgu=a262e423f46b3f1d40e83fe0b37d267fb4c7598a; expires=Mon, 13-May-2019 15:55:55 GMT; Max-Age=31536000; path=/; HttpOnly
Set-Cookie: device_view=full; expires=Wed, 13-Jun-2018 15:55:55 GMT; Max-Age=2678400; path=/; HttpOnly
Set-Cookie: ccx=1%3D0; path=/; HttpOnly
Vary: User-Agent
Cache-Control: no-cache

If we see '200 OK', everything is fine; '301 Moved Permanently' means the site was redirected to a different URL. In the above example, it was redirected twice.
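This check is easy to script. Here is a small sketch: cURL’s --write-out option can print just the final status code, and a helper classifies it (the function name check_status is my own, not a cURL feature).

```shell
# Classify an HTTP status code: 2xx/3xx means the site answered sanely.
check_status() {
  case "$1" in
    2*|3*) echo "UP" ;;
    *)     echo "DOWN" ;;
  esac
}

# With network access, grab the final code after redirects like this:
#   code=$(curl -o /dev/null -s -w '%{http_code}' -L https://www.catch.com.au/)
#   check_status "$code"

check_status 200   # UP
check_status 301   # UP
check_status 503   # DOWN
```

Dropped into a cron job, this gives a poor man’s uptime monitor in a few lines.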

4) Expand Shortened URLs

Sometimes we receive a shortened URL, and just by looking at it we cannot tell which actual URL it refers to. Well, cURL has you covered here. Try the below.

       
[root@seed-srv01 ~]# curl -sIL https://goo.gl/fb/wouqaw
HTTP/1.1 301 Moved Permanently
Strict-Transport-Security: max-age=63072000; includeSubDomains; preload
Content-Type: text/html; charset=UTF-8
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Date: Sun, 13 May 2018 16:16:56 GMT
Location: http://feeds.feedburner.com/~r/Mplsvpn/~3/atuDdq3nvBM/building-docker-image-from-scratch.html?utm_source=feedburner&utm_medium=twitter&utm_campaign=shivlu
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
Server: GSE
Alt-Svc: hq=":443"; ma=2592000; quic=51303433; quic=51303432; quic=51303431; quic=51303339; quic=51303335,quic=":443"; ma=2592000; v="43,42,41,39,35"
Transfer-Encoding: chunked
Accept-Ranges: none
Vary: Accept-Encoding

HTTP/1.1 301 Moved Permanently
Location: http://www.mplsvpn.info/2018/05/building-docker-image-from-scratch.html?utm_source=feedburner&utm_medium=twitter&utm_campaign=Feed%3A+Mplsvpn+%28MPLSVPN%29
Content-Type: text/html; charset=UTF-8
Date: Sun, 13 May 2018 16:16:57 GMT
Expires: Sun, 13 May 2018 16:16:57 GMT
Cache-Control: private, max-age=0
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Server: GSE
Transfer-Encoding: chunked
Accept-Ranges: none
Vary: Accept-Encoding

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Expires: Sun, 13 May 2018 16:16:57 GMT
Date: Sun, 13 May 2018 16:16:57 GMT
Cache-Control: private, max-age=0
Last-Modified: Sun, 13 May 2018 11:53:05 GMT
ETag: "dfad59ae-e2f2-4e54-9adc-f3ef2c46bcac"
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Content-Length: 0
Server: GSE

cURL can also download files from a shortened URL; just make sure the shortened URL actually points to a file.

'curl -L -o file.pdf https://goo.gl/abcdef'
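If all we want is the final destination, cURL’s --write-out variable url_effective prints the URL we ended up at after all redirects. The live command needs network access, so it is commented out below; the awk line shows the same extraction offline from a captured header (the example.com URL is made up for illustration).

```shell
# With network access, print only the final URL after redirects:
#   curl -sIL -o /dev/null -w '%{url_effective}\n' https://goo.gl/fb/wouqaw
#
# Offline: pull the Location header out of a captured response.
headers='HTTP/1.1 301 Moved Permanently
Location: https://example.com/real-page
Content-Type: text/html'

printf '%s\n' "$headers" | awk 'tolower($1) == "location:" { print $2 }'
```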

5) Find your External IP address

Firing ‘ifconfig’ on the terminal displays our local IP address, but sometimes we need to know our external IP address. There are many services for this that work with cURL.

       
curl ipinfo.io
curl -s http://whatismyip.akamai.com
curl ifconfig.me


Any of the above commands will print our own external IP address. If we need more info about some other IP address, we can use the following.

       
curl ipinfo.io/ipaddress 

[root@seed-srv01 ~]# curl ipinfo.io/1.1.1.1
{
  "ip": "1.1.1.1",
  "hostname": "1dot1dot1dot1.cloudflare-dns.com",
  "city": "Research",
  "region": "Victoria",
  "country": "AU",
  "loc": "-37.7000,145.1830",
  "postal": "3095",
  "org": "AS13335 Cloudflare, Inc."
}[root@seed-srv01 ~]#
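Two follow-ups worth knowing: ipinfo.io could also return a single field when the field name is appended to the path (hedged: this worked at the time of writing), and if you already have the JSON, a field can be pulled out with plain sed, no extra tools needed.

```shell
# With network access, one field can be requested directly:
#   curl ipinfo.io/1.1.1.1/country
#
# Offline: extract a field from captured JSON (shortened from the
# response shown above) with plain sed.
resp='{"ip": "1.1.1.1", "city": "Research", "country": "AU"}'
printf '%s\n' "$resp" | sed -n 's/.*"country": "\([^"]*\)".*/\1/p'
```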

6) Check Cryptocurrency rates

Many people invest in cryptocurrency, or have at least thought about investing in it. Well, why not check the rates while you are on the terminal?

The command is 'curl rate.sx'.
If we need to know about a specific currency, we run it like ‘curl rate.sx/btc’; this example shows the rates and trend for Bitcoin.

Hope this was useful and some fun.


Building Docker Image From Scratch


Beginners Guide to Dockers Part 1 covered the architecture of Docker. Along with that, so far, I have covered the different types of Docker installation; how to download, install and delete a Docker image; and Docker Beginners Guide - Troubleshooting.

This post focuses on creating a Docker image, which can then be used anywhere in your project based on your requirements.

Let's create a new folder in a Windows directory called create-image and change the current directory to create-image.
        
PS C:\Lab\create-image>


Now create a new file called Dockerfile.txt in the current directory with below mentioned commands.
        
PS C:\Lab\create-image> cat .\Dockerfile.txt
FROM centos:7
RUN yum install -y wget
PS C:\Lab\create-image>


Let's create a new image by using Dockerfile.txt created in the previous step.
        
PS C:\Lab\create-image> docker image build -t my-new-image -f ./Dockerfile.txt .


Below is the output after running the above command.
        
PS C:\Lab\create-image> docker image build -t my-new-image -f ./Dockerfile.txt .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM centos:7
7: Pulling from library/centos
Digest: sha256:989b936d56b1ace20ddf855a301741e52abca38286382cba7f44443210e96d16
Status: Downloaded newer image for centos:7
 ---> e934aafc2206
Step 2/2 : RUN yum install -y wget
 ---> Running in cfc91e766858
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
 * base: ftp.cuhk.edu.hk
 * extras: ftp.cuhk.edu.hk
 * updates: ftp.cuhk.edu.hk
Resolving Dependencies
--> Running transaction check
---> Package wget.x86_64 0:1.14-15.el7_4.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch             Version                   Repository      Size
================================================================================
Installing:
 wget           x86_64           1.14-15.el7_4.1           base           547 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 547 k
Installed size: 2.0 M
Downloading packages:
Public key for wget-1.14-15.el7_4.1.x86_64.rpm is not installed
warning: /var/cache/yum/x86_64/7/base/packages/wget-1.14-15.el7_4.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) "
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-4.1708.el7.centos.x86_64 (@CentOS)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : wget-1.14-15.el7_4.1.x86_64                                  1/1
install-info: No such file or directory for /usr/share/info/wget.info.gz
  Verifying  : wget-1.14-15.el7_4.1.x86_64                                  1/1

Installed:
  wget.x86_64 0:1.14-15.el7_4.1

Complete!
Removing intermediate container cfc91e766858
 ---> 4a991aace711
Successfully built 4a991aace711
Successfully tagged my-new-image:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.


If you remember, the Dockerfile created in the previous step has two steps, and the same two steps can be found in the above output. Let's do the postmortem of that output. The first thing the builder does is package the files in the current build context and send the resulting .tar file to the Docker daemon.
        
Sending build context to Docker daemon  2.048kB

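Anything in the build directory goes into that tar, so large or sensitive files inflate every build. A .dockerignore file in the same directory keeps them out of the context; the patterns below are just hypothetical examples.

```
# .dockerignore -- patterns excluded from the build context
*.log
.git
tmp/
```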

Next comes the output for Step 1/2: Docker pulls centos:7 from the Docker registry if it is not available locally.
        
Step 1/2 : FROM centos:7
7: Pulling from library/centos
Status: Downloaded newer image for centos:7


Below is the shortened output of Step 2/2. It runs the "yum" command mentioned in the Dockerfile and downloads the wget package. Docker then removes the intermediate container, and at the end you can see the ID of the new image: "4a991aace711".
        

Step 2/2 : RUN yum install -y wget
 ---> Running in cfc91e766858
Loaded plugins: fastestmirror, ovl
Determining fastest mirrors
 * base: ftp.cuhk.edu.hk
 * extras: ftp.cuhk.edu.hk
 * updates: ftp.cuhk.edu.hk
Resolving Dependencies
--> Running transaction check
---> Package wget.x86_64 0:1.14-15.el7_4.1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch             Version                   Repository      Size
================================================================================
Installing:
 wget           x86_64           1.14-15.el7_4.1           base           547 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 547 k
Installed size: 2.0 M
Downloading packages:
Running transaction
  Installing : wget-1.14-15.el7_4.1.x86_64                                  1/1
install-info: No such file or directory for /usr/share/info/wget.info.gz
  Verifying  : wget-1.14-15.el7_4.1.x86_64                                  1/1

Installed:
  wget.x86_64 0:1.14-15.el7_4.1

Complete!
Removing intermediate container cfc91e766858
 ---> 4a991aace711
Successfully built 4a991aace711
Successfully tagged my-new-image:latest


Finally you can check your image by running "docker images" command.
        
PS C:\Lab\create-image> docker images
REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
my-new-image               latest              4a991aace711        5 minutes ago       263MB
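To prove the image really contains wget, run a throwaway container from it; that needs a running Docker daemon, so those commands are commented out below. Offline, the IMAGE ID column of a captured listing can be pulled out with awk.

```shell
# With a Docker daemon available:
#   docker run --rm my-new-image wget --version   # wget is baked into the image
#   docker image history my-new-image             # one line per layer
#
# Offline: grab the IMAGE ID for a repository from a captured listing
# (text copied from the "docker images" output above).
listing='REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
my-new-image               latest              4a991aace711        5 minutes ago       263MB'

printf '%s\n' "$listing" | awk '$1 == "my-new-image" { print $3 }'
```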


