LXC (Linux Containers) is an OS-level virtualization technology that allows multiple isolated Linux virtual environments (VEs) to be created and run on a single control host. These containers can be used either to sandbox specific applications or to emulate an entirely new host. LXC relies on two Linux kernel features: cgroups, introduced in kernel 2.6.24, which limit and account for resource usage (CPU, memory, block I/O) per group of processes, and namespaces, which isolate what those processes can see (process IDs, network interfaces, mount points, and so on). Note that a VE is distinct from a virtual machine (VM), as we will see below.
Docker started as a side project at dotCloud (the company later renamed itself Docker, Inc.) and was only open-sourced in 2013. It is really an extension of LXC's capabilities, achieved through a high-level API that provides a lightweight virtualization solution for running processes in isolation. Docker is developed in the Go language and, in its early versions, was built on LXC, cgroups, and the Linux kernel itself. Like LXC, a Docker container does not include a separate operating system; instead, it relies on the kernel functionality provided by the underlying host. Docker thus acts as a portable container engine, packaging an application and all its dependencies into a virtual container that can run on any Linux server.
How does this differ from VMs such as Xen, KVM, or VMware? Those solutions need a hypervisor and run a full guest operating system, whereas LXC uses no hypervisor and shares the host kernel, which greatly reduces its footprint. Deployment time for an LXC container is also far shorter than for a VM. We could go on and on, but as an administrator you now know that you need fewer resources and less time to run an LXC container. That is good news, and after reading the explanation of Docker above, a sysadmin should be at least as interested in learning plain containers as in learning Docker 👍
# docker run bash — this means you will run one application per container. Normally a Docker container will not have a full init/upstart system available.
- Install Required Packages:
Package name and usage:
- lxc: the main Linux Containers package
- debootstrap: necessary in order to build Debian-based containers
- libvirt: provides basic network management, such as bridge, NAT, and DHCP
- lxc-templates: template scripts to create containers for Ubuntu, Oracle Linux, etc.
List of templates:
[root@gcpadman-laptop ~]# ls -lH /usr/share/lxc/templates/*
-rwxr-xr-x 1 root root 12973 Jun 30 01:11 /usr/share/lxc/templates/lxc-alpine
-rwxr-xr-x 1 root root 13713 Jun 30 01:11 /usr/share/lxc/templates/lxc-altlinux
-rwxr-xr-x 1 root root 11090 Jun 30 01:11 /usr/share/lxc/templates/lxc-archlinux
-rwxr-xr-x 1 root root 12159 Jun 30 01:11 /usr/share/lxc/templates/lxc-busybox
-rwxr-xr-x 1 root root 29503 Jun 30 01:11 /usr/share/lxc/templates/lxc-centos
-rwxr-xr-x 1 root root 10374 Jun 30 01:11 /usr/share/lxc/templates/lxc-cirros
-rwxr-xr-x 1 root root 19739 Jun 30 01:11 /usr/share/lxc/templates/lxc-debian
-rwxr-xr-x 1 root root 17890 Jun 30 01:11 /usr/share/lxc/templates/lxc-download
-rwxr-xr-x 1 root root 49600 Jun 30 01:11 /usr/share/lxc/templates/lxc-fedora
-rwxr-xr-x 1 root root 28384 Jun 30 01:11 /usr/share/lxc/templates/lxc-gentoo
-rwxr-xr-x 1 root root 13868 Jun 30 01:11 /usr/share/lxc/templates/lxc-openmandriva
-rwxr-xr-x 1 root root 15932 Jun 30 01:11 /usr/share/lxc/templates/lxc-opensuse
-rwxr-xr-x 1 root root 41992 Jun 30 01:11 /usr/share/lxc/templates/lxc-oracle
-rwxr-xr-x 1 root root 11570 Jun 30 01:11 /usr/share/lxc/templates/lxc-plamo
-rwxr-xr-x 1 root root 19250 Jun 30 01:11 /usr/share/lxc/templates/lxc-slackware
-rwxr-xr-x 1 root root 26862 Jun 30 01:11 /usr/share/lxc/templates/lxc-sparclinux
-rwxr-xr-x 1 root root 6862 Jun 30 01:11 /usr/share/lxc/templates/lxc-sshd
-rwxr-xr-x 1 root root 25602 Jun 30 01:11 /usr/share/lxc/templates/lxc-ubuntu
-rwxr-xr-x 1 root root 11439 Jun 30 01:11 /usr/share/lxc/templates/lxc-ubuntu-cloud
dnf install lxc lxc-templates lxc-extra debootstrap libvirt
##Start “libvirtd” service:
systemctl start libvirtd
systemctl enable libvirtd
systemctl list-unit-files | grep libvirtd
##Configure network bridge:
Edit the file /etc/lxc/default.conf and change the parameter 'lxc.network.link' from its default value 'lxcbr0' to 'virbr0' (the bridge managed by libvirt):
[root@gcpadman-laptop ~]# cat /etc/lxc/default.conf
lxc.network.type = veth
lxc.network.link = virbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
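The same edit can be scripted. The sketch below performs the substitution on a temporary copy of the file, so it is safe to run anywhere; point `conf` at the real /etc/lxc/default.conf (after backing it up) to apply it for real:

```shell
# Work on a temporary stand-in for /etc/lxc/default.conf.
conf=$(mktemp)
printf 'lxc.network.type = veth\nlxc.network.link = lxcbr0\nlxc.network.flags = up\n' > "$conf"

# Swap the default lxcbr0 bridge for libvirt's virbr0.
sed -i 's/^lxc.network.link = lxcbr0$/lxc.network.link = virbr0/' "$conf"

link_line=$(grep '^lxc.network.link' "$conf")
echo "$link_line"   # prints: lxc.network.link = virbr0
rm -f "$conf"
```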
##Creating container
lxc-create -n <container_name> -t <container_template> <a release can be specified>
[root@gcpadman-laptop ~]# lxc-create -n CN01 -t ubuntu
Checking cache download in /var/cache/lxc/precise/rootfs-amd64 ...
Copy /var/cache/lxc/precise/rootfs-amd64 to /var/lib/lxc/CN01/rootfs ...
Copying rootfs to /var/lib/lxc/CN01/rootfs ...
Generating locales...
en_US.UTF-8... up-to-date
Generation complete.
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Timezone in container is not configured. Adjust it manually.
##
# The default user is 'ubuntu' with password 'ubuntu'!
# Use the 'sudo' command to run tasks as root in the container.
##
##Container Administration
-
Listing all containers
[root@gcpadman-laptop gcpadman]# lxc-ls -f
NAME        STATE    AUTOSTART GROUPS IPV4                              IPV6
CN01        STOPPED  0         -      -                                 -
guru        RUNNING  0         -      192.168.122.220, 192.168.122.224  -
guru-clone  STOPPED  0         -      -                                 -
guru01      RUNNING  0         -      192.168.122.2                     -
-
Stopping Container
lxc-stop -n <container name>

[root@gcpadman-laptop gcpadman]# lxc-stop -n guru01
[root@gcpadman-laptop gcpadman]# lxc-ls -f
NAME        STATE    AUTOSTART GROUPS IPV4                              IPV6
CN01        STOPPED  0         -      -                                 -
guru        RUNNING  0         -      10.39.48.168, 192.168.122.224     -
guru-clone  STOPPED  0         -      -                                 -
guru01      STOPPED  0         -      -                                 -      <<< This container is stopped
-
Starting and Connecting to container console
-
Start Container
lxc-start -n <container name>

[root@gcpadman-laptop gcpadman]# lxc-start -n CN01
[root@gcpadman-laptop gcpadman]# lxc-ls -f
NAME        STATE    AUTOSTART GROUPS IPV4                              IPV6
CN01        RUNNING  0         -      192.168.122.65                    -
guru        RUNNING  0         -      192.168.122.220, 192.168.122.224  -
guru-clone  STOPPED  0         -      -                                 -
guru01      STOPPED  0         -      -                                 -
-
Connect to Console
lxc-console -n <container name>

[root@gcpadman-laptop gcpadman]# lxc-console -n CN01
Connected to tty 1
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself

Ubuntu 12.04.5 LTS CN01 tty1

CN01 login:
Note: Use “Ctrl + a” followed by “q” to exit console
-
Query information about a container
lxc-info -n <container name>

[root@gcpadman-laptop gcpadman]# lxc-info -n CN01
Name:           CN01
State:          RUNNING
PID:            9969
IP:             192.168.122.65
CPU use:        0.96 seconds
BlkIO use:      168.00 KiB
Memory use:     6.88 MiB
KMem use:       2.78 MiB
Link:           vethLRS3UE
 TX bytes:      4.02 KiB
 RX bytes:      65.04 KiB
 Total bytes:   69.05 KiB
Apart from this, you can try the query options/switches below:
Options :
-n, --name=NAME NAME of the container
-c, --config=KEY show configuration variable KEY from a running container
-i, --ips shows the IP addresses
-p, --pid shows the process id of the init container
-S, --stats shows usage stats
-H, --no-humanize shows stats as raw numbers, not humanized
-s, --state shows the state of the container
-
In the Linux/Unix world, "top" is a very useful tool for a lazy admin, and yes, containers have a top-like command to monitor per-container resource utilization.
[root@gcpadman-laptop gcpadman]# lxc-top
Container       CPU      CPU      CPU      BlkIO      Mem       KMem
Name            Used     Sys      User     Total      Used      Used
CN01            0.98     0.64     0.29     168.00 KB  7.03 MB   2.86 MB
guru            6.97     2.93     3.35     27.16 MB   44.71 MB  4.37 MB
TOTAL 2 of 2    7.96     3.57     3.64     27.32 MB   51.74 MB  7.23 MB
-
Installing package in the Ubuntu container
root@guru:~# apt-get install screen
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  byobu
The following NEW packages will be installed:
  screen
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 611 kB of archives.
After this operation, 1,077 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ precise/main screen amd64 4.0.3-14ubuntu8 [611 kB]
Fetched 611 kB in 9s (63.0 kB/s)
Selecting previously unselected package screen.
(Reading database ... 12702 files and directories currently installed.)
Unpacking screen (from .../screen_4.0.3-14ubuntu8_amd64.deb) ...
Processing triggers for ureadahead ...
Setting up screen (4.0.3-14ubuntu8) ...
-
Cloning Container
lxc-copy creates copies of existing containers. Copies can be complete clones of the original container. In this case, the whole root filesystem of the container is simply copied to the new container. Or they can be snapshots, i.e. small copy-on-write copies of the original container. In this case, the specified backing storage for the copy must support snapshots. This currently includes aufs, btrfs, lvm (lvm devices do not support snapshots of snapshots.), overlay, and zfs.
The container must be stopped before running lxc-copy, or you will get an error; check the example below:
[root@gcpadman-laptop gcpadman]# lxc-copy -n CN01 -N CN02
lxc-copy: lxccontainer.c: do_lxcapi_clone: 3056 error: Original container (CN01) is running
clone failed
Stopping Container "CN01"
[root@gcpadman-laptop gcpadman]# lxc-stop -n CN01
Cloning/Copying CN01
[root@gcpadman-laptop gcpadman]# lxc-copy -n CN01 -N CN02
Let us check all containers
[root@gcpadman-laptop gcpadman]# lxc-ls -f
NAME STATE AUTOSTART GROUPS IPV4 IPV6
CN01 STOPPED 0 - - -
CN02 STOPPED 0 - - - <<<< Here we have our new clone of CN01
guru RUNNING 0 - 192.168.122.220, 192.168.122.224 -
guru-clone STOPPED 0 - - -
guru01 STOPPED 0 - - -
Note: Please read the man pages; these commands are all documented there with detailed explanations.
##Cgroup
-
Now, let us learn how to assign resources to an LXC container, much as we do in VirtualBox, Xen, and KVM.
In the case of LXC, this is managed using cgroups.
Cgroups can be used to limit a container's memory, CPU usage (in terms of cores and shares), and swap usage.
Let us check the memory usage of our container CN01. To test this, I am using the stress tool to generate CPU and memory load inside CN01:
-
Generating load in the CN01 using stress
root@CN01:~# stress --vm 2 --vm-bytes 1G --timeout 1000s
stress: info: [1755] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
-
Check the memory usage using the lxc-info command as below:
[root@gcpadman-laptop ~]# lxc-info -n CN01 -S
CPU use:        1133.56 seconds
BlkIO use:      16.61 GiB
Memory use:     1.34 GiB   <<< our stress program is now using more than 1 GB of memory
KMem use:       3.99 MiB
Link:           vethGX80CP
 TX bytes:      125.60 KiB
 RX bytes:      254.10 KiB
 Total bytes:   379.70 KiB
-
Now let us limit the memory of container CN01 to 500MB
[root@gcpadman-laptop ~]# lxc-cgroup -n CN01 memory.limit_in_bytes 500M
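The `500M` suffix is convenience shorthand; the kernel stores the limit in `memory.limit_in_bytes` as raw bytes. A quick check of what 500M works out to (plain shell arithmetic, nothing container-specific):

```shell
# 500M, as accepted by memory.limit_in_bytes, is 500 * 1024 * 1024 bytes.
limit_mb=500
limit_bytes=$((limit_mb * 1024 * 1024))
echo "$limit_bytes"   # prints: 524288000
```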
-
Note: I was not able to lower the limit while my application was already using more than 1GB of memory; I had to stop my stress program first. I was getting the error below:
[root@gcpadman-laptop ~]# lxc-cgroup -n CN01 memory.limit_in_bytes 500M
lxc-cgroup: lxc_cgroup.c: main: 103 failed to assign '500M' value to 'memory.limit_in_bytes' for 'CN01'
Now I have started "stress" in CN01 again; let us check the memory usage:
[root@gcpadman-laptop ~]# lxc-info -n CN01 -S
CPU use: 1515.58 seconds
BlkIO use: 29.19 GiB
Memory use: 499.93 MiB
KMem use: 3.82 MiB
Link: vethGX80CP
TX bytes: 126.46 KiB
RX bytes: 268.13 KiB
Total bytes: 394.59 KiB
Memory usage is now capped at 500MB even though the stress program is trying to consume 1GB of memory.
-
All this information lives in the cgroup filesystem under /sys/fs/cgroup on the host. Please do some research:
[root@gcpadman-laptop lxc]# ls -l /sys/fs/cgroup/memory/lxc | grep ^d
drwxr-xr-x 2 root root 0 Aug  2 20:10 CN01
drwxr-xr-x 2 root root 0 Aug  2 20:10 guru
##Note: Carefully check the path in the "ls -l": "cgroup" and "memory" in the sys filesystem, because we are checking memory!
-
There are two directories, "CN01" and "guru"; these are the two containers currently running on my laptop. Let us go inside and do some research.
I hope anyone reading this doc can understand the simple commands below:
[root@gcpadman-laptop CN01]# ls -l memory.usage_in_bytes
-r--r--r-- 1 root root 0 Aug  2 19:54 memory.usage_in_bytes
Let me show the memory usage output in MB:
[root@gcpadman-laptop CN01]# echo "`cat memory.usage_in_bytes`/1024/1024" | bc
499
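bc performs integer division here; plain shell arithmetic gives the same result without needing bc installed. A sketch with a stand-in byte count (the value is illustrative, not read from a real container):

```shell
# Stand-in for the output of `cat memory.usage_in_bytes`.
usage_bytes=524213452   # illustrative value, just under 500 MB
# Integer division from bytes down to megabytes.
usage_mb=$((usage_bytes / 1024 / 1024))
echo "${usage_mb} MB"   # prints: 499 MB
```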
Let us check the hard limit file:
[root@gcpadman-laptop CN01]# ls -l | grep memory.limit
-rw-r--r-- 1 root root 0 Aug  2 20:04 memory.limit_in_bytes
Our limit is 500MB:
[root@gcpadman-laptop CN01]# echo "`cat memory.limit_in_bytes`/1024/1024" | bc
500
-
CPU resource management using cgroups
Display the CPU cores to a container
[root@gcpadman-laptop ~]# lxc-cgroup -n CN01 cpuset.cpus
0-3
To restrict a container to cores 0 and 1, you would enter a command such as the following:
[root@gcpadman-laptop ~]# lxc-cgroup -n CN01 cpuset.cpus 0,1
[root@gcpadman-laptop ~]# lxc-cgroup -n CN01 cpuset.cpus
0-1
To change a container's share of CPU time and block I/O access, you would enter:
[root@gcpadman-laptop ~]# lxc-cgroup -n CN01 cpu.shares 10
[root@gcpadman-laptop ~]# lxc-cgroup -n CN01 blkio.weight 500
[root@gcpadman-laptop ~]# lxc-cgroup -n CN01 cpu.shares
10
[root@gcpadman-laptop ~]# lxc-cgroup -n CN01 blkio.weight
500
###Note:
These shares are not linked to any physical quantity; they simply represent relative allocations of CPU resources, so a container with more shares gets higher CPU access priority. The numbers themselves are arbitrary: giving one container 10 and another 20 is the same as giving them 1000 and 2000 respectively, since all it tells us is that the second container has twice the CPU share priority. Just ensure you are consistent with your scale between containers.
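As a worked example of how shares translate into CPU time under full contention (the 20/5 split below matches the production/development values used later on this page; when only one container is busy, it can still use all the CPU):

```shell
# Relative cpu.shares values for two containers.
prod_shares=20
dev_shares=5
total=$((prod_shares + dev_shares))

# Under full contention, each container's slice is shares/total.
prod_pct=$((100 * prod_shares / total))
dev_pct=$((100 * dev_shares / total))
echo "production: ${prod_pct}%  development: ${dev_pct}%"   # production: 80%  development: 20%
```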
Once you've changed the cgroup limits in the config file, you'll need to shut down and restart the container for the changes to take effect.
For demonstration purposes, I am going to use two Containers WWW-Production and WWW-Development. The WWW-Production server should be given more resources as compared to WWW-Development. Let us try to achieve this and do some load test :)
Both containers are running:
[root@gcpadman-laptop ~]# lxc-ls -f | grep WWW
WWW-Development RUNNING 0 - 192.168.122.176 -
WWW-Production RUNNING 0 - 192.168.122.200 -
After running the above stress command on both containers, both are using the same amount of CPU, and we do not want that; we would like to give the production container a larger CPU share.
Container CPU CPU CPU BlkIO Mem KMem
Name Used Sys User Total Used Used
WWW-Development 236.63 69.91 166.96 137.17 MB 270.98 MB 3.36 MB
WWW-Production 245.08 72.51 173.31 141.26 MB 1.24 GB 3.32 MB
TOTAL 2 of 2 481.71 142.42 340.27 278.43 MB 1.51 GB 6.68 MB
Now let us assign core 0 to the Development container and cores 0 and 3 to the Production container
[root@gcpadman-laptop lxc]#lxc-cgroup -n WWW-Production cpuset.cpus 0,3
[root@gcpadman-laptop lxc]#lxc-cgroup -n WWW-Development cpuset.cpus 0
[root@gcpadman-laptop lxc]# lxc-cgroup -n WWW-Production cpuset.cpus
0,3
[root@gcpadman-laptop lxc]# lxc-cgroup -n WWW-Development cpuset.cpus
0
Set the CPU share
[root@gcpadman-laptop lxc]# lxc-cgroup -n WWW-Production cpu.shares 20
[root@gcpadman-laptop lxc]# lxc-cgroup -n WWW-Development cpu.shares 5
[root@gcpadman-laptop lxc]# lxc-cgroup -n WWW-Production cpu.shares
20
[root@gcpadman-laptop lxc]# lxc-cgroup -n WWW-Development cpu.shares
5
Set the block I/O weight:
[root@gcpadman-laptop lxc]# lxc-cgroup -n WWW-Production blkio.weight 100
[root@gcpadman-laptop lxc]# lxc-cgroup -n WWW-Production blkio.weight
100
[root@gcpadman-laptop lxc]# lxc-cgroup -n WWW-Development blkio.weight 50
[root@gcpadman-laptop lxc]# lxc-cgroup -n WWW-Development blkio.weight
50
Check the resource usage:
I am re-running the same stress workload in WWW-Development, where it ran fine before. With the new limits in place it now fails; a worker gets killed with signal 9:
ubuntu@WWW-Development:~$ stress --cpu 4 --vm 2 --vm-bytes 1G --timeout 10000s
stress: info: [1729] dispatching hogs: 4 cpu, 0 io, 2 vm, 0 hdd
stress: FAIL: [1729] (416) <-- worker 1733 got signal 9
stress: WARN: [1729] (418) now reaping child worker processes
stress: FAIL: [1729] (452) failed run completed in 40s
Resource Usage:
##After
Container CPU CPU CPU BlkIO Mem KMem
Name Used Sys User Total Used Used
WWW-Development 2649.48 770.02 1873.10 8.71 GB 299.97 MB 4.19 MB
WWW-Production 3730.94 1101.96 2640.60 142.44 MB 1.07 GB 3.32 MB
TOTAL 2 of 2 6380.42 1871.98 4513.70 8.85 GB 1.37 GB 7.52 MB
##Before
Container CPU CPU CPU BlkIO Mem KMem
Name Used Sys User Total Used Used
WWW-Development 236.63 69.91 166.96 137.17 MB 270.98 MB 3.36 MB
WWW-Production 245.08 72.51 173.31 141.26 MB 1.24 GB 3.32 MB
TOTAL 2 of 2 481.71 142.42 340.27 278.43 MB 1.51 GB 6.68 MB
###Note:
Do not compare the absolute numbers between the before and after results; compare the relative difference in usage between the two outputs. There is a significant difference in block I/O, and I have limited the memory of WWW-Development.
-
Container Autostart
To auto-start a Linux container after a system reboot, the parameters below must be added to the "config" file under the /var/lib/lxc/<container name>/ directory:
[root@gcpadman-laptop WWW-Production]# pwd
/var/lib/lxc/WWW-Production
[root@gcpadman-laptop WWW-Production]# grep start config
lxc.start.auto = 1
lxc.start.delay = 10
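Appending the two settings can be scripted as well. This sketch uses a temporary file so it is safe to run anywhere; point `conf` at the real /var/lib/lxc/<container name>/config (after backing it up) to apply it:

```shell
# Temporary stand-in for /var/lib/lxc/<container name>/config.
conf=$(mktemp)
printf 'lxc.network.type = veth\n' > "$conf"

# Auto-start the container at boot, with a 10-second start delay.
printf 'lxc.start.auto = 1\nlxc.start.delay = 10\n' >> "$conf"

start_lines=$(grep -c '^lxc.start' "$conf")
echo "$start_lines"   # prints: 2
rm -f "$conf"
```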
-
Passing devices to a running container
Adding sdb1 (my USB drive on the controller).

You do not see any sdb1 block special file in the container's /dev:

root@CN01:~# ls -l /dev/sdb1
ls: cannot access /dev/sdb1: No such file or directory

Now let me attach my USB device sdb1 to container CN01 from the controller node:

[root@gcpadman-laptop lib]# lxc-device -n CN01 add /dev/sdb1
[root@gcpadman-laptop lib]# echo $?
0

Check the availability of sdb1 in CN01:

root@CN01:~# ls -l /dev/sdb1
brw-r----- 1 root root 8, 17 Aug  3 14:29 /dev/sdb1
root@CN01:~# mount /dev/sdb1 /mnt
root@CN01:~# df -hP /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       3.8G   11M  3.8G   1% /mnt
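The `8, 17` in the `ls -l` output are the device's major and minor numbers, which identify the driver and device to the kernel; this is essentially what lxc-device recreates inside the container. You can read them for any device node with stat; /dev/null is used below because it exists everywhere (its numbers are 1 and 3 on Linux):

```shell
# Print a device node's major and minor numbers (in hex) with GNU stat.
major_minor=$(stat -c '%t,%T' /dev/null)
echo "$major_minor"   # prints: 1,3
```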
-
Static DHCP IP to the guest
-
btrfs and LXC