Revision as of 13:38, 18 September 2012
Prepare a Linux Container
Requirements
- A console should be disabled (lxc.console = none)
- udev should not run inside containers ($ mv /sbin/udevd{,.bcp})
Preparing a host environment
- Mount cgroupfs
$ mount -t cgroup c /cgroup
- Create a network bridge
# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=5
NM_CONTROLLED=n

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="no"
ONBOOT="yes"
BRIDGE=br0
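If you prefer not to edit ifcfg files, the same bridge can be set up at runtime. The following is only a sketch: it assumes bridge-utils (brctl), iproute2 and a DHCP client are installed, and it must be run as root since it reconfigures host networking.

```shell
#!/bin/sh
# Sketch: create the br0 bridge at runtime instead of via ifcfg files.
# Assumes brctl (bridge-utils), ip (iproute2) and dhclient are available.
setup_bridge() {
    brctl addbr br0        # create the bridge device
    brctl addif br0 eth0   # enslave the host NIC to the bridge
    ip link set br0 up     # bring the bridge up
    dhclient br0           # re-acquire the host address on the bridge
}

# Only apply when explicitly requested, since this changes host networking:
if [ "$1" = "apply" ]; then
    setup_bridge
fi
```

Note that unlike the ifcfg files, this configuration does not persist across reboots.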
Create and start a container
- Download an OpenVZ template and extract it.
$ curl http://download.openvz.org/template/precreated/centos-6-x86_64.tar.gz | tar -xz -C test-lxc
- Create config files
$ cat ~/test-lxc.conf
lxc.console=none
lxc.utsname = test-lxc
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.mount = /root/test-lxc/etc/fstab
lxc.rootfs = /root/test-lxc-root/
$ cat /root/test-lxc/etc/fstab
none /root/test-lxc-root/dev/pts devpts defaults 0 0
none /root/test-lxc-root/proc proc defaults 0 0
none /root/test-lxc-root/sys sysfs defaults 0 0
none /root/test-lxc-root/dev/shm tmpfs defaults 0 0
- Register the container
$ lxc-create -n test-lxc -f test-lxc.conf
- Start the container
$ mount --bind test-lxc test-lxc-root/
$ lxc-start -n test-lxc
Checkpoint and restore an LXC Container
Preparations
You only need to install the crtools.
Dump and restore
Dumping an LXC container means dumping a subtree of processes, starting from the container's init, plus all the namespaces the container uses. Restoring is symmetrical. The way an LXC container works imposes some additional requirements on crtools usage.
- You need to use the --evasive-devices option to handle /dev/log users (there's a bug in the LXC code)
- In order to properly isolate the container from unwanted network communication during checkpoint/restore, you should provide a script for locking/unlocking the container network (see below)
- When restoring a container with a veth device, you may specify a name for the host-side veth device
- In order to checkpoint and restore live TCP connections, you should use the --tcp-established option
Typically a container dump command will look like
crtools dump
    --evasive-devices               # handle /dev/log usage bug
    --tcp-established               # allow for TCP connections dump
    -n net -n mnt -n ipc -n pid     # dump all the namespaces container uses
    --action-script "net-script.sh" # use net-script.sh to lock/unlock networking
    -D dump/ -o dump.log            # set images dir to dump/ and put logs into dump.log file
    -t ${init-pid}                  # start dumping from task ${init-pid}. It should be container's init
and restore command like
crtools restore
    --evasive-devices
    --tcp-established
    -n net -n mnt -n ipc -n pid
    --action-script "net-script.sh"
    --veth-pair eth0=${veth-name}   # when restoring a veth link use ${veth-name} for host-side device end
    --root ${path}                  # path to container root. It should be a root of a (bind)mount
    -D data/ -o restore.log
    -t ${init-pid}
We also find it useful to pass the --restore-detached option to restore, so that the container reparents to init rather than hanging on a crtools process launched from a shell. Another useful option is --pidfile, which lets you find out the host-side pid of the container's init after restore.
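Putting these two options together with the restore command above, a wrapper might look like the sketch below. The paths, ${init-pid} and net-script.sh are placeholders from the examples above, and the pidfile name is an assumption; check the exact flag spellings against your crtools version.

```shell
#!/bin/sh
# Sketch: restore a container detached and pick up its init pid afterwards.
# Usage: restore_detached <container-root> <init-pid>
# The pidfile name (ct-init.pid) is an arbitrary choice for this example.
restore_detached() {
    crtools restore \
        --evasive-devices --tcp-established \
        -n net -n mnt -n ipc -n pid \
        --action-script "net-script.sh" \
        --restore-detached \
        --pidfile ct-init.pid \
        --root "$1" \
        -D data/ -o restore.log \
        -t "$2"
    # At this point crtools has exited, but the container keeps running
    # (reparented to init); its init's host-side pid is in the pidfile.
    cat ct-init.pid
}
```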
Example
We have an application test to test dump/restore of a Linux Container.
This test contains two scripts: run.sh and network-script.sh.
The script run.sh is the main script; it executes crtools twice, once for dumping and once for restoring the CT. This script contains the actual options passed to crtools.
The script network-script.sh is used to lock and unlock the CT's network. During dump the state of the CT must not change, so crtools freezes its processes and executes an external script to freeze the network. On restore, crtools restores the states of the processes and resumes the CT, which includes resuming both the processes and the network; the external script is used to unlock the CT's network.
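A lock/unlock script along these lines could look as follows. This is only a sketch: crtools passes the requested action to the --action-script hook in the CRTOOLS_SCRIPT_ACTION environment variable, and how you actually block traffic (iptables rules, ebtables, downing the veth link) depends on your setup; the bridge name and DROP rules below are assumptions matching the br0 setup above.

```shell
#!/bin/sh
# Sketch of a net-script.sh for --action-script. The action name arrives
# in the CRTOOLS_SCRIPT_ACTION environment variable; the iptables rules
# on br0 are an assumption -- adapt them to your own network layout.
net_lock() {
    iptables -I INPUT  -i br0 -j DROP   # stop traffic reaching the CT
    iptables -I OUTPUT -o br0 -j DROP   # stop traffic leaving the CT
}
net_unlock() {
    iptables -D INPUT  -i br0 -j DROP   # remove the rules inserted above
    iptables -D OUTPUT -o br0 -j DROP
}

case "$CRTOOLS_SCRIPT_ACTION" in
    network-lock)   net_lock ;;
    network-unlock) net_unlock ;;
esac
```

Keeping lock and unlock in one script, dispatched on the action name, lets the same file be passed to both the dump and restore commands.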