== Docker 1.10 ==

The easiest way to try CRIU and Docker together is to install [this pre-compiled version of Docker](https://github.com/boucher/docker/releases/tag/v1.10_2-16-16-experimental). It's based on Docker 1.10, and built with the <code>DOCKER_EXPERIMENTAL</code> build tag.

To install, download the <code>docker-1.10.0-dev</code> binary to your system. You'll need to start a docker daemon from this binary, and then you can use the same binary to communicate with that daemon. To start a docker daemon, run a command like this:

 docker-1.10.0-dev daemon -D --graph=/var/lib/docker-dev --host unix:///var/run/docker-dev.sock

The *graph* and *host* options prevent colliding with an existing installation of Docker, but you can replace your existing Docker if desired. In another shell, you can then connect to that daemon:

 docker-1.10.0-dev --host unix:///var/run/docker-dev.sock run -d busybox top
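Since every client command has to point at this daemon's socket, it can be convenient to wrap the <code>--host</code> option in a shell alias for the rest of these examples (the <code>docker-dev</code> name is just a local convenience, not part of Docker):

 alias docker-dev='docker-1.10.0-dev --host unix:///var/run/docker-dev.sock'
 docker-dev ps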
=== Dependencies ===

In addition to downloading the binary above (or compiling one yourself), you need *CRIU* installed on your system, at least version 2.0. You also need some shared libraries on your system. The ones you're most likely to need to install are *libprotobuf-c* and *libnl-3*. Here's the output of <code>ldd</code> on my system:

 # ldd `which criu`
         linux-vdso.so.1 =>  (0x00007ffc09fda000)
         libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fd28b2c7000)
         libprotobuf-c.so.0 => /usr/lib/x86_64-linux-gnu/libprotobuf-c.so.0 (0x00007fd28b0b7000)
         libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fd28aeb2000)
         libnl-3.so.200 => /lib/x86_64-linux-gnu/libnl-3.so.200 (0x00007fd28ac98000)
         libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd28a8d3000)
         /lib64/ld-linux-x86-64.so.2 (0x000056386bb38000)
         libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fd28a5cc000)
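Before moving on, it's worth confirming that CRIU itself works on your kernel. CRIU has a built-in self-check of the kernel features it needs; on success it reports "Looks good." (the exact warnings printed before that depend on your kernel configuration):

 criu --version
 sudo criu check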
=== checkpoint ===

First, we create a container:

 docker run -d --name looper --security-opt seccomp:unconfined busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'

You can verify the container is running by printing its logs:

 docker logs looper

If you do this a few times you'll notice the integer increasing. Now, we checkpoint the container:

 docker checkpoint looper

You should see that the process is no longer running, and if you print the logs a few times no new logs will be printed.
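You can also confirm this from the daemon's side with the standard container listing (add the <code>--host</code> flag or the alias from above if you're talking to the separate development daemon):

 # the checkpointed container no longer shows up among running containers...
 docker ps
 # ...but it still exists and can be restored
 docker ps -a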
=== restore ===

Like *checkpoint*, *restore* is a top-level command in this version of Docker. Continuing our example, let's restore the same container:

 docker restore looper

If we then print the logs, you should see they start from where we left off and continue to increase.
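A quick way to see this, using only the commands already shown (plain shell):

 # the last few counter values -- they should continue from the pre-checkpoint numbers
 docker logs looper | tail -n 5
 # wait a bit and look again; the counter keeps climbing
 sleep 5
 docker logs looper | tail -n 5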
==== Restoring into a *new* container ====

Beyond the straightforward case of checkpointing and restoring the same container, it's also possible to checkpoint one container and then restore the checkpoint into a completely different container. Right now that is done with the <code>--force</code> option, in conjunction with the <code>--image-dir</code> option. Here's a slightly revised version of the previous example:

 $ docker run -d --name looper2 --security-opt seccomp:unconfined busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
 
 # wait a few seconds to give the container an opportunity to print a few lines, then
 $ docker checkpoint --image-dir=/tmp/checkpoint1 looper2
 
 $ docker create --name looper-force --security-opt seccomp:unconfined busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
 
 $ docker restore --force=true --image-dir=/tmp/checkpoint1 looper-force

You should be able to print the logs from <code>looper-force</code> and see that they start from wherever the logs of <code>looper2</code> end.
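One way to check is to compare the tail of the original container's log with the head of the restored one's (plain shell, using only commands shown above):

 # last value looper2 printed before the checkpoint...
 docker logs looper2 | tail -n 1
 # ...and the first value looper-force prints after the restore -- it should continue from there
 docker logs looper-force | head -n 1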
=== usage ===

 # docker checkpoint --help
 
 Usage:  docker checkpoint [OPTIONS] CONTAINER
 
 Checkpoint one or more running containers
 
   --help             Print usage
   --image-dir        directory for storing checkpoint image files
   --leave-running    leave the container running after checkpoint
   --work-dir         directory for storing log file

 # docker restore --help
 
 Usage:  docker restore [OPTIONS] CONTAINER
 
 Restore one or more checkpointed containers
 
   --force            bypass checks for current container state
   --help             Print usage
   --image-dir        directory to restore image files from
   --work-dir         directory for restore log
== Docker 1.12 ==

More detailed instructions on running checkpoint/restore with Docker 1.12 will come in the future. In the meantime, you must build the version of Docker available in the *docker-checkpoint-restore* branch of *@boucher*'s fork of Docker, [available here](https://github.com/boucher/docker/tree/docker-checkpoint-restore). Make sure to build with the env <code>DOCKER_EXPERIMENTAL=1</code>.
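A minimal sketch of one way to build it, assuming the containerized <code>make binary</code> build that the Docker source tree used at the time (treat the exact target and output path as assumptions and defer to the branch's own build documentation):

 git clone -b docker-checkpoint-restore https://github.com/boucher/docker.git
 cd docker
 # builds inside Docker's own build container; requires a working Docker installation
 DOCKER_EXPERIMENTAL=1 make binary
 # the resulting binary ends up under bundles/ (the exact subdirectory depends on the version string)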
The command line interface has changed from the 1.10 version. <code>docker checkpoint</code> is now an umbrella command for a few checkpoint operations. To create a checkpoint, use the <code>docker checkpoint create</code> command, which takes <code>container_id</code> and <code>checkpoint_id</code> as non-optional arguments. Example:

 docker checkpoint create my_container my_first_checkpoint
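If your build matches the checkpoint interface that was later merged upstream as an experimental Docker feature, the umbrella command should also offer <code>ls</code> and <code>rm</code> subcommands for listing and removing a container's checkpoints; this is an assumption about the branch, so verify against <code>docker checkpoint --help</code> on your own build:

 # assumed subcommands -- confirm with `docker checkpoint --help`
 docker checkpoint ls my_container
 docker checkpoint rm my_container my_first_checkpoint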
Restoring a container is now performed as an option to <code>docker start</code>. Although you typically create and start a container in a single step using <code>docker run</code>, under the hood this is actually two steps: <code>docker create</code> followed by <code>docker start</code>. You can also call <code>start</code> on a container that was previously running and has since been stopped or killed. That looks something like this:

 docker start --checkpoint my_first_checkpoint my_container
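Putting the 1.12 workflow together (the looping container command is reused from the 1.10 examples above; depending on your build you may still need <code>--security-opt seccomp:unconfined</code> as shown there):

 docker run -d --name my_container --security-opt seccomp:unconfined busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
 docker checkpoint create my_container my_first_checkpoint
 # the container should be stopped once the checkpoint is taken (unless you asked to leave it running);
 # start it again from that checkpoint
 docker start --checkpoint my_first_checkpoint my_container
 # the counter in the logs should continue from where the checkpoint was taken
 docker logs my_container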
=== OverlayFS ===

There is a bug in OverlayFS that reports the wrong mnt_id in /proc/<pid>/fdinfo/<fd> and the wrong symlink target path for /proc/<pid>/fd/<fd>. Fortunately, these bugs have been fixed in kernel v4.2-rc2. The following small kernel patches fix the mount id and symlink target path issues:

* {{torvalds.git|155e35d4da}} by David Howells