Performance research
This page lists the performance issues found in CRIU so far.
Timing stats of a live migration of a small container with 11 tasks are:
* Total time ~3.5 seconds
* Frozen time ~3.0 seconds
* Pre-dump stages ~0.5 seconds each
* Restore time ~1.9 seconds
* Images transfer time ~0.3 seconds
Below is the list of issues found.
== Dump ==
Surprisingly, the mem-drain time is not the biggest contributor: it's "only" ~0.02 seconds. There are places in the code that take longer.
=== parse_smaps ===
Time spent in this routine is up to 0.2 seconds on dump. It makes heavy use of /proc. For a container with 11 tasks the syscall stats look like this:
  834 read
 1451 fstat
 1462 close
 1642 openat
while the opens and stats happen on:
  193 openat(4, "map-symlink", O_RDONLY) = -1 ENOENT (No such file or directory)
 1438 openat(4, "map-symlink", O_RDONLY) = 5
   11 openat(AT_FDCWD, "/proc/$pid/map_files", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 4
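Most of those calls are the per-mapping opens under <code>/proc/$pid/map_files</code>. Below is a standalone sketch, not the actual <code>parse_smaps()</code> code, that reproduces the one-<code>openat()</code>-per-mapping pattern and shows why the syscall count grows with the number of mappings (opening <code>map_files</code> links normally requires root):

<pre>
/*
 * Standalone illustration, not CRIU's actual parse_smaps() code:
 * walk /proc/$pid/map_files and openat() every entry.  The number
 * of openat() calls grows with the number of mappings, which is
 * where the counts above come from.
 */
#include <stdio.h>
#include <dirent.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char path[64];
	DIR *dir;
	struct dirent *de;
	int dfd, nr_open = 0, nr_fail = 0;

	if (argc < 2) {
		fprintf(stderr, "usage: %s pid\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/%s/map_files", argv[1]);
	dir = opendir(path);
	if (!dir) {
		perror("opendir");
		return 1;
	}
	dfd = dirfd(dir);

	while ((de = readdir(dir)) != NULL) {
		int fd;

		if (de->d_name[0] == '.')
			continue;

		/* One openat() per file-backed mapping -- the expensive part */
		fd = openat(dfd, de->d_name, O_RDONLY);
		if (fd < 0) {
			nr_fail++;	/* e.g. the mapping went away */
			continue;
		}
		nr_open++;
		close(fd);
	}
	closedir(dir);

	printf("opened %d map_files entries, %d failed\n", nr_open, nr_fail);
	return 0;
}
</pre>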
== Restore ==
=== Fork vs VMA restore ===
To handle COW, we restore a task's memory mappings before it forks its children. This effectively serializes forking.
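A minimal standalone sketch of why the ordering matters is below; it is not CRIU's restorer code, just an illustration that memory restored in the parent before <code>fork()</code> is inherited by the child as shared copy-on-write pages:

<pre>
/*
 * Standalone sketch of the ordering constraint, not CRIU's restorer:
 * private memory restored in the parent before fork() is inherited
 * by the child as shared copy-on-write pages, so its contents have
 * to be filled only once.  If the child were forked first, both
 * tasks would have to populate their own copies.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define AREA_SIZE (16 << 20)	/* 16M of "restored" memory */

int main(void)
{
	char *mem;
	pid_t pid;

	/* Step 1: restore the mapping and its contents in the parent */
	mem = mmap(NULL, AREA_SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(mem, 0x5a, AREA_SIZE);

	/* Step 2: only now fork -- the child sees the data via COW */
	pid = fork();
	if (pid == 0) {
		printf("child sees restored byte 0x%x\n", mem[0] & 0xff);
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	return 0;
}
</pre>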
=== Restoring VMAs ===
There are 4 stages in VMA restore. The relative times of each are below (a sketch of stages 2 and 3 follows the breakdown):
* Reading images 1%
* Mapping huge premap area << 1%
* (Re-)mapping sub-areas 73%
* Filling area with data 26%
The 3rd stage has two parts, with timings:
* Opening filemap fd 85%
* Mapping vma 15%
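To make stages 2 and 3 concrete, here is a standalone sketch with assumed sizes, not CRIU's actual restorer: one big <code>PROT_NONE</code> premap reservation, then per-VMA <code>MAP_FIXED</code> mappings placed inside it. In the real restore the per-VMA step also opens an fd for each file-backed mapping, which is the 85% shown above.

<pre>
/*
 * Standalone sketch with assumed sizes, not CRIU's actual restorer:
 * stage 2 reserves one big PROT_NONE premap area, stage 3 places
 * each sub-area inside it with MAP_FIXED.
 */
#include <stdio.h>
#include <sys/mman.h>

#define PREMAP_SIZE	(64 << 20)
#define NR_VMAS		16
#define VMA_SIZE	(1 << 20)

int main(void)
{
	char *premap;
	int i;

	/* Stage 2: one huge reservation -- cheap */
	premap = mmap(NULL, PREMAP_SIZE, PROT_NONE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (premap == MAP_FAILED) {
		perror("premap");
		return 1;
	}

	/* Stage 3: (re-)map every sub-area at its place inside the premap */
	for (i = 0; i < NR_VMAS; i++) {
		void *addr = premap + (size_t)i * VMA_SIZE;

		if (mmap(addr, VMA_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
			 -1, 0) == MAP_FAILED) {
			perror("sub-area mmap");
			return 1;
		}
	}

	/* Stage 4 would now fill the sub-areas with data from the images */
	printf("placed %d sub-areas inside the premap\n", NR_VMAS);
	return 0;
}
</pre>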
=== Opening files for mappings ===
<code>get_filemap_fd()</code> opens a new fd every time it is called. If a file is mapped several times (e.g. a library), we could share one fd for all of its mappings.
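One possible shape for that sharing is sketched below; the <code>get_filemap_fd_cached()</code> helper and its cache are hypothetical, not CRIU code. The fd is cached by the file's <code>dev:ino</code> pair, so a library mapped many times is opened only once:

<pre>
/*
 * Hypothetical helper, not the real get_filemap_fd(): cache the fd
 * by the file's dev:ino pair so that a file mapped several times
 * (e.g. a library) is opened only once.
 */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

struct fd_cache_entry {
	dev_t dev;
	ino_t ino;
	int fd;
};

#define FD_CACHE_SIZE 256

static struct fd_cache_entry fd_cache[FD_CACHE_SIZE];
static int fd_cache_nr;

/* Return an fd for @path, reusing a previously opened one if possible */
int get_filemap_fd_cached(const char *path)
{
	struct stat st;
	int i, fd;

	if (stat(path, &st) < 0)
		return -1;

	for (i = 0; i < fd_cache_nr; i++)
		if (fd_cache[i].dev == st.st_dev && fd_cache[i].ino == st.st_ino)
			return fd_cache[i].fd;	/* cache hit -- no open() */

	/* O_RDONLY is enough for mapping the file MAP_PRIVATE */
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;

	if (fd_cache_nr < FD_CACHE_SIZE) {
		fd_cache[fd_cache_nr].dev = st.st_dev;
		fd_cache[fd_cache_nr].ino = st.st_ino;
		fd_cache[fd_cache_nr].fd = fd;
		fd_cache_nr++;
	}
	return fd;
}
</pre>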
=== Staging ===
When restoring a single task, CRIU still uses the [[stages of restoring]], which slows things down. We need to either special-case the single-task restore or introduce fine-grained locking for such things.
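One way to special-case it is sketched below, with hypothetical names rather than the actual restorer synchronization: when only one task takes part in the restore, the per-stage barrier becomes a no-op.

<pre>
/*
 * Hypothetical sketch, not the actual restorer synchronization: a
 * per-stage barrier that degenerates into a no-op when only one
 * task takes part in the restore.  A real implementation would
 * sleep on a futex and reset the counter between stages.
 */
#include <stdatomic.h>
#include <sched.h>

struct restore_sync {
	int nr_tasks;		/* tasks taking part in the restore */
	atomic_int nr_arrived;	/* tasks that reached the current stage */
};

/* Wait until every task has reached the current stage */
void wait_stage(struct restore_sync *sync)
{
	if (sync->nr_tasks == 1)
		return;		/* single-task restore: nothing to wait for */

	atomic_fetch_add(&sync->nr_arrived, 1);
	while (atomic_load(&sync->nr_arrived) < sync->nr_tasks)
		sched_yield();
}
</pre>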