Changes

1,775 bytes removed, 20:33, 19 January 2020
Move Optimizing the pre-dump algorithm to the GSoC 2019 page
One option is to keep <code>log()</code> calls intact by adding a pre-compilation pass over the sources. In this pass each <code>log(fmt, ...)</code> call gets translated into a call to a binary log function that saves the <code>fmt</code> identifier and copies all the args ''as is'' into the log file. The binary log decode utility, required in this case, should then find the fmt string by its ID in the log file and print the resulting message.
   
'''Links:'''
* [[Better logging]]
'''Details:'''
* Skill level: intermediate (+linux kernel)
* Language: C
* Mentor: Pavel Emelianov <xemul@virtuozzo.com>
* Suggested by: Pavel Emelianov <xemul@virtuozzo.com>
=== Optimize the pre-dump algorithm ===
'''Summary:''' Optimize the pre-dump algorithm to avoid pinning too much memory in RAM
The current [[CLI/cmd/pre-dump|pre-dump]] mode is used to write task memory contents into image files w/o stopping the task for too long. It does this by stopping the task, infecting it and draining all the memory into a set of pipes. Then the task is cured, resumed, and the pipes' contents are written into images (maybe via a [[page server]]). Unfortunately, this approach puts heavy stress on the memory subsystem: keeping all memory in pipes creates a lot of unreclaimable memory (pages in pipes are not swappable), and the number of pipes themselves can be huge, as one pipe doesn't store more than a fixed amount of data (see the pipe(7) man page).
A solution to this problem is to use a sys_read_process_vm() syscall, which mitigates all of the above. To do this we need to allocate a temporary buffer in criu, walk the target process VM copying the memory piece-by-piece into the buffer, flush the data into the image (or page server), and repeat.
Ideally there should be a sys_splice_process_vm() syscall in the kernel that does the same as read_process_vm() does, but vmsplices the data.
'''Links:'''
* [[Memory pre dump]]
* https://github.com/checkpoint-restore/criu/issues/351
* [[Memory dumping and restoring]], [[Memory changes tracking]]
* [http://man7.org/linux/man-pages/man2/process_vm_readv.2.html process_vm_readv(2)], [http://man7.org/linux/man-pages/man2/vmsplice.2.html vmsplice(2)], [https://lkml.org/lkml/2018/1/9/32 RFC for splice_process_vm syscall]
'''Details:'''
* Skill level: advanced
* Language: C
* Mentor: Pavel Emelianov <xemul@virtuozzo.com>