Fdinfo engine

  1. Shared fds are distributed between tasks using scm_rights. To do this we have 3 stages -- send, open and receive -- and they are strictly ordered to avoid lockups when tasks wait on each other for fds (before sending the scm message, a task stops on a futex, waiting for the receiver to create the receiving socket). The scm_rights passing itself is sketched after this list.
  2. Pipes (and FIFOs), unix sockets and TTYs generate two fds in their ->open callbacks; the 2nd one can conflict with some other fd the task restores, and (!) this "2nd one" may itself need to be sent to some other task. This imposes another requirement on the 3-stage engine described above (see the second sketch below).
  3. Some actions can be done only after the file is created, served out and moved to its proper position, e.g. epoll configuration and switching TCP repair mode off. Thus the ->post_open call :( and a separate queue for epoll fds (see the third sketch below).
  4. Slave TTYs can only be restored after their respective master peers. Taking issue #2 into account, this results in a 3rd queue for slave TTYs (the final sketch below covers this together with #5).
  5. CTTYs should be __created__ after all other TTYs are created, configured and served out. Thus CTTYs get separate stages (not only a queue).
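
The scm_rights machinery behind #1 is plain kernel API. Below is a minimal sketch of one fd travelling over an AF_UNIX socket; send_fd/recv_fd are illustrative names, not CRIU's, and the futex ordering around the call is omitted.

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    static int send_fd(int sock, int fd)
    {
        union {     /* aligned cmsg buffer, as in cmsg(3) */
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;
        char dummy = '*';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *c = CMSG_FIRSTHDR(&msg);

        c->cmsg_level = SOL_SOCKET;
        c->cmsg_type = SCM_RIGHTS;      /* kernel dups the fd for us */
        c->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(c), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }

    static int recv_fd(int sock)
    {
        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;
        char dummy;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *c;
        int fd;

        if (recvmsg(sock, &msg, 0) != 1)
            return -1;
        c = CMSG_FIRSTHDR(&msg);
        if (!c || c->cmsg_level != SOL_SOCKET || c->cmsg_type != SCM_RIGHTS)
            return -1;
        memcpy(&fd, CMSG_DATA(c), sizeof(int));
        return fd;      /* a new fd number in the receiving task */
    }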
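The "2nd fd" conflict from #2 looks roughly like this. A hypothetical restore_pipe_read_end() wants the read end at a given number (want_fd), but pipe() hands back a second fd that may already sit on that number; it has to be relocated before the slot can be reused, and may then itself go through the scm_rights path above.

    #include <unistd.h>

    int restore_pipe_read_end(int want_fd)
    {
        int pfd[2];

        if (pipe(pfd))
            return -1;

        /* The write end came "for free" and may occupy the slot
         * we were asked to restore the read end into. */
        if (pfd[1] == want_fd) {
            int moved = dup(pfd[1]);    /* relocate the conflicting fd */
            close(pfd[1]);
            pfd[1] = moved;
        }
        if (pfd[0] != want_fd) {
            dup2(pfd[0], want_fd);      /* put the read end in place */
            close(pfd[0]);
        }
        return pfd[1];  /* caller serves this one out, possibly to
                         * another task via send_fd() */
    }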
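The post-open actions from #3 reduce to calls like the ones below; post_open_epoll and post_open_tcp are hypothetical wrappers, but epoll_ctl(2) and the TCP_REPAIR socket option are the real primitives involved.

    #include <netinet/in.h>
    #include <netinet/tcp.h>    /* TCP_REPAIR */
    #include <sys/epoll.h>

    int post_open_epoll(int epfd, int watched_fd)
    {
        struct epoll_event ev = {
            .events = EPOLLIN,
            .data.fd = watched_fd,
        };

        /* Fails with EBADF if run before watched_fd is restored --
         * which is why epoll fds sit in their own queue and are
         * configured only after everything is opened and served out. */
        return epoll_ctl(epfd, EPOLL_CTL_ADD, watched_fd, &ev);
    }

    int post_open_tcp(int sk)
    {
        int off = 0;

        /* Leaving repair mode makes the socket live again, so this
         * must be the final step of restoring a TCP connection. */
        return setsockopt(sk, IPPROTO_TCP, TCP_REPAIR, &off, sizeof(off));
    }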
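At the API level, the TTY ordering in #4 and #5 comes down to the sequence below. The sketch uses plain posix_openpt(3) rather than CRIU's tty code, and restore_tty_pair/restore_ctty are made-up names; in the real engine the master and slave opens live in different queues, and the TIOCSCTTY step is a stage of its own.

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int restore_tty_pair(int *master, int *slave)
    {
        *master = posix_openpt(O_RDWR | O_NOCTTY);
        if (*master < 0)
            return -1;
        if (grantpt(*master) || unlockpt(*master))
            return -1;

        /* The slave side can only be opened once its master exists
         * and is unlocked -- hence the 3rd queue for slaves. */
        *slave = open(ptsname(*master), O_RDWR | O_NOCTTY);
        return *slave < 0 ? -1 : 0;
    }

    int restore_ctty(int tty_fd)
    {
        /* Run by the session leader, and only after every TTY is
         * created, configured and served out (#5). */
        return ioctl(tty_fd, TIOCSCTTY, 0);
    }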