Fdinfo engine

Masters and slaves

  1. A file may be referred to by several file descriptors. The descriptors may belong to a single process or to several processes.
  2. A group of descriptors referring to the same file is called shared. One of the descriptors is named the master, the others are slaves.
  3. Every descriptor is described via struct fdinfo_list_entry (fle).
  4. One process opens the master fle of a file, while the other processes sharing the file obtain it using SCM_RIGHTS. See send_fds() and receive_fds() for the details, and the sketch after this list.
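
For reference, here is a minimal, self-contained sketch of fd passing over a connected UNIX socket with SCM_RIGHTS. The helper names are illustrative, not CRIU's actual API; the real implementation is in send_fds()/receive_fds().

  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  /* Send one fd over the connected UNIX socket 'sock'. */
  static int send_one_fd(int sock, int fd)
  {
      char c = '*';
      struct iovec iov = { .iov_base = &c, .iov_len = 1 };
      char buf[CMSG_SPACE(sizeof(int))];
      struct msghdr msg = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = buf, .msg_controllen = sizeof(buf),
      };
      struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type = SCM_RIGHTS;
      cmsg->cmsg_len = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

      return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
  }

  /* Receive one fd; returns the new descriptor, or -1 on error. */
  static int recv_one_fd(int sock)
  {
      char c;
      struct iovec iov = { .iov_base = &c, .iov_len = 1 };
      char buf[CMSG_SPACE(sizeof(int))];
      struct msghdr msg = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = buf, .msg_controllen = sizeof(buf),
      };
      struct cmsghdr *cmsg;
      int fd = -1;

      if (recvmsg(sock, &msg, 0) != 1)
          return -1;

      cmsg = CMSG_FIRSTHDR(&msg);
      if (cmsg && cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS)
          memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));

      return fd;
  }

The received descriptor refers to the very same file as the sender's one, which is exactly how a slave reopens its master's file.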

Per-process file restore

Every file type is described via structure file_desc. We sequentially call the file_desc::ops::open(struct file_desc *d, int *new_fd) method for every master file of a process until all masters are restored. The open method may return three values (a schematic driver loop is sketched after this list):

  • 0 -- restore of the master file has finished successfully;
  • 1 -- restore is in progress or can't be started yet because it depends on other files, so the method should be called again later;
  • -1 -- restore failed.
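
To make the control flow concrete, here is a schematic, single-process view of such a driver loop. This is a simplified sketch under the assumptions above, not CRIU's actual code (the real logic lives in files.c), and open_masters() is an invented name.

  struct file_desc;

  struct file_desc_ops {
      int (*open)(struct file_desc *d, int *new_fd);
  };

  struct file_desc {
      struct file_desc_ops *ops;
      /* ... type-specific state ... */
  };

  /* Keep calling ->open() on every not-yet-ready master until all of
   * them report 0.  A NULL slot marks an already restored master. */
  static int open_masters(struct file_desc **masters, int nr)
  {
      int progress = 1;

      while (progress) {
          int i, pending = 0;

          progress = 0;
          for (i = 0; i < nr; i++) {
              int new_fd = -1, ret;

              if (!masters[i])
                  continue;

              ret = masters[i]->ops->open(masters[i], &new_fd);
              if (ret < 0)
                  return -1;            /* restore failed */
              if (new_fd >= 0) {
                  /* The fd exists now even if ret == 1: this is the
                   * point where the common code would send it to the
                   * slaves (see the SCM_RIGHTS sketch above). */
              }
              if (ret == 0) {
                  masters[i] = NULL;    /* fully restored */
                  progress = 1;
              } else {
                  pending++;            /* ret == 1: retry on next pass */
              }
          }
          if (pending && !progress)
              return -1;    /* nobody advanced: unresolvable dependency */
      }
      return 0;
  }

In the real engine the files a master waits for may belong to other processes, so "no progress within one task" is not by itself a deadlock; the check above is only a single-process simplification.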

Right after a file is opened for the first time, the open method must return its fd value in the new_fd argument. This allows the common code to send this master to other processes, so they can reopen it as a slave as soon as possible. At the same time, a non-negative new_fd does not mean that the master is restored: the open() callback may return a non-negative new_fd together with a return value of 1.

Example: restore of a connected unix socket by the open() method.

  1. Open a socket, write its file descriptor to new_fd and return 1.
  2. Check whether the peer socket is open and bound. If it is not, return 1; step 2 will be repeated on the next call.
  3. Connect to the peer and return 0.

Note: it's also possible to go to step 2 right after new_fd is written.
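
Put together, a staged open() callback of this shape might look as follows. The struct fields, the stage numbering and the peer_is_bound() helper are invented for illustration; the real code is in CRIU's sk-unix.c.

  #include <sys/socket.h>
  #include <sys/un.h>

  struct unix_sk_desc {
      int stage;                  /* 0: not open yet, 1: waiting for peer */
      int fd;
      struct sockaddr_un peer_addr;
  };

  /* Stub for this sketch: the real check inspects the peer's state. */
  static int peer_is_bound(struct unix_sk_desc *sk)
  {
      (void)sk;
      return 1;
  }

  static int unix_sk_open(struct unix_sk_desc *sk, int *new_fd)
  {
      switch (sk->stage) {
      case 0:
          /* Step 1: create the socket and hand the fd out via new_fd
           * so slaves can be served, but report 1: not finished yet. */
          sk->fd = socket(AF_UNIX, SOCK_DGRAM, 0);
          if (sk->fd < 0)
              return -1;
          *new_fd = sk->fd;
          sk->stage = 1;
          return 1;
      case 1:
          /* Step 2: the peer may not be bound yet; ask to be called
           * again later. */
          if (!peer_is_bound(sk))
              return 1;
          /* Step 3: the peer is ready, connect and report success. */
          if (connect(sk->fd, (struct sockaddr *)&sk->peer_addr,
                      sizeof(sk->peer_addr)) < 0)
              return -1;
          return 0;
      }
      return -1;
  }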

Notes

  1. Pipes (and FIFOs), unix sockets and TTYs generate two fds in their ->open callbacks; the 2nd one can conflict with some other fd the task restores and (!) this "2nd one" may require sending to some other task. This imposes another requirement on the 3-stage engine described above (see the sketch after these notes).
  2. Some actions can only be done after a file is created, served out and moved to its proper position, e.g. epoll configuration and turning TCP repair off. Thus the ->post_open call :( and a separate queue for epoll fds.
  3. Slave TTYs can only be restored after their respective master peers. Taking issue #2 into account, this results in a 3rd queue for slave TTYs.
  4. CTTYs should be __created__ after all other TTYs are created, configured and served out. Thus separate stages (not only a queue) for CTTYs.
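
As an illustration of note 1, opening a pipe yields two descriptors at once, and the extra one may land on the fd number some other restored file needs. A minimal sketch of moving it aside follows; target_fd and the function name are invented for illustration, and in CRIU the extra end may additionally have to be sent to another task.

  #include <fcntl.h>
  #include <unistd.h>

  /* Open both pipe ends; if the "extra" end occupies the fd number
   * that another restored file must end up on, duplicate it above
   * that number and close the conflicting copy. */
  static int open_pipe_ends(int target_fd, int pfd[2])
  {
      if (pipe(pfd) < 0)
          return -1;

      if (pfd[1] == target_fd) {
          int tmp = fcntl(pfd[1], F_DUPFD, target_fd + 1);

          if (tmp < 0)
              return -1;
          close(pfd[1]);
          pfd[1] = tmp;
      }
      return 0;
  }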