[Devel] [RFC rh7 v2] ve/tty: vt -- Implement per VE support for virtual consoles

Vladimir Davydov vdavydov at parallels.com
Fri Jul 31 06:22:58 PDT 2015


On Fri, Jul 31, 2015 at 03:47:42PM +0300, Cyrill Gorcunov wrote:

> Now we have two scenarios:
> 
> 1) Master peer opened and slave peer has been opened
>    too.
> 
>    Here on both peers remove() helper will be called
>    by tty layer, which would clean tty map entries,
>    thus our lookup will give us back solid results
> 
> 2) Master peer opened by slave peer is not, and now
>    master is closing
> 
>    Here remove() helper is called for master slave,
>    and tty map cleans the entry (the slave peer
>    still is present inside map). Note when remove()
>    starts it takes tty_mutex, so that lookup and
>    remove can't run simultaneously but it queues
>    to cleanup both peers (master and slave) and
>    cleanup() itself may be called for slave first
>    and for master at second, ie in reverse order.
> 
>    Thus we need somehow to be sure that at least
>    masters map is consistent when lookup is called
>    (once ttys are queued for release they carry
>     "exiting" bits).

Sorry, but it's really difficult to understand what you mean here. Could
you please write it more technically, I mean:

CPU0		CPU1
----		----
BLAH
		BLAH-BLAH
OUCH

> 
>    Once I introduce order in cleanup() routine
>    I can be sure that

Anyway, your solution is unacceptable AFAICS, because you make one work
wait for another, which will lead to a dead lock if they both get queued
to the same worker.

> 
>    a) master map always carries valid pointer,
>       it's either empty (if no peer exists and
>       we have to allocate new ones) either the
>       pointer is valid and can be tested for
>       "alive" state
> 
>    b) slave map may point to tty which is being
>       freed, thus before before anything else
>       we need to test if master peer is not
>       existing
> 
> Something like that. If you see some better
> idea how to carry consistency with tty pairs
> please letme know.
> 



More information about the Devel mailing list