[Devel] [RFC rh7 v4] ve/tty: vt -- Implement per VE support for console and terminals

Vladimir Davydov vdavydov at parallels.com
Mon Aug 24 03:34:18 PDT 2015


On Fri, Aug 21, 2015 at 06:22:18PM +0300, Cyrill Gorcunov wrote:

> >> +static envid_t vtty_get_context(void)
> >> +{
> >> +	BUILD_BUG_ON(sizeof(current_veid) < sizeof(envid_t));
> >> +	lockdep_assert_held(&tty_mutex);
> >> +
> >> +	if (likely(current_veid == VTTY_USE_EXEC_VEID))
> > 
> > Nit: likely is redundant here, it's not a hot path.
> > 
> > BTW, since vttys should never be used by ve0 we could use 0 instead of
> > VTTY_USE_EXEC_VEID. Then the body of this function would look like this:
> > 
> > 	return current_veid ?: get_exec_env()->veid;
> > 
> > (I like ?: with the first choice omitted :-)
> 
> Yeah, I thought about it, but it would be a bit vague I think.
> If you prefer, though, I can switch to such a scheme, but I'd
> need to put a big fat comment about the zero value.

Up to you. I don't really care.

> 
> >> +static struct tty_struct *vtty_lookup(struct tty_driver *driver,
> >> +				      struct inode *inode, int idx)
> >> +{
> >> +	vtty_map_t *map = vtty_map_lookup(vtty_get_context());
> >> +	struct tty_struct *tty;
> >> +
> >> +	WARN_ON_ONCE((driver != vttym_driver) &&
> >> +		     (driver != vttys_driver));
> >> +
> >> +	if (!vtty_match_index(idx))
> >> +		return ERR_PTR(-EIO);
> >> +
> >> +	vtty_printk("driver %s index %d\n", driver->driver_name, idx);
> >> +	/*
> >> +	 * Nothing has been opened yet; allocate
> >> +	 * a new tty map together with both peers
> >> +	 * from scratch in the install procedure.
> >> +	 */
> >> +	if (!map)
> >> +		return NULL;
> >> +
> >> +	/*
> >> +	 * XXX: Strictly speaking there should not
> >> +	 * be requests for master peers from
> >> +	 * inside a container (ie lookup for
> >> +	 * vttym_driver). Even vtty_open_master
> >> +	 * could simply fetch tty->link by itself
> >> +	 * after the lookup for the slave returned
> >> +	 * a valid tty pointer. But I leave it this
> >> +	 * way simply because I'm not sure yet how
> >> +	 * to c/r containers with a live connection
> >> +	 * from nodes, and this provides an easier
> >> +	 * way for testing.
> >> +	 */
> > 
> > I wonder if we need to handle this open-master-from-ct case now. This
> > won't work if there's no slave, because there is no working install
> > method for master, so we will have to write new code anyway if we need
> > it. May be, better just zap it for now, and only introduce hunks like
> > this when they are really necessary?
> 
> I'm not sure yet how we would handle the restore case, that's why
> the comment is present. But opening the master from inside a
> container will require more kernel code anyway, so I'll drop this
> comment.
> 
> >> +	tty = map->vttys[idx];
> >> +	if (tty) {
> >> +		if (vtty_is_exiting(tty) || vtty_is_exiting(tty->link))
> > 
> > 1. There is absolutely no point in checking this for 'tty', because
> >    tty_reopen, which is normally called right after ->lookup, already
> >    takes care of it.
> 
> This check is redundant when opening the pair from a container but
> needed when we open the pair from the node. If you prefer, I can
> move it into vtty_open_master.

Yes, please. Let's make vtty_lookup slave-only. This way the code will
be easier to understand IMO.

> 
> > 
> > 2. What's the point in checking TTY_HUPPING and TTY_LDISC_CHANGING for
> >    'tty->link'? We are opening 'tty' here.
> 
> Because if the link has started hupping or changing its line
> discipline there is not much sense in returning such a pair; it's
> almost dead. Simply notify the user that there is no such tty
> present at the moment and let it repeat the attempt.

Why does the generic lookup method (tty_driver_lookup_tty), which is
used e.g. by legacy pty, not check that then?

> 
> > 
> >> +			tty = ERR_PTR(-ENXIO);
> > 
> > Just curious, why ENXIO? Why not EIO?
> 
> You know, at first I took it from the vt code, but I think -EIO
> will indeed be more appropriate, because ENXIO usually means a
> request to a nonexistent device, which is not the case here.
> Thanks!
> 
> > Anyway, returning an error here means tty_open failure. Now suppose
> > somebody attempts to open slave vtty while the pair is being destroyed.
> > A failure would be unexpected then. AFAIU we should return NULL here for
> > tty_open to create a new pair.
> 
> Which gives you a way to spam the kernel. Simply open and close
> ttys in a cycle and there is a small window which allows creating
> more than 12 ttys, as far as I can tell.

Why is it bad?

> Instead, with the approach above we don't allow opening a pair
> while the previous session is not complete. Am I wrong?

If a process inside a container closes and then immediately reopens a
vtty, it does not expect to get -EIO simply because it is advancing too
fast.


