[CRIU] [PATCH 1/2] tty: Write unread pty buffers on post dump stage

Pavel Emelyanov xemul at virtuozzo.com
Fri May 13 07:28:00 PDT 2016


>>> @@ -800,6 +815,22 @@ static int restore_tty_params(int fd, struct tty_info *info)
>>>  	return userns_call(do_restore_tty_parms, UNS_ASYNC, &p, sizeof(p), fd);
>>>  }
>>>  
>>> +static void pty_restore_queued_data(struct tty_info *info, int fd)
>>
>> Why not abort restore in case of incomplete write?
> 
> Because I don't think it's tragic -- the tty layer has never been one
> with guaranteed delivery; moreover, for real data traveling over modem
> wires it's up to the application to resend anything that gets lost.
> Sure, I can exit with an error here, but personally I think we can
> continue as we do now -- print a warning and that's all.
> 
> If you prefer, I can add an error here and abort.

No, you can leave it as is, but add a code comment explaining this.

>>
>>> @@ -1238,6 +1272,26 @@ static int tty_setup_slavery(void * unused)
>>>  	struct tty_info *info, *peer, *m;
>>>  
>>>  	/*
>>> +	 * Setup links for PTY terminal pairs.
>>> +	 */
>>> +	list_for_each_entry(info, &all_ttys, list) {
>>> +		if (!is_pty(info->driver) || info->link)
>>> +			continue;
>>> +		peer = info;
>>> +		list_for_each_entry_continue(peer, &all_ttys, list) {
>>> +			if (!is_pty(peer->driver) || peer->link)
>>> +				continue;
>>> +			if (peer->tie->pty->index == info->tie->pty->index) {
>>
>> Matching of tty's by indexes is already done below in this function. Is
>> this different?
> 
> Yes, it's different -- here we link peers to each other via indices;
> in turn, tty_lookup_data searches by tty _id_, not index.
> 
>   tty-master->index  <->  tty-slave->index
>   |                       |
>   data-id                 data-id
>     queued-data             queued-data

Please, write a code comment describing this difference.

>>> +static struct tty_data_entry *tty_lookup_data(struct tty_info *info)
>>> +{
>>> +	struct tty_data_entry *td;
>>> +
>>> +	list_for_each_entry(td, &all_tty_data_entries, list) {
>>> +		if (td->tde->tty_id == info->tie->id)
>>> +			return td;
>>> +	}
>>
>> Any reason why data is put into global list and is then picked by
>> ttys instead of tty itself is searched at data collect stage and
>> receives data in its list_head?
> 
> Not sure I follow you here. Queued data is saved in records inside a
> separate image, and it's collected automatically into one list.

Yup. My point is: why not collect this data into the respective tty's
list immediately? We already have a list of all ttys to look the tty up
in, so we'd avoid introducing yet another list.

> Then we collect the tty info, and at that moment we look up the linked
> data and assign it if needed. Mind elaborating?
> 
> 
>>> +
>>>  static int collect_one_tty(void *obj, ProtobufCMessage *msg, struct cr_img *i)
>>>  {
>>>  	struct tty_info *info = obj;
>>
>>> +
>>> +static void tty_do_writeback_queued_data(struct tty_dump_info *dinfo)
>>> +{
>>> +#define __dinfo_write_reblock(__d)							\
>>> +	do {										\
>>> +		if (write((__d)->link->lfd, (__d)->tty_data,				\
>>> +			  (__d)->tty_data_size) != (__d)->tty_data_size)		\
>>> +			pr_perror("Can't writeback to tty (%#x)\n", (__d)->id);		\
>>> +		tty_reblock((__d)->link->id, (__d)->link->lfd, (__d)->link->flags);	\
>>> +	} while (0)
>>
>> Why not having this as static-inline function?
> 
> Shorter.

Please, implement as a function.

>>
>>> +
>>> +	if (dinfo->tty_data)
>>> +		__dinfo_write_reblock(dinfo);
>>> +	if (dinfo->link->tty_data)
>>> +		__dinfo_write_reblock(dinfo->link);
>>> +#undef __dinfo_write_reblock
>>> +}
>>> +
>>> +/*
>>
>>> +	if (ret || opts.final_state != TASK_DEAD) {
>>> +		list_for_each_entry(dinfo, &all_ptys, list)
>>> +			tty_do_writeback_queued_data(dinfo);
>>> +	}
>>> +
>>> +#define __tty_dinfo_clean(__d)		\
>>> +	close_safe(&(__d)->lfd),	\
>>> +	xfree((__d)->tty_data),		\
>>> +	list_del(&(__d)->list),		\
>>> +	xfree((__d))
>>
>> The same.
> 
> Shorter too, but ok will do as inliners.

Thanks!


