[CRIU] [PATCH 06/19] pstree: Bind CoreEntry to pstree item
Pavel Emelyanov
xemul at parallels.com
Wed Feb 27 07:21:59 EST 2013
On 02/27/2013 04:14 PM, Cyrill Gorcunov wrote:
> On Wed, Feb 27, 2013 at 03:55:17PM +0400, Pavel Emelyanov wrote:
>> On 02/26/2013 05:08 PM, Cyrill Gorcunov wrote:
>>>
>>> In the future we will need to gather task registers pretty early, which
>>> implies preliminary allocation of the CoreEntry entries. So we bind the
>>> CoreEntry array to the pstree item and allocate/free it with the pstree
>>> lifetime.
>>>
>>> (This is because once parasite daemon mode is implemented we will no
>>> longer be able to fetch registers, since the task will be running in
>>> daemon code.)
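
For the record, the shape I read out of this description is roughly the
following -- a sketch only, with the array layout taken from the hunks
below; protobuf initialization and error unwinding are simplified here:

	/* new field in struct pstree_item, as used by the hunks below */
	CoreEntry	**core;		/* one CoreEntry per thread, dump-time only */

static int pstree_alloc_cores_sketch(struct pstree_item *item)
{
	unsigned int i;

	item->core = xzalloc(item->nr_threads * sizeof(*item->core));
	if (!item->core)
		return -1;

	for (i = 0; i < item->nr_threads; i++) {
		item->core[i] = xmalloc(sizeof(*item->core[i]));
		if (!item->core[i])
			return -1;	/* partial array is freed with the item */
		core_entry__init(item->core[i]);
	}

	return 0;
}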
>>
>> We don't fetch registers for the master thread even now:
>>
>> int get_task_regs(pid_t pid, CoreEntry *core, const struct parasite_ctl *ctl)
>> {
>> 	struct xsave_struct xsave = { };
>> 	user_regs_struct_t regs = {-1};
>>
>> 	struct iovec iov;
>> 	int ret = -1;
>>
>> 	pr_info("Dumping GP/FPU registers ... ");
>>
>> 	if (ctl)
>> 		regs = ctl->regs_orig;
>> 	else {
>> 		if (ptrace(PTRACE_GETREGS, pid, NULL, &regs)) {
>> 			pr_err("Can't obtain GP registers for %d\n", pid);
>> 			goto err;
>> 		}
>> 	}
>>
>> why not simply pull the threads' regs in the same manner?
>
> Could you please enlighten me here?
My question is -- we do have code that fetches regs for the master thread
early in the dump. Why do we need some complex code rework, rather than
fetching the sub-threads' regs in the same place as well?
> We do (with these patches applied)
>
> static int collect_regs_seized(struct pstree_item *item)
> {
> 	unsigned int i;
> 	int ret;
>
> 	for (i = 0; i < item->nr_threads; i++) {
> 		pid_t pid = item->threads[i].real;
> 		ret = get_task_regs(pid, item->core[i], NULL);
> 		if (ret) {
> 			pr_err("Can't obtain regs for thread %d\n", pid);
> 			return -1;
> 		}
> 	}
>
> 	return 0;
> }
>
> here get_task_regs is called with NULL, and the original registers for all
> threads (including the thread leader) are fetched. I didn't drop the ctl
> argument since I don't know whether we might need it in the future. IOW,
> registers for all threads in the pstree item are fetched once the item
> is seized. No?
>
>>> +	BUG_ON(!core);
>>
>> unneeded
>
> OK
>
>>>
>>> +	if (pstree_alloc_cores(item))
>>> +		goto err_close;
>>
>> Why not allocate the cores at the same place where the respective pstree item is allocated?
>
> pstree items are used in both the checkpoint and restore stages, while the
> CoreEntry array is only allocated for the checkpoint stage and is not needed
> at restore time, so I tried to keep the code somewhat separated. Still, I'll
> indeed review whether I can allocate the cores together with pstree item
> creation, thanks!
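
Whichever place the allocation ends up in, I'd expect the freeing side to
stay tied to the pstree item's lifetime, roughly like below (a sketch only,
the function name is mine; xfree() on a NULL entry is assumed to be a no-op):

static void pstree_free_cores_sketch(struct pstree_item *item)
{
	unsigned int i;

	if (!item->core)
		return;

	for (i = 0; i < item->nr_threads; i++)
		xfree(item->core[i]);
	xfree(item->core);
	item->core = NULL;
}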
>
>>> @@ -1147,7 +1106,8 @@ static int dump_task_threads(struct parasite_ctl *parasite_ctl,
>>>  			continue;
>>>  		}
>>>
>>> -		if (dump_task_thread(parasite_ctl, &item->threads[i]))
>>> +		if (dump_task_thread(parasite_ctl, &item->threads[i],
>>> +				     item->core[i]))
>>>  			return -1;
>>>  	}
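
Just to spell out what this hunk implies: dump_task_thread() now receives
the per-thread core to fill, i.e. something like the prototype below
(reconstructed from the call site above, the argument names are mine):

static int dump_task_thread(struct parasite_ctl *parasite_ctl,
			    struct pid *tid, CoreEntry *core);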
>>>
>>> @@ -1354,7 +1314,7 @@ static int dump_one_task(struct pstree_item *item)
>>>  		goto err_cure;
>>>  	}
>>>
>>> -	ret = dump_task_core_all(pid, &pps_buf, &misc,
>>> +	ret = dump_task_core_all(pid, pstree_find_core(item, pid), &pps_buf, &misc,
>>
>> Linear search is bad here. The pstree_item can point directly to the desired core.
>
> Sure, will update, thanks.
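
For what it's worth, the direct access I have in mind is simply the one below
(assuming the group leader sits in slot 0 of the per-thread arrays; the helper
name is hypothetical):

/* the leader's core, no linear search over the threads needed */
static inline CoreEntry *pstree_leader_core(struct pstree_item *item)
{
	return item->core[0];
}

so dump_one_task() can pass pstree_leader_core(item) (or item->core[0]
directly) to dump_task_core_all().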
>
>>> +int pstree_alloc_cores(struct pstree_item *item)
>>> +{
>>> +	unsigned int i;
>>> +	bool is_alive;
>>> +
>>> +	is_alive = (item->state != TASK_DEAD);
>>> +	BUG_ON(!is_alive && item->nr_threads > 1);
>>
>> If we allocate the cores together with pstree items, this BUG_ON becomes redundant.
>
> ok, thanks!
>
> Cyrill
>