[Devel] [PATCH rh7 1/2] vdso: virtualized monotonic gettime through vdso
Andrey Ryabinin
aryabinin at virtuozzo.com
Wed May 24 09:34:04 PDT 2017
On 05/24/2017 07:29 PM, Dmitry Safonov wrote:
> On 05/24/2017 06:21 PM, Andrey Ryabinin wrote:
>> We already have infrastructure for a virtualized vdso, but so far we use
>> it only to change LINUX_VERSION_NAME in containers. Simply store the
>> container's start time - ve->start_timespec - in a vdso variable,
>> VDSO64_ve_start_timespec, and use it in __vdso_clock_gettime() to
>> calculate the container's monotonic time.
>>
>> Make uts_arch_setup_additional_pages()/uts_prep_vdso_pages_locked() always
>> set up a new vdso, since the previous policy of setting up the vdso only if
>> uts_ns->name.release is changed wouldn't work for the virtualized
>> __vdso_clock_gettime().
>>
>> https://jira.sw.ru/browse/PSBM-66451
>> Signed-off-by: Andrey Ryabinin <aryabinin at virtuozzo.com>
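In other words, the calculation in the description above boils down to
something like this (a sketch only: do_monotonic() names the existing
host path, and the in-vdso ve_start_timespec variable is read directly
here rather than through the real symbol plumbing):

    /* Container monotonic time = host monotonic time minus the
     * container's start time, both on the host CLOCK_MONOTONIC axis.
     * ve_start_timespec is the per-uts_ns vdso variable the kernel
     * fills with ve->start_timespec at setup time.
     */
    notrace static int do_monotonic_ve(struct timespec *ts)
    {
            struct timespec ve_start = ve_start_timespec;

            do_monotonic(ts);                /* existing host path */
            ts->tv_sec  -= ve_start.tv_sec;
            ts->tv_nsec -= ve_start.tv_nsec;
            if (ts->tv_nsec < 0) {
                    ts->tv_nsec += NSEC_PER_SEC;
                    ts->tv_sec--;
            }
            return 0;
    }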
>
> Well, that all looks good, but I have two questions:
>
> 1. Where is VDSO64_ve_start_timespec set? I don't see that part.
>
In uts_arch_setup_additional_pages() and uts_prep_vdso_pages_locked().
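Roughly like so (a sketch: VDSO64_SYMBOL() stands in for whatever
symbol-lookup helper locates an offset inside the copied image, and
vdso_set_ve_start() is an illustrative name, not the exact rh7 code):

    /* Setup side: once the per-uts_ns copy of the vdso image exists,
     * write the container's start time into it at the offset of the
     * VDSO64_ve_start_timespec symbol.
     */
    static void vdso_set_ve_start(void *vdso_copy, struct ve_struct *ve)
    {
            struct timespec *ts;

            ts = (struct timespec *)VDSO64_SYMBOL(vdso_copy,
                                                  ve_start_timespec);
            *ts = ve->start_timespec;
    }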
> 2. The part with unconditional creation of the vdso pages array in
> uts_arch_setup_additional_pages()/uts_prep_vdso_pages_locked()
> will result in each exec() creating a new vdso for all tasks in a CT.
> So that will result in n*8kB of additional memory, where n is the
> number of exec()'ed tasks in CTs.
> I'm not sure how large n can be, so that may be OK.
> But can we find a way to make it p*8kB, where p is the number of CTs?
Nope, the vdso pages are created only once and attached to the uts_ns. Once the
pages are created, all follow-up execs will reuse them. So it's one vdso per uts_ns.
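I.e. the control flow is along these lines (a sketch; the field and
helper names approximate the rh7 tree, uts_build_vdso_copy() in
particular is illustrative):

    /* The first exec() in a uts_ns builds and caches the vdso copy;
     * every later exec() just maps the cached pages again.
     */
    static int uts_prep_vdso_pages_locked(struct uts_namespace *ns)
    {
            if (ns->vdso_pages)     /* built by an earlier exec() */
                    return 0;

            /* allocate the copy, patch the release string and
             * ve_start_timespec, then cache it on the uts_ns */
            return uts_build_vdso_copy(ns);
    }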
>
> As far as I can see, the vdso pages array was also copied for UTS
> virtualization, where it was needed previously, so question (2) may
> not be related to this patch set, but to the way it is already done.
> FWIW, what I mean here is that it would be worth having one copy
> of the vdso pages per CT.
>
> Maybe I'm trying to rearrange the deck chairs on the Titanic and
> that increase is just insignificant.
>
>> ---