[CRIU] [PATCH 2/2] restore: Restore as many pages at once as possible

Pavel Emelyanov xemul at parallels.com
Tue Nov 10 00:47:54 PST 2015


On 11/10/2015 12:14 AM, Christopher Covington wrote:
> Hi Pavel,
> 
> On 11/09/2015 07:46 AM, Pavel Emelyanov wrote:
>> When the VMA being restored is not COW-ed we read pages from images
>> one-by-one which results in suboptimal pages.img access. Fix this
>> by reading as many pages from the image at once as possible within
>> the active pagemap and VMA.
>>
>> Signed-off-by: Pavel Emelyanov <xemul at parallels.com>
>> ---
>>  cr-restore.c | 28 ++++++++++++++++++++++++----
>>  1 file changed, 24 insertions(+), 4 deletions(-)
>>
>> diff --git a/cr-restore.c b/cr-restore.c
>> index 8770f57..e06570d 100644
>> --- a/cr-restore.c
>> +++ b/cr-restore.c
>> @@ -468,8 +468,8 @@ static int restore_priv_vma_content(void)
>>  				ret = pr.read_pages(&pr, va, 1, buf);
>>  				if (ret < 0)
>>  					goto err_read;
>> -				va += PAGE_SIZE;
>>  
>> +				va += PAGE_SIZE;
>>  				nr_compared++;
>>  
>>  				if (memcmp(p, buf, PAGE_SIZE) == 0) {
>> @@ -477,15 +477,35 @@ static int restore_priv_vma_content(void)
>>  					continue;
>>  				}
>>  
>> +				nr_restored++;
>>  				memcpy(p, buf, PAGE_SIZE);
>>  			} else {
>> -				ret = pr.read_pages(&pr, va, 1, p);
>> +				int nr, j;
>> +
>> +				/*
>> +				 * Try to read as many pages as possible at once.
>> +				 *
>> +				 * Within the current pagemap we still have
>> +				 * nr_pages - i pages (not all, as we might have
>> +				 * switched VMA above), within the current VMA
>> +				 * we have at most (vma->e->end - va) bytes.
>> +				 */
>> +
>> +				nr = min(nr_pages - i, (vma->e->end - va) / PAGE_SIZE);
> 
> Using page_size() instead of PAGE_SIZE would be nice for AArch64 (and PowerPC
> if they add 4K page support to CRIU). On AArch32 and x86, page_size() is
> #define'd to PAGE_SIZE so you shouldn't see any slowdown there.

OK, but the rest of the code around here uses PAGE_SIZE. If I changed
only this single line to page_size(), the whole logic would become
inconsistent and break.
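
Just to make the intent concrete, the batching calculation boils down to
the min() below. This is a simplified, standalone sketch (a made-up helper
name and plain unsigned longs, not the real CRIU pagemap/VMA structures or
the read_pages() path), and it sticks to PAGE_SIZE like the rest of that
loop:

#include <stdio.h>

#define PAGE_SIZE	4096UL

/*
 * How many pages one read call could cover: the smaller of what is
 * left in the current pagemap entry and what still fits in the VMA.
 */
static unsigned long batch_pages(unsigned long nr_pages, unsigned long i,
				 unsigned long va, unsigned long vma_end)
{
	unsigned long left_in_pagemap = nr_pages - i;
	unsigned long left_in_vma = (vma_end - va) / PAGE_SIZE;

	return left_in_pagemap < left_in_vma ? left_in_pagemap : left_in_vma;
}

int main(void)
{
	/*
	 * A 16-page pagemap entry, 5 pages already handled, and the VMA
	 * ends 8 pages ahead of va: min(11, 8) -> read 8 pages at once.
	 */
	unsigned long va = 0x400000;
	unsigned long nr = batch_pages(16, 5, va, va + 8 * PAGE_SIZE);

	printf("can read %lu pages in one go\n", nr);
	return 0;
}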

-- Pavel


