[CRIU] [PATCH 1/2] pagemap: Introduce pagemap cache

Pavel Emelyanov xemul at parallels.com
Wed Feb 12 02:35:58 PST 2014


On 02/12/2014 02:27 PM, Cyrill Gorcunov wrote:
> On Wed, Feb 12, 2014 at 02:01:43PM +0400, Pavel Emelyanov wrote:
>>> +obj-y	+= pagemap.o
>>
>> pagemap-cache.o
> 
> ok
> 
>>> +
>>> +typedef struct {
>>> +	u64		pme;
>>> +} pagemap_entry_t;
>>
>> All the existing code operates with just u64. Plz, do the same.
> 
> ok
> 
>>> +
>>> +extern int pmc_init(pmc_t *pmc, pid_t pid, unsigned long max_pages);
>>> +extern int pmc_get_pfn_index(pmc_t *pmc, struct vma_area *vma, unsigned long addr, size_t *index);
>>
>> Pagemap cache should return the pagemap_entry, not the index under which the
>> caller may find the desired value in some other place.
> 
> ok
> 
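For illustration only (this is not the actual CRIU code, and `pmc_get_entry` is a hypothetical name): a lookup that hands the caller the pagemap entry itself as a plain u64, rather than an index into some other array, might look like this:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

typedef uint64_t u64;

#define PAGE_SIZE	4096UL
#define PAGE_SHIFT	12

/* Hypothetical cache shape: a window of the address space plus one
 * 64-bit pagemap entry per 4K page inside that window. */
struct pmc_sketch {
	unsigned long	start;	/* first address covered by the cache */
	unsigned long	end;	/* one past the last covered address  */
	u64		*map;	/* one entry per 4K page              */
};

/* Return the cached entry for @addr, or 0 on a cache miss. */
static u64 pmc_get_entry(struct pmc_sketch *pmc, unsigned long addr)
{
	if (addr < pmc->start || addr >= pmc->end)
		return 0;	/* miss: caller must refill the cache */
	return pmc->map[(addr - pmc->start) >> PAGE_SHIFT];
}
```

The point of the review comment is that the caller then never needs to know how the cache stores its entries internally.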
>>> +
>>> +int pmc_init(pmc_t *pmc, pid_t pid, unsigned long max_pages)
>>> +{
>>> +	pmc_reset(pmc);
>>> +
>>> +	pmc->nr_pages	= max(max_pages, PAGEMAP_MAX_PAGES);
>>
>> 2MB granularity is more than enough. Can you show numbers proving the opposite?
> 
> I can't show you the numbers, because I haven't tested other scenarios. BUT
> if you have a VMA greater than 2M, then instead of 1 read you'll end up
> doing a number of reads. What's the point in slowing things down like that?
> Again, we're building a general pagemap cache which should work at least no
> worse than a plain read over each vma.

I think that making (size / 2M) reads instead of (size / 4K) reads gives us
a noticeable performance boost. Moving further to 1 read doesn't give any
measurable advantage, but heavily increases memory consumption.
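The trade-off is easy to quantify: /proc/pid/pagemap holds one 8-byte entry per 4K page, so the number of pread() calls and the buffer cost can be computed directly (a sketch, with hypothetical helper names):

```c
#include <assert.h>

#define PAGE_SZ		4096UL
#define PME_SIZE	8UL	/* each pagemap entry is 64 bits */

/* Number of pread() calls needed to walk @vma_size bytes of address
 * space when each cache fill covers @chunk bytes. */
static unsigned long nr_reads(unsigned long vma_size, unsigned long chunk)
{
	return (vma_size + chunk - 1) / chunk;
}

/* Buffer size (bytes) needed to cache @chunk bytes of address space. */
static unsigned long cache_bytes(unsigned long chunk)
{
	return chunk / PAGE_SZ * PME_SIZE;
}
```

With 2M granularity a 1G VMA costs 512 reads instead of 262144, while the cache buffer itself is only (2M / 4K) * 8 = 4K bytes; going to a single read for the whole VMA would need a 2M buffer for the same 1G VMA.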

>>> +
>>> +	list_for_each_entry(v, &head->list, list) {
>>
>> This should be list_for_each_entry_continue to avoid problems pointed by Andrey.
> 
> Yes, thanks.
> 
>>> +
>>> +	pr_debug("Cache filled (%p - %p)\n", (void *)pmc->start, (void *)pmc->end);
>>> +	return pmc_get_pfn_index(pmc, vma, addr, index);
>>
>> No implicitly 1-level-only recursion, please.
> 
> ok
> .
> 
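The shape the review asks for: instead of the fill path calling the lookup back (implicit one-level recursion), the entry point does the lookup, refills on a miss, and retries the lookup exactly once. A toy sketch under assumed names (toy_fill/toy_lookup stand in for the real fill and index helpers):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;

#define PAGE_SZ		4096UL
#define CACHE_PAGES	4UL

/* Toy cache: covers CACHE_PAGES pages starting at an aligned @start;
 * the "entry" for a page is just its index inside the window. */
struct toy_pmc { unsigned long start; int filled; };

static int toy_fill(struct toy_pmc *pmc, unsigned long addr)
{
	pmc->start = addr & ~(CACHE_PAGES * PAGE_SZ - 1);
	pmc->filled = 1;
	return 0;
}

static int toy_lookup(struct toy_pmc *pmc, unsigned long addr, u64 *pme)
{
	if (!pmc->filled || addr < pmc->start ||
	    addr >= pmc->start + CACHE_PAGES * PAGE_SZ)
		return -1;	/* miss */
	*pme = (addr - pmc->start) / PAGE_SZ;
	return 0;
}

/*
 * No recursion: a miss triggers one refill, then a single bounded
 * retry of the lookup.
 */
static int toy_get(struct toy_pmc *pmc, unsigned long addr, u64 *pme)
{
	if (toy_lookup(pmc, addr, pme) == 0)
		return 0;
	if (toy_fill(pmc, addr))
		return -1;
	return toy_lookup(pmc, addr, pme);
}
```

The control flow stays trivially bounded: at most one fill per call, and the failure of the second lookup propagates to the caller instead of recursing again.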
