[Devel] [PATCH 2/2] fuse: wait for writeback in fuse_file_fallocate() -v2

Maxim Patlasov mpatlasov at parallels.com
Thu Aug 29 09:27:30 PDT 2013


Hi,

On 08/29/2013 07:41 PM, Miklos Szeredi wrote:
> On Fri, Aug 16, 2013 at 03:30:27PM +0400, Maxim Patlasov wrote:
>> The patch fixes a race between mmap-ed write and fallocate(PUNCH_HOLE):
>>
>> 1) A user makes a page dirty via an mmap-ed write.
>> 2) The user performs fallocate(2) with mode == PUNCH_HOLE|KEEP_SIZE
>>     and <offset, size> covering the page.
>> 3) Before the truncate_pagecache_range() call from fuse_file_fallocate(),
>>     the page goes to write-back. The page is fully processed by fuse_writepage()
>>     (including end_page_writeback() on the page), but fuse_flush_writepages() does
>>     nothing because fi->writectr < 0.
>> 4) truncate_pagecache_range() is called and fuse_file_fallocate() finishes
>>     by calling fuse_release_nowrite(). The latter triggers processing of the
>>     queued write-back request, which will soon write stale data to the hole.
>>
>> Changed in v2 (thanks to Brian for the suggestion):
>>   - Do not truncate the page cache until FUSE_FALLOCATE has succeeded. Otherwise,
>>     we can end up returning -ENOTSUPP while user data has already been punched
>>     from the page cache. Use filemap_write_and_wait_range() instead.
> The problem with fuse_wait_on_writeback() is starvation.  You could have the
> page range continually being dirtied and written back and fallocate() livelocked
> in fuse_wait_on_writeback() for ever AFAICS.

Yes, I agree. I thought the range could not be dirtied indefinitely while
i_mutex is held, but now I understand that is not true for mmap-ed writes.
I need to think more about livelock avoidance.
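
Just to illustrate the starvation scenario to myself, here is a userspace
sketch (not part of the patch; the path and sizes are arbitrary examples):
one thread keeps re-dirtying the mmap-ed page while another punches a hole
over the same range, so a wait-for-writeback loop in fallocate() may never
find the range clean:

/* Userspace sketch: keeps re-dirtying an mmap-ed page while
 * fallocate(PUNCH_HOLE) runs over the same range on a FUSE mount. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/falloc.h>

static volatile int stop;

static void *dirtier(void *p)
{
	char *page = p;

	while (!stop)
		page[0]++;	/* mmap-ed write: the page goes dirty again and again */
	return NULL;
}

int main(void)
{
	/* "/mnt/fuse/testfile" is just an example path on a FUSE mount */
	int fd = open("/mnt/fuse/testfile", O_RDWR | O_CREAT, 0644);
	pthread_t t;
	char *addr;

	if (fd < 0 || ftruncate(fd, 4096) < 0)
		return 1;
	addr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED)
		return 1;

	pthread_create(&t, NULL, dirtier, addr);
	/* races with the writes above; a fallocate() that just waits for
	 * writeback in a loop may never see the range clean */
	fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, 4096);
	stop = 1;
	pthread_join(t, NULL);
	munmap(addr, 4096);
	close(fd);
	return 0;
}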

>
> So having a barrier like FUSE_NOWRITE is good but then we need to take care of
> throwing away the truncated part of the queue.  But that should be doable by
> passing the truncated range explicitly to fuse_release_nowrite().

Yes, I considered this approach, but splitting a fuse request into two in
fuse_send_writepage() made me uneasy: what do we do if the allocation of
the second request fails?
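
For reference, the ordering v2 implements in fuse_file_fallocate() is roughly
the following (a sketch of the idea, not the exact hunk from the patch;
locking and most error handling omitted):

	/* "file", "mode", "offset" and "length" are the
	 * fuse_file_fallocate() arguments. */
	struct inode *inode = file_inode(file);
	int err = 0;

	if (mode & FALLOC_FL_PUNCH_HOLE) {
		/* flush dirty pages in the range and wait for writeback,
		 * so no queued write can repopulate the hole later */
		err = filemap_write_and_wait_range(inode->i_mapping, offset,
						   offset + length - 1);
		if (err)
			return err;
	}

	/* ... send FUSE_FALLOCATE to the userspace filesystem, set err ... */

	/* truncate the page cache only after the request succeeded, so an
	 * -ENOTSUPP reply does not leave user data already punched from
	 * the cache */
	if (!err && (mode & FALLOC_FL_PUNCH_HOLE))
		truncate_pagecache_range(inode, offset, offset + length - 1);

	return err;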

Thanks,
Maxim


