[Users] Maximum JFS file (not filesystem) size with the 2.6.27 openvz patched kernel

Dwight Schauer dschauer at gmail.com
Fri Sep 11 09:43:30 EDT 2009


Scott,

Thanks. It turns out this is a hard drive problem and the read is
failing at that point in the file.

That is very close to the 63.876 GB limit mentioned here:

http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.baseadmn/doc/baseadmndita/fs_jfs2.htm

I used ddrescue to recover the file, but the tar file was
unrecoverable from that point on.
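
(For reference, the recovery step was roughly along the lines below;
the output and map file names are just placeholders:

ddrescue -v a4b4.tar a4b4-rescued.tar a4b4.map

ddrescue skips past the unreadable region and records its progress in
the map file, so the copy can be resumed or retried later.)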

Dwight

On Tue, Sep 8, 2009 at 4:41 AM, Scott Dowdle <dowdle at montanalinux.org> wrote:
> Dwight,
>
> According to the Wikipedia file system comparison page (http://en.wikipedia.org/wiki/Comparison_of_file_systems), JFS has a maximum file size of 4 PiB.
>
> ----- "Dwight Schauer" <dschauer at gmail.com> wrote:
>
>> Hello fellow OpenVZers,
>>
>> This is not really an OpenVZ situation per se, but an OpenVZ-patched
>> kernel was involved.
>>
>> I tarred up a couple of filesystems and piped the tar stream through
>> ssh to a remote computer (a hardware node running the OpenVZ-patched
>> 2.6.27 kernel), where I dd'ed it to a file. This is a common backup
>> method I've been using for a few years now when I'm going to wipe a
>> system and start over.
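>>
>> (Roughly, the pipeline looks like this; the filesystem paths and
>> host name below are just placeholders:
>>
>> tar -cf - /srcfs1 /srcfs2 | ssh root@hwnode 'dd of=/backup/a4b4.tar bs=1M'
>>
>> tar writes the archive to stdout, ssh carries the stream, and dd on
>> the hardware node writes it out as a single file on the JFS
>> filesystem.)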
>>
>> I'm using JFS on the Arch Linux-based hardware node that was being
>> copied to.
>>
>> The resulting file ended up being 137G (which is about right based on
>> the source filesystem usage).
>>
>> du --human --total a4b4.tar
>> 137G    a4b4.tar
>> 137G    total
>>
>> However, I can only restore from 63 GB of the tarball, so I attempted
>> to see how much could be read.
>>
>> dd if=a4b4.tar of=/dev/null
>> dd: reading `a4b4.tar': Input/output error
>> 123166576+0 records in
>> 123166576+0 records out
>> 63061286912 bytes (63 GB) copied, 1193.69 s, 52.8 MB/s
>>
>> There were no critical files in that tarball that are not kept
>> elsewhere, so that is not the issue. At this point I can consider
>> everything past the 63 GB point in the tarball to be unrecoverable,
>> which is fine.
>>
>> I tried skipping the first 63 GB, but that does not work.
>>
>> dd if=a4b4.tar skip=123166576 of=/dev/null
>> dd: reading `a4b4.tar': Input/output error
>> 0+0 records in
>> 0+0 records out
>> 0 bytes (0 B) copied, 27.2438 s, 0.0 kB/s
>>
>> It seems like dd took a while to figure out that it could not
>> perform this operation.
>>
>> Yeah, I know: I could have used bzip and made two separate files, I
>> could have used rsync -av, I could have checked the tarball before
>> wiping the source filesystems, etc. That is not the point here. Now
>> that I know that JFS on my setup has a 63 GB file size limit, I know
>> to accommodate that in the future.
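>>
>> (One way to keep each archive file under a given size would be to
>> pipe the stream through split on the receiving end; the 32G size and
>> names below are just examples:
>>
>> tar -cf - /srcfs1 /srcfs2 | ssh root@hwnode 'split -b 32G - /backup/a4b4.tar.part.'
>>
>> split writes a4b4.tar.part.aa, a4b4.tar.part.ab, and so on, which
>> can be concatenated with cat before extraction.)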
>>
>> I'm mainly just curious how the system could write a larger file
>> than it can read.
>>
>> Dwight
>> _______________________________________________
>> Users mailing list
>> Users at openvz.org
>> https://openvz.org/mailman/listinfo/users
>
> --
> Scott Dowdle
> 704 Church Street
> Belgrade, MT 59714
> (406)388-0827 [home]
> (406)994-3931 [work]
> _______________________________________________
> Users mailing list
> Users at openvz.org
> https://openvz.org/mailman/listinfo/users
>


