[Users] Maximum JFS file (not filesystem) size with the 2.6.27 openvz patched kernel

Dwight Schauer dschauer at gmail.com
Mon Sep 7 23:59:20 EDT 2009


Hello fellow OpenVZers,

This is not really an OpenVZ situation per se, but an OpenVZ-patched
kernel was involved.

I tarred up a couple of filesystems and piped the tar stream through ssh
to a remote computer (a hardware node running the OpenVZ-patched 2.6.27
kernel), where I dd'ed it to a file. This is a common backup method I've
been using for a few years when I'm going to wipe a system and start
over.
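
Roughly, the pipeline looked like this (paths and hostname are
illustrative placeholders, not the exact command I ran):

tar -cf - /mnt/a4 /mnt/b4 | \
  ssh user@hwnode 'dd of=/backup/a4b4.tar bs=1M'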

I'm using JFS on the Arch Linux-based hardware node that was being copied to.

The resulting file ended up being 137G (which is about right based on
the source filesystem usage).

du --human --total a4b4.tar
137G    a4b4.tar
137G    total
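
Cross-checking the apparent size against the block usage (to rule out
the file being sparse) could be done with something like:

ls -l a4b4.tar
stat a4b4.tar

but 137G is about what I expected to have written.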

However, I can only restore the first 63 GB of the tarball, so I
attempted to see how much of it could be read at all.

dd if=a4b4.tar of=/dev/null
dd: reading `a4b4.tar': Input/output error
123166576+0 records in
123166576+0 records out
63061286912 bytes (63 GB) copied, 1193.69 s, 52.8 MB/s
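
That byte count is consistent with the record count: 123166576 records
at dd's default 512-byte block size is 123166576 * 512 = 63,061,286,912
bytes, exactly what dd reports.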

There were no critical files in that tarball that aren't kept
elsewhere, so lost data is not the issue. At this point I consider
everything past the 63 GB point in the tarball to be unrecoverable,
which is fine.

I tried skipping the first 63 GB, but that did not work either.

dd if=a4b4.tar skip=123166576 of=/dev/null
dd: reading `a4b4.tar': Input/output error
0+0 records in
0+0 records out
0 bytes (0 B) copied, 27.2438 s, 0.0 kB/s

It seems dd took a while (over 27 seconds) just to figure out that it
could not perform this operation.
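
In case anyone else hits this: dd can be told to carry on past read
errors, so something like the following (untested against this
particular file) would at least show whether anything beyond the bad
region is still readable:

dd if=a4b4.tar of=/dev/null bs=512 skip=123166576 conv=noerror,sync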

Yes, I know: I could have used bzip2 and made two separate files, I
could have used rsync -av, I could have checked the tarball before
wiping the source filesystems, etc. That is not the point here. Now
that I know JFS on my setup has an apparent 63 GB file size limit,
I'll accommodate that in the future.
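
For future backups over the same path, splitting the stream on the
receiving end would keep each piece well under that size; something
along these lines (sizes and names are illustrative):

tar -cf - /mnt/a4 /mnt/b4 | \
  ssh user@hwnode 'split -b 32G - /backup/a4b4.tar.part.'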

I'm mainly just curious how the system could write a larger file than
it can read back.

Dwight

