I recently bought two new 500GB hard drives and am in the process of migrating my RAID array to double its size to 2TB. The current 1TB array over 3 drives was 100% full, so it was something that had to be done. The expansion took about 26 hours on my 3ware 9650SE-8LPML controller, which is quite good compared to other people's experiences. Of course, all important data was backed up before the RAID expansion.
The primary issue I had after the expansion was partitioning the new free space properly. When the RAID expansion was complete, I had 1TB of used space on the original JFS partition and 1TB of unused, unpartitioned free space. Attempts to use GParted to expand the partition failed due to a known bug that prevents it from handling partitions over 1TB. I then tried Parted, the command-line tool that GParted is based on, to expand the filesystem, but was unable to do so. This seemingly left me with two options: either use multiple smaller partitions, or reformat and restore the data from the backups. I decided that smaller partitions would be a lot of extra work in terms of keeping tabs on the free space on each partition (and might require more frequent partition modifications), which should not be necessary. Restoring from backups was not welcome due to the time involved. Deciding that those options were not acceptable, I was determined to find the solution I wanted: a single 2TB partition without going through a backup and restore process. If anything, it would be worth doing simply on principle: having large partitions on a modern filesystem that supports volumes up to 32000TB should not be a problem. Several Google marathons and man page studies eventually allowed me to perform the operation I wanted.
My main problem was a lack of understanding of how fdisk actually works. With fdisk, you can delete and recreate a partition without destroying the filesystem that lives on it. I am not sure exactly what the limitations are, but it seems that as long as you do not change the starting point of the partition, the filesystem will remain intact. In my case, the drive in question (/dev/sda) had a 1TB partition starting at the beginning of the disk (/dev/sda1), with 1TB of unpartitioned free space (created by the RAID expansion) residing after the JFS partition. I deleted the JFS partition (/dev/sda1) and recreated it, this time using the full 2TB of space. In the tests that I did, the free space MUST come after the partition. I wrote the partition table and then mounted the drive. All the data was still there, but the JFS partition was only showing up as 1TB in the output of df -h. This was rather concerning until I realized the state that the drive was in.
fdisk only edits the partition table, not the filesystem itself. When it writes out the partition table, it is just redefining where the partition begins and ends; it does not touch the filesystem. Thus, although I had expanded the partition, I had not expanded the filesystem. Doing some more searching, I found this article, which gave me the last piece of information I needed. The command mount -o remount,resize /mount/point tells the JFS filesystem to expand to fill the entire partition. The command is unique to JFS and will not work on other filesystems, because the resize option is handled by the JFS kernel driver. Note that the filesystem must already be mounted when you issue the command.
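Putting the pieces together, the whole operation boils down to a few commands. This is a hedged sketch assuming the array is /dev/sda, the JFS partition is /dev/sda1, and it is mounted at /mnt/raid (all placeholders for your own setup); it must be run as root, and only with backups in hand:

```shell
umount /mnt/raid        # unmount before editing the partition table
fdisk /dev/sda          # d (delete sda1), n (recreate with the SAME start
                        # sector and the new, larger end), then w to write
                        # if the kernel refuses to reread the table, a
                        # reboot may be needed before mounting
mount /dev/sda1 /mnt/raid            # the filesystem must be mounted...
mount -o remount,resize /mnt/raid    # ...before JFS will grow it in place
df -h /mnt/raid         # should now report the full partition size
```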
Perhaps most frustrating was the lack of information about the JFS filesystem. It does not seem to be very popular in the Linux community, most of whom use ext2/ext3; those who do use an alternative filesystem tend to use ReiserFS, with a small number on XFS. In the limited, unscientific testing I have done, JFS performed very well and used far less CPU than ext3 or ReiserFS. JFS has worked out well for me so far and, despite my initial difficulty in expanding the filesystem, is the filesystem I will use in the future.