I recently bought 2 new 500GB hard drives and am in the process of migrating my RAID array to double its size to 2TB. The current 1TB across 3 drives was 100% full, so it was something that had to be done. The expansion took about 26 hours on my 3ware 9650SE-8LPML controller, which is quite good judging from other people's experiences. Of course, all important data was backed up before the RAID expansion.
The primary issue I had after the expansion was partitioning the free space properly. When the RAID expansion was complete, I had 1TB of used space on the original JFS partition and 1TB of unused, unpartitioned free space. Attempts to use Gparted to expand the partition failed due to a known bug that prevents it from handling partitions over 1TB. I tried to use Parted, the command-line tool that Gparted is based on, to expand the filesystem, but was unable to do so. This seemingly left me with two options: use multiple smaller partitions, or reformat and restore the data from the backups. I decided that smaller partitions would mean a lot of extra work keeping tabs on the free space on each partition (and might require more frequent partition changes), which should not be necessary. Restoring from backups was not welcome because of the time involved. Deciding that neither option was acceptable, I was determined to find the solution I wanted: a 2TB partition without going through a backup-and-restore process. If anything, it would be worth doing simply on principle: having large partitions on a modern filesystem that supports volumes of up to 32000TB should not be a problem. Several Google marathons and man page studies later, I was able to perform the operation I wanted.
My main problem was a lack of understanding of how fdisk actually works. With fdisk, you can delete and recreate a partition without actually destroying the filesystem that lives on it. I am not sure exactly what the limitations are, but it seems that as long as you do not change the starting point of the partition, the filesystem will remain intact. In my case, the drive in question (/dev/sda) had a 1TB partition starting at the beginning of the disk (/dev/sda1), with the 1TB of unpartitioned free space created by the RAID expansion residing after the JFS partition. I deleted the JFS partition (/dev/sda1) and recreated it, this time using the full 2TB of space. In the tests that I did, the free space MUST come after the partition. I wrote the partition table and then mounted the drive. All the data was still there, but the JFS partition was only showing up as 1TB in df -h. This was rather concerning until I realized the state that the drive was in.
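For reference, the fdisk session went roughly like this (the device name matches my setup; the exact prompts differ between fdisk versions, and newer versions may ask whether to remove the existing JFS signature, which you should decline):

fdisk /dev/sda
Command (m for help): p   (print the table and note where /dev/sda1 starts)
Command (m for help): d   (delete partition 1; only the table entry is removed, the data stays put)
Command (m for help): n   (recreate partition 1 with the same starting point and the new, larger end)
Command (m for help): w   (write the new table and exit)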
fdisk only edits the partition table, not the actual filesystem. When it writes out the partition table, it is just redefining where the partition begins and ends; it does not touch the filesystem. Thus, although I had expanded the partition, I had not expanded the filesystem. Doing some more searching, I found this article, which gave me the last piece of information I needed. The command mount -o remount,resize /mount/point tells the JFS filesystem to expand to fill the entire partition. The command is unique to JFS and will not work on other filesystems, because the resize option is handled by the JFS kernel driver. Note that the filesystem must already be mounted when you issue the remount command.
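As a concrete example, assuming the filesystem is mounted at /mnt/storage (the mount point here is just a placeholder for wherever yours lives):

mount /dev/sda1 /mnt/storage
df -h /mnt/storage                       (still reports the old 1TB size)
mount -o remount,resize /mnt/storage     (grows JFS to fill the partition)
df -h /mnt/storage                       (now reports the full 2TB)

According to mount(8), resize can also be given an explicit block count (resize=blocks); with no value it grows the filesystem to the full size of the partition. JFS can only be grown this way, not shrunk.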
Perhaps most frustrating is the lack of information about the JFS filesystem. It does not seem to be very popular in the Linux community, most of whom use ext2/ext3. Those who do use an alternative filesystem tend to use ReiserFS, with a small number using XFS. In the limited, unscientific testing I have done, JFS performed very well and used far less CPU than ext3 or ReiserFS. JFS has worked out well for me so far and, despite my initial difficulty in expanding the filesystem, is the filesystem I will use in the future.
Hi,
I found your post from a google search and thought I’d share the right way to get this done (assuming your statement about the gparted bug is accurate, since I don’t use the tool and am planning a single fs on my array). I’ve sketched the rough commands after each list of steps.
1) make a new array with your (free) new disks. don’t partition it; that’s what is causing your issue. NB: if using software raid (mdadm), pass “-e 1.2” to the create command to enable arrays bigger than 2TB.
2) jfs_mkfs $new_disks (yes, you can make a filesystem on a raid array without a partition)
3) copy old tb array to new tb array.
4) stop old array
5) add old array disks to new array and let it rebuild.
6) mount the jfs volume (yes at this point there is 1tb free)
7) remount the jfs volume with the “resize” option (as you previously discussed).
8) all done ;>
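roughly, with mdadm, the above works out to something like this (device names, RAID level, and mount points are just examples; adjust for your setup):

mdadm --create /dev/md1 -e 1.2 --level=5 --raid-devices=2 /dev/sdc /dev/sdd
jfs_mkfs /dev/md1
mount /dev/md1 /mnt/new
rsync -a /mnt/old/ /mnt/new/             (copy the old array’s contents across)
umount /mnt/old
mdadm --stop /dev/md0                    (retire the old array)
mdadm --add /dev/md1 /dev/sda /dev/sdb   (add its disks to the new array)
mdadm --grow /dev/md1 --raid-devices=4   (and let the reshape run)
mount -o remount,resize /mnt/new         (once it finishes, grow JFS into the new space)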
if one were starting anew, and assuming you are using only one fs on the array, you would (see the sketch after these steps):
1) make a new array.
2) skip the partition step. This is key ;>
3) jfs_mkfs
4) use the array and then add new disks later and continue with step 7 above.
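the grow-later part (step 4 above, continuing with step 7 from the first list) would then look something like:

mdadm --add /dev/md1 /dev/sde            (add the new disk)
mdadm --grow /dev/md1 --raid-devices=5   (reshape the array onto it)
mount -o remount,resize /mnt/new         (when the reshape completes, grow the filesystem)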
to prevent further issues later in your array’s life I would suggest redoing the array now, if you have current backups. NB: when restoring from backup, mount using the “nointegrity” option to increase restore speed, then remount with the “integrity” option to re-enable the fs log ;>
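fwiw, a restore along those lines looks something like this (device and mount point are just examples):

mount -o nointegrity /dev/md1 /mnt/new   (journal disabled while the backup is restored)
  ...restore the data from backup...
mount -o remount,integrity /mnt/new      (re-enable the JFS log for normal use)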
hth
Sam S.
According to the bugzilla, the bug that I encountered has since been resolved, so next time I want to expand the fileserver (I can put 3 more drives in), I’ll give it a go. I did not realize that you could create a filesystem without a partition either. I may have to play with this method on my test box and see how it goes.
Hey thanks rob, I just got that drive into my raid.
PS. This page is like the second entry on google for resizing jfs file systems.
–Citizen