= NetApp Aggregate Expansion =
**Summary**: How to expand the aggregates on a NetApp filer. \\
**Date**: Around 2015 \\
**Refactor**: 7 March 2025: Checked links and formatting. \\
{{tag>netapp}}
When you're running ONTAP 8.1 you can finally create or upgrade your aggregates to 64-bit. There is only one supported way to upgrade an existing aggregate to 64-bit, and that is by adding enough disks to make its size exceed 16 TB. This article is just me going through that procedure, with some extra steps afterwards to maximize performance for the newly created free space.
= Check Configuration =
These are my aggregates; I'm going to upgrade aggr0:
filer01> aggr status
Aggr State Status Options
aggr0 online raid_dp, aggr root
redirect
32-bit
aggr1_SATA online raid_dp, aggr nosnap=on
32-bit
aggr2_SATA online raid_dp, aggr raidsize=16
32-bit
These are my RAID groups (why is that important? Read [[netappcaveat|this article]]):
> Note that I removed the info regarding aggr1_SATA and aggr2_SATA since I'm not going to do anything with those aggregates.
filer01> sysconfig -r
Aggregate aggr0 (online, raid_dp, redirect) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.16 0a 1 0 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
parity 0a.32 0a 2 0 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.16 0c 1 0 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.32 0c 2 0 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.17 0a 1 1 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.17 0c 1 1 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.33 0a 2 1 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.33 0c 2 1 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.18 0a 1 2 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.18 0c 1 2 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.34 0a 2 2 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.34 0c 2 2 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.27 0a 1 11 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.19 0c 1 3 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.35 0a 2 3 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.35 0c 2 3 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
RAID group /aggr0/plex0/rg1 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0c.20 0c 1 4 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
parity 0a.36 0a 2 4 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.36 0c 2 4 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.20 0a 1 4 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.21 0c 1 5 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.37 0a 2 5 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.37 0c 2 5 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.21 0a 1 5 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.22 0c 1 6 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.38 0a 2 6 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.38 0c 2 6 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.22 0a 1 6 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.23 0c 1 7 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.39 0a 2 7 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.39 0c 2 7 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.23 0a 1 7 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
RAID group /aggr0/plex0/rg2 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.40 0a 2 8 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
parity 0c.24 0c 1 8 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.24 0a 1 8 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.40 0c 2 8 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.41 0a 2 9 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.25 0c 1 9 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.25 0a 1 9 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.41 0c 2 9 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.44 0a 2 12 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.26 0c 1 10 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.26 0a 1 10 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.42 0c 2 10 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.43 0a 2 11 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.27 0c 1 11 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.42 0a 2 10 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.28 0a 1 12 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
RAID group /aggr0/plex0/rg3 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.19 0a 1 3 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
parity 0a.29 0a 1 13 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.28 0c 1 12 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.29 0c 1 13 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.43 0c 2 11 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.44 0c 2 12 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.54 0c 3 6 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.53 0c 3 5 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.52 0c 3 4 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0a.45 0a 2 13 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.50 0c 3 2 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.49 0c 3 1 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.48 0c 3 0 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.55 0c 3 7 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.45 0c 2 13 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.60 0c 3 12 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
RAID group /aggr0/plex0/rg4 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0c.59 0c 3 11 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
parity 0c.58 0c 3 10 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.57 0c 3 9 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.56 0c 3 8 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
Pool0 spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block checksum
spare 0c.51 0c 3 3 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
spare 0a.48 0a 3 0 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.49 0a 3 1 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.50 0a 3 2 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.51 0a 3 3 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.52 0a 3 4 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.53 0a 3 5 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.54 0a 3 6 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.55 0a 3 7 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.56 0a 3 8 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.57 0a 3 9 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.58 0a 3 10 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.59 0a 3 11 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0c.61 0c 3 13 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 1a.60 1a 3 12 FC:B 0 ATA 7200 635555/1301618176 635858/1302238304
spare 1d.28 1d 1 12 FC:A 0 ATA 7200 635555/1301618176 635858/1302238304
spare 1a.84 1a 5 4 FC:B 0 ATA 7200 847555/1735794176 847827/1736350304
spare 1d.86 1d 5 6 FC:A 0 ATA 7200 847555/1735794176 847827/1736350304
As you can see there are usually 16 disks in a RAID group, except for RAID group 4, which has only 4 disks. This RAID group is a so-called hot spot and brings down the overall performance of the filer. There are 14 spare disks of the same type (FCAL) available, of which two need to remain spares (a NetApp best practice), so 12 disks will be added to RAID group 4.
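The arithmetic is simple: RAID group 4 needs 16 - 4 = 12 disks to match the other groups, and 14 spares - 2 reserved = 12 disks are available. If you just want to double-check the spare inventory without wading through the full sysconfig -r output, aggr status -s should list the same Pool0 spare disks as above:
filer01> aggr status -s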
= Upgrade =
First, as a sanity check, you can try to add the disks without upgrading the aggregate to 64-bit. This will result in an error:
filer01> aggr add aggr0 12
File system size 18.15 TB exceeds maximum 15.99 TB
Addition of disks failed. To proceed with this operation, and upgrade the aggregate from 32-bit format to 64-bit format, follow these steps:
1. Run the "aggr add" command with the "-64bit-upgrade check" option to determine whether there is enough free space for the 64-bit upgrade.
2. Resolve any space issues identified by the "-64bit-upgrade check" option.
3. Run the "aggr add" command with the "-64bit-upgrade normal" option to add the disks and trigger 64-bit upgrade on the aggregate aggr0.
aggr add: Cannot add specified disks to the aggregate because aggregates in 32-bit block format cannot be larger than 16TB.
Then perform an upgrade check. Note that this is a time-consuming process: for the configuration above (roughly 14 TB) it took over two hours. You also get no prompt back during the check; if you lose your connection you can reconnect, and while you still won't have a prompt, output will still be shown:
filer01> aggr add aggr0 -64bit-upgrade check 12
File system size 18.15 TB exceeds maximum 15.99 TB
Checking for additional space required to upgrade all writable 32-bit
volumes in aggregate aggr0 (Ctrl-C to interrupt)......
Upgrading a volume to 64-bit consumes additional free space in the volume. The
following table shows the space usage after each volume is upgraded to 64-bit:
Volume Name Total Used Available Capacity
----------- ----- ---- --------- --------
vol0 26GB 5379MB 20GB 20%
Volume_1 222GB 216GB 5643MB 97%
Volume_2 400GB 336GB 63GB 84%
Volume_3 1843GB 1524GB 318GB 82%
Adding the specified disks and upgrading the aggregate to
64-bit will add 2868GB of usable space to the aggregate.
To initiate the 64-bit upgrade of aggregate aggr0, run this
command with the "normal" option.
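As a sanity check on that 2868GB figure: RAID group 4 already has its dparity/parity pair, so all 12 new disks become data disks. 12 x 272000 MB comes to roughly 3188 GB of right-sized capacity, and after WAFL's roughly 10% reserve about 2869 GB remains, which matches the reported number.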
Then, if the check reports no space issues, start the actual upgrade. Note that at our site this also took two hours:
filer01> aggr add aggr0 -64bit-upgrade normal 12
File system size 18.15 TB exceeds maximum 15.99 TB
Checking for additional space required to upgrade all writable 32-bit
volumes in aggregate aggr0 (Ctrl-C to interrupt)......
== Check Progress ==
You can see the status of the aggregate:
filer01> aggr status aggr0
Aggr State Status Options
aggr0 online raid_dp, aggr root
growing
redirect
32-bit
Volumes: vol0, Volume_1, Volume_2, Volume_3
Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal, block checksums
RAID group /aggr0/plex0/rg1: normal, block checksums
RAID group /aggr0/plex0/rg2: normal, block checksums
RAID group /aggr0/plex0/rg3: normal, block checksums
RAID group /aggr0/plex0/rg4: normal, block checksums
And if you go into advanced mode you can see the upgrade process status:
filer01> priv set advanced
Warning: These advanced commands are potentially dangerous; use
them only when directed to do so by IBM
personnel.
filer01*> aggr 64bit-upgrade status aggr0 -all
Volume Format Scanner Status % Completed Time to Completion Progress
aggr0 upgrading running 8 26157 fbn 164838400, inode 10878 of 32781, private
vol0 64-bit
Volume_1 64-bit
Volume_2 64-bit
Volume_3 upgrading running 1 26567 fbn 109903980, inode 2608501 of 33554409, public
When done, the output of the same command looks like this:
filer01*> aggr 64bit-upgrade status aggr0 -all
Volume Format Scanner Status % Completed Time to Completion Progress
aggr0 64-bit
vol0 64-bit
Volume_1 64-bit
Volume_2 64-bit
Volume_3 64-bit
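Don't forget to drop back out of advanced mode once you're done checking; in my experience priv set admin does exactly that:
filer01*> priv set admin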
And you can now also see that the aggregate status has changed to 64-bit:
filer01> aggr status
Aggr State Status Options
aggr0 online raid_dp, aggr root
redirect
64-bit
aggr1_SATA online raid_dp, aggr nosnap=on
32-bit
aggr2_SATA online raid_dp, aggr raidsize=16
32-bit
Also, RAID group 4 now has 16 disks just like the others, and there are only two FCAL disks left as spares:
filer01> sysconfig -r
Aggregate aggr0 (online, raid_dp, redirect) (block checksums)
Plex /aggr0/plex0 (online, normal, active, pool0)
RAID group /aggr0/plex0/rg0 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.16 0a 1 0 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
parity 0a.32 0a 2 0 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.16 0c 1 0 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.32 0c 2 0 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.17 0a 1 1 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.17 0c 1 1 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.33 0a 2 1 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.33 0c 2 1 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.18 0a 1 2 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.18 0c 1 2 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.34 0a 2 2 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.34 0c 2 2 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.27 0a 1 11 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.19 0c 1 3 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.35 0a 2 3 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.35 0c 2 3 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
RAID group /aggr0/plex0/rg1 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0c.20 0c 1 4 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
parity 0a.36 0a 2 4 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.36 0c 2 4 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.20 0a 1 4 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.21 0c 1 5 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.37 0a 2 5 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.37 0c 2 5 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.21 0a 1 5 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.22 0c 1 6 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.38 0a 2 6 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.38 0c 2 6 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.22 0a 1 6 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.23 0c 1 7 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.39 0a 2 7 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.39 0c 2 7 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.23 0a 1 7 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
RAID group /aggr0/plex0/rg2 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.40 0a 2 8 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
parity 0c.24 0c 1 8 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.24 0a 1 8 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.40 0c 2 8 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.41 0a 2 9 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.25 0c 1 9 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.25 0a 1 9 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.41 0c 2 9 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.44 0a 2 12 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.26 0c 1 10 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.26 0a 1 10 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.42 0c 2 10 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.43 0a 2 11 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.27 0c 1 11 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.42 0a 2 10 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.28 0a 1 12 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
RAID group /aggr0/plex0/rg3 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.19 0a 1 3 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
parity 0a.29 0a 1 13 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.28 0c 1 12 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.29 0c 1 13 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.43 0c 2 11 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.44 0c 2 12 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.54 0c 3 6 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.53 0c 3 5 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.52 0c 3 4 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0a.45 0a 2 13 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.50 0c 3 2 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.49 0c 3 1 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.48 0c 3 0 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.55 0c 3 7 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.45 0c 2 13 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.60 0c 3 12 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
RAID group /aggr0/plex0/rg4 (normal, block checksums)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0c.59 0c 3 11 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
parity 0c.58 0c 3 10 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.57 0c 3 9 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0c.56 0c 3 8 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.48 0a 3 0 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.51 0c 3 3 FC:A 0 FCAL 15000 272000/557056000 274845/562884296
data 0a.49 0a 3 1 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0c.61 0c 3 13 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0a.50 0a 3 2 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0a.51 0a 3 3 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0a.52 0a 3 4 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0a.53 0a 3 5 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0a.54 0a 3 6 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0a.55 0a 3 7 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0a.56 0a 3 8 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
data 0a.57 0a 3 9 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
Pool0 spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block checksum
spare 0a.58 0a 3 10 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 0a.59 0a 3 11 FC:A 0 FCAL 15000 272000/557056000 280104/573653840
spare 1a.60 1a 3 12 FC:B 0 ATA 7200 635555/1301618176 635858/1302238304
spare 1d.28 1d 1 12 FC:A 0 ATA 7200 635555/1301618176 635858/1302238304
spare 1a.84 1a 5 4 FC:B 0 ATA 7200 847555/1735794176 847827/1736350304
spare 1d.86 1d 5 6 FC:A 0 ATA 7200 847555/1735794176 847827/1736350304
= Post Upgrade/Expansion Tasks =
When you've added disks to an aggregate you'll notice almost immediately that the performance of the filer drops. Since the newly added disks are empty, and WAFL preferentially writes to empty disks (to distribute load), the new disks receive all the new data. New data is accessed more often than old data, so the newly added disks become a so-called hot spot. You must now reallocate the data to prevent this situation.
> Note: This is a lengthy process, depending on the size of your volumes, and it will degrade performance even further. At our site we had some serious issues because of this; we had to stop it (meaning it has to be done all over again) and find a maintenance window of over 30 hours during which we could live with degraded performance.
> Note 2: This really impacted high-IO VMs, so be really careful.
For every volume in the aggregate run this command:
filer01> reallocate start -f /vol/Volume_1
Reallocation scan will be started on '/vol/Volume_1'.
Monitor the system log for results.
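Repeat this for the other volumes in the aggregate, so in this case:
filer01> reallocate start -f /vol/Volume_2
filer01> reallocate start -f /vol/Volume_3
Given the performance impact described above, starting these one at a time and letting each scan finish first is probably the gentler approach.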
Monitor the status like this:
filer01> reallocate status -v
Reallocation scans are on
/vol/Volume_1:
State: Reallocating: public inofile, block 190656 of 1597829 (11%)
Flags: doing_force,whole_vol
Threshold: 4
Schedule: n/a
Interval: n/a
Optimization: n/a
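If you want to gauge how badly a volume actually needs optimizing before committing to a full pass, this version of ONTAP also has a measure-only scan that, as far as I know, moves no data and only reports the layout optimization in the system log:
filer01> reallocate measure /vol/Volume_1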
Note that running the reallocate command is not possible for volumes with snapshots, since those are read-only. Also note that you cannot suspend this process; quiescing only works for scheduled reallocation scans, not for a once-only scan like this one:
filer01> reallocate quiesce /vol/Volume_1
Unable to quiesce reallocation on '/vol/Volume_1': scan runs once-only.
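What you can do is abort the scan entirely, which is what we ended up doing (see the note above); reallocate stop should remove the scan, at the cost of having to restart it from scratch later:
filer01> reallocate stop /vol/Volume_1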
Also note that after this process finished, all of our performance issues that had arisen since adding the storage and starting the reallocation scans were gone.