Table of Contents

NetApp Data Planning

Summary: A show and tell about several data functions on NetApp Filers.
Date: Around 2015
Refactor: 7 March 2025: Checked links and formatting.

With the introduction of all kinds of options on their filers, NetApp provides several ways to plan for your data consumption. In this article I'll discuss a few of the most confusing options, clarify them, and show how they interact with each other in an example.

Fractional Reserve

Note: Since Operations Manager 3.7 this option might be referred to as 'Overwrite Reserved Space'.

* Default = 100%

First check the current setting:
prd-filer2*> vol options VOL_NAME
nosnap=off, nosnapdir=off, minra=on, no_atime_update=on, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on,
convert_ucode=off, maxdirsize=41861, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off

Then set the option like this:
prd-filer2*> vol options VOL_NAME fractional_reserve 20

Fractional reserve is a volume option that determines how much space Data ONTAP reserves for Snapshot overwrite data for LUNs and space-reserved files, to be used after all other space in the volume has been consumed.
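The accounting behind this can be sketched in a few lines of Python. This is a simplified model, not actual ONTAP code; the function name and the GB units are illustrative:

```python
def overwrite_reserve(lun_sizes_gb, fractional_reserve_pct):
    """Space held back for Snapshot overwrites of space-reserved LUNs.

    Simplified sketch: the reserve equals the given percentage of the
    total space-reserved LUN capacity in the volume.
    """
    return sum(lun_sizes_gb) * fractional_reserve_pct / 100

# A 50 GB space-reserved LUN with the default fractional_reserve of 100%
# needs a further 50 GB held back for overwrites once a Snapshot exists;
# lowering the reserve to 20% cuts that to 10 GB.
print(overwrite_reserve([50], 100))  # 50.0
print(overwrite_reserve([50], 20))   # 10.0
```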

Snap Reserve

This option is set at the volume level, as a percentage of the volume. Data ONTAP removes the defined percentage of the volume from being available for configuring LUNs or for file usage with CIFS or NFS. As Snapshot copies need space, they consume space in the snap reserve area. By default, once the snap reserve area is filled, the Snapshot copies start to take space from the general volume. Of course, because of WAFL's write-anywhere technology, snap reserve doesn't actually reserve specific physical blocks for Snapshot usage; it can be thought of as a logical space accounting mechanism.
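The spill-over behaviour described above can be modelled as follows. Again a hedged sketch of the logical accounting, with illustrative names and GB units:

```python
def volume_space(volume_gb, snap_reserve_pct, snapshot_used_gb):
    """Sketch of snap reserve accounting on a volume.

    Returns (space usable for LUN/file data, snapshot spill into the
    general volume). Snapshots first fill the reserve area; anything
    beyond that consumes general volume space.
    """
    reserve = volume_gb * snap_reserve_pct / 100
    spill = max(0.0, snapshot_used_gb - reserve)
    usable = volume_gb - reserve - spill
    return usable, spill

# 100 GB volume with a 20% snap reserve leaves 80 GB for data. Once
# snapshots exceed the 20 GB reserve, the excess eats into that 80 GB.
print(volume_space(100, 20, 5))   # (80.0, 0.0)
print(volume_space(100, 20, 30))  # (70.0, 10.0)
```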

Interaction Between SnapShots, Fractional Reserve, Snap Reserve, Volumes and LUNs

To understand the interaction between snapshots, reserved space, volumes and LUNs, read the example below. In this example 'lun reserve', 'fractional_reserve', and 'snap reserve' have all been left at their default values.

netappdataplanning-interaction01.jpg


netappdataplanning-interaction02.jpg


netappdataplanning-interaction03.jpg


netappdataplanning-interaction04.jpg


Autosize

Note: This option is also known as Autogrow

* Default growth increment = 5% of volume size at creation

This volume setting (available in Data ONTAP 7.1 and later) defines whether a volume should automatically grow to avoid filling up to capacity. It is available only for flexible volumes. The '-i' option defines the increment by which the volume grows; the default growth increment is 5% of the volume size at creation. The '-m' option defines how large the volume is allowed to grow; when autosize is enabled, the default maximum size is 120% of the original volume size.
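The two defaults mentioned above work out as follows; a tiny illustrative helper, not an ONTAP API:

```python
def autosize_defaults(created_size_gb):
    """Default autosize parameters as described above: grow in steps
    of 5% of the size at creation, up to 120% of the original size."""
    increment = created_size_gb * 5 / 100
    maximum = created_size_gb * 120 / 100
    return increment, maximum

# A volume created at 50 GB grows in 2.5 GB steps up to 60 GB by default.
print(autosize_defaults(50))  # (2.5, 60.0)
```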

Viewing volume autosize option:
acc-filer1> vol autosize vol0
Volume autosize is currently ON for volume 'vol0'.
The volume is set to grow to a maximum of 80 GB, in increments of 2 GB.

Setting volume autosize option:
acc-filer1> vol autosize vol0 -m 80g -i 2g on
vol autosize: Flexible volume 'vol0' autosize settings UPDATED.

Viewing results when resizing volumes when autosize is on:
acc-filer1> vol size vol0 80g
vol size: Flexible volume 'vol0' size set to 80g.
vol size: Warning: Flexible volume 'vol0' autosize policy with limit (83886080KB) is overridden by this new volume size setting. Recommend disabling the autosize policy.
acc-filer1> vol size vol0 50g
vol size: Flexible volume 'vol0' size set to 50g.
vol size: Warning: Flexible volume 'vol0' autosize limit (83886080KB) is more than the new size of the volume.
acc-filer1> vol autosize vol0
Volume autosize is currently ON for volume 'vol0'.
The volume is set to grow to a maximum of 80 GB, in increments of 2 GB.

Autodelete

This volume setting (available in Data ONTAP 7.1 and later) allows Data ONTAP to delete Snapshot copies when a threshold is met. This threshold is called a 'trigger' and can be set so that Snapshot copies are automatically deleted under one of the following conditions:

* volume: the volume is nearly full
* snap_reserve: the snap reserve area is nearly full
* space_reserve: the space reserved for overwrites in the volume is nearly full

It is strongly recommended to set the trigger to volume.
The order in which Snapshot copies are deleted is determined by the following three options:

* delete_order: whether the oldest or the newest Snapshot copies are deleted first
* defer_delete: which Snapshot copies are deleted last (scheduled, user created, or those matching a name prefix)
* prefix: the name prefix used when defer_delete is set to prefix
The algorithm first looks for a Snapshot copy that does not fall under the 'defer_delete' criteria and uses 'delete_order' to determine whether to delete the oldest or the most recent such copy. If no such Snapshot copy is found, the 'defer_delete' criteria are ignored in the selection process. If a Snapshot copy is still not available for deletion, the SnapMirror and dump Snapshot copies will be targeted depending on the 'commitment' option. Snapshot copies stop being deleted when the free space in the trigger criterion reaches the value of the 'target_free_space' option, which defaults to 20%.
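The selection step can be sketched like this. A simplified model of the behaviour described above, not ONTAP's actual implementation; the snapshot dictionaries and field names are made up for illustration:

```python
def pick_snapshot_to_delete(snapshots, delete_order="oldest_first",
                            defer_delete="none", prefix=""):
    """Sketch of the Snapshot selection described above.

    `snapshots` is a list of dicts with 'name', 'created' (timestamp)
    and 'kind' ('scheduled' or 'user_created'). First try to honour
    defer_delete; if nothing qualifies, ignore it.
    """
    def deferred(s):
        if defer_delete == "scheduled":
            return s["kind"] == "scheduled"
        if defer_delete == "user_created":
            return s["kind"] == "user_created"
        if defer_delete == "prefix":
            return s["name"].startswith(prefix)
        return False  # defer_delete == "none"

    ordered = sorted(snapshots, key=lambda s: s["created"],
                     reverse=(delete_order == "newest_first"))
    for s in ordered:
        if not deferred(s):
            return s
    # Nothing outside the defer_delete criteria: ignore them.
    return ordered[0] if ordered else None

snaps = [
    {"name": "hourly.0", "created": 3, "kind": "scheduled"},
    {"name": "before-upgrade", "created": 1, "kind": "user_created"},
]
# With defer_delete=user_created the older manual copy is spared and
# the scheduled copy is chosen instead:
print(pick_snapshot_to_delete(snaps, "oldest_first", "user_created"))
```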
If both autosize and autodelete are enabled and the autodelete trigger is set to 'volume', the 'try_first' volume option will determine whether a volume grow or Snapshot copy delete will be attempted first.
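The 'try_first' decision amounts to a simple ordering of the two recovery actions. A minimal sketch, with hypothetical flags standing in for "the volume can still grow" and "a Snapshot copy is deletable":

```python
def reclaim_space(try_first, can_grow, can_delete):
    """Order of recovery actions when both autosize and autodelete
    are enabled, per the try_first volume option (sketch)."""
    actions = (["grow", "delete"] if try_first == "volume_grow"
               else ["delete", "grow"])
    for action in actions:
        if action == "grow" and can_grow:
            return "volume grown"
        if action == "delete" and can_delete:
            return "snapshot deleted"
    return "out of space"

# With try_first=volume_grow, autosize is attempted before autodelete:
print(reclaim_space("volume_grow", can_grow=True, can_delete=True))  # volume grown
```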

Best Practice With Snapshot enabled

Autogrow Configuration

Option              Setting
guarantee           volume
LUN reservation     enabled
fractional_reserve  0%
snap_reserve        0%
autodelete          volume / oldest_first
autogrow            on
try_first           volume_grow

The big advantage of this configuration is that it takes advantage of using the free space in the aggregate as a shared pool of available space. Since a guarantee of volume is being used, a different level of thin provisioning per application can easily be achieved through individually sizing the volumes and tuning how much each volume is allowed to grow. Space usage is also very easy to monitor and understand by simply looking at the volume and aggregate usage.
As with all configurations using a shared pool of free space, the volumes are not 100% independent of one another, since they compete for that space when they need to grow: if other volumes have already consumed the space in the aggregate, it might not be possible for a volume to grow.
Care needs to be taken if clones (FlexClone or LUN clones) are being created, as Snapshot copies that are backing these clones cannot currently be deleted by the autodelete functionality. This risk can be avoided if the volume is sized at 2x the LUN size or greater and there is only one locked Snapshot copy.

Autodelete Configuration

Option              Setting
guarantee           volume
LUN reservation     enabled
fractional_reserve  0%
snap_reserve        0%
autodelete          volume / oldest_first
autogrow            off
try_first           snap_delete

This configuration is the same as the autogrow configuration, but with the autogrow functionality removed. It makes it a priority to keep the LUNs accessible over maintaining Snapshot copies: if the volume starts to fill up, Snapshot copies are deleted to allow the LUNs to remain online. One of the advantages of this configuration is that space usage is easy to monitor and understand by just watching the volume space usage. The volumes are also independent of each other, meaning each application can be tuned independently. This configuration also has the advantage that only the oldest Snapshot copies are deleted while the most recent ones are retained. The disadvantage is that it doesn't use the available space in the aggregate as a shared pool of available space.
As with the previous configuration, care needs to be taken if clones (FlexClone or LUN clones) are being created as Snapshot copies which are backing these clones cannot currently be deleted by the autodelete functionality.

Setting Options

These are the commands to run on the filer's command line to set the options from the best-practice guideline we use:

vol options <volname> fractional_reserve 0
snap autodelete <volname> trigger volume
snap autodelete <volname> delete_order oldest_first
snap autodelete <volname> defer_delete none
snap autodelete <volname> target_free_space 10
snap autodelete <volname> on
vol options <volname> try_first volume_grow
vol autosize <volname> -m 80g -i 2g on
Note: The volume name is case sensitive.


Note: These are the available options for snap autodelete:

snap autodelete <vol-name> <option> <value>...
Supported options and corresponding values:
        commitment              try | disrupt | destroy
        trigger                 volume | snap_reserve | space_reserve
        target_free_space       1-100
        delete_order            oldest_first | newest_first
        defer_delete            scheduled | user_created | prefix | none
        prefix                  <string>
        destroy_list            lun_clone | vol_clone | cifs_share | none

Snapshots Configuration

The best practice outlined above is for volumes with snapshots enabled. In our environment that applies to all volumes containing boot LUNs and to the vol0 volumes. Volumes with SnapMirror enabled will also have snapshots, but those are maintained automatically and need no further configuration.

This is the snapshot configuration as we use it:

The weekly snapshot might grow considerably over time, so monitor the amount of space occupied by the snapshots.