Tuesday 21 February 2012

NSO-163 Brain dump


NCDA NSO-163 revision notes



Commands

sysconfig -r shows disk RAID groups

sysconfig -v shows storage system disk firmware

sysconfig -a shows detailed hardware information for the storage system

disk show -o <hostname> lists disks owned by that host

disk show -s <sysid> lists disks matching a system ID

disk show -n lists unowned disks

disk show -v lists all disks

disk show -a lists all assigned disks

disk assign 0a.2 -o fas2020 assigns the disk 0a.2 to fas2020
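A short worked sequence for claiming unowned disks (the disk name 0a.16 and hostname fas2020 are illustrative, not from a real system):

disk show -n (list unowned disks)
disk assign 0a.16 -o fas2020 (assign one disk)
disk assign all -o fas2020 (or claim every unowned disk)
disk show -o fas2020 (confirm ownership)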

Clustered failover and SyncMirror

cf giveback fails back to the failed system once it is back up

cf giveback -f forces the giveback but terminates dumps, CIFS sessions, parity checks, RAID reconstructions and RAID scrubs

cf forcegiveback forces a giveback but could cause data loss; use only as a last resort

cf giveback will fail if ifconfig still emulates the failed filer

halt -f halts the filer without takeover

cf takeover takes over the partner filer

cf forcetakeover: use this if cf takeover fails
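A typical takeover/giveback cycle on a two-node pair, run on the surviving node:

cf status (confirm the pair is healthy)
cf takeover (take over the partner)
cf giveback (return resources once the partner is back up)
cf status (confirm normal operation)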

sysconfig -v shows disk shelf firmware and Data ONTAP versions

SyncMirror mirrors volumes and aggregates. Disks must be evenly divided between the two plexes and must come from different pools, with the same number of disks in each pool. Disks are selected first on bytes per sector, then on disk size. If there is no equivalent-size disk, the filer will grab a larger disk and downsize it.

sysconfig -r will determine whether the plexes are configured correctly (the plexes should show as online, normal)

aggr create aggr1 -m 6 creates a new mirrored aggregate (6 disks total, 3 per plex)

sysconfig -r

vol status -r

aggr status -r

vol create volx -m 6 creates a new mirrored traditional volume

Adding -n previews the command and shows which disks Data ONTAP would choose, as in the sketch below
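For example, previewing and then creating a mirrored aggregate (the aggregate name and disk count are illustrative):

aggr create aggr1 -n -m 6 (preview which disks ONTAP would pick)
aggr create aggr1 -m 6 (create the mirrored aggregate)
sysconfig -r (confirm both plexes show online, normal)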

FC-AL adapters (fibre card slots): slots 1-7 are pool 0, slots 8-11 are pool 1

Onboard ports 0a and 0b are pool 0; 0c and 0d are pool 1

SyncMirror needs to be licensed on both nodes

The split command will split a mirrored volume or aggregate, creating two new unmirrored copies:

aggr split aggr1/plexname aggr1_split

vol split vol1/plexname vol1_split

The mirror needs to be online and operational before you do the split

To resync the mirror:

vol mirror vol1 -v vol2 (answer yes at the confirmation prompts)
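Putting the split and resync steps together (plex and volume names are illustrative; take the plex name from sysconfig -r output):

sysconfig -r (confirm the mirror is online and note the plex name)
aggr split aggr1/plex1 aggr1_split (split into two unmirrored aggregates)
vol mirror vol1 -v vol2 (rejoin two traditional volumes as a mirror)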

sysconfig -r checks that spare disks are in the correct pools

MetroCluster

Fabric-attached MetroCluster nodes can be up to 30km apart

A stretch MetroCluster needs the cluster, cluster_remote and syncmirror_local licenses, and Data ONTAP 6.4.2 or later

Max stretch cluster distance: 500m with OM3 cables, 300m with standard FC cables (both at 2Gb)

270m with OM3 cables at 4Gb

Fabric MetroCluster max distance: 100km at 2Gb, 55km at 4Gb

Each fabric switch has 2 banks of 4 quadrants; if a host is plugged into bank 1, it owns the disks plugged into bank 2

Remote plexes use disk pool 1

Hardware needed for SyncMirror: disk shelves, adapters and cabling

The cluster configuration checker (cf-config-check.cgi) checks for license mismatches, options mismatches, incorrect network configuration and Data ONTAP version mismatches





Other

WAFL (Write Anywhere File Layout) overhead is 10%

Default vol snap reserve 20%

Default aggregate reserve 5%

vol options vol0 nosnapdir on denies users access to the snapshot directory

options nfs.hide_snapshot on hides the snapshots in an NFS volume

options cifs.show_snapshot off hides snapshots from CIFS users

vol status verifies that a volume is online

snap list vol0 lists all snapshots on vol0

snap restore -t file -s vol0_snap /vol/vol0/<file> restores a file to vol0 from the snapshot vol0_snap

snap restore -r <new_path> restores the file to a different destination path

SnapRestore can be run on SnapMirror source volumes but not on destination volumes

Run df on the destination to check the data size



Snapshot copies

Snapshot copies are read-only, point-in-time copies of a volume or aggregate

A volume can hold a maximum of 255 snapshot copies

Default snap schedule: vol0: 0 2 6@8,12,16,20 (keep 0 weekly, 2 nightly and 6 hourly copies, with hourly copies taken at 08:00, 12:00, 16:00 and 20:00)

Default snap reserve is 20%

To check the snapshot rate of change use the snap delta command: snap delta vol0

Allow CIFS snapshot access: options cifs.show_snapshot on

df -h shows the snap reserve

snap sched vol0 0 0 0 sets an empty snapshot schedule

vol options vol0 nosnap on disables scheduled snapshots (see the example below)
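For example, to restore the default-style schedule on vol0 (0 weekly, 2 nightly, 6 hourly at the listed hours):

snap sched vol0 0 2 6@8,12,16,20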

snap create

snap delete

snap delete -a deletes all snapshots in that volume
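A quick manual snapshot lifecycle, using a hypothetical snapshot name:

snap create vol0 pre_upgrade (take a named snapshot)
snap list vol0 (verify it exists)
snap delete vol0 pre_upgrade (remove it when finished)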



SnapRestore

Use SnapRestore if files are virus-infected, you need to revert to a software baseline, or you need to restore 300MB of data with only 10MB of free space.

SnapRestore does not restore snapshot schedules, volume options, RAID group size or the maximum files per volume setting

To restore a file use: snap restore -t file -s vol0_snap /vol/newvol/<file>

A SnapRestore cannot be reversed
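As a sketch, reverting a whole volume and a single file (the snapshot name nightly.0 and the file paths are illustrative):

snap restore -t vol -s nightly.0 /vol/vol0 (revert the entire volume)
snap restore -t file -s nightly.0 /vol/vol0/etc/rc (revert one file in place)
snap restore -t file -s nightly.0 -r /vol/vol0/home/rc.old /vol/vol0/etc/rc (restore to a different path)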



SnapMirror

It is recommended not to change the visibility_interval; the default is 3 minutes.

Edit the /etc/snapmirror.conf file to change a mirror from async to sync or semi-sync, for example:

fas1:vol0 fas2020:vol0 - sync (synchronous mirror)

fas1:vol0 fas2020:vol0 - semi-sync (semi-synchronous mirror)

fas2020:vol0 fas2021:vol0 - 0-55/5 * * * (async, updating every 5 minutes)

Mistakes in the snapmirror.conf file on the destination will disrupt the mirror and cause transfers to fail.

options snapmirror.log.enable on enables SnapMirror logging

/etc/log/snapmirror stores the SnapMirror logs on the root volume

fas1:vol1 fas2:vol2 - 0-55/5 * * * is an async schedule; this style of schedule is not supported for sync or semi-sync

fas1:vol1 fas2:vol2 - semi-sync
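Pulling these together, a sample /etc/snapmirror.conf on the destination (hostnames, volume names and the kbs value are illustrative):

fas1:vol1 fas2:vol1 - sync
fas1:vol2 fas2:vol2 - semi-sync
# async every five minutes, throttled to 2000KB/s:
fas1:vol3 fas2:vol3 kbs=2000 0-55/5 * * *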

snapmirror initialize -L creates the destination as a SnapLock volume

When the visibility_interval is reached, a snapshot is automatically taken

snapmirror initialize -S src_system:src_vol dst_system:dst_vol starts the baseline mirror transfer

snapmirror status -l shows the status of the mirror

snapmirror resync -S src_system:src_vol dst_system:dst_vol re-syncs the mirror, overwriting the destination volume

snapmirror quiesce vol0

snapmirror break vol0

snapmirror resync vol0 (after a break)

snapmirror release vol0 (permanent break)
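A worked break-and-rejoin cycle, run on the destination (hostnames and vol0 are illustrative):

snapmirror quiesce vol0 (let in-flight transfers finish)
snapmirror break vol0 (destination becomes writable)
snapmirror resync -S fas1:vol0 fas2:vol0 (rejoin, discarding destination changes)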

Snapmirror limitations

Updates fail when the concurrent transfer limit is reached; each transfer beyond the limit retries every minute

Destination SnapMirror commands (see the worked example after this list)

snapmirror initialize

vol restrict <dest_vol>

snapmirror break

snapmirror resync

snapmirror update
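Putting the destination-side commands in order, a minimal sketch (names and sizes are illustrative):

vol create vol2 aggr1 100g (non-root destination volume)
vol restrict vol2 (must be restricted before the baseline)
snapmirror initialize -S fas1:vol1 fas2:vol2 (baseline transfer)
snapmirror status -l (monitor progress)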

Source SnapMirror commands

snapmirror break

snapmirror resync

snapmirror status

options snapmirror.access controls which destination systems may pull from the source (e.g. host=fas2)

Network throttling

Per transfer: change the kbs= value in snapmirror.conf

Dynamic throttling lets you change the kbs rate while the mirror is active: snapmirror throttle <kbs> dest_system:dest_vol

System-wide throttling

options replication.throttle.enable on enables system-wide throttling

options replication.throttle.incoming.max_kbs

options replication.throttle.outgoing.max_kbs
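For example, combining system-wide and dynamic throttling (the kbs values and destination are illustrative):

options replication.throttle.enable on
options replication.throttle.incoming.max_kbs 5000
options replication.throttle.outgoing.max_kbs 5000
snapmirror throttle 2000 fas2:vol2 (re-throttle one active transfer)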

2ms max RTT for sync

5ms max RTT for semi-sync

Volume SnapMirror

One snapmirror license is required on both the source and destination; sync and semi-sync require the additional snapmirror_sync license

It can only replicate like volume types: traditional to traditional or flexible to flexible. Destination and source need to run the same version of Data ONTAP. The destination volume needs to be the same size as or larger than the source volume.

Volume SnapMirror supports async, sync and semi-sync

Can be initialized through a tape device

The source volume is online and writable; the destination volume is read-only

The baseline transfer is started by creating a non-root, restricted volume on the destination.

All data in the snapshot copies is replicated to the destination

Scheduled updates refresh the mirror: the current snapshot is compared with the previous one and only the changed blocks are sent.

Qtree SnapMirror

Qtree SnapMirror can replicate from traditional volumes to flexible volumes, and the source and destination volumes do not need to be the same size.

Supports async only; the destination volume must have 5% extra space; the destination qtree cannot be /etc; deep directory structures and large numbers of small files may impact performance. A minimal initialization sketch follows.
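A minimal qtree SnapMirror initialization, with illustrative paths (the destination qtree must not already exist; SnapMirror creates it):

snapmirror initialize -S fas1:/vol/vol1/qt1 fas2:/vol/vol2/qt1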

SnapMirror optimization

Try to stagger update schedules

SnapMirror uses available CPU cycles

100% CPU utilisation does not mean that system performance is degraded

You can monitor CPU usage using Operations Manager, Performance Advisor and sysstat

SnapMirror may conflict with other processes and impact response times; reduce this by:

Using FlexShare to prioritise workloads on the system

Scheduling SnapMirror for when NFS and CIFS traffic is low

Reducing the update frequency

Upgrading the controller

Network distance causes latency

Limit bandwidth using throttling (options replication.throttle.enable)

Use a dedicated network for SnapMirror

Use multipathing for load balancing

Look for network problems

SnapVault

The basic unit for SnapVault is the qtree

Increase concurrent transfers by installing the NearStore license on the destination

The initial transfer sets up the relationship between the source and destination

Open TCP ports 10000 (NDMP) and 10566 (SnapVault)

With Data ONTAP 7.3 you can install the sv_ontap_pri and sv_ontap_sec licenses on the same system

To start a SnapVault incremental restore: snapvault restore -r -s

snapvault update can only be run on the destination (secondary) system

SnapVault uses retention periods to expire WORM snapshots

snapvault status -c shows qtree configuration parameters

SnapVault log files are located in /etc/log/snapmirror

snapvault start -r restarts backups to an existing destination qtree (e.g. after a restore); a basic setup sketch follows
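A basic SnapVault setup, run on the secondary (hostnames, paths, snapshot name and schedule are illustrative):

snapvault start -S fas1:/vol/vol1/q1 fas2:/vol/sv_vol/q1 (baseline)
snapvault snap sched -x sv_vol sv_hourly 22@0-22 (take and transfer 22 hourly copies)
snapvault status -c (verify the qtree configuration)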



Open Systems SnapVault (OSSV)

The directory is the basic unit for OSSV

Open Systems SnapVault post-install check: use svinstallcheck

SnapVault snapshots for SnapLock volumes are not auto-deleted according to the retention schedule

OSSV backups are only scheduled on the secondary system.

snapvault restore -s will restore a VM with its config files.

SnapLock

There are 2 types of SnapLock: SnapLock Compliance and SnapLock Enterprise (for self-regulated requirements)

You can delete SnapLock Enterprise volumes even if they contain unexpired WORM (Write Once Read Many) data

Protection Manager

Protection Manager and Provisioning Manager run in the NetApp Management Console

Protection Manager uses the same database as Operations Manager
