Expanding an EMC Celerra/VNX NAS Pool

In this post, let’s discuss expanding an EMC Celerra/VNX-File NAS pool by adding new LUNs from the backend storage. A NAS pool, on which we create filesystems for NFS/CIFS (SMB), should have sufficient space to cater for incoming NAS requests. Here our pool is running out of space, with only a few MBs left.

[[email protected] ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 0
[[email protected] ~]$

Let’s see how we can get this pool extended.

Let’s first have a look at the existing disks (LUNs from the backend). Here we already have 9 disks assigned; we need a 10th one in place, which will add space to the pool.

[[email protected] ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
[[email protected] ~]$

As per the requirement, we have to assign new LUNs from the backend storage. For best performance, it is recommended that the new LUNs be identical in size to the existing LUNs in the pool.
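
If the backend is a CLARiiON/VNX Block array, the new LUN is typically presented to the File side through its storage group using naviseccli. Below is a minimal sketch, assuming naviseccli access to a storage processor; the SP address, the storage group name (~filestorage is the usual File-side group) and the HLU/ALU numbers are illustrative only, so use the values that apply to your array.

naviseccli -h <SP_IP> storagegroup -list -gname "~filestorage"      # verify the File-side storage group and the HLUs already in use
naviseccli -h <SP_IP> storagegroup -addhlu -gname "~filestorage" -hlu 16 -alu 25      # present the new LUN (HLU/ALU values are hypothetical)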

Now to the most important part – rescanning the new disks. We have to use the server_devconfig command for the rescan. We can also run the command against individual Data Movers; the recommended way is to start with the standby DMs first and then the primary ones. Listing the disks with nas_disk will show the servers on which the disks have been scanned.

[[email protected] ~]$ server_devconfig ALL -create -scsi -all

Discovering storage (may take several minutes)
server_2 : done
server_3 : done
[[email protected] ~]$
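
If you prefer to scan one Data Mover at a time, run the same command per server, standby first (server_3 is assumed to be the standby here):

server_devconfig server_3 -create -scsi -all      # standby DM first
server_devconfig server_2 -create -scsi -all      # then the primary DM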

Yes, that completed successfully. Now let’s check the disk list. We can see the 10th disk with inuse=n, scanned on both servers (Data Movers).

[[email protected] ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
10 n 547418 CKxxxxxxxxxxx-0006 CLSTD d10 1,2
[[email protected] ~]$

Let’s check the pool again to see the available and potential storage capacity.

[[email protected] ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 547418
[[email protected] ~]$

Now, as you can see, the expanded capacity is available to the pool (refer to the potential_mb value).
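
AVM pulls this potential capacity into the pool when a filesystem on it needs to grow, or you can consume it right away by extending a filesystem manually. A minimal sketch, assuming a filesystem named Bforum_FS01 (hypothetical) built on this pool:

nas_fs -xtend Bforum_FS01 size=100G      # extend the filesystem; the space comes from the pool's potential capacity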

You may refer to our previous post on scanning new LUNs on VNX File/Celerra Data Movers. Click here for more Celerra/VNX posts.

Scanning new LUNs from Celerra/VNX File

Once you have provisioned a new LUN (or Symmetrix device), you have to scan for it from the Celerra/VNX File components to make use of it on the File side, for creating a filesystem and then a CIFS share/NFS export. Here we discuss the simple steps to scan the new LUN.

1. From the GUI – Unisphere Manager

Log in to the Unisphere Manager console by entering the Control Station IP address in a web browser. Select the system you have to scan for the new LUN from the top-left drop-down, then navigate to the System > System Information tab. There you will find a Rescan All Storage Systems button which will do the rescan for you.

Once the rescan is completed, the devices/LUNs will be visible under the disks, and the space will be available as potential storage for the respective pool (Storage Pool for File).

2. From the CLI

From the CLI we have to scan for the new LUN on all Data Movers, using the server_devconfig command. We can run the command for each DM (Data Mover) separately, starting with the standby one. The syntax for a dual Data Mover system will be:

server_devconfig server_3 -create -scsi -all      # for standby DM

server_devconfig server_2 -create -scsi -all      # for primary DM

This is the recommended way, but I have never heard of any issue occurring while scanning across all DMs at a time. For a multi-DM system, if we want to scan all Data Movers in a single command, the only change is to use ALL in place of the server name.

server_devconfig ALL -create -scsi -all      # for all DMs

After successful scanning you can find the new devices/LUNs at the bottom of the output of the nas_disk -list command.

$ nas_disk -l
id   inuse  sizeMB   storageID-devID      type    name        servers
1    y      11263    CK1234512345-0000    CLSTD   root_disk   1,2
============== Shortened output ==============
18   n      51200    CK1234512345-0010    CLSTD   d18         1,2

You can verify the increased space of the respective pool by running nas_pool -size [Pool Name].
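
For example, with the pool used earlier in this post (the pool name is illustrative):

nas_pool -size Bforum_Pool      # potential_mb will reflect the newly scanned disk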

Hope this post helped you. For more Celerra/VNX posts, click here.

Basic healthcheck commands for EMC Celerra/VNX

For a Celerra or VNX File/Unified system, we can verify the system health by running the nas_checkup command. This performs all the checks, covering the file and block hardware components as well as configuration checks (NTP, DNS, etc.).

A sample output is given below,

[[email protected] ~]$ nas_checkup

Check Version:  7.1.72-1
Check Command:  /nas/bin/nas_checkup
Check Log    :  /nas/log/checkup-run.123456-654321.log

-------------------------------------Checks-------------------------------------
Control Station: Checking statistics groups database………………….. Pass
Control Station: Checking if file system usage is under limit………….. Pass
Control Station: Checking if NAS Storage API is installed correctly…….. Pass
Control Station: Checking if NAS Storage APIs match……………………  N/A
Control Station: Checking if NBS clients are started………………….. Pass
Control Station: Checking if NBS configuration exists…………………. Pass
Control Station: Checking if NBS devices are accessible……………….. Pass
Control Station: Checking if NBS service is started…………………… Pass
Control Station: Checking if PXE service is stopped…………………… Pass
Control Station: Checking if standby is up……………………………  N/A
Control Station: Checking integrity of NASDB…………………………. Pass
Control Station: Checking if primary is active……………………….. Pass
Control Station: Checking all callhome files delivered………………… Pass
Control Station: Checking resolv conf……………………………….. Pass
Control Station: Checking if NAS partitions are mounted……………….. Pass
Control Station: Checking ipmi connection……………………………. Pass
Control Station: Checking nas site eventlog configuration……………… Pass
Control Station: Checking nas sys mcd configuration…………………… Pass
Control Station: Checking nas sys eventlog configuration………………. Pass
Control Station: Checking logical volume status………………………. Pass
Control Station: Checking valid nasdb backup files……………………. Pass
Control Station: Checking root disk reserved region…………………… Pass
Control Station: Checking if RDF configuration is valid………………..  N/A
Control Station: Checking if fstab contains duplicate entries………….. Pass
Control Station: Checking if sufficient swap memory available………….. Pass
Control Station: Checking for IP and subnet configuration……………… Pass
Control Station: Checking auto transfer status……………………….. Fail
Control Station: Checking for invalid entries in etc hosts…………….. Pass
Control Station: Checking for correct filesystem mount options…………. Pass
Control Station: Checking the hard drive in the control station………… Pass
Control Station: Checking if Symapi data is present…………………… Pass
Control Station: Checking if Symapi is synced with Storage System………. Pass
Blades         : Checking boot files………………………………… Pass
Blades         : Checking if primary is active……………………….. Pass
Blades         : Checking if root filesystem is too large……………… Pass
Blades         : Checking if root filesystem has enough free space……… Pass
Blades         : Checking network connectivity……………………….. Pass
Blades         : Checking status……………………………………. Pass
Blades         : Checking dart release compatibility………………….. Pass
Blades         : Checking dart version compatibility………………….. Pass
Blades         : Checking server name……………………………….. Pass
Blades         : Checking unique id…………………………………. Pass
Blades         : Checking CIFS file server configuration………………. Pass
Blades         : Checking domain controller connectivity and configuration. Pass
Blades         : Checking DNS connectivity and configuration…………… Pass
Blades         : Checking connectivity to WINS servers………………… Pass
Blades         : Checking I18N mode and unicode translation tables……… Pass
Blades         : Checking connectivity to NTP servers…………………. Warn
Blades         : Checking connectivity to NIS servers…………………. Pass
Blades         : Checking virus checker server configuration…………… Pass
Blades         : Checking if workpart is OK………………………….. Pass
Blades         : Checking if free full dump is available………………. Pass
Blades         : Checking if each primary Blade has standby……………. Pass
Blades         : Checking if Blade parameters use EMC default values……. Info
Blades         : Checking VDM root filesystem space usage………………  N/A
Blades         : Checking if file system usage is under limit………….. Pass
Blades         : Checking slic signature…………………………….. Pass
Storage System : Checking disk emulation type………………………… Pass
Storage System : Checking disk high availability access……………….. Pass
Storage System : Checking disks read cache enabled……………………. Pass
Storage System : Checking disks and storage processors write cache enabled. Pass
Storage System : Checking if FLARE is committed………………………. Pass
Storage System : Checking if FLARE is supported………………………. Pass
Storage System : Checking array model……………………………….. Pass
Storage System : Checking if microcode is supported……………………  N/A
Storage System : Checking no disks or storage processors are failed over… Pass
Storage System : Checking that no disks or storage processors are faulted.. Pass
Storage System : Checking that no hot spares are in use……………….. Warn
Storage System : Checking that no hot spares are rebuilding……………. Warn
Storage System : Checking minimum control lun size……………………. Pass
Storage System : Checking maximum control lun size…………………….  N/A
Storage System : Checking maximum lun address limit…………………… Pass
Storage System : Checking system lun configuration……………………. Pass
Storage System : Checking if storage processors are read cache enabled….. Pass
Storage System : Checking if auto assign are disabled for all luns………  N/A
Storage System : Checking if auto trespass are disabled for all luns…….  N/A
Storage System : Checking storage processor connectivity………………. Pass
Storage System : Checking control lun ownership……………………….  N/A
Storage System : Checking if Fibre Channel zone checker is set up……….  N/A
Storage System : Checking if Fibre Channel zoning is OK………………..  N/A
Storage System : Checking if proxy arp is setup………………………. Pass
Storage System : Checking if Product Serial Number is Correct………….. Pass
Storage System : Checking SPA SPB communication………………………. Pass
Storage System : Checking if secure communications is enabled………….. Pass
Storage System : Checking if backend has mixed disk types……………… Pass
Storage System : Checking for file and block enabler………………….. Pass
Storage System : Checking if nas storage command generates discrepancies… Pass
Storage System : Checking if Repset and CG configuration are consistent…. Pass
Storage System : Checking block operating environment…………………. Pass
Storage System : Checking thin pool usage…………………………….  N/A
Storage System : Checking for domain and federations health on VNX……… Pass

All the warnings, errors and informational items will be listed at the bottom of the output, with corrective actions where required.

The below commands will help you collect the necessary information when registering a Service Request, etc.

/nas/sbin/model    # to find the VNX/Celerra Model

/nas/sbin/serial    # to find the VNX/Celerra Serial number

nas_server -l       # to list the Data movers and their status.  A sample result is as below.

-----------------------------------------------
[[email protected] ~]$ nas_server -l
id      type  acl  slot groupID  state  name
1        1    0     2              0    server_2
2        4    0     3              0    server_3
[[email protected] ~]$
-----------------------------------------------

 /nas/sbin/getreason       # to see the Data movers and Control Station boot status. A sample result is as below.

-----------------------------------------------
[[email protected] ~]$ /nas/sbin/getreason
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
-----------------------------------------------

The Control Station status should be 10, and the Data Movers should show 5 with the state contacted for a healthy system.

And finally, collecting the logs – the support materials.

/nas/tools/collect_support_materials will help you collect the logs. The logs will be saved under /nas/var/emcsupport. The file location and name will be displayed at the bottom of the command output. You can use FTP/SCP tools to copy the file to your desktop.
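
A minimal sketch of the collection and copy-off, assuming SSH access to the Control Station; the support file name and destination below are hypothetical, so use the name printed at the end of the command output:

/nas/tools/collect_support_materials      # collect the logs; the file is saved under /nas/var/emcsupport
scp /nas/var/emcsupport/<support_materials_file> user@workstation:/tmp/      # copy the file off via SCP (file name and destination are hypothetical)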

More in coming posts…

You may refer to this post to read how to collect SP collects from the Control Station CLI. Hope this post helped you. For more Celerra/VNX posts, click here.