Expanding a NAS Pool (EMC Celerra/VNX)

In this post, let’s discuss expanding an EMC Celerra/VNX-File NAS pool by adding new LUNs from the backend storage. A NAS pool, on which we create filesystems for NFS/CIFS (SMB), should have sufficient free space to serve NAS requests. Here our pool is running out of space, with only a few MBs left.

[nasadmin@CS0 ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 0
[nasadmin@CS0 ~]$

Let’s see how we can get this pool extended.


Let’s first have a look at the existing disks (LUNs from the backend). Here we already have 9 disks assigned. We need a 10th one in place, which will add space to the pool.

[nasadmin@CS0 ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
[nasadmin@CS0 ~]$

As per the requirement, we have to assign new LUNs from the backend storage. For best performance, it is recommended that the new LUNs be of a size identical to the existing LUNs in the pool.

Now to the most important part: rescanning the new disks. We have to use the server_devconfig command for the rescan. We can run the command against individual Data Movers as well; the recommended way is to start with the standby DMs first and then move on to the primary ones. Listing the disks with nas_disk -l will show the servers on which the disks have been scanned.
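On a dual Data Mover system, the per-DM sequence would look like the below sketch (assuming server_3 is the standby, as in the scanning post further down):

server_devconfig server_3 -create -scsi -all      # standby DM first
server_devconfig server_2 -create -scsi -all      # then the primary DM

Here, let us scan all the Data Movers in one go instead: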

[nasadmin@CS0 ~]$ server_devconfig ALL -create -scsi -all

Discovering storage (may take several minutes)
server_2 : done
server_3 : done
[nasadmin@CS0 ~]$

Yes, that is done successfully. Now let’s check the disk list. We can see the 10th disk with inuse=n, which has been scanned on both servers (Data Movers).

[nasadmin@CS0 ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
10 n 547418 CKxxxxxxxxxxx-0006 CLSTD d10 1,2
[nasadmin@CS0 ~]$

Let’s check the pool again to see the available and potential storage capacity.

[nasadmin@CS0 ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 547418
[nasadmin@CS0 ~]$

Now, as you can see, the expanded capacity is available in the pool (refer to the potential_mb value).

You may refer to our previous post on scanning new LUNs on VNX File/Celerra Data Movers. Click here for more Celerra/VNX posts.

Simple LUN allocation steps – VNX

In one of our earlier posts, we saw the allocation steps in VMAX. Now let’s see the case with the mid-range product, EMC VNX. LUN allocation in VNX is quite simple with the Unisphere Manager GUI. Let’s see the steps here.


Creating a LUN : You need to have information like the size of the LUN required, the disk type and the RAID type (if there are any specific requirements). Based on these, you have to decide on the pool to go with, selecting the correct pool as per the disk type and RAID type used in the different pools. From Unisphere, under Storage > LUNs, you have to click the Create button.

You have to furnish the data, including the size and the pool, in this screen. You will have to select the checkbox depending on whether the LUN needs to be created as thin or thick. Once all the fields are filled in, note the LUN ID and submit the form. Done..! You have created the LUN; you can find it in the list and verify the details.
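If you prefer the CLI over Unisphere, the same LUN creation can be done with naviseccli. Below is a minimal sketch, not the exact steps from this post; it assumes the Secure CLI is configured, SP_A_IP is your SP address, Pool0 is the target pool and 55 is a free LUN ID:

naviseccli -h SP_A_IP lun -create -type Thin -capacity 100 -sq gb -poolName Pool0 -l 55 -name BFORUM_LUN55      # 100 GB thin pool LUN
naviseccli -h SP_A_IP lun -list -l 55      # verify the new LUN's details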

Adding a new host : Yes, your requirement may be to allocate a new LUN to a new host. Once the host is connected via the fabric and you are done with the zoning, the host connectivity should be visible in the Initiators list (Unisphere > Hosts > Initiators). If you have the Unisphere Host Agent installed on the host, or if it is an ESXi host, the host gets auto-registered and you will see the host details in the Initiators list.

Otherwise, you will see only the new host WWNs in the list. You have to select the WWNs and do a register, filling in the host details (name and IP) and the ArrayCommPath and failover mode settings. Once the host is registered, you will see it in the hosts list (Unisphere > Hosts > Hosts).
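To cross-check the initiator records from the CLI, something like the below should help (a sketch; the exact output fields vary with the FLARE/VNX OE release):

naviseccli -h SP_A_IP port -list -hba      # lists HBA WWNs with their login and registration status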


Storage Group : You now have to make the LUN visible to the host. The Storage Group is the way to do this in VNX/CLARiiON. You will have to create a new storage group for the host (Unisphere > Hosts > Storage Groups). You can name the new storage group to match the host/cluster name for easy identification, and then add the host to the group.

If there are multiple hosts which will be sharing the LUNs, you have to add all of those hosts to the storage group. You also have to add the LUNs to the storage group, setting an HLU (Host LUN ID) for each. Be careful while assigning the HLU: changing it later requires downtime, as it cannot be modified on the fly.
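The CLI equivalent of these storage group steps would be roughly as below (a sketch; HOST1, the group name and the ALU/HLU numbers are made-up examples):

naviseccli -h SP_A_IP storagegroup -create -gname HOST1_SG      # create the storage group
naviseccli -h SP_A_IP storagegroup -connecthost -host HOST1 -gname HOST1_SG -o      # add the registered host
naviseccli -h SP_A_IP storagegroup -addhlu -gname HOST1_SG -alu 55 -hlu 0      # present LUN (ALU) 55 to the host as HLU 0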

Once the LUNs and hosts are added to the Storage Group, you are done with the allocation..! You can now request the host team to do a rescan to see the new LUNs.

Hope this post helped you. For more Celerra/VNX posts, click here.


Scanning new LUNs from Celerra/VNX File

Once you have provisioned a new LUN (or a Symmetrix device), you have to scan for it from the Celerra/VNX File components to make use of it on the file side, for making a filesystem and then a CIFS share or NFS export. Here we discuss the simple steps to scan the new LUN.

1. From the GUI – Unisphere Manager


Log in to the Unisphere Manager console by entering the Control Station IP address in a web browser. Select the system you have to scan for the new LUN from the top-left drop-down. Navigate to the System > System Information tab. You will find a Rescan All Storage Systems button there, which will do the rescan for you.

Once the rescan is completed, the devices/LUNs will be visible under the disks, and the space will be available as potential storage for the respective pool (Storage Pool for File).


2. Now via the CLI

From the CLI, we have to scan for the new LUN on all Data Movers. We will use the command server_devconfig. We can run the command for each DM (Data Mover) separately, starting with the standby one first. The syntax for a dual Data Mover system will be:

server_devconfig server_3 -create -scsi -all      # for standby DM

server_devconfig server_2 -create -scsi -all      # for primary DM

This is the recommended way, but I have never heard of any issue occurring while scanning across all DMs at a time. For a multi-DM system, if we want to scan all Data Movers in a single command, the only change is ALL in place of the server name.

server_devconfig ALL -create -scsi -all      # for all DMs

After successful scanning, you can find the new devices/LUNs at the bottom of the output of the nas_disk -list command.

$ nas_disk -l

id   inuse  sizeMB    storageID-devID           type     name          servers
1     y      11263    CK1234512345-0000    CLSTD   root_disk     1,2
============== Shortened output ==============
18    n     51200    CK1234512345-0010    CLSTD   d18             1,2


You can verify the increased space of the respective pool by running nas_pool -size [Pool Name].

Hope this post helped you. For more Celerra/VNX posts, click here.


VNX/Celerra – SP Collects from the Control Station command line

Personally, I prefer the Control Station CLI to get the SP Collects for a VNX or a Celerra with an attached CLARiiON; it is quicker..! Opening Unisphere Manager takes time, being a Java-based GUI. Here, let us see how this can be done via the CLI.

Open an SSH/Telnet session to the Control Station and log in.

You have to navigate to /nas/tools; the basic Linux command "cd /nas/tools" will do this. Once you are in tools, there is a hidden script, get_spcollect, which is used to gather the SP Collects (you will have to use ls -la to list it, as it is a hidden file).
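As a quick sketch of locating the script:

cd /nas/tools      # move to the tools directory
ls -la | grep get_spcollect      # the script is hidden, note the leading dot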

Now we have to use the below command to execute the script.

./.get_spcollect  [don’t miss the dots before and after the /]

This will run the SP Collects script, gather all the logs and create a single SPCOLLECT.zip file. A sample output will be as below.


[nasadmin@CS_NAME ~]$ cd /nas/tools/
[nasadmin@CS_NAME tools]$ ./.get_spcollect

Generating spcollect zip file for Clariion(s)

Creating spcollect zip file for the Service Processor SP_A. Please wait…

spcollect started to pull out log files(it will take several minutes)…
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
   — truncated output–
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
spcollect zip file SYS_SERIAL_SPA_DATE_TIME-STAMP_data.zip for the Service Processor SP_A was created
Creating spcollect zip file for the Service Processor SP_B. Please wait…

spcollect started to pull out log files(it will take several minutes)…
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
   — truncated output–
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
spcollect zip file SYS_SERIAL_SPB_DATE_TIME-STAMP_data.zip for the Service Processor SP_B was created

Deleting old SPCOLLECT.zip file from /nas/var/log directory…
Old SPCOLLECT.zip deleted
Zipping all spcollect zip files in one SPCOLLECT.zip file and putting it in the /nas/var/log directory…
  adding: SYS_SERIAL_SPA_DATE_TIME-STAMP_data.zip (stored 0%)
  adding: SYS_SERIAL_SPB_DATE_TIME-STAMP_data.zip (stored 0%)
[nasadmin@CS_NAME tools]$


Now, as mentioned towards the end of the output, the logs (SPCOLLECT.zip) will be located in the /nas/var/log directory. How can we access them? I use the WinSCP software to collect the file via SCP. Enter the IP address/CS name and the login credentials. Once the session is open, navigate to /nas/var/log in the right panel and to your required directory in the left. Select the log file and press F5 (or select Copy).
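If you are on a Linux/Mac machine (or have an scp client handy), a plain scp works as well; a sketch, assuming CS_IP is your Control Station address and nasadmin is your CS login:

scp nasadmin@CS_IP:/nas/var/log/SPCOLLECT.zip .      # copy the zip to the current directory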


That’s it..! You have the SP Collects on your desktop. Quite fast, right? Hope this post helped you. For more Celerra/VNX posts, click here.