Understanding the VMAX FA/RDF port numbering

The logical numbering of the ports on a VMAX (the FE, RF, etc. ports on the VMAX3 and VMAX All Flash arrays) is quite confusing, and it matters when we do the zoning and host integration for these arrays. And yes, it really is confusing for any storage administrator who is not familiar with it.

This post is an attempt to explain the mapping of physical to logical numbering of the FA/RDF ports on a VMAX system.


The physical numbering of the SLICs (I/O modules) is as in the above snip and is quite straightforward. The modules run from slot 0 to slot 10, with the management modules (MMCS on the first engine, MM on the remaining engines) on the leftmost side. A few slots will be used for Vault-to-Flash modules, which have no ports on them. The slot number and the type of module in it may vary slightly with the addition of compression modules in AFA arrays. SLICs 2, 3, 8 and 9 are the important ones from an admin point of view, as these will hold the FA/RDF modules. Even though the back-end modules have physical ports and connectivity, they are of little concern to an admin, as they are configured during array initialization.


Here comes the logical numbering. For numbering a port, we should know the director we are referring to and the slot number of the specific module. Considering the above snip as a single-engine scenario, the odd director will be director 1 and the even director will be director 2. This is similar for the remaining engines (e.g., for engine 4, the odd director will be director 7 and the even one director 8).


Now, let us assume SLIC 2 is configured with FA emulation. The ports on the director 1 SLIC will be numbered starting with 1d (d for FA emulation): 1d4, 1d5, 1d6 and 1d7. You will have to keep this image in mind, or make a note of it, to work out the logical numbering for each SLIC. For RDF emulation, the numbering will have an e in it (e for RDF). Let us assume SLIC 8 has RDF emulation. The 3rd port (the last port, with numbering starting from 0 as in the first pic) on SLIC 8 on the even director will be 2e27: 2 for the 2nd director, e for RDF emulation, and 27 as the logical port number.
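As a quick worked example derived from the two mappings above (assuming standard four-port modules, with ports numbered 0 to 3 from left to right): SLIC 2 maps to logical ports 4-7 and SLIC 8 maps to logical ports 24-27. So an FA port on the odd director, SLIC 8, port 0 would be 1d24 – director 1, d for FA emulation, 24 as the logical port number. The exact ranges can vary with model and configuration, so always confirm against your array's documentation.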

Hopefully that was not too tough and it helped you. You may try various combinations for practice. To start with: what will be the logical number for an FA port on the odd director, SLIC 8, port 2?

You may find more EMC VMAX posts here. Please use the comments section for any queries/suggestions.

Hitachi VSP Auto Dump Collection

We had posted previously on log collection from Hitachi unified storage and EMC VNX/Celerra storage arrays. In the same way, let us see how we can gather an Auto Dump from Hitachi VSP storage arrays.

An Auto Dump is a support log bundle that is mandatory for analysing any kind of issue occurring on a Hitachi VSP storage system. It is necessary to collect the Auto Dump during, or shortly after, an issue occurrence.


A normal Auto Dump can be collected by the customer; for a detailed Auto Dump you can take the assistance of an HDS engineer.

To collect an Auto Dump we need SVP access; we can take an RDP session to the SVP. The SVP is basically a Windows Vista system. Log in to the SVP using the credentials.

Once you are logged in to the SVP, you will see either the SVP console or the Storage Navigator web console. If it is the Storage Navigator web console, go to the "Maintenance" tab and select "Maintenance component General"; this will open the SVP console for you.

In the console, navigate to Auto Dump and click the Auto Dump option to collect the log. A new window will open and prompt you for the target file location.

Once the Auto Dump is started, it may take 30-45 minutes to complete. Once it is complete, you can navigate to C:\DKCxxx\TMP, where you will find a file named hdcp.tgz, last modified today (or the date you ran the Auto Dump).

You can copy the file to your local PC or to any server with an internet connection.

Once we have the files on our local system, we can upload them to the Hitachi Technical Upload Facility (TUF). This will require a valid Hitachi support case ID.

Hope this helped you. Feel free to provide your feedback in the comments section.

Expanding a (EMC Celerra/VNX) NAS Pool

In this post let's discuss expanding an EMC Celerra/VNX-File NAS pool by adding new LUNs from the backend storage. A NAS pool, on which we create filesystems for NFS/CIFS (SMB), should have sufficient space to cater for the NAS requests. Here our pool is running out of space, with only a few MBs left.

[user@host ~]$ nas_pool -size Bforum_Pool
id           = 48
name         = Bforum_Pool
used_mb      = 491127
avail_mb     = 123
total_mb     = 491250
potential_mb = 0
[user@host ~]$

Let’s see how we can get this pool extended.

Read more

Let's have a look first at the existing disks (LUNs from the backend). Here we already have 9 disks assigned. We will add a 10th one, which will add space to the pool.

[user@host ~]$ nas_disk -l
id   inuse  sizeMB   storageID-devID      type   name        servers
1    y      11263    CKxxxxxxxxxxx-0000   CLSTD  root_disk   1,2
2    y      11263    CKxxxxxxxxxxx-0001   CLSTD  root_ldisk  1,2
3    y      2047     CKxxxxxxxxxxx-0002   CLSTD  d3          1,2
4    y      2047     CKxxxxxxxxxxx-0003   CLSTD  d4          1,2
5    y      2047     CKxxxxxxxxxxx-0004   CLSTD  d5          1,2
6    y      32767    CKxxxxxxxxxxx-0005   CLSTD  d6          1,2
7    y      178473   CKxxxxxxxxxxx-0010   CLSTD  d7          1,2
8    n      178473   CKxxxxxxxxxxx-0011   CLSTD  d8          1,2
9    y      547418   CKxxxxxxxxxxx-0007   CLSTD  d9          1,2
[user@host ~]$

As per the requirement, we have to assign the LUNs from the backend storage. It is recommended to add new LUNs of a size identical to the existing LUNs in the pool, for best performance.

Now to the most important part – rescanning the new disks. We have to use the server_devconfig command for the rescan. We can also run the command against individual Data Movers. The recommended way is to start with the standby DMs first and then the primary ones. Listing the nas_disks will show the servers on which the disks are scanned.

[user@host ~]$ server_devconfig ALL -create -scsi -all

Discovering storage (may take several minutes)
server_2 : done
server_3 : done
[user@host ~]$

Yes, that is done successfully. Now let's check the disk list. We can see the 10th disk, with inuse=n, scanned on both servers (Data Movers).

[user@host ~]$ nas_disk -l
id   inuse  sizeMB   storageID-devID      type   name        servers
1    y      11263    CKxxxxxxxxxxx-0000   CLSTD  root_disk   1,2
2    y      11263    CKxxxxxxxxxxx-0001   CLSTD  root_ldisk  1,2
3    y      2047     CKxxxxxxxxxxx-0002   CLSTD  d3          1,2
4    y      2047     CKxxxxxxxxxxx-0003   CLSTD  d4          1,2
5    y      2047     CKxxxxxxxxxxx-0004   CLSTD  d5          1,2
6    y      32767    CKxxxxxxxxxxx-0005   CLSTD  d6          1,2
7    y      178473   CKxxxxxxxxxxx-0010   CLSTD  d7          1,2
8    n      178473   CKxxxxxxxxxxx-0011   CLSTD  d8          1,2
9    y      547418   CKxxxxxxxxxxx-0007   CLSTD  d9          1,2
10   n      547418   CKxxxxxxxxxxx-0006   CLSTD  d10         1,2
[user@host ~]$

Let’s check the pool again to see the available and potential storage capacity.

[user@host ~]$ nas_pool -size Bforum_Pool
id           = 48
name         = Bforum_Pool
used_mb      = 491127
avail_mb     = 123
total_mb     = 491250
potential_mb = 547418
[user@host ~]$

Now, as you can see, the expanded capacity is available to the pool (refer to the potential_mb value).
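With potential capacity now available, a new filesystem created from this pool can draw on it. A minimal sketch, with a hypothetical filesystem name and size:

[user@host ~]$ nas_fs -name Bforum_FS02 -create size=100G pool=Bforum_Pool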

You may refer to our previous post on scanning new LUNs on VNX File/Celerra Data Movers. Click here for more Celerra/VNX posts.

ISILON basic commands

Here in this post, we discuss a few basic Isilon commands, some of which come in handy in our daily administration tasks for managing and monitoring the Isilon array. You may also refer to the Celerra/VNX health check steps we discussed in one of our previous posts.

Here are some of the Isilon commands.

isi status : Displays the status of the cluster, nodes, events etc. You can use various options, including -r (to display raw size), -D (for detailed info) and -n <node id> (info for a specific node).

isi_for_array : For running commands across the cluster nodes; -n <node> runs the command on a specific node.

isi events : There are many options with the 'events' command, including isi events list (to list all the events), isi events cancel (to cancel events) and isi events quiet (to quiet events). You can also set up event notifications using the isi events command.


isi devices : To view and change the status of cluster devices. There are plenty of options with the devices command.

isi devices --device <device> : where <device> can be a drive or an entire node. The --action option is used to perform a specific action on the device (--action <action>), including smartfail, stopfail, status, add, format etc.

isi firmware status : Used to list the Isilon firmware types and versions.

isi nfs exports : Used for various Isilon NFS export operations, including export creation, listing/viewing and modification. Below is a list of sample sub-commands.

1. isi nfs exports create --zone=<zone name> --root-clients=host1,host2 --read-write-clients=host2,host3 --path=<path>

2. isi nfs exports view <export ID> --zone=<zone name>

3. isi nfs exports modify <export ID> --zone=<zone name> --add-read-write-clients host4

isi smb shares : This command is used for create, list/view, modify etc. operations on SMB shares. Sample sub-commands –

1. isi smb shares create <share name> --path=/ifs/data/SMBpath --create-path --browsable=true --zone=<zone name> --description="Test share"

2. isi smb shares delete <share name> --zone=<zone name>

isi quota quotas : This command is used for various quota operations, including creation, deletion and modification.
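For instance, creating and then listing a directory quota with a hard limit might look like the below (hypothetical path; exact flags can vary between OneFS versions):

isi quota quotas create /ifs/data/projects directory --hard-threshold=100G

isi quota quotas list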

Hope this post helped you. Please feel free to comment if you have any questions. We will discuss more detailed commands in a later post.

 

Way to gather simple trace from Hitachi Unified Storage

You may refer to our previous post on VNX/Celerra log collection. Here we will see how to gather support logs from Hitachi Unified Storage (HUS 110, HUS 130 & HUS 150).

A "Simple Trace" is needed for the analysis of all issues relating to Hitachi storage systems, and can be obtained by the customer. It is critical to gather a trace as soon as possible after a problem is detected, to prevent the log data wrapping and losing critical information.

Read more

If a performance problem is being experienced, take one simple trace as soon as possible, while performance is affected. This can assist greatly in finding the root cause.

 

Hitachi modular storage (HUS/AMS) log collection is a very simple task.

We can log in to the web GUI using the controller IP address (type the controller IP address in a web browser and press Enter); both HUS controllers have an IP address. Then, in the left panel of the web GUI, we can find "Simple Trace" under Trace.

HUS Log Collection

Click on Simple Trace and a screen will pop up; it will take some time for the trace to be generated. We can monitor the percentage from the pop-up screen. Once the fetching is complete, we can download the trace to our local system. There may be multiple files to download from the same pop-up to get the complete information.

The file name will be of the form smpl_trc1_<system serial number>_<yyyymmdd>.dat

Once we have the files on our local system, we can upload them to the Hitachi Technical Upload Facility (TUF). This requires a valid support case ID from Hitachi.

 

 

Simple LUN allocation steps – VNX

In one of our earlier posts, we saw the allocation steps for VMAX. Now let's see the case with the mid-range product, EMC VNX. LUN allocation in VNX is quite simple with the Unisphere Manager GUI. Let's see the steps here.

Read more

Creating a LUN : You need to have information such as the size of the LUN required, the disk type and the RAID type (if there is any specific requirement). Based on the disk type and RAID type used in the different pools, you will have to select the correct pool to go with. From Unisphere, under Storage > LUNs, you have to click the Create button.

You have to furnish the data, including the size and the pool, in this screen. You will have to select the checkbox depending on whether the LUN needs to be created as Thin or Thick. Once all the fields are filled in, note the LUN ID and submit the form. Done..! You have created the LUN; you can find the new LUN in the list and verify the details.
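If you prefer the CLI over the GUI, the same LUN creation can be done with naviseccli. A minimal sketch, with hypothetical SP address, pool name, LUN name and LUN ID:

naviseccli -h <SP_IP> lun -create -type Thin -capacity 100 -sq gb -poolName "Pool 0" -name BForum_LUN01 -l 123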

Adding a new host : Yes, your requirement may be to allocate the new LUN to a new host. Once the host is connected via the fabric and you are done with the zoning, the host connectivity should be visible in the Initiators list (Unisphere > Hosts > Initiators). If you have the Unisphere Host Agent installed on the host, or if it is an ESXi host, the host gets auto-registered and you will see the host details in the Initiators list.

Otherwise you will see only the new host WWNs in the list. You have to select the WWNs and register them, filling in the host details (name and IP) and the ArrayCommPath and failover mode settings. Once the host is registered, you will see it in the hosts list (Unisphere > Hosts > Hosts).

 

Storage Group : You now have to make the LUN visible to the host. The Storage Group is the way to do this in VNX/CLARiiON. You will have to create a new storage group for the host (Unisphere > Hosts > Storage Groups). You can name the new storage group to match the host/cluster name for easy identification, and then add the host to the group.

If there are multiple hosts which will be sharing the LUNs, you have to add all of them to the storage group. You also have to add the LUNs to the storage group, setting the HLU (host LUN ID) for each LUN. Be careful when assigning the HLU: changing it later requires downtime, as it cannot be modified on the fly.
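For reference, the equivalent storage group steps via naviseccli might look like the below (hypothetical names; -hlu is the host-side LUN ID and -alu the array-side LUN ID):

naviseccli -h <SP_IP> storagegroup -create -gname BForum_SG

naviseccli -h <SP_IP> storagegroup -connecthost -host BForum_Host1 -gname BForum_SG

naviseccli -h <SP_IP> storagegroup -addhlu -gname BForum_SG -hlu 0 -alu 123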

Once the LUNs and hosts are added to the Storage Group, you are done with the allocation..! You can now request the host team to do a rescan to see the new LUNs.

Hope this post helped you. For more Celerra/VNX posts, click here.

 

EMC VMAX3 in 3D

EMC introduced the VMAX3 series in mid-2014 as an Enterprise Data Service Platform rather than just a storage system. My eyes got caught by a YouTube video from EMC which shows a 3D view of the components and features.

Read more

I wanted to share it with you, which resulted in this post. Here it is for you…

 

VMAX3 introduced 3 new models of VMAX – the VMAX 100K, 200K and 400K – with the new HYPERMAX operating system and Dynamic Virtual Matrix architecture. You may read more from the links below, from experts and EMC Elects.

 

New EMC VMAX³ – Industry’s First Enterprise Data Service Platform – Official Press Release

The VMAX3: Why Enterprise Class is Still Very Relevant – By Jason Nash

EMC Announces Next-Generation VMAX Storage Array – By Dave Henry

Symmetrix offers a new kind of MAXimum Virtualisation (VMAX) -By Rob Koper

VMAX 3 – The all-new Enterprise Data Service Platform..! Part-I – By Vipin V.K

EMC ANNOUNCE VMAX3 – By Roy Mikes

EMC announces its next Generation VMAX Array – By Mark May

EMC Announces VMAX Family Re-architected for Enterprise Data Services and Hybrid Clouds -By StorageReview.com

These are just a few posts which I found interesting. You may suggest links to more value-added posts on VMAX3 in the comments section; we will try to add them here for others' reference.

You may find more EMC VMAX posts here. Thank You.

 

Brocade SAN switch zoning via CLI

We discussed zoning on a Cisco switch recently, in one of our posts. Now we will discuss the same on a Brocade switch via CLI. As already discussed, the 3 components (zones, aliases and zoneset) remain the core here too. To read a bit more on this, you may read the previous post.

Now let’s directly come in to the commands for various steps.

Read more

With the new HBA connected to the switch, we can verify successful connectivity by running the switchshow command. This shows all the ports and the connected device WWNs; you can check the port number if you know it, or find it from the WWN (you may do a grep for the WWN).
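For example, to locate the HBA by its WWN (using the example WWN pattern from this post):

switchshow | grep -i "10:xx:xx:xx:xx:xx:xx:01"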

Otherwise, if you do not know the switch and port on the fabric to which the HBA is attached, you may run nodefind. nodefind 10:xx:xx:xx:xx:xx:xx:01 will list the port details.

 

 

Now we can create the aliases for the HBA (BForum_HBA1) and the storage port (VNX_SPA3). Below are the commands,

alicreate "BForum_HBA1", "10:xx:xx:xx:xx:xx:xx:01"
alicreate "VNX_SPA3", "50:06:xx:xx:xx:xx:xx:02"

For adding a WWN to an existing alias (adding a WWN – 10:xx:xx:xx:xx:xx:xx:02 to the alias BForum_HBA2 for example) you may run,

aliadd "BForum_HBA2", "10:xx:xx:xx:xx:xx:xx:02"

Now we will be creating the zone for the HBA and storage port,

zonecreate "BForum_HBA1_VNX_SPA3", "BForum_HBA1; VNX_SPA3"

We can add an alias to an existing zone by running the zoneadd command, in a similar way to the aliadd command above.
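For example, adding a second (hypothetical) HBA alias to the existing zone would be,

zoneadd "BForum_HBA1_VNX_SPA3", "BForum_HBA2"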

We can create the zone config with the below command. This will add the zone to the cfg too.

cfgcreate "BForum_SAN1_CFG", "BForum_HBA1_VNX_SPA3"

 

 

We should use the cfgadd command to add a new zone to an existing cfg, as shown below,

cfgadd "BForum_SAN1_CFG", "BForum_HBA1_VNX_SPB2"

Thus we have the zones created and added to the (existing or new) config. Now we should save the config to memory, to ensure it is also loaded on the next reboot of the switch. The cfgsave command will do that for us.

We can now enable the zone config to put it into effect.

cfgenable BForum_SAN1_CFG

Yes, we are all set. The server and storage should now be able to communicate. Some other useful commands are,

cfgshow BForum_SAN1_CFG           #Shows the config BForum_SAN1_CFG in detail

cfgdisable BForum_SAN1_CFG           #Disables the config BForum_SAN1_CFG

cfgremove “BForum_SAN1_CFG”,”BForum_HBA1_VNX_SPB2”           #Removes the zone BForum_HBA1_VNX_SPB2 from config BForum_SAN1_CFG

cfgactvshow            #Shows the current active config

alishow BForum_HBA1    #Shows the alias BForum_HBA1

zoneshow BForum_HBA1_VNX_SPA3   #Shows the zone BForum_HBA1_VNX_SPA3 details

More in coming posts. You may click here for SAN switch related posts. Thanks for reading..

 

Cisco MDS SAN switch Zoning via CLI

Here let's discuss the steps to complete the zoning for a new server on a Cisco MDS FC switch. In our previous post we discussed the initialization procedure for a new MDS switch, which may be helpful for you. The zoning process involves 3 components: aliases, zones and the zoneset (or zone configuration).

If you have a Brocade switch, you may refer to this post which explains zoning in a Brocade switch via CLI.

An alias is a name assigned to a WWN, which makes it easy to use and remember. WWNs – the identity of a device, hex digits separated by colons (:), e.g. 10:ab:cd:ef:12:34:56:78 – are harder to remember.

A zone contains multiple objects and defines a communication path. In a zoning-enabled switch, any two WWNs or ports which do not share a common zone (which are not part of a single zone together) will not be able to communicate with each other. We will create a zone and add the objects (WWNs, aliases or ports) to it.

A zoneset (or zone configuration) is a collection of zones in a switch/fabric; it makes it easy to manage the zones. We define an active configuration on the switch/fabric and add the zones that need to be active to this configuration.
Now let's discuss the commands.

 

 

First we will create an alias for the new server HBA and for the storage port with which it needs to communicate.

BForum_SAN01# config t

BForum_SAN01(config)# fcalias name BForum_HBA1 vsan 20        # This will create an alias with name BForum_HBA1

BForum_SAN01(config-alias)# member pwwn 10:xx:xx:xx:xx:xx:xx:01    # Adds the WWN to this alias

BForum_SAN01(config-alias)# exit

BForum_SAN01(config)# fcalias name VNX_SPA3 vsan 20

BForum_SAN01(config-alias)# member pwwn 50:xx:xx:xx:xx:xx:xx:01

BForum_SAN01(config-alias)# exit

Now we have the aliases ready. We can now create a zone for these two objects and add them to it. We will create a zone named 'BForum_HBA1_VNX_SPA3' containing the host HBA (BForum_HBA1) and the storage port (VNX_SPA3).

BForum_SAN01(config)# zone name BForum_HBA1_VNX_SPA3 vsan 20

BForum_SAN01(config-zone)# member fcalias BForum_HBA1

BForum_SAN01(config-zone)# member fcalias VNX_SPA3

BForum_SAN01(config-zone)# exit

The zone too is ready now. Assuming we don't have an existing configuration, we will create a zoneset here. If you already have a zoneset, use that zoneset's name in the command below.

 

 

BForum_SAN01(config)# zoneset name BForum_SAN01_Config vsan 20

BForum_SAN01(config-zoneset)# member BForum_HBA1_VNX_SPA3

BForum_SAN01(config-zoneset)# exit

Now we have the zoneset created and zones added to it. We are good to activate the new zoneset.

BForum_SAN01(config)# zoneset activate name BForum_SAN01_Config vsan 20

To verify the active zoneset, you may run the command show zoneset active.

In case you have to deactivate the zoneset, you may run the command,

BForum_SAN01(config)# no zoneset activate name BForum_SAN01_Config vsan 20

We can save the running config to the startup config by running the copy run start command. Now we have the zoning completed for one of the HBAs of the new server. We will have to do the zoning for both HBAs, and should use multiple storage ports for redundancy.
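In full, the save command looks like this at the switch prompt:

BForum_SAN01# copy running-config startup-config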

You may click here for SAN switch related posts.

Hope this post was helpful for you. More, in coming posts, your thoughts in comments below… 🙂

Simple VMAX device allocation steps

Here we will discuss a VMAX basic: allocating a new device to a new host. For a new host, we will have to create a Masking View (we will use MV or View later in this post) containing a Storage Group (SG – containing all the devices to be presented to the host), a Port Group (PG – containing the VMAX director ports through which the host will access the devices) and an Initiator Group (IG – containing the host HBA WWNs). Let's look more into the configuration via SYMCLI.


Let's assume we are using thin devices, with a pool named T_Pool_1 already present in our VMAX. We will create 2 TDEVs first. The command will be,

symconfigure -sid 1234 -cmd "create dev count=2, size=54614, emulation=FBA, config=TDEV;" commit -v

The new devices AAAA and AAAB, of 50GB each, are now created. Now we will bind them to the pool T_Pool_1. The command we will use,

symconfigure -sid 1234 -cmd "bind tdev AAAA:AAAB to pool T_Pool_1;" commit -nop -v
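As a quick verification of the bind and the pool utilisation, we can look at the pool details:

symcfg -sid 1234 show -pool T_Pool_1 -thin -detail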

 

 

Now we are all set to allocate these devices to the server, but that requires the View to be created. We assume the zoning is done and the HBAs have good connectivity with the VMAX. We can verify this by running the command,

symaccess -sid 1234 list logins -wwn 1xxxxxxxxxxx  # where 1xxxxxxxxxxx is the HBA WWN

If the HBA is logged in, we are good. Now we will create the IG first.

symaccess -sid 1234 -type init create -name BForum_IG -f WWN_file

where the WWN_file should have the WWNs listed in the form,

WWN:1xxxxxxxxxxx  

WWN:2xxxxxxxxxxx  

If we need to add a WWN later, we can do it by running,

symaccess -sid 1234 -name BForum_IG -type init -wwn 3xxxxxxxxxxx add

Now we have the IG created. The initiators in the IG will be listed by WWN only. If we need to rename them to make them human-readable, we can use the command –

symaccess -sid 1234 rename -wwn 1xxxxxxxxxxx -alias BForum_HBA1/1xxxxxxxxxxx

Next, we will create the PG.

symaccess -sid 1234 create -name BForum_PG -type port -dirport 7f:0,9f:0

 

 

Now we have the PG created, with the FA ports 7F:0 and 9F:0 added to it. Now we are left with the SG. We will create it with the devices AAAA and AAAB added to it.

symaccess -sid 1234 create -name BForum_SG -type storage -devs AAAA:AAAB 

Yes, we have all the groups created. Now we will create the View,

symaccess -sid 1234 create view -name BForum_MV -ig BForum_IG -pg BForum_PG -sg BForum_SG
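We can verify the new View (the IG, PG and SG contents and the devices in it) with,

symaccess -sid 1234 show view BForum_MV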

That's it..! The View is set, with the host and devices added to it. Now the server will be able to discover the devices. That was easy, right? Hope you enjoyed it. You may find more EMC VMAX posts here.

 
