EMC ISILON Interview questions

Adding one more post to our interview questions category, this time for ISILON. Here we cover some of the frequently asked questions from the ISILON architecture and configuration areas.

  •  Node and drive types supported : ISILON traditionally supported three different node types: S-Series, X-Series and NL-Series. The S-Series (S210) is a high-performance node type that supports SSD drives. X-Series nodes (X210 and X410) support up to 6 SSDs, with the remaining slots filled with HDDs. NL-Series (NL410) nodes support only one SSD in the system and SATA drives in the remaining slots; this node type is intended for archiving requirements.

With recent OneFS versions, the system also supports All-Flash nodes, Hybrid nodes, Archive nodes and IsilonSD nodes. ISILON All-Flash nodes (F800) can have up to 924 TB in a single 4U chassis and can grow up to 33 PB in one cluster; one chassis can house up to 60 drives. Hybrid nodes (H400, H500 and H600) support a mix of SSDs and HDDs: the H400 and H500 can have SATA drives and SSDs, while the H600 supports SSDs and SAS drives. Archive nodes (A200 and A2000) are intended for archiving solutions. The A2000 can have 80 drive slots, with only 10 TB drives supported; this node is for high-density archiving. The A200 is for near-primary archive storage and supports 2 TB, 4 TB or 8 TB SATA HDDs, with a maximum of 60 drives.

IsilonSD is the software-only node type, which can be installed on customer hardware as virtual nodes.

  •  Scale-Out and Scale-Up architecture : The first thing that comes up with ISILON is its architecture: Scale-Out. With a Scale-Out architecture, processing power and capacity increase in parallel; as we add a node, both the capacity and the processing power of the system grow. For Scale-Up architecture, let's take the example of VNX. Here the processing power cannot be increased, as the system is limited to 2 Storage Processors, but we can grow the overall system capacity by adding more DAEs (and disks) up to the system's supported limit.
  •  Infiniband Switches and types : ISILON uses InfiniBand (IB) switches for the internal back-end communication between the nodes. With the Gen-6 hardware, ISILON also supports 10 GbE and 40 GbE Ethernet back-end switches in addition to InfiniBand.
  •  SmartConnect and SSIP : [definition from the ISILON SmartConnect whitepaper] SmartConnect is a licensable software module of the EMC ISILON OneFS Operating System that optimizes performance and availability by enabling intelligent client connection load balancing and failover support. Through a single host name, SmartConnect enables client connection load balancing and dynamic NFS failover and failback of client connections across storage nodes to provide optimal utilization of the cluster resources. SmartConnect eliminates the need to install client-side drivers, enabling the IT administrator to easily manage large numbers of clients with confidence. And in the event of a system failure, file system stability and availability are maintained.

For every SmartConnect zone there will be an SSIP (SmartConnect Service IP). The SSIP and the zone name are set up in DNS with a delegation, so DNS queries for the zone name are forwarded to the SSIP. When a client resolves the zone name, SmartConnect answers with the IP address of a node chosen according to the configured load-balancing policy, and the client then connects directly to that node.
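As an illustration, this is roughly how the SSIP and zone name are set from the OneFS CLI (OneFS 8.x syntax; the groupnet/subnet/pool names, IP address and zone name here are hypothetical examples):

To assign the SmartConnect Service IP to the subnet :
isi network subnets modify groupnet0.subnet0 --sc-service-addr=10.20.10.100

To set the zone name and load-balancing policy on the IP pool :
isi network pools modify groupnet0.subnet0.pool0 --sc-dns-zone=nas.bforum.local --sc-connect-policy=round_robin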

  •  SmartPools : SmartPools enables effective tiering across storage node types within the filesystem. Based on the policies defined, data is moved across the tiers within the filesystem automatically, with seamless application and end-user access. Customers can define data-movement policies for different workflows and node types, as sketched below.
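A rough sketch of such a policy from the CLI (OneFS 8.x syntax; the policy name, path and tier name are hypothetical, and the filter options vary between OneFS releases, so check isi filepool policies create --help):

To create a policy that places a path on an archive tier :
isi filepool policies create BForum_Archive --begin-filter --path=/ifs/BForum/archive --end-filter --data-storage-target=Archive_Tier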
  •  Protection types in ISILON : An ISILON cluster can have protection of type N+M (where N is the number of data blocks and M is the number of drive/node failures the system can tolerate) or N+M:B (where N is the number of data blocks, M is the number of drive failures that can be tolerated and B is the number of node failures that can be tolerated), where N > M. A 3-node system can have +1 (i.e. 2+1) protection; here the system can tolerate 1 drive/node failure without any data loss. You can verify the protection applied to a file as shown below.
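For example, the isi get command displays the protection policy and the protection level in effect for a file or directory (the path here is a hypothetical example):

To view the protection settings of a file :
isi get /ifs/BForum/Test/file1.txt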
  •  Steps to create an NFS export : Here we have listed the commands to create and list/view NFS exports.

To create the NFS export :
isi nfs exports create --clients=10.20.10.31,10.20.10.32 --root-clients=10.20.10.33,10.20.10.34 --description="Beginnersforum Test NFS Share" --paths=/ifs/BForum/Test --security-flavors=unix

To list the NFS exports :
isi nfs exports list

To view the NFS export :
isi nfs exports view <export_number>

You can also create an alias and quotas for the NFS export, as in the sketch below.
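For example, to create an alias (a shorter mount path for clients; the alias name here is hypothetical):

isi nfs aliases create /BForumTest /ifs/BForum/Test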

Hope this helped you in learning some ISILON stuff. We will have more questions and details in upcoming posts. For more interview question posts, please click here. Please add any queries and suggestions in the comments.

Expanding a (EMC Celerra/VNX) NAS Pool

In this post let’s discuss expanding a (EMC Celerra/VNX-File) NAS pool by adding new LUNs from the backend storage. A NAS pool, on which we create filesystems for NFS/CIFS (SMB), should have sufficient space for catering to the NAS requests. Here our pool is running out of space, with only a few MBs left.

[nasadmin@CS0 ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 0
[nasadmin@CS0 ~]$

Let’s see how we can get this pool extended.

Let’s first have a look at the existing disks (LUNs from the backend). Here we already have 9 disks assigned. We need the 10th one in place, which will add space to the pool.

[nasadmin@CS0 ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
[nasadmin@CS0 ~]$

As per the requirement, we have to assign the new LUN from the backend storage. It is recommended that the new LUNs be of identical size to the existing LUNs in the pool for the best performance.

Now to the most important part: rescanning the new disks. We have to use the server_devconfig command for the rescan. We can also run the command against individual Data Movers; the recommended way is to start with the standby Data Movers first and then the primary ones. Listing the disks with nas_disk -l will show the servers on which each disk is scanned.

[nasadmin@CS0 ~]$ server_devconfig ALL -create -scsi -all

Discovering storage (may take several minutes)
server_2 : done
server_3 : done
[nasadmin@CS0 ~]$

Yes, that completed successfully. Now let’s check the disk list again. We can see the 10th disk, with inuse=n, scanned on both servers (Data Movers).

[nasadmin@CS0 ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
10 n 547418 CKxxxxxxxxxxx-0006 CLSTD d10 1,2
[nasadmin@CS0 ~]$

Let’s check the pool again to see the available and potential storage capacity.

[nasadmin@CS0 ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 547418
[nasadmin@CS0 ~]$

Now, as you can see, the expanded capacity is available to the pool as potential storage (see potential_mb).
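The potential capacity gets consumed when we create or extend filesystems on this pool. A quick sketch, assuming a filesystem named Bforum_FS on this pool (the filesystem name and size are hypothetical):

nas_fs -xtend Bforum_FS size=100G pool=Bforum_Pool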

You may refer to our previous post on scanning new LUNs on VNX File/Celerra Data Movers. Click here for more Celerra/VNX posts.

ISILON basic commands

Here in this post we are discussing a few basic ISILON commands, some of which come in handy in our daily administration tasks for managing and monitoring the ISILON array. You may also refer to the Celerra/VNX health check steps we discussed in one of our previous posts.

Here are some of the isilon commands.

isi status : Displays the status of the cluster, nodes, events etc. Various options are available, including -r (display raw sizes), -D (detailed info) and -n <node id> (info for a specific node).

isi_for_array : Runs a command against all nodes in the cluster; use -n <node> to run it against a specific node and -s to run it serially, node by node.
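For example, a simple (illustrative) cluster-wide check, run serially on each node:

isi_for_array -s uptime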

isi events : There are many options with the ‘events’ command, including isi events list (to list all the events), isi events cancel (to cancel events) and isi events quiet (to quiet events). You can also set up event notifications using the isi events command.

isi devices : To view and change the status of cluster devices. There are plenty of options with the devices command.

isi devices --device <Device> : where Device can be a drive or an entire node. The --action option is used to perform a specific action on the device (--action <action>), including smartfail, stopfail, status, add, format etc.

isi firmware status : Lists the ISILON firmware types and versions.

isi nfs exports : Used for various ISILON NFS operations including creating, listing/viewing and modifying exports. Below is a list of sample sub-commands.

1. isi nfs exports create --zone <zone name> --root-clients=host1,host2 --read-write-clients=host2,host3 --path=<path>

2. isi nfs exports view <export ID> --zone=<zone name>

3. isi nfs exports modify <export ID> --zone=<zone name> --add-read-write-clients host4

isi smb shares : This command is used to create, list/view, modify and delete SMB shares. Sample sub-commands:

1. isi smb shares create <share name> --path=/ifs/data/SMBpath --create-path --browsable=true --zone <zone name> --description="Test share"

2. isi smb shares delete <share name> --zone=<zone name>

isi quota quotas : This command is used for various quota operations including creation, deletion and modification, as in the sketch below.
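A hedged example of creating and listing a directory quota (the path and threshold here are hypothetical):

1. isi quota quotas create /ifs/data/SMBpath directory --hard-threshold=50G

2. isi quota quotas list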

Hope this post helped you. Please feel free to comment if you have any questions. We will discuss more detailed commands in a later post.

VNX/Celerra – SP Collects from Control Station command line

Personally, I prefer the Control Station CLI to get the SP Collects for a VNX/Celerra with an attached CLARiiON; it is quicker! Opening Unisphere Manager takes time, as it is Java based. Here, let us see how this can be done via the CLI.

Open an SSH/Telnet session to the Control Station and log in.

You have to navigate to /nas/tools; the basic Linux command “cd /nas/tools” will do this. In the tools directory there is a hidden script, .get_spcollect, which is used to gather the SP Collects (you will have to use ls -la to list it, as it is a hidden file).

Now we have to use the below command to execute the script.

./.get_spcollect  [don’t miss the dots before and after the /]

This will run the SPCollects script, gather all the logs and create a single SPCOLLECT.zip file. A sample output is below.

[nasadmin@CS_NAME ~]$ cd /nas/tools/
[nasadmin@CS_NAME tools]$ ./.get_spcollect

Generating spcollect zip file for Clariion(s)

Creating spcollect zip file for the Service Processor SP_A. Please wait…

spcollect started to pull out log files(it will take several minutes)…
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
   -- truncated output --
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
spcollect zip file SYS_SERIAL_SPA_DATE_TIME-STAMP_data.zip for the Service Processor SP_A was created
Creating spcollect zip file for the Service Processor SP_B. Please wait…

spcollect started to pull out log files(it will take several minutes)…
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
   -- truncated output --
Wait until new _data.zip file size becomes final(it will take several minutes)
Retrieving new _data.zip file…
spcollect zip file SYS_SERIAL_SPB_DATE_TIME-STAMP_data.zip for the Service Processor SP_B was created

Deleting old SPCOLLECT.zip file from /nas/var/log directory…
Old SPCOLLECT.zip deleted
Zipping all spcollect zip files in one SPCOLLECT.zip file and putting it in the /nas/var/log directory…
  adding: SYS_SERIAL_SPA_DATE_TIME-STAMP_data.zip (stored 0%)
  adding: SYS_SERIAL_SPB_DATE_TIME-STAMP_data.zip (stored 0%)
[nasadmin@CS_NAME tools]$

Now, as mentioned towards the end of the output, the log bundle SPCOLLECT.zip will be located in the /nas/var/log directory. How can we access it? I use WinSCP to collect it via SCP: enter the IP address/CS name and the login credentials, and once the session is open, navigate to /nas/var/log in the right panel and your required local directory in the left. Select the log file and press F5 (or select Copy).
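Alternatively, if you have a command-line SCP client, you can pull the file directly; a one-liner sketch, assuming the Control Station is reachable as CS_NAME and using the default nasadmin account:

scp nasadmin@CS_NAME:/nas/var/log/SPCOLLECT.zip .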

That’s it! You have the SP Collects on your desktop. Much faster, right? Hope this post helped you. For more Celerra/VNX posts, click here.