Expanding an EMC Celerra/VNX NAS Pool

In this post let’s discuss expanding an EMC Celerra/VNX-File NAS pool by adding new LUNs from the backend storage. A NAS pool, on which we create filesystems for NFS/CIFS (SMB), should have sufficient space to cater for the NAS requests. Here our pool is running out of space, with only a few MBs left.

[[email protected] ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 0
[[email protected] ~]$

Let’s see how we can get this pool extended.


Let’s have a look first at the existing disks (LUNs from the backend). Here we already have 9 disks assigned. Once the 10th one is in place, it will add space to the pool.

[[email protected] ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
[[email protected] ~]$

As per the requirement, we have to assign the LUNs from the backend storage. It is recommended to add new LUNs identical in size to the existing LUNs in the pool for best performance.

Now to the most important part – rescanning the new disks. We have to use the server_devconfig command for the rescan. We can also run the command against individual data movers; the recommended way is to start with the standby data movers first and then the primary ones (a per-DM sketch follows below). Listing the disks with nas_disk will show the servers on which the disks have been scanned.
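If you prefer to scan one data mover at a time, something like the following can be used (assuming, purely for illustration, that server_3 is the standby and server_2 the primary; substitute your own data mover names):

server_devconfig server_3 -create -scsi -all

server_devconfig server_2 -create -scsi -all

Alternatively, ALL scans every data mover in one go, as shown below.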

[[email protected] ~]$ server_devconfig ALL -create -scsi -all

Discovering storage (may take several minutes)
server_2 : done
server_3 : done
[[email protected] ~]$

Yes, that completed successfully. Now let’s check the disk list again. We can see the 10th disk with inuse=n, scanned on both servers (data movers).

[[email protected] ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
10 n 547418 CKxxxxxxxxxxx-0006 CLSTD d10 1,2
[[email protected] ~]$

Let’s check the pool again to see the available and potential storage capacity.

[[email protected] ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 547418
[[email protected] ~]$

Now, as you see, the expanded capacity is available to the pool (see potential_mb).
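For a system-defined (AVM) pool, this potential space is consumed automatically as filesystems are created or extended. For a user-defined pool, the new disk volume can be added explicitly; a rough sketch, assuming d10 is the volume we want to add:

nas_pool -xtend Bforum_Pool -volumes d10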

You may refer to our previous post on scanning new LUNs on VNX File/Celerra data movers. Click here for more Celerra/VNX posts.

Isilon basic commands

Here in this post, we are discussing a few basic Isilon commands, some of which help with our daily administration tasks for managing and monitoring the Isilon array. You may also refer to the Celerra/VNX health check steps we discussed in one of our previous posts.

Here are some of the Isilon commands.

isi status : Displays the status of the cluster, nodes, events etc. You can use various options including -r (to display raw sizes), -D (for detailed info) and -n <node id> (info for a specific node).

isi_for_array : For running a command across the cluster nodes. Use -n to target a specific node.
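A quick illustrative usage (the node number here is just an example):

isi_for_array -n 2 'uname -a'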

isi events : There are many options with the events command, including isi events list (to list all events), isi events cancel (to cancel events) and isi events quiet (to quiet events). You can also set up event notifications using the isi events command.


isi devices : To view and change the status of cluster devices. There are plenty of options with the devices command.

isi devices --device <Device> : where Device can be a drive or an entire node. The --action <action> option is used to perform a specific action on the device, including smartfail, stopfail, status, add, format etc.

isi firmware status : Lists the Isilon firmware types and versions.

isi nfs exports : Used for various Isilon NFS export operations including creation, listing/viewing, modification etc. Below is a list of sample sub-commands.

1. isi nfs exports create --zone <zone name> --root-clients=host1,host2 --read-write-clients=host2,host3 --path=<path>

2. isi nfs exports view <export ID> --zone=<zone name>

3. isi nfs exports modify <export ID> --zone=<zone name> --add-read-write-clients host4
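One more sub-command that is often handy for a quick check (an additional illustrative item, not part of the original list):

4. isi nfs exports list --zone=<zone name>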

isi smb shares : This command is used for create, list/view, modify etc. operations on SMB shares. Sample sub-commands –

1. isi smb shares create <share name> --path=/ifs/data/SMBpath --create-path --browsable=true --zone <zone name> --description="Test share"

2. isi smb shares delete <share name> --zone=<zone name>

isi quota quotas : This command is used for various quota operations including creation/deletion/modification etc.
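For example, a directory quota with a hard limit could be created with something like the line below (the path and threshold are placeholders for illustration):

isi quota quotas create /ifs/data/projects directory --hard-threshold=10G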

Hope this post helped you. Please feel free to comment if you have any questions. We will discuss more detailed commands in a later post.

 

Gathering a Simple Trace from Hitachi Unified Storage

You may refer to our previous post on VNX/Celerra log collection. Here we can see how to gather support logs from Hitachi Unified Storage (HUS 110, HUS 130 and HUS 150).

A “Simple Trace” is needed for analysis of all issues relating to the Hitachi storage systems. The trace can be obtained by the customer. It is critical to gather a trace as soon as possible after a problem is detected. This is to prevent log data wrapping and loss of critical information.


If a performance problem is being experienced, take one Simple Trace as soon as possible, while performance is affected. This can assist greatly in finding the root cause.

 

Hitachi modular storage (HUS/AMS) log collection is a very simple task.

We can log in to the web GUI using the controller IP address (type the controller IP address in a web browser and press Enter); both HUS controllers have an IP address. Then, in the left panel of the web GUI, we can find "Simple Trace" under Trace.

(Screenshot: HUS Log Collection)

Click on Simple Trace and a pop-up screen will appear; it will take some time for the trace to be generated. We can monitor the percentage from the pop-up screen. Once the fetching is complete, we can download it to our local system. There may be multiple files to download from the same pop-up to get the complete information.

The file name will be as follows: smpl_trc1_systemserialnumber_yyyymmdd.dat

Once we have the files on our local system, we can upload them to the Hitachi Technical Upload Facility (TUF). A valid support case ID from Hitachi is required here.

 

 

Simple LUN allocation steps – VNX

In one of our earlier posts, we have seen the allocation steps in VMAX. Now let’s see the case with the mid-range product, EMC VNX. LUN allocation in VNX is quite simple with the Unisphere Manager GUI. Let’s see the steps here.


Creating a LUN : You need to have information such as the size of the LUN required, the disk type and the RAID type (if there are any specific requirements). Based on the disk type and RAID type used in the different pools, you have to decide on the correct pool to go with. From Unisphere, under Storage > LUNs, click the Create button.

You have to furnish the data including the size and pool in the screen, and select the checkbox depending on whether the LUN needs to be created as Thin or Thick. Once all the fields are filled in, note the LUN ID and submit the form. Done..! You have created the LUN; you can find the new LUN in the list and verify the details. The same can also be done from the CLI, as in the sketch below.
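A rough naviseccli equivalent for the LUN creation (the SP address, pool name, LUN ID, size and name here are only placeholders):

naviseccli -h <SP_IP> lun -create -type Thin -capacity 100 -sq gb -poolName "Pool_1" -l 55 -name "Bforum_LUN55"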

Adding a new host : Yes, your requirement may be to allocate the new LUN to a new host. Once the host is connected via the fabric and you are done with the zoning, the host connectivity should be visible in the Initiators list (Unisphere > Hosts > Initiators). If you have the Unisphere Host Agent installed on the host, or if it is an ESXi host, the host gets auto-registered and you will see the host details in the Initiators list.

Otherwise you will see only the new host WWNs in the list. You have to select the WWNs and register them, filling in the host details (name and IP) and the ArrayCommPath and failover mode settings. Once the host is registered, you will see it in the hosts list (Unisphere > Hosts > Hosts).

 

Storage Group : You now have to make the LUN visible to the hosts. Storage Group is the way to do this in VNX/Clariion. You will have to create a new storage group for the hosts (Unisphere > Hosts > Storage Groups). You can name the new storage group to match the host/cluster name for easy identification and then add the hosts to the group.

If there are multiple hosts which will be sharing the LUNs, you have to add all of those hosts to the storage group, and you also have to add the LUNs to the storage group. You have to set the HLU for each LUN in the SG and be careful when assigning it; changing the HLU later requires downtime, as it cannot be modified on-the-fly.

Once the LUNs and hosts are added to the Storage Group, you are done with the allocation..! You can now request the host team to do a rescan to see the new LUNs.
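For reference, the storage group steps can also be done with naviseccli; a minimal sketch with placeholder SP address, group and host names, and LUN/HLU numbers:

naviseccli -h <SP_IP> storagegroup -create -gname "Host1_SG"

naviseccli -h <SP_IP> storagegroup -connecthost -host Host1 -gname "Host1_SG"

naviseccli -h <SP_IP> storagegroup -addhlu -gname "Host1_SG" -hlu 0 -alu 55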

Hope this post helped you. For more Celerra/VNX posts click here

 

Mirror disk replacement in Solaris

Let’s discuss here how to replace a failed root mirror disk in Solaris under SVM.

==========================================================

1. Identify the faulty disk and its partitions

[email protected] /root>metastat -c
d65 m 29GB d63 d61 (maint)
d63 s 29GB c1t1d0s6
d61 s 29GB c1t0d0s6 (maint) <—— Disk showing in maintenance state; it has to be replaced

d30 m 9.8GB d31 d32 (maint)
d31 s 9.8GB c1t0d0s3 (maint) <——
d32 s 9.8GB c1t1d0s3

d55 m 5.9GB d53 d51 (maint)
d53 s 5.9GB c1t1d0s5
d51 s 5.9GB c1t0d0s5 (maint) <——

Read more

d45 m 7.8GB d43 d41 (maint)
d43 s 7.8GB c1t1d0s4
d41 s 7.8GB c1t0d0s4 (maint) <——
d5 m 7.8GB d3 d1 (maint)
d3 s 7.8GB c1t1d0s0
d1 s 7.8GB c1t0d0s0 (maint) <——

d10 m 7.8GB d11 d12 (maint)
d11 s 7.8GB c1t0d0s1 (maint) <——
d12 s 7.8GB c1t1d0s1

2. So we have identified the disk c1t0d0 as faulty, which is showing "needs maintenance" in the metastat output.

3. Confirm the disk is showing errors in the iostat output and in /var/adm/messages as well.

[email protected] /root>iostat -En
c1t0d0 Soft Errors: 173 Hard Errors: 0 Transport Errors: 0
Vendor: HITACHI Product: H101473SCSUN72G Revision: SA23 Serial No: 0810DTE8YA
Size: 73.41GB <73407865856 bytes>

4. Identify the boot path

prtconf -vp|grep bootpath

bootpath: ‘/[email protected],600000/[email protected]/[email protected]/[email protected]/[email protected]/[email protected]/[email protected],0:a’

echo|format

c1t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/[email protected],600000/[email protected]/[email protected]/[email protected]/[email protected]/[email protected]/[email protected],0

So the primary boot disk is c1t1d0. And the faulty disk is its mirror.

Steps to unconfigure the faulty disk before proceeding with the replacement (it is an online activity).

=========================================================================

1. Take all necessary pre-change outputs as below.

df -h, metastat -c, metadb, echo | format, cat /etc/vfstab, swap -l

2. Detach the faulty submirrors

metadetach d65 d61

metadetach d30 d31

metadetach d55 d51

metadetach d45 d41

metadetach d5 d1

metadetach d10 d11

3. Delete the metadb information from the disk.

# metadb -d /dev/dsk/c1t0d0s7

4.  Use the cfgadm command to display all the disks in the server

cfgadm -al

[email protected] /root>cfgadm -al
Ap_Id Type Receptacle Occupant Condition
c0 scsi-bus connected configured unknown
c0::dsk/c0t0d0 CD-ROM connected configured unknown
c1 scsi-bus connected configured unknown

c1::dsk/c1t0d0 disk connected configured unknown <——- Failed Disk

c1::dsk/c1t1d0 disk connected configured unknown
c1::dsk/c1t2d0 disk connected configured unknown
c2 fc-fabric connected configured unknown
c2::500009720822514c disk connected configured unknown
c3 fc-fabric connected configured unknown
c3::5000097208225168 disk connected configured unknown

After identifying the disk to be removed, unconfigure it. In some cases you may have to use -f along with -c to forcibly unconfigure the disk.

cfgadm -c unconfigure c1::dsk/c1t0d0

 

5. Verify the status of the disk with the cfgadm -al command. It should now show as unconfigured.

# cfgadm -al

c1::dsk/c1t0d0 connected unconfigured unknown

You can safely remove the disk from the server now.

6. Request the FE to insert the new disk into the disk slot of the server and run the below command.

# devfsadm

You should see the new disk detected in the OS.

 

Steps to configure the newly added disk

==============================

1. cfgadm -c configure c1::dsk/c1t0d0

2. Verify the disk is configured

c1::dsk/c1t0d0 available connected configured unknown

3. Copy the VTOC (partition table) from the primary disk.

prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2

4. Now add the metadb replicas on the disk.

metadb -afc 3 /dev/dsk/c1t0d0s7

5. Install the bootblk on slice 0 of the new disk.

installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0

6. Update the device ID in the SVM database

metadevadm -u c1t0d0

7. Attach the detached submirrors using the metattach command. The syntax to do so is:

metattach d65 d61

metattach d30 d31

metattach d55 d51

metattach d45 d41

metattach d5 d1

metattach d10 d11

8. You can verify the resync status by using the metastat -c command.

 

Thank You !!!

How to create a UFS filesystem on a newly added disk in Solaris

Let’s discuss here how to create a new filesystem on a newly added disk in Solaris. Below is the scenario shown with examples.


 

1. Once the disk is added to the server, check if it is visible in the OS by running the below command

bash-3.00# echo | format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c0d1 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/[email protected],0/[email protected],1/[email protected]/[email protected],0
Specify disk (enter its number): Specify disk (enter its number):
bash-3.00#

2. If it is not visible in the OS run the below command to scan the newly added hardware.

bash-3.00# devfsadm          ===> (works on Solaris 10 and later versions; on older versions you need to perform a reconfiguration reboot)

3. Check the format command again to confirm the disk is showing in the OS.

bash-3.00# echo | format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c0d1 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/[email protected],0/[email protected],1/[email protected]/[email protected],0
1. c1t0d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
/[email protected],0/pci15ad,[email protected]/[email protected],0
Specify disk (enter its number): Specify disk (enter its number):
bash-3.00#


========================================================================================

4. Now we can partition the disk

bash-3.00# format
Searching for disks…done
AVAILABLE DISK SELECTIONS:
0. c0d1 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
/[email protected],0/[email protected],1/[email protected]/[email protected],0
1. c1t0d0 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
/[email protected],0/pci15ad,[email protected]/[email protected],0
Specify disk (enter its number): 1
selecting c1t0d0
[disk formatted]

FORMAT MENU:
disk – select a disk
type – select (define) a disk type
partition – select (define) a partition table
current – describe the current disk
format – format and analyze the disk
fdisk – run the fdisk program
repair – repair a defective sector
label – write label to the disk
analyze – surface analysis
defect – defect list management
backup – search for backup labels
verify – read and display labels
save – save new disk/partition definitions
inquiry – show vendor, product and revision
volname – set 8-character volume name
!<cmd> – execute <cmd>, then return
quit
format> format> p                         ===> “P” is used for partition
WARNING – This disk may be in use by an application that has
modified the fdisk table. Ensure that this disk is
not currently in use before proceeding to use fdisk.       ===> if we get this error do fdisk on the disk

format> fdisk
No fdisk table exists. The default partition for the disk is:

a 100% “SOLARIS System” partition

Type “y” to accept the default partition, otherwise type “n” to edit the
partition table.
y
format> p
PARTITION MENU:
0 – change `0′ partition
1 – change `1′ partition
2 – change `2′ partition
3 – change `3′ partition
4 – change `4′ partition
5 – change `5′ partition
6 – change `6′ partition
7 – change `7′ partition
select – select a predefined table
modify – modify a predefined partition table
name – name the current table
print – display the current table
label – write partition map and label to the disk
!<cmd> – execute <cmd>, then return
quit
partition>
partition> print                                     ===> print is used to print the current partition table on the disk available
Current partition table (original):
Total disk cylinders available: 1302 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 – 1301 9.97GB (1302/0/0) 20916630
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 – 0 7.84MB (1/0/0) 16065
9 unassigned wm 0 0 (0/0/0) 0

partition>


partition> 0                                           ===> creating 0th partition in this disk
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0

Enter partition id tag[unassigned]: ?                     ===> ? mark is for help
Expecting one of the following: (abbreviations ok):
unassigned boot root swap
usr backup stand var
home alternates reserved

Enter partition id tag[unassigned]: var
Enter partition permission flags[wm]: ?
Expecting one of the following: (abbreviations ok):
wm – read-write, mountable
wu – read-write, unmountable
rm – read-only, mountable
ru – read-only, unmountable

Enter partition permission flags[wm]: wm
Enter new starting cyl[1]: ?
Expecting an integer from 0 to 1301
Enter new starting cyl[1]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 4g
partition> print
Current partition table (unnamed):
Total disk cylinders available: 1302 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks
0 var wm 1 – 523 4.01GB (523/0/0) 8401995               ===> we have var partition created in 0th slice now.
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 – 1301 9.97GB (1302/0/0) 20916630
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 – 0 7.84MB (1/0/0) 16065
9 unassigned wm 0 0 (0/0/0) 0

partition>
partition> label                       ===> label is used to write the partition table with the new entry.
Ready to label disk, continue? yes

partition> q                            ===>q is used to quit the menu.
=======================================================================================


5. Now we can create a UFS file system on the new slice

bash-3.00# newfs /dev/rdsk/c1t0d0s0                      ===> newfs is the command used to create the file system; UFS is the default.
newfs: construct a new file system /dev/rdsk/c1t0d0s0: (y/n)? y           ===> note that newfs operates on the raw device (rdsk) path.
Warning: 2998 sector(s) in last cylinder unallocated
/dev/rdsk/c1t0d0s0: 8401994 sectors in 1368 cylinders of 48 tracks, 128 sectors
4102.5MB in 86 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
7472672, 7571104, 7669536, 7767968, 7866400, 7964832, 8063264, 8161696,
8260128, 8358560
bash-3.00#
==========================================================================================
6. Now we can mount the filesystem as /var

bash-3.00# mount /dev/dsk/c1t0d0s0 /var               ===> for mounting you should use “dsk” naming format (/dev/dsk/c1t0d0s0 )
bash-3.00# df -h /var
Filesystem size used avail capacity Mounted on
/dev/dsk/c1t0d0s0 3.9G 4.0M 3.9G 1% /var
bash-3.00#

Now you can make the entry permanent in /etc/vfstab
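A vfstab entry for this example would look something like the line below (device names as used above; adjust the fsck pass and mount-at-boot fields to your standards):

/dev/dsk/c1t0d0s0   /dev/rdsk/c1t0d0s0   /var   ufs   1   no   -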

=============================================================================================

Hope this will help you. For more posts on Solaris you may click here

Thank you!!!

 

EMC VMAX3 in 3D

EMC introduced the VMAX3 series in mid-2014, positioning it as an Enterprise Data Service Platform rather than just a storage system. My eyes got stuck on a YouTube video from EMC showing a 3D view of the components and features.


I wanted to share it with you, which resulted in this post. Here it is for you…

 

VMAX3 introduced 3 new models of VMAX – the VMAX 100K, 200K and 400K – with the new HYPERMAX Operating System and Dynamic Virtual Matrix architecture. You may read more from the links below, from experts and EMC Elects.

 

New EMC VMAX³ – Industry’s First Enterprise Data Service Platform – Official Press Release

The VMAX3: Why Enterprise Class is Still Very Relevant – By Jason Nash

EMC Announces Next-Generation VMAX Storage Array – By Dave Henry

Symmetrix offers a new kind of MAXimum Virtualisation (VMAX) -By Rob Koper

VMAX 3 – The all-new Enterprise Data Service Platform..! Part-I – By Vipin V.K

EMC ANNOUNCE VMAX3 – By Roy Mikes

EMC announces its next Generation VMAX Array – By Mark May

EMC Announces VMAX Family Re-architected for Enterprise Data Services and Hybrid Clouds -By StorageReview.com

These are just a few posts which I found interesting. You may suggest links to more value-added posts on VMAX3 in the comments section, and we will try to add them here for others’ reference.

You may find more EMC VMAX posts here. Thank You.

 

Creating and deleting a hard link in Linux

In one of our recent posts (click here to read) we discussed soft links. Now in this post we will discuss how to create and delete a hard link in Linux.

Hard links are also shortcuts used in Linux, but they are a bit different from soft links. In the case of a hard link, no new inode is created: if you create a hard link to a file with inode number 123456, the hard link will use the same inode. In the case of a soft link, a new inode is created for the link. With a hard link, the data can still be accessed via the link even if the original file is deleted or moved to a different location.

Here let us see the commands and some examples.

Syntax :

ln sourcefile linkfile    #ln command is used for hard link creation with this syntax.

Examples :

[[email protected] test]# cat >> testfile  #Created a file named testfile for use in next steps
this is a test file

[[email protected] test]# ls -li

30760 -rw-rw-r-- 1 admin admin 20 Jan 3 11:39 testfile #Listing with inode number (30760)

[[email protected] test]#ln testfile testlink #Created a link named testlink

[[email protected] test]# ls -li

30760 -rw-rw-r-- 1 admin admin 20 Jan 3 11:39 testfile #Both files with same inode number

30760 -rw-rw-r-- 1 admin admin 20 Jan 3 11:39 testlink

[[email protected] test]#more testlink #Source being accessed via link.
this is a test file

 

[[email protected] test]#rm -rf testfile #Deleting the source file

[[email protected] test]#more testlink #Actual content via link even after source deletion.
this is a test file

[[email protected] test]#

Now let us see the link removal. The unlink command can be used to remove a link.

Syntax :

unlink linkname

Example

[[email protected] test]#unlink testlink

[[email protected] test]#

Thus we have tried creating and deleting a hard link in Linux. Hope this was helpful for you.

For more posts on Linux please click here. We are happy to have your suggestions/queries in the comments section below.

 
