Vembu BDR Suite v4.0 is now GA


Vembu recently announced the latest version, 4.0, which is now generally available for customers. Vembu BDR 4.0 is currently available only for fresh installations; an upgrade package for existing customers to move their environments to 4.0 will be available soon.

 

There are a lot of exciting new features in 4.0, including backup of VMs running on a Hyper-V cluster. You can click here to read more about the new features in Vembu BDR 4.0.

You can download the installer here for your environment today.

Also, Vembu is giving away discounts of up to 40% on their products until the 24th of December. Hurry, enjoy the Thanksgiving-Christmas discount from Vembu and grab your piece of software. Please refer to the Vembu blog here for more details.

IT Blog Awards by Cisco – Vote now..!

Hurry, vote now for the best IT blogs in the IT Blog Awards hosted by Cisco!

About the program:

This is the first-ever IT Blog Awards from Cisco, recognizing the contributions of the blogger community across various categories.

About the program, from the Cisco website:

The first-ever IT Blog Awards, hosted by Cisco, is our way of recognizing the great community of independent tech bloggers for the passion, creativity, and expertise shared throughout the year. We appreciate your impact on the tech community.
Voting is now open through January 4, 2019.  Winners will receive a Cisco Live US pass.

You can vote for blogs in different categories, and voting ends on 4th January 2019. Make sure to consider the value, credibility and consistency of the content while you select a blog as the best in its category.

It is your opportunity to recognize the bloggers/blogs who are helping the community by providing excellent content. Do not wait, vote now.

We are proud to announce that we have been chosen as one of the finalists in the Best Group Effort category. If you feel our content is of quality, helps the community and meets the program guidelines, you can select our blog in the Best Group Effort category.

EMC ISILON Interview questions

Adding one more post to our interview questions category, this time for ISILON. We are trying to cover some of the frequently asked questions from the ISILON architecture and configuration areas.

  •  Node and drive types supported :  ISILON supports three different node types: S-Series, X-Series and NL-Series. The S-Series (S210) is a high-performance node type that supports SSD drives. X-Series nodes (X210 and X410) support up to 6 SSDs, with the remaining slots holding HDDs. NL-Series (NL410) nodes support only one SSD in the system, with SATA drives in the remaining slots; this node type is intended for archiving requirements.


With recent OneFS versions, the system also supports All-Flash nodes, Hybrid nodes, Archive nodes and IsilonSD nodes. ISILON All-Flash nodes (F800) can have up to 924 TB in a single 4U node and can grow up to 33 PB in one cluster; one node can house up to 60 drives. Hybrid nodes (H400, H500 and H600) support a mix of SSDs and HDDs: the H400 and H500 can have SATA drives and SSDs, while the H600 supports SSDs and SAS drives. Archive nodes (A200 and A2000) are intended for archiving solutions. A2000 nodes have 80 slots and support 10 TB drives only; this node is for high-density archiving. The A200 is for near-primary archiving storage solutions and supports 2 TB, 4 TB or 8 TB SATA HDDs, with a maximum of 60 drives.

IsilonSD is a software-only node type which can be installed on customer hardware.

  •  Scale-Out and Scale-Up architecture : The first thing that comes up with ISILON is its Scale-Out architecture. With Scale-Out architecture, processing and capacity increase in parallel: as we add a node, both capacity and processing power are added to the system. Take VNX as an example of Scale-Up architecture: there, the processing power (i.e., the storage processors) cannot be increased beyond the system limit of 2 SPs, but we can grow the overall system capacity by adding more DAEs (and disks) up to the supported limit.
  •  Infiniband Switches and types : ISILON uses InfiniBand (IB) switches for the internal communication between the nodes. With Gen-6 hardware, ISILON now also supports 40 GbE back-end switches in addition to the 10 GbE IB switches.
  •  SmartConnect and SSIP : [definition from the ISILON SmartConnect whitepaper] SmartConnect is a licensable software module of the EMC ISILON OneFS operating system that optimizes performance and availability by enabling intelligent client connection load balancing and failover support. Through a single host name, SmartConnect enables client connection load balancing and dynamic NFS failover and failback of client connections across storage nodes to provide optimal utilization of the cluster resources. SmartConnect eliminates the need to install client-side drivers, enabling the IT administrator to easily manage large numbers of clients with confidence. And in the event of a system failure, file system stability and availability are maintained.

For every SmartConnect zone there will be one SSIP (SmartConnect Service IP), which will be used for client connections. The SSIP and its associated hostname have a DNS entry, and client requests reach the cluster/zone via the SSIP. The zone then redirects each request to a node for completion.
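As a quick illustration (the zone name and IPs here are hypothetical, and this assumes the default round-robin connection policy), repeated lookups against the SmartConnect zone name return different node IPs as the cluster balances client connections:

nslookup nas.bforum.example.com    #first query returns one node IP, e.g. 10.20.10.41
nslookup nas.bforum.example.com    #a repeat query may return another node, e.g. 10.20.10.42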

  •  SmartPool : SmartPool enables effective tiering across storage node types within the filesystem. Data, based on its utilization, is moved across the tiers within the filesystem automatically, with seamless application and end-user access. Customers can define data-movement policies for different workflows and node types.
  •  Protection types in ISILON : An ISILON cluster can have protection of type N+M (where N is the number of data blocks and M is the number of node/drive failures the system can tolerate) or N+M:B (where N is the number of data blocks, M is the number of drive failures and B is the number of node failures that can be tolerated), with N > M. A 3-node system, for example, can have +1 (i.e., 2+1) protection; such a system can tolerate one drive/node failure without any data loss.
  •  Steps to create an NFS export : Here we have listed the commands to create and list/view the NFS export.

To create the NFS export :
isi nfs exports create --clients=10.20.10.31,10.20.10.32 --root-clients=10.20.10.33,10.20.10.34 --description="Beginnersforum Test NFS Share" --paths=/ifs/BForum/Test --security-flavors=unix

To list the NFS exports :
isi nfs exports list

To view the NFS export :
isi nfs exports view <export_number>

You can also create an NFS export alias and quotas for the export path; an alias example is sketched below.
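A minimal sketch, assuming OneFS 8.x CLI syntax; the alias name is only an example. An alias gives clients a shorter path to mount:

isi nfs aliases create /BForumTest /ifs/BForum/Test    #clients can mount /BForumTest instead of the full /ifs path
isi nfs aliases list    #verify the alias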

Hope this helped you in learning some ISILON stuff. We will have more questions and details in upcoming posts. For more interview question posts, please click here. Please add any queries and suggestions in the comments.

Understanding the VMAX FA/RDF port numbering

The logical numbering of the VMAX ports (FE, RF, etc. ports on the VMAX3 and VMAX AFA arrays) is quite confusing. It is important, as we use it when doing the zoning and host integration for these arrays. And yes, it really is confusing for any storage administrator who is not very familiar with it.

This post is an attempt to explain the mapping of physical to logical numbering of the FA/RDF ports on a VMAX system.


The physical numbering of the SLICs (IO modules) is as in the above snip and is quite straightforward. The modules run from Slot 0 to Slot 10, with the management modules (MMCS on the first engine, MM on the remaining engines) on the left-most side. A few slots are used for vault-to-flash modules, which have no ports on them. The slot number and the type of module in it may vary slightly with the addition of compression modules in AFA arrays. SLICs 2, 3, 8 and 9 are important from an admin point of view, as these hold either FA or RDF modules. Even though the back-end modules have physical ports and connectivity, they are not of much concern for an admin, as they are configured during array initialization.


Here comes the logical numbering. For numbering a port, we should know which director we are referring to and the slot number of the specific module. Considering the above snip as a single-engine scenario, the odd director will be director 1 and the even director will be director 2. This is similar for the remaining engines (e.g., for engine 4, the odd director will be director 7 and the even director will be director 8).


Now, let us assume SLIC 2 is configured with FA emulation. The ports on the director 1 SLIC will be numbered starting with 1d (d for FA emulation); the port numbers will be 1d4, 1d5, 1d6 and 1d7. You will have to keep this image in mind, or make a note of it, to have the logical numbering for each SLIC. For RDF emulation, the numbering will have an e in it (e for RDF). Let us assume SLIC 8 is of RDF emulation. The 3rd port (the last port, with numbering starting from 0 as in the first pic) on SLIC 8 on the even director will be 2e27: 2 for the 2nd director, e for RDF emulation and 27 as the logical port number.

Hopefully that was not too tough and it helped you. You may try various combinations for practice. To start with, what will be the logical number for an FA port on the odd director, SLIC 8, port 2?
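If you want to cross-check the numbering against a live array, a SYMCLI director listing helps. A minimal sketch, assuming Solutions Enabler is installed and using a made-up SID; the output format varies by version:

symcfg -sid 1234 list -dir all    #lists all directors (FA/RF/DA etc.) and their ports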

You may find more EMC VMAX posts here. Please use the comments section for any queries/suggestions.

Expanding a (EMC Celerra/VNX) NAS Pool

In this post, let's discuss expanding an (EMC Celerra/VNX-File) NAS pool by adding new LUNs from the backend storage. A NAS pool, on which we create filesystems for NFS/CIFS (SMB), should have sufficient space to cater for the NAS requests. Here our pool is running out of space, with only a few MBs left.

[[email protected] ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 0
[[email protected] ~]$

Let’s see how we can get this pool extended.


Let's first have a look at the existing disks (LUNs from the backend). Here we already have 9 disks assigned. We need the 10th one in place, which will add space to the pool.

[[email protected] ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
[[email protected] ~]$

As per the requirement, we have to assign the LUNs from the backend storage. It is recommended to add new LUNs of a size identical to the existing LUNs in the pool, to get the best performance.

Now to the most important part: rescanning the new disks. We have to use the server_devconfig command for the rescan. We can also run the command against individual Data Movers; the recommended way is to start with the standby Data Movers first and then the primary ones. Listing the disks with nas_disk will show the servers on which the disks have been scanned.

[[email protected] ~]$ server_devconfig ALL -create -scsi -all

Discovering storage (may take several minutes)
server_2 : done
server_3 : done
[[email protected] ~]$
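The same rescan can be run one Data Mover at a time (standby first, then primary, as mentioned above); here we assume server_3 is the standby in this setup:

server_devconfig server_3 -create -scsi -all    #scan the standby Data Mover first
server_devconfig server_2 -create -scsi -all    #then the primary Data Mover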

Yes, that completed successfully. Now let's check the disk list. We can see the 10th disk, with inuse=n, which has been scanned on both servers (Data Movers).

[[email protected] ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
10 n 547418 CKxxxxxxxxxxx-0006 CLSTD d10 1,2
[[email protected] ~]$

Let’s check the pool again to see the available and potential storage capacity.

[[email protected] ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 547418
[[email protected] ~]$

Now, as you can see, the expanded capacity is available to the pool (refer to the potential_mb value). The new space can then be consumed when creating or extending filesystems from this pool, as in the sketch below.
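A minimal sketch of extending an existing filesystem from this pool; the filesystem name and size are only examples, and the exact nas_fs options may vary slightly by DART/VNX OE release:

nas_fs -xtend Bforum_FS01 size=100G pool=Bforum_Pool    #grow the filesystem by 100 GB using the pool's new (potential) capacity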

You may refer to our previous post on scanning new LUNs on VNX File/Celerra Data Movers. Click here for more Celerra/VNX posts.

ISILON basic commands

In this post we are discussing a few basic ISILON commands, some of which come in handy in our daily administration tasks for managing and monitoring the ISILON array. You may also refer to the Celerra/VNX health check steps we discussed in one of our previous posts.

Here are some of the isilon commands.

isi status : Displays the status of the cluster, nodes, events etc. You can use various options, including -r (to display raw size), -D (for detailed info) and -n <node id> (info for a specific node).

isi_for_array : For running a command across the nodes of the cluster; -n runs it against a specific node.

isi events : There are many options with the 'events' command, including isi events list (to list all events), isi events cancel (to cancel events) and isi events quiet (to quiet events). You can also set up event notifications using the isi events command.
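For example, a typical triage using the sub-commands above (the event ID here is hypothetical, and the exact argument form varies by OneFS version):

isi events list    #note the ID of the event you want to act on
isi events quiet 1.234    #quiet that event so it stops alerting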


isi devices : To view and change the status of cluster devices. There are plenty of options with the devices command.

isi devices --device <Device> : where Device can be a drive or an entire node. The --action option is used to perform a specific action on the device (--action <action>), including smartfail, stopfail, status, add, format etc. See the sketch below.
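A minimal sketch built from the syntax above; the device ID is hypothetical and its format varies by OneFS version, so always confirm the device status before smartfailing it:

isi devices --device 2:5 --action status    #check the status of drive 5 on node 2 first
isi devices --device 2:5 --action smartfail    #then smartfail the drive if it needs replacing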

isi firmware status : Used to list the ISILON firmware types and versions.

isi nfs exports : The NFS exports command is used for various ISILON NFS operations, including export creation, listing/viewing, modification etc. Below are a few sample sub-commands.

1. isi nfs exports create --zone <zone name> --root-clients=host1,host2 --read-write-clients=host2,host3 --path=<path>

2. isi nfs exports view <export ID> --zone=<zone name>

3. isi nfs exports modify <export ID> --zone=<zone name> --add-read-write-clients host4

isi smb shares : This command is used for create, list/view, modify etc. operations on SMB shares. Sample sub-commands:

1. isi smb shares create <share name> --path=/ifs/data/SMBpath --create-path --browsable=true --zone <zone name> --description="Test share"

2. isi smb shares delete <share name> --zone=<zone name>

isi quota quotas : This command is used for various quota operations, including creation/deletion/modification etc., as in the sketch below.
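A minimal sketch, assuming OneFS 8.x syntax; the path and threshold are only examples:

isi quota quotas create /ifs/data/projects directory --hard-threshold=500G    #directory quota with a 500 GB hard limit
isi quota quotas list    #confirm the new quota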

Hope this post helped you. Please feel free to comment if you have any questions. We will discuss more detailed commands in a later post.

 

Simple LUN allocation steps – VNX

In one of our earlier posts, we saw the allocation steps for VMAX. Now let's see the case with the mid-range product, EMC VNX. LUN allocation in VNX is quite simple with the Unisphere Manager GUI. Let's see the steps here.


Creating a LUN : You need information such as the size of the LUN required, the disk type and the RAID type (if there are any specific requirements). Based on these requirements, and on the disk type and RAID type used in the different pools, you have to select the correct pool. From Unisphere, under Storage > LUNs, click the Create button.

You have to furnish the data, including the size, pool (video below from EMC on pool creation) etc., in the screen. Select the checkbox depending on whether the LUN needs to be created as thin or thick. Once all the fields are filled in, note the LUN ID and submit the form. Done..! You have created the LUN; you can find the new LUN in the list and verify the details.

Adding a new host : Yes, your requirement may be to allocate the new LUN to a new host. Once the host is connected via the fabric and you are done with the zoning, the host connectivity should be visible in the Initiators list (Unisphere > Hosts > Initiators). If you have the Unisphere Host Agent installed on the host, or if it is an ESXi host, the host gets auto-registered and you will see the host details in the Initiators list.

Otherwise you will see only the new host WWNs in the list. You have to select the WWNs and register them, filling in the host details (name and IP) and the ArrayCommPath and failover mode settings. Once the host is registered, you will see it in the hosts list (Unisphere > Hosts > Hosts).

 

Storage Group : You now have to make the LUN visible to the host. A Storage Group is the way to do this in VNX/CLARiiON. You will have to create a new storage group for the host (Unisphere > Hosts > Storage Groups). You can name the new storage group to match the host/cluster name for easy identification, and then add the host to the group.

If there are multiple hosts which will be sharing the LUNs, you have to add all of those hosts to the storage group. You also have to add the LUNs to the storage group, setting the HLU for each LUN in the SG. Be careful when assigning the HLU: changing it later requires a downtime, as it cannot be modified on the fly.

Once the LUNs and hosts are added to the storage group, you are done with the allocation..! You can now ask the host team to do a rescan to see the new LUNs. The same steps can also be scripted from the CLI, as sketched below.
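For reference, a minimal naviseccli sketch of roughly the same flow; the SP address, pool, LUN/HLU numbers and names here are made up, and exact flags can vary by VNX OE release:

naviseccli -h 10.20.10.50 lun -create -type Thin -capacity 100 -sq gb -poolName Pool_1 -name BForum_LUN1 -l 101    #create a 100 GB thin LUN from Pool_1 with LUN ID 101
naviseccli -h 10.20.10.50 storagegroup -create -gname BForum_Host1_SG    #create the storage group
naviseccli -h 10.20.10.50 storagegroup -connecthost -host BForum_Host1 -gname BForum_Host1_SG    #add the registered host
naviseccli -h 10.20.10.50 storagegroup -addhlu -gname BForum_Host1_SG -hlu 0 -alu 101    #present LUN 101 to the host as HLU 0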

Hope this post helped you. For more Celerra/VNX posts click here

 

EMC VMAX3 in 3D

EMC introduced the VMAX3 series in mid-2014, positioning it as an Enterprise Data Service Platform rather than just a storage system. I came across a YouTube video from EMC which shows a 3D view of the components and features.


I wanted to share it with you, which resulted in this post. Here it is for you…

 

VMAX3 introduced three new VMAX models – the VMAX 100K, 200K and 400K – with the new HYPERMAX operating system and the Dynamic Virtual Matrix architecture. You may read more from the links below, from experts and EMC Elects.

 

New EMC VMAX³ – Industry’s First Enterprise Data Service Platform – Official Press Release

The VMAX3: Why Enterprise Class is Still Very Relevant – By Jason Nash

EMC Announces Next-Generation VMAX Storage Array – By Dave Henry

Symmetrix offers a new kind of MAXimum Virtualisation (VMAX) -By Rob Koper

VMAX 3 – The all-new Enterprise Data Service Platform..! Part-I – By Vipin V.K

EMC ANNOUNCE VMAX3 – By Roy Mikes

EMC announces its next Generation VMAX Array – By Mark May

EMC Announces VMAX Family Re-architected for Enterprise Data Services and Hybrid Clouds -By StorageReview.com

These are just a few posts which I found interesting. You may suggest links to more value-added posts on VMAX3 in the comments section; we will try to add them here for others' reference.

You may find more EMC VMAX posts here. Thank You.

 

Creating and deleting a hard link in Linux

In one of our recent posts (click here to read), we discussed soft links. Now, in this post, we will discuss how to create and delete a hard link in Linux.

Hard links are also shortcuts used in Linux, but they are a bit different from soft links. In the case of a hard link, no new inode is created: if you create a hard link to a file with inode number 123456, the hard link uses the same inode. In the case of a soft link, a new inode is created for the link. With a hard link, the data can still be accessed via the link even if the original file is deleted or moved to a different location.

Let us see the commands and some examples of them.

Syntax :

ln sourcefile linkfile    #ln command is used for hard link creation with this syntax.

Examples :

[[email protected] test]# cat >> testfile  #Created a file named testfile for use in next steps
this is a test file

[[email protected] test]# ls -li

30760 -rw-rw-r-- 1 admin admin 20 Jan 3 11:39 testfile #Listing with inode number (30760)

[[email protected] test]#ln testfile testlink #Created a link named testlink

[[email protected] test]# ls -li

30760 -rw-rw-r-- 1 admin admin 20 Jan 3 11:39 testfile #Both files with same inode number

30760 -rw-rw-r-- 1 admin admin 20 Jan 3 11:39 testlink

[[email protected] test]#more testlink #Source being accessed via link.
this is a test file

 

[[email protected] test]#rm -rf testfile #Deleting the source file

[[email protected] test]#more testlink #Actual content via link even after source deletion.
this is a test file

[[email protected] test]#
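One more way to see what is happening above, assuming GNU coreutils: stat can print the hard-link count and inode number. It would show a count of 2 while both names existed, dropping back to 1 after the source file was removed.

stat -c '%h %i' testlink    #prints the link count and inode, e.g. "2 30760" before the source was deleted and "1 30760" after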

Now let us see the link removal. The unlink command can be used to remove a link.

Syntax :

unlink linkname

Example

[[email protected] test]#unlink testlink

[[email protected] test]#

Thus we have tried creating and deleting a hard link in Linux. Hope this was helpful for you.

For more posts on Linux please click here. We are happy to have your suggestions/queries in the comments section below.

 

Creating and removing a Symbolic link (soft link) in Linux

Symbolic links, or soft links, are like shortcuts in Windows systems. We use them to point/redirect to another file or directory. Links are a must in some of our day-to-day tasks, where we need a shortcut to the actual data. With soft links, we can even point to a directory, which is not possible with hard links. Hard links are discussed in another post, which can be accessed by clicking here.

Let us see below how we can create a soft link, how to use it and how we can remove/delete the link.

A symbolic link can be created by running the ln -s command. Let us see the syntax and an example for creating a link.

Syntax :

ln -s target linkname    #creates a symbolic link named linkname pointing to target

Example :

[[email protected] test]#ln -s /home/beginnersforum/linux testlink   #A link to linux directory will be created with name testlink

[[email protected] test]#ls -l

lrwxrwxrwx 1 root root   26 Jan  1 2015  testlink -> /home/beginnersforum/linux

[[email protected] test]

Here in the above example, we have created a link named testlink to the directory /home/beginnersforum/linux. Now accessing the testlink will redirect us to the linux directory as required.
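If you just want to see where a link points, assuming GNU coreutils, readlink prints the target (and readlink -f resolves it to an absolute, canonical path):

readlink testlink    #prints the target: /home/beginnersforum/linux
readlink -f testlink    #resolves the absolute, canonical target path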

 

Now we will see how we can remove the link. The command for removing a link is unlink. Below are the syntax and an example for the command.

Syntax :

unlink linkname

Example :

[[email protected] test]#ls -l

lrwxrwxrwx 1 root root   26 Jan  1 2015  testlink -> /home/beginnersforum/linux

[[email protected] test]#unlink testlink                    #Removing the link named testlink

[[email protected] test]#ls -l

[[email protected] test]#

Thus in the above unlink command example, we have removed the link named testlink which was created in our previous example.

For more options with these commands, you may refer to the man pages (man ln, man unlink).

That’s it..! We are done with creation and deletion of soft links. Hope this helped you.

For more posts on Linux please click here. We are happy to have your suggestions/queries in the comments section below.

 
