Azure Fundamentals (AZ-900) certification preparation – short notes – I

Azure certifications are in high demand right now, and Azure Fundamentals (AZ-900) is the right starting point. You can see here how to get free Azure training and an exam voucher you can use for the certification.

In this series of posts, we are sharing certification preparation notes with you. Instead of going through the detailed content spread across the internet, you can refer to these short notes during your exam preparation.

[ Disclaimer : This is not complete training material for the certification. These are just short notes captured from the course curricula, intended to help readers with a final revision before appearing for the exam. We do not guarantee that you will pass the exam with this content alone. ]

We recommend referring to the Microsoft Docs page for the detailed notes.

Types of compute


Virtual machines : Emulate a computer system without dedicated hardware, running a guest operating system on shared hardware. Consumers can deploy as many virtual machines on the physical hardware as they need (subject to the hardware limits).

Containers : Containers serve as execution environments for applications without a guest operating system. A container packages the application and all of its dependencies. Example : Docker
Serverless computing : Lets you build and run applications without worrying about the underlying server/host. The cloud provider runs the servers for you.
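
For a quick illustration of the container model above (not exam content, and the image name and port numbers are just example values), a single Docker command pulls an image that already contains the application and its dependencies and runs it in an isolated environment, with no guest OS installed inside the container :

docker run -d --name demo-web -p 8080:80 nginx:latest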
Cloud computing benefits
Cost-effective : The consumer doesn't have to buy and maintain the hardware and infrastructure; the cloud provider offers pay-as-you-go pricing.
Scalable : Lets the consumer scale their environment (both scaling up and scaling out) as per demand.
Elastic : Based on need, the cloud can automatically allocate more resources and de-allocate them once the demand subsides.
Global : You can provision resources in any region across the globe, with full redundancy.
Reliable : Reliability through built-in redundancy, backups and disaster recovery solutions.
Secure : Both physical security (of the physical infrastructure) and digital security (appropriate authentication for data access) are assured.
CapEx and OpEx
CapEx : All the expenditure involved in (initially) setting up the environment; an upfront expense.
Examples include server, storage, networking and data center infrastructure, and technical-resource expenses.
Benefits : A fixed expense, so the consumer can plan the budget.

OpEx : With cloud computing, the consumer only has to worry about operating expenses (the billing for the infrastructure and services), which involve little or no upfront payment.
Benefits : You do not have to pay the full amount upfront.
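
As a rough illustration with purely made-up numbers : buying a server outright might be a CapEx of $10,000 paid on day one, while renting an equivalent VM at around $300 per month is an OpEx of $3,600 per year, and the payments stop the moment you de-provision the VM.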
Cloud deployment models
Private Cloud : A cloud environment within your own data center, with complete control over the hardware/physical infrastructure and the physical security.
Public Cloud : The hardware is managed entirely by the cloud provider, and consumers use the infrastructure and services they require.
Hybrid Cloud : A combination of the private and public cloud models, giving the consumer the benefits of both.

Types of cloud services
IaaS (Infrastructure as a Service) : Computing infrastructure for the consumer without owning any hardware. The consumer has the most control over the infrastructure in this model compared to the other service types.
PaaS (Platform as a Service) : For running/testing an application on the required platform without worrying about the underlying infrastructure.
SaaS (Software as a Service) : The consumer uses software services from the cloud without being concerned about the infrastructure or the platform running it. Office 365 is an example.
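
To make the IaaS/PaaS distinction concrete, here is a hedged Azure CLI sketch (the resource group, VM and app names are placeholders, and the image alias and runtime string can vary between CLI versions) :

IaaS example, where you still patch and manage the guest OS yourself :
az vm create --resource-group demo-rg --name demo-vm --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys

PaaS example, where the platform manages the OS and runtime and you only push the code :
az webapp up --resource-group demo-rg --name demo-webapp --runtime "PYTHON:3.11"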

We hope this section helps you in your certification journey. You can find the next post in this series here. For the complete series, click here.

COVID-19 : Let’s fight this battle, together

We are going through a very difficult situation right now, and the numbers keep getting worse. COVID-19, which started in a small district in China, has now spread almost everywhere (six continents) around the globe.

Image courtesy : WHO

Let's not panic, but let's be more vigilant and careful. We have to fight this pandemic together.

Make sure you are :

  • Keeping yourself clean always. Sanitize your hands frequently, especially after any contact with others.
  • Covering your mouth and nose while coughing and sneezing
  • Avoiding gatherings, travel etc… as much as possible.
  • Using face masks whenever required
  • Getting yourself checked by a medical practitioner if you have any of the listed symptoms of the disease
  • Staying indoors with minimum contact with others if you have recently traveled to any of the affected areas.
  • Following the instructions from the local government bodies and the medical team

The symptoms of COVID-19 include :

  • Sore throat
  • (dry) Cough
  • Fever
  • Diarrhea, vomiting
  • Muscle pain and Headache along with Fever
  • etc…

Take care of yourself, take care of everyone. Our prayers are with everyone affected, globally. We will recover faster and better.

 

VMware vSAN – Understanding Fault Domains

VMware vSAN is one of the leading enterprise-class software-defined storage solutions. It lets you leverage server-based storage for enterprise applications. The advantages, as you might already know – cost reduction, ease of administration and more…

In this post we are discussing one of the characteristics of vSAN : Fault Domains.

What ?

Fault Domains help an administrator design for the failure scenarios that may occur in a vSAN cluster. If a customer wants to avoid data inaccessibility during a chassis failure, a power failure in a rack, etc…, they can do so by defining the right fault domains.

A minimum of 3 fault domains is required to enable this on a cluster.


How ?

In a vSAN cluster, writes are sent to multiple hosts/drives depending on the storage policy and the Failures To Tolerate (FTT) setting. With FTT=1, the write is sent to 2 hosts at the same time. Even if one of the hosts fails, the data is still accessible as a replica is available on the other host, and IO operations continue. We will discuss IO operations in vSAN in a separate post.
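
A quick worked example of the capacity cost (assuming the default RAID-1 mirroring policy) : with FTT=1, a 100 GB VMDK is stored as two full replicas plus a small witness component, so it consumes roughly 200 GB of raw vSAN capacity; with FTT=2 there would be three replicas, consuming roughly 300 GB.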

When Fault Domains are configured, the replicas are placed in different fault domains. We can define all the hosts in the same rack as one fault domain, so that the data and its replica are never on hosts in the same rack. The administrator can then plan maintenance activities at the rack level without any disruption to the services running on vSAN.

The same applies at the chassis level or any other level of protection. We can define the fault domains at the chassis level so that replicas do not reside in the same chassis.
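
As a minimal sketch of how rack-level fault domains could be defined, assuming the VMware PowerCLI vSAN cmdlets (the fault domain names and host names below are placeholders, and parameter names can differ between PowerCLI versions) :

Connect-VIServer vcenter.example.local
New-VsanFaultDomain -Name "Rack-A" -VMHost (Get-VMHost esx01.example.local, esx02.example.local)
New-VsanFaultDomain -Name "Rack-B" -VMHost (Get-VMHost esx03.example.local, esx04.example.local)
New-VsanFaultDomain -Name "Rack-C" -VMHost (Get-VMHost esx05.example.local, esx06.example.local)

This gives the three fault domains mentioned above, so a replica and its copy never land in the same rack.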


We hope you enjoyed reading this post and found it helpful. Please share your thoughts in the comments section.

Introducing Beginner’s Forum TV

Introducing our YouTube channel – Beginner’s Forum TV, a new platform from the team, as promised, to share more of our content from the technology world.

We have been posting content here on the blog in different categories around the data center, including networking, servers, storage, cloud and a bit of programming. Our channel will have content from all these areas but will not be limited to them. We will be adding more tech stuff – on electronics and gadgets, application and web development, and anything technical of interest to you.


Subscribe (by clicking the above button) to Beginner’s Forum TV so that you do not miss any updates on our latest content.

 
Thank you for all your support, and keep it coming. Follow/subscribe, share our content and keep sharing feedback (comments) as you always have.

Happy New Year

We wish all our readers a very happy New Year 2019..!!

We had a wonderful 2018, a year in which we achieved many milestones and did some amazing stuff on and off our page here.

Now we are into the New Year and we hope to go even further this year. We assure you excellent and innovative content, and we have ‘things’ planned for this New Year. 🙂

Once again we wish all our readers a Happy New Year, 2019..!

Keep reading..!

Vembu BDR Suite v4.0 is now GA

Vembu’s BDR Suite v4.0 is now GA.

Vembu recently announced the latest version – 4.0 – and it is now generally available for customers. At the moment, Vembu BDR 4.0 is available only for fresh installations; an upgrade package will soon be available for existing customers to bring their environments to 4.0.

There are a lot of exciting new features in 4.0, including backup of VMs running on a Hyper-V cluster. You can click here to read more about the new features in Vembu BDR 4.0.

You can download the installer here for your environment today.

Also, Vembu is offering up to a 40% discount on their products until the 24th of December. Hurry, enjoy the Thanksgiving-Christmas discount from Vembu and grab your piece of software. Please refer to the Vembu blog here for more details.

IT Blog Awards by Cisco – Vote now..!

Hurry, vote now for the best IT blogs in the IT Blog Awards hosted by Cisco..!

About the program:

This is the first-ever IT Blog Awards from Cisco, recognizing the contributions of the blogger community in various categories.

(about the program) from the Cisco website :

The first-ever IT Blog Awards, hosted by Cisco, is our way of recognizing the great community of independent tech bloggers for the passion, creativity, and expertise shared throughout the year. We appreciate your impact on the tech community.
Voting is now open through January 4, 2019.  Winners will receive a Cisco Live US pass.

You can vote for the blogs in different categories and the voting ends on 4th Jan, 2019. Make sure to consider the value, credibility and the consistency of the content while you select a blog as the best in that category.

It is now your opportunity to recognize the bloggers/blogs who are helping the community by providing excellent content. Do not wait, vote now.

We are proud to announce that we have been chosen as one of the finalists in the Best Group Effort category. If you feel our content is of good quality, helps the community and meets the program guidelines, you can select our blog in that category.

EMC ISILON Interview questions

Adding one more post to our interview questions category, this time for ISILON. We are trying to cover some of the frequently asked questions on ISILON architecture and configuration.

  •  Node and drive types supported : ISILON traditionally supported 3 different node types : S-Series, X-Series and NL-Series. S-Series (S210) is a high-performance node type that supports SSD drives. X-Series nodes (X210 and X410) support up to 6 SSDs, with the remaining slots holding HDDs. NL-Series (NL410) nodes support only one SSD in the system, with SATA drives in the remaining slots; this node type is intended for archiving requirements.


With recent OneFS versions, the system also supports All-Flash nodes, Hybrid nodes, Archive nodes and IsilonSD nodes. ISILON All-Flash nodes (F800) can have up to 924 TB in a single 4U chassis and can grow up to 33 PB in one cluster; one node can house up to 60 drives. Hybrid nodes (H400, H500 and H600) support a mix of SSDs and HDDs : H400 and H500 can have SATA drives and SSDs, while H600 supports SSDs and SAS drives. Archive nodes (A200 and A2000) are intended for archiving solutions. A2000 nodes can have 80 slots, supporting only 10 TB drives, and are meant for high-density archiving. A200 is for near-primary archiving solutions and supports 2 TB, 4 TB or 8 TB SATA HDDs, with a maximum of 60 drives.

IsilonSD is the software-only node type, which can be installed on the customer's own hardware.

  •  Scale-Out vs Scale-Up architecture : The first thing that comes up with ISILON is its Scale-Out architecture. With Scale-Out, processing and capacity increase in parallel : as we add a node, both the capacity and the processing power of the system increase. Take VNX as an example of Scale-Up architecture : there, the processing power (i.e. the storage processors) cannot be increased since the system limit is 2 SPs, but we can grow the overall system capacity by adding more DAEs (and disks) up to the supported limit.

  •  InfiniBand switches and types : ISILON uses IB switches for the internal (back-end) communication between the nodes. With Gen-6 hardware, ISILON also supports 40 GbE back-end switches in addition to the InfiniBand switches.
  •  SmartConnect and SSIP : [definition from the ISILON SmartConnect whitepaper] SmartConnect is a licensable software module of the EMC ISILON OneFS Operating System that optimizes performance and availability by enabling intelligent client connection load balancing and failover support. Through a single host name, SmartConnect enables client connection load balancing and dynamic NFS failover and failback of client connections across storage nodes to provide optimal utilization of the cluster resources. SmartConnect eliminates the need to install client-side drivers, enabling the IT administrator to easily manage large numbers of clients with confidence. And in the event of a system failure, file system stability and availability are maintained.

For every SmartConnect zone there will be one SSIP (SmartConnect Service IP), which will be used for client connections. The SSIP and the associated hostname have DNS entries, and client requests come to the cluster/zone via the SSIP. The zone then redirects the requests to the nodes for completion.
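
As a hedged illustration of the DNS side (the names and IP below are placeholders), the site DNS typically holds an A record for the SSIP and delegates the SmartConnect zone name to it, along these lines :

ssip.bforum.example.com.    IN  A   10.20.10.50
nas.bforum.example.com.     IN  NS  ssip.bforum.example.com.

Clients then mount using nas.bforum.example.com, and SmartConnect answers each lookup with the node IP it selects.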

  •  SmartPools : SmartPools enables effective tiering across storage node types within the filesystem. Based on utilization, data is moved across the tiers automatically, with seamless application and end-user access. Customers can define policies for data movement for different workflows and node types.

  •  Protection types in ISILON : An ISILON cluster can have protection of type N+M (where N is the number of data blocks and M is the number of node/drive failures the system can tolerate) or N+M:B (where N is the number of data blocks, M is the number of drive failures the system can tolerate and B is the number of node failures that can be tolerated), with N>M. A 3-node system, for instance, can have a +1 (i.e. 2+1) protection type; here the system can tolerate 1 drive/node failure without any data loss, at the cost of roughly one third of the raw capacity (1 protection block for every 2 data blocks).
  •  Steps to create an NFS export : Here are the commands to create and list/view an NFS export.

To create the NFS export :
isi nfs exports create --clients=10.20.10.31,10.20.10.32 --root-clients=10.20.10.33,10.20.10.34 --description="Beginnersforum Test NFS Share" --paths=/ifs/BForum/Test --security-flavors=unix

To list the NFS exports :
isi nfs exports list

To view the NFS export :
isi nfs exports view <export_number>

You can also create an NFS alias and quotas for the NFS export, as sketched below.
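
A hedged sketch of those follow-up steps (the alias name and quota size are placeholder values, and the exact syntax can vary between OneFS versions) :

To create an NFS alias for the export path :
isi nfs aliases create /bforum /ifs/BForum/Test

To create a directory quota on the export path :
isi quota quotas create /ifs/BForum/Test directory --hard-threshold=100G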

We hope this helped you learn some ISILON stuff. We will have more questions and details in upcoming posts. For more interview question posts, please click here. Please add any queries and suggestions in the comments.

Understanding the VMAX FA/RDF port numbering

The logical numbering of the VMAX ports (FE, RF etc… ports on the VMAX3 and VMAX All-Flash arrays) is quite confusing. It is important when we do the zoning and host integration for these arrays. And yes, it really is confusing for any storage administrator who is not very familiar with it.

This post is an attempt to explain the mapping of physical to logical numbering of the FA/RDF ports on a VMAX system.


The physical numbering of the SLICs (IO modules) is as in the above snip and is quite straightforward. The modules run from Slot 0 to Slot 10, with the management modules (MMCS on the first engine / MM on the remaining engines) on the leftmost side. A few slots are used for vault-to-flash modules, which have no ports on them. The slot number and the type of module in it may vary slightly with the addition of compression modules in the AFA arrays. SLICs 2, 3, 8 and 9 are the important ones from an admin point of view, as these hold either FA or RDF modules. Even though the back-end modules have physical ports and connectivity, they are not of much concern for an admin as they are configured during array initialization.


Now comes the logical numbering. To number a port, we need to know the director we are referring to and the slot number of the specific module. Considering the above snip as a single-engine scenario, the odd director will be director 1 and the even director will be director 2. This is similar for the remaining engines (e.g., for engine 4, the odd director will be director 7 and the even one director 8).


Now, let us assume SLIC 2 is configured with FA emulation. The ports on the director 1 SLIC will be numbered starting with 1d (d for FA emulation); the port numbers will be 1d4, 1d5, 1d6 and 1d7. You will have to keep this image in mind, or make a note of it, to work out the logical numbering for each SLIC. For RDF emulation, the numbering will have an e in it (e for RDF). Let us assume SLIC 8 has RDF emulation. Port 3 (the last port, with numbering starting from 0 as in the first pic) on SLIC 8 on the even director will be 2e27 : 2 for the 2nd director, e for RDF emulation and 27 is the logical port number.

Hopefully that was not too tough and it helped you. You may try various combinations for practice. To start with, what will be the logical number for an FA port on the odd director, SLIC 8, port 2 ?
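
(A worked answer, assuming SLIC 8's ports map to logical numbers 24 to 27 exactly as in the 2e27 example above : odd director = 1, FA emulation = d, port 2 = logical 26, so the port would be 1d26.)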

You may find more EMC VMAX posts here. Please use the comments section for any queries/suggestions.

Expanding a (EMC Celerra/VNX) NAS Pool

In this post let’s discuss expanding an (EMC Celerra/VNX File) NAS pool by adding new LUNs from the backend storage. A NAS pool, on which we create filesystems for NFS/CIFS (SMB), should have sufficient space to cater for the NAS requests. Here our pool is running out of space, with only a few MB left.

[[email protected] ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 0
[[email protected] ~]$

Let’s see how we can get this pool extended.


Let’s first have a look at the existing disks (LUNs from the backend). Here we already have 9 disks assigned; we need the 10th one in place, which will add space to the pool.

[[email protected] ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
[[email protected] ~]$

As per the requirement, we have to assign new LUNs from the backend storage. It is recommended to add new LUNs of identical size to the existing LUNs in the pool for best performance.

Now to the most important part – rescanning the new disks. We have to use the server_devconfig command for the rescan. We can also run the command against individual data movers; the recommended way is to scan the standby data movers first and then the primary ones (a hedged example follows below). Listing the disks with nas_disk -l will show the servers on which the disks have been scanned.
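
For instance, if server_3 is the standby data mover and server_2 is the primary (the server names here are just the ones from the listing above; adjust for your environment), the per-data-mover rescan would be :

server_devconfig server_3 -create -scsi -all
server_devconfig server_2 -create -scsi -all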

[[email protected] ~]$ server_devconfig ALL -create -scsi -all

Discovering storage (may take several minutes)
server_2 : done
server_3 : done
[[email protected] ~]$

Yes, that completed successfully. Now let’s check the disk list. We can see the 10th disk with inuse=n, scanned on both servers (data movers).

[[email protected] ~]$ nas_disk -l
id inuse sizeMB storageID-devID type name servers
1 y 11263 CKxxxxxxxxxxx-0000 CLSTD root_disk 1,2
2 y 11263 CKxxxxxxxxxxx-0001 CLSTD root_ldisk 1,2
3 y 2047 CKxxxxxxxxxxx-0002 CLSTD d3 1,2
4 y 2047 CKxxxxxxxxxxx-0003 CLSTD d4 1,2
5 y 2047 CKxxxxxxxxxxx-0004 CLSTD d5 1,2
6 y 32767 CKxxxxxxxxxxx-0005 CLSTD d6 1,2
7 y 178473 CKxxxxxxxxxxx-0010 CLSTD d7 1,2
8 n 178473 CKxxxxxxxxxxx-0011 CLSTD d8 1,2
9 y 547418 CKxxxxxxxxxxx-0007 CLSTD d9 1,2
10 n 547418 CKxxxxxxxxxxx-0006 CLSTD d10 1,2
[[email protected] ~]$

Let’s check the pool again to see the available and potential storage capacity.

[[email protected] ~]$ nas_pool -size Bforum_Pool
id = 48
name = Bforum_Pool
used_mb = 491127
avail_mb = 123
total_mb = 491250
potential_mb = 547418
[[email protected] ~]$

Now, as you can see, the expanded capacity is available to the pool (refer to the potential_mb value).

You may refer to our previous post on scanning new LUNs on VNX File/Celerra data movers. Click here for more Celerra/VNX posts.
