COVID-19: Let’s fight this battle, together

We are going through such a difficult situation right now, and the numbers that keep coming in are horrible. COVID-19, which started in a small district in China, has now spread almost everywhere (6 continents) around the globe.

Image courtesy: WHO

Let’s not panic, but let’s be more vigilant and careful. We have to fight this pandemic, together.

Make sure you are:

  • Keeping yourself clean always. Sanitize your hands frequently, especially after any contact with others.
  • Covering your mouth and nose while coughing and sneezing.
  • Avoiding gatherings, travel etc. as much as possible.
  • Using face masks whenever required.
  • Getting yourself checked by a medical practitioner if you have any of the listed symptoms of the disease.
  • Staying indoors with minimum contact with others, if you have recently traveled to any of the affected areas.
  • Following the instructions from the local government bodies and the medical teams.

The symptoms of COVID-19 include:

  • Sore throat
  • (dry) Cough
  • Fever
  • Diarrhea, vomiting
  • Muscle pain and Headache along with Fever
  • etc…

Take care of yourself, take care of everyone. Our prayers are with everyone affected, globally. We will recover faster and better.

 

VMware vSAN – Understanding Fault Domains

VMware vSAN is one of the leading enterprise-class software-defined storage solutions from VMware. It helps in leveraging server-based storage for enterprise applications. The advantages, as you might already know: cost reduction, ease of administration and more.

In this post we are discussing one of the characteristics of vSAN: Fault Domains.

What?

Fault Domains help an administrator design for the failure scenarios that may occur in a vSAN cluster. If a customer wants to avoid data inaccessibility during a chassis failure or a power failure in a rack, they can do so by setting up the right fault domains.

A minimum of 3 fault domains is required to enable this on a cluster.


How?

In a vSAN cluster, writes will be sent to multiple hosts/drives depending on the storage policy and the Failures To Tolerate (FTT) setting. If FTT=1, the write will be sent to 2 hosts at the same time. Even if one of the hosts fails, the data will still be accessible as the replica is available on the other host, and IO operations continue. We will discuss the IO operation in vSAN in a separate post.

With a Fault Domain configuration, the replicas will be saved in different fault domains. We can define all the hosts in the same rack to be part of one fault domain, so that data and its replica will never be in the same (host in the same) rack. Thus the administrator can plan for maintenance activities at the rack level without any disruption to the services running on vSAN.

The same applies for chassis-level or any other level of protection. We can define all the fault domains at the chassis level, so that replicas will not reside in the same chassis.
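For reference, here is a minimal sketch of how fault domain membership can be checked and set from an ESXi host's command line using the esxcli vsan faultdomain namespace (the fault domain name "Rack-A" is an example; in practice this is usually configured from the vSphere client under the cluster's vSAN settings):

# Show the fault domain this host currently belongs to
esxcli vsan faultdomain get

# Assign this host to a fault domain named Rack-A (example name);
# repeat on every host in the same rack
esxcli vsan faultdomain set -n Rack-A

# Reset the host back to its default (per-host) fault domain
esxcli vsan faultdomain reset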

 

Hope you enjoyed reading this post and that it was helpful for you. Please share your thoughts in the comments section.

Brocade SAN switch CLI Commands for troubleshooting minor issues

We have already discussed Brocade SAN switch zoning steps via CLI and Cisco MDS zoning steps via CLI.

This write-up focuses on the basic troubleshooting commands used on Brocade SAN switches. For a better understanding of the commands, let us first understand the day-to-day operational challenges faced in a SAN fabric. Listed below are a few of the operational error codes/prompts:

  1. Alias/port went offline
  2. Bottlenecks
  3. Port error
  4. Hanging zones
  5. Rx Tx Voltage/Power Issue

Let’s read in brief about how to identify these errors and how to troubleshoot them.

Alias/port went offline

This error is recorded due to the following reasons:

  1. Reboot/ Shutdown of the host
  2. Faulty cable
  3. Issue in the HBA card.


Thus, when ‘WWN/Alias went offline’ is recorded, use the below mentioned commands to identify when the port went offline and which port it was.

#fabriclog -s                                             States the ports which went offline recently.

#fabriclog -s | grep -E "Port Index|GMT"                  States the ports which went offline earlier, with timestamps. Note: this will not help if an FOS upgrade or a switch reboot was performed, as both activities clear the fabric log.

In order to know the zoning details through the WWN of the device, use below mentioned command:  

#alishow | grep <WWN> -B2                                 Lists the alias containing the given WWN.

Then use the below command:

#zoneshow --alias Alias_Name                              Lists the zone name and the aliases in it.


Bottlenecks

There are many kinds of bottlenecks, but the ones prominent in a SAN fabric are latency bottlenecks and congestion bottlenecks.

A latency bottleneck occurs when a slow drain device is connected to the port. Both initiator and target ports can report latency; no matter what kind of port it is, if a slow drain device is attached, there will be a bottleneck on that port.

A slow drain device is a device which has all, or any one, of the below mentioned issues:

  1. Unsupported firmware.
  2. Hardware issues.
  • An SFP which has a voltage or power issue.

A congestion bottleneck, on the other hand, occurs due to a high rate of data transfer on the port. In the next write-up we will discuss in detail the causes of congestion bottlenecks.


The commands used to identify latency as well as congestion bottlenecks are:

#errdump

#mapsdb --show

If there is a latency or congestion bottleneck, it should be fixed by logging a support case with the server/storage hardware vendor.

Port errors

There are many kinds of port errors. Most of the time, they are due to a bottleneck issue or a physical layer issue. Bottleneck issues we have already addressed above; a physical layer issue is either a cable issue or an SFP issue.

Below are the commands to identify the port errors:

#porterrshow                                              Lists all ports in an error state.

#porterrshow port_number

#porterrshow -i Port_Index                                Both these commands list the errors on a particular port.


In case an error is listed, clear the counters using the below commands and observe again before troubleshooting further.

#statsclear

#slotstatsclear

#portstatsclear port_number

Apart from this, there are other commands to display the current data transfer rate of a port or all ports, such as:

#portperfshow

#portperfshow port_number
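As a small illustration of how these commands fit together, below is a hedged sketch (the switch address, user and soak time are examples, and it assumes SSH access to the switch) that clears the counters and re-checks them after a soak period, so you can see which error counters are still incrementing:

#!/bin/bash
# Sketch: check which Brocade port error counters are still incrementing.
# Switch address and user are examples; assumes SSH access to the switch.
SWITCH=10.0.0.50

ssh admin@"$SWITCH" statsclear       # reset the counters
sleep 300                            # soak period: let traffic run for 5 minutes
ssh admin@"$SWITCH" porterrshow      # any non-zero counters grew during the soak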

Hanging Zone

Hanging zones are purposeless zones residing in the zoning configuration. A zone in which all initiators or all targets are inactive is considered a hanging zone.

There is no specific command to list hanging zones in the fabric; we have to use SAN Health to identify them. To check whether all the aliases of a zone are active, use the command mentioned below:

#zonevalidate "zonename"

In the result of the above command, there will be a ‘*’ mark at the end of each active alias in the zone.

Rx Tx Voltage/Power Issue

The Rx & Tx voltage and power of an SFP can be validated only if there is connectivity on the SFP, with its port in an online state.

The command below will display the voltage, power and all other details of the SFP.

#sfpshow port_number -f

__________________________________________________________________________________________________

Please feel free to reach out to us in case of any queries. Also, please give us your feedback; it will help us to improve.

Troubleshooting NFS Mount Issues in Linux

Network File System (NFS) is a protocol which allows a system to share directories and files with others over a network. By using NFS, users and programs can access files on remote systems almost as if they were local files.

This post covers how to mount a network share on the local system, the common issues you may face, and how to troubleshoot connectivity and configuration problems.

NFS Client Configuration

1. Install the required NFS packages if they are not already installed on the server

# rpm -qa | grep nfs-utils


# yum install nfs-utils

2. Use the mount command to mount exported file systems. Syntax for the command:

# mount -t nfs -o options host:/remote/export /local/directory 


Example :

# mount -t nfs -o ro,nosuid remote_host:/home /remote_home

This example does the following:
– It mounts /home from the remote host (remote_host) on the local mount point /remote_home.
– The file system is mounted read-only, and users are prevented from running setuid programs (the -o ro,nosuid options).

3. Update /etc/fstab to mount NFS shares at boot time.

# vi /etc/fstab


remote_host:/home    /remote_home    nfs    ro,nosuid    0 0

Troubleshooting NFS connectivity issues

Depending on the client and the issue, a wide range of error messages can appear while trying to mount an NFS share; the mount might also take forever, or even complete normally but leave the mount point empty. Below are the common errors we face on the client side while mounting NFS/NAS shares.


Error 1: 

mount: mount to NFS server 'NFS-Server' failed: System Error: No route to host.

This can be caused by the RPC messages being filtered by the host firewall, the client firewall, or a network switch. Verify whether a firewall is active and whether NFS traffic is allowed. Normally NFS uses port 2049.

  1. Check the showmount output of the server to verify the filesystem has been exported for the client IP:
# showmount -e <NFS server IP> | grep -i <client IP>

2. Check the port connectivity of the NFS server using telnet:

# telnet <NFS server IP> 2049
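Putting these checks together, here is a minimal client-side sketch (the server IP is an example) to narrow down where the connectivity breaks:

#!/bin/bash
# Sketch: basic client-side NFS connectivity checks; server IP is an example
SERVER=192.168.1.10

showmount -e "$SERVER"        # is the export visible to this client?
rpcinfo -p "$SERVER"          # are portmapper/NFS services registered on the server?
timeout 5 bash -c "</dev/tcp/$SERVER/2049" \
  && echo "port 2049 reachable" \
  || echo "port 2049 blocked - check firewalls"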

 

Error 2:

mount_nfs: can't mount / from 1.2.3.4 onto /mnt: RPC prog. not avail

Error: "mount clntudp_create: RPC: Port mapper failure - RPC: Unable to receive"

The Linux NFS implementation requires that both the NFS service and the portmapper (RPC) service be running on both the client and the server. Check it like this:


# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    ...

# systemctl status rpcbind
● rpcbind.service - RPC bind service
   Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
   Active: active (running) since Fri 2018-05-18 12:39:15 IST; 2s ago
  Process: 15222 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 15223 (rpcbind)
   CGroup: /system.slice/rpcbind.service
           └─15223 /sbin/rpcbind -w

May 18 12:39:15 nfsserver systemd[1]: Starting RPC bind service...
May 18 12:39:15 nfsserver systemd[1]: Started RPC bind service.

If not, start it with the command given below.

# systemctl start rpcbind

——————- advertisements ——————-  

———————————————————

Error 3: 

Error: “NFS Stale File Handle”

Unlike traditional Linux file systems that allow an application to access an open file even if the file has been deleted using unlink or rm, NFS does not support this feature. An NFS file is deleted immediately. Any program which attempts to do further I/O on the deleted file will receive the “NFS Stale File Handle” error. For example, if your current working directory is an NFS directory and is deleted, you will see this error at the next shell prompt.

To refresh the client’s state with that of the server, you may do a lazy unmount of the mount point and remount it:

# umount -l /mnt/mount_point

or kill the processes which reference the mounted file system:

# fuser -k <mounted-filesystem>


Error 4:

Error: "Access Denied" or "Permission Denied"

Check the export permissions for the NFS file system. You can do this from the client:

# showmount -e server_name

 

Error 5:

Error: "rpc mount export: RPC: Timed out"

Unable to access file system at [NFS SERVER]: rpc mount export: RPC: Timed out. This is caused by a DNS name resolution issue. NFS (RPC) needs reverse name resolution; if the NFS server or client cannot resolve their names, this error occurs. If you get this error message, check the DNS configuration and the /etc/hosts configuration.
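As a quick sketch, the forward and reverse resolution checks can be done from the client like this (the host name and IP are examples):

# Forward and reverse lookups via the system resolver; names/IP are examples
getent hosts nfsserver.example.com     # forward lookup of the server name
getent hosts 192.168.1.10              # reverse lookup of the server IP
cat /etc/resolv.conf                   # which DNS servers are configured
grep nfsserver /etc/hosts              # any static entries overriding DNS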

 

Hope we have covered almost all the regular errors and the steps to solve them. Please share your thoughts in the comments section. If you want us to add any additional issue/resolution, kindly let us know.

Thanks for reading..!

Triggering a Jenkins job using API call and passing parameters

Jenkins is one of the important tools in DevOps, and much of the time we need to execute Jenkins jobs via remote REST API calls. Jobs can be either parameterized or non-parameterized; a parameterized job needs certain inputs from the user for execution. Here we will discuss how to trigger both types of jobs using the REST API.

We will discuss the following steps.

  1. Setting up a Jenkins job to respond to REST API calls
  2. Finding out the API URL for the job
  3. Handling authentication
  4. Building the REST API request
  5. How to trigger a non-parameterized job
  6. Triggering a parameterized job by sending values in the URL
  7. Triggering a parameterized job by sending values in a JSON file

Introduction to Jenkins API

[From Jenkins Documentation] “Jenkins is the market leading continuous integration system, originally created by Kohsuke Kawaguchi. This API makes Jenkins even easier to use by providing an easy to use conventional python interface.”


Jenkins provides a rich set of REST based APIs.

Setting a Jenkins Job to respond to REST API

The REST API trigger can be enabled on a per-job basis. To enable it for a job, navigate to Your JobName -> Configure -> Build Triggers tab and check ‘Trigger builds remotely’.

Find out the API URL for the Job

Once you enable the check box to trigger builds remotely, Jenkins shows you the URL to access the particular job and gives an option to provide an API token for the build. Say my Jenkins server URL is 10.10.10.100:8080 and my job name is ‘test-job’; then the URLs will be as follows:

'http://10.10.10.100:8080/job/test-job/build?token=MyTestAPIToken' -> for a non-parameterized build

'http://10.10.10.100:8080/job/test-job/buildWithParameters?token=MyTestAPIToken' -> for a parameterized build


Handling Authentication

Jenkins uses a combination of user credential based authentication and API token authentication. We can set a token for each build as shown above. In user credential authentication, you can pass either username+password or username+token. To get a token for your username, log in with your user account, navigate to Manage Jenkins -> Manage Users -> select your user name -> click Add New Token -> give the token a name -> click Generate. This will display a token for your user. Copy this token and save it in a safe place, as the token cannot be recovered later.

Building REST API request

There are two steps involved in making a successful API request. The first step is to send an authentication request and get the CRUMB variable; this crumb data has to be sent as a header on further API requests, and Jenkins uses it to prevent cross-site request forgery. The second is the actual job request, where you specify the job name and the parameters for the job. Following are examples of getting the CRUMB data using curl.


Getting CRUMB data:

Format: crumb=$(curl -vvv -u "username:password" -s 'http://jenkinsurl/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')

Example using a password:

crumb=$(curl -vvv -u "apiuser:[email protected]" -s 'http://10.10.10.100:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')

Example using a user token:

crumb=$(curl -vvv -u "apiuser:1104fbd9d00f4e9e0240365c20a358c2b7" -s 'http://10.10.10.100:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')

Triggering a Non-parameterized Job :

Triggering a non-parameterized job is easy, as there is no need to send any additional data for the build. Below is an example API request. Assume we have got the crumb data from the step above.

curl -H "$crumb" --user apiuser:1104fbd9d00f4e9e0240365c20a358c2b7 -X POST http://10.10.10.100:8080/job/test-job/build?token=MyTestAPIToken

Here ‘test-job’ is the name of my job and ‘MyTestAPIToken’ is the token keyword which I have set manually on the job configuration page. Refer to the ‘Find out the API URL for the Job’ section above for setting up the token keyword.


How to create a Parameterized Job:

Consider a Jenkins job where I ask for user inputs before executing the job; this type of job is called a parameterized job. We can enable parameter requests by checking the ‘This project is parameterized’ option under the General tab. Here I am enabling the parameterized option for the job named ‘test-job’ and adding two string parameters, ‘message1’ and ‘message2’.

Click on the job name -> Configure -> General tab -> enable ‘This project is parameterized’ -> click Add Parameter and select ‘String Parameter’ from the drop-down box. You can see multiple input parameter types which you can use as per your requirement.

In the window that appears, enter the name as message1 and give a description. Click ‘Add Parameter’ and repeat the steps for ‘message2’ as well.


Execute the job by selecting your job name and clicking ‘Build with Parameters’. This will prompt for user input before initiating the build. You can use the data provided by the user inside your build bash script or pipeline steps using the ‘$message1’ format.

Eg: echo Message 1 to the user $message1

echo Message 2 to the user $message2

 

Triggering a Parameterized Job:

You can trigger a parameterized job using an API call and pass values to it. Values can be passed in URL encoded format, or as part of a JSON file if you have many parameters.

Passing parameters using URL encoded :

Step 1: Collect the CRUMB data 


See the above section ‘Building REST API request’

Step 2: Send the parameters in URL

curl -v -H "$crumb" --user apiuser:apiuser -X POST 'http://10.10.10.100:8080/job/testjob/buildWithParameters?token=MyTestAPIToken&message1=hai&message2=hello'

Note: message1 and message2 are the names of the parameters; please see above.

Passing parameters using URL encoded JSON format:

Step 1: Collect the crumb data 

See the above section ‘Building REST API request’

Step 2: Send the parameters in URL encoded Json format 

curl -v --user apiuser:apiuser -X POST http://10.10.10.100:8080/job/testjob/build --data token=MyTestAPIToken --data-urlencode json='{"parameter":[{"name":"message1","value":"hai"},{"name":"message2","value":"hello"}]}'

 

Passing parameters using file:

Step 1: create a JSON file with all the parameters in the following format 

[[email protected] ~]# cat testapi.json
json={
"parameter":[{"name":"message1","value":"hai"},{"name":"message2","value":"hello"}]
}
[[email protected] ~]#

Step 2: Collect the crumb data 

See the above section ‘Building REST API request’

Step 3: Send the API request by specifying the file name.

curl -v --user apiuser:apiuser -X POST http://10.10.10.100:8080/job/testjob/build --data "@testapi.json" -H "Accept: application/json"
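For convenience, here is a hedged end-to-end sketch stitching the above steps together, using the example server URL, credentials, token and parameter names from this post:

#!/bin/bash
# Sketch: fetch the crumb, then trigger the parameterized job in one go.
# Server URL, user, token, job and parameter names are this post's examples.
JENKINS=http://10.10.10.100:8080
AUTH="apiuser:1104fbd9d00f4e9e0240365c20a358c2b7"

# Step 1: get the crumb header (returns something like Jenkins-Crumb:xxxx)
crumb=$(curl -s -u "$AUTH" \
  "$JENKINS/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)")

# Step 2: trigger the job, passing the crumb and the build parameters
curl -s -u "$AUTH" -H "$crumb" -X POST \
  "$JENKINS/job/test-job/buildWithParameters?token=MyTestAPIToken&message1=hai&message2=hello"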

 

Hope this helped you. Share your queries/feedback in the comments section below.

LINUX- Active Directory Integration

Most organisations use Active Directory Domain Services for user administration and management. Like Windows machines, Linux servers can also be authenticated and managed via Active Directory. In this tutorial, we describe how to join a Linux server to an Active Directory domain.

Environment Prerequisites


  • Microsoft Windows Active Directory.
  • Linux host – RHEL
  • The below packages need to be installed on the Linux host
  • Samba (version 3):
    • samba3x
    • samba3x-client
    • samba3x-winbind
    • samba3x-common
    • And  packages that might be needed to meet dependencies
  • Kerberos:
    • krb5-workstation
    • krb5-libs
    • And packages that might be needed to meet dependencies
  • PAM:
    • pam_krb5
  • NTP:
    • ntp


Configuration

This section describes the technical configuration of adding a Linux host as a member of a Microsoft Windows Active Directory domain. The technical steps are below.

1. Update the FQDN in /etc/hosts

It’s highly recommended to update /etc/hosts with the Active Directory FQDN, so that if something happens to DNS, the system can still resolve it.
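A minimal sketch of such entries, using the host names from the next step (the IP addresses and domain are illustrative):

# /etc/hosts - example entries; addresses and domain are illustrative
192.168.1.5     adserver.example.com    adserver    # AD domain controller
192.168.1.20    master.example.com      master      # this RHEL host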

2. Update the host name in /etc/sysconfig/network

Here "master" is the RHEL host name and "ADserver" is the ADDS (Active Directory Domain Services) server name.

3. Update the DNS settings in /etc/resolv.conf

Set the system’s search domain and point it to the AD DNS server in /etc/resolv.conf.
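A hedged example of what that can look like (the domain and server IP are illustrative):

# /etc/resolv.conf - example; domain and server IP are illustrative
search example.com
nameserver 192.168.1.5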

4. Synchronise the time via /etc/ntp.conf

It is mandatory to have time synchronization between the domain server and its clients, as Kerberos authentication is sensitive to clock skew. To achieve this, edit the NTP server details in ntp.conf.
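For example, pointing the host at the domain controller for time (the server name is illustrative), then restarting the service:

# /etc/ntp.conf - example; server name is illustrative
server adserver.example.com iburst

# systemctl restart ntpd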


5. Update the Samba and Kerberos configuration using authconfig-tui

Check that the necessary packages are installed, and back up the below configuration files:

/etc/krb5.conf

/etc/samba/smb.conf

Execute the command authconfig-tui. You will get the text user interface below. Fill in the fields as shown.

Once you have checked the necessary fields mentioned above, click Next.


Update the Kerberos settings as per your environment and click Next.

Modify the Samba settings and click OK.

Verify the configuration

Validate and update the additional information in the Kerberos and Samba configuration files.

  1. Verify /etc/krb5.conf
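For reference, a minimal sketch of the relevant krb5.conf entries (the realm and KDC names are illustrative; authconfig-tui fills in most of this for you):

# /etc/krb5.conf - example; realm and server names are illustrative
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = adserver.example.com
        admin_server = adserver.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM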

2. Update /etc/samba/smb.conf for ID management

Update the idmap config range as below, as well as the backend connection as rid. This is to keep the same UID for users across the domain. Please insert these lines if they are not present.
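A hedged sketch of what those [global] lines can look like (the realm, workgroup and ranges are illustrative and must match your domain):

# /etc/samba/smb.conf [global] - example; names and ranges are illustrative
security = ads
realm = EXAMPLE.COM
workgroup = EXAMPLE
idmap config * : backend = tdb
idmap config * : range = 10000-99999
idmap config EXAMPLE : backend = rid
idmap config EXAMPLE : range = 100000-9999999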


3. Verify /etc/nsswitch.conf

In order to tell the system to use winbind for authentication, add winbind to the passwd and group entries in /etc/nsswitch.conf, as below, if they are not already updated.
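The relevant lines end up looking like this:

# /etc/nsswitch.conf - relevant entries
passwd:     files winbind
group:      files winbind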

Join the server to the domain

To join the server to the domain, under a specific OU, use the below command:

#net ads join createcomputer=Datacenter-FI/Linux_Servers -U <admin id>

Replace the OU names according to your environment (Datacenter-FI/Linux_Servers is based on my test environment). You should already have an admin ID created in AD for joining the computer.

Restart the service

Once joined to the domain, restart the winbind service:

#systemctl restart winbind


Restrict Access only to a specific AD group

Restricting access to the server to a specific AD group is possible by editing the file /etc/security/pam_winbind.conf.

 

Edit the line require_membership_of and add the SIDs of the groups which need access to this server, comma separated.
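A sketch of that line (the SID is an illustrative placeholder; use the real SIDs of your AD groups):

# /etc/security/pam_winbind.conf - example; SID is a placeholder
[global]
require_membership_of = S-1-5-21-111111111-222222222-333333333-1234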

Enable the home directory on first login

Enable oddjobd to create the home directory automatically at the first login, with default permissions of 700:

# authconfig --enablemkhomedir --update

Verify Your Access

We have completed the AD integration on the server. Now test your access with your AD ID and password.

e.g. login: [email protected], password: your AD password.

Hope this helps you. Please leave your queries and suggestions in the comments section below.

Top 50 CISCO ACI interview questions & answers

Cisco ACI is part of the Software Defined Networking (SDN) product portfolio from Cisco. It is an emerging technology for data center build-ups and a disruptive technology for traditional networking. This question and answer guide will help you understand Cisco ACI from the basics to an advanced level, and give you the confidence to tackle interviews with a positive result. You can download a PDF of the 50 Q&A from here by contributing a small amount of money for our effort.


1. What is Cisco ACI?
Cisco ACI, the industry-leading software-defined networking solution, facilitates application agility and data center automation with two important concepts from SDN: overlays and centralized control. ACI is a well-defined architecture with centralized automation and policy-driven application profiles. ACI uses a centralized controller called the Application Policy Infrastructure Controller (APIC); it is the controller that creates application policies for the data center infrastructure.

2. What are the three components of the ACI architecture?
Application Network Profile (ANP): a collection of endpoint groups (EPGs), their connections, and the policies that define those connections.
Application Policy Infrastructure Controller (APIC): a centralized software controller that manages downstream switches and acts as the management plane.
ACI fabric: the interconnection of spine and leaf switches. In the ACI world the spines and leaves are Cisco Nexus 9000 Series Switches (N9k), and they act as the control and data plane of ACI, running a rewritten version of NX-OS in ACI mode.

3. Describe the ACI fabric connection terminology.
• One or more spine switches should be connected to each leaf; supported models are Cisco Nexus 9336PQ, 9504, 9508, or 9516 switches.
• One or more leaf switches should be connected to the endpoints and the APIC cluster; supported models are Cisco Nexus 93128TX, 9332PQ, 9372PX, 9372PX-E, 9372TX, 9396PX, 9396TX, etc.
• Spine switches can be connected to leaf switches but not to each other.
• Leaf switches can be connected only to spine switches and endpoint devices, including APIC devices; this means the APICs are connected only to leaf switches.
• ACI switches do not run spanning tree.
• A minimum of 3 APIC controllers is required in an ACI fabric.
• A maximum of 5 APICs can be used.
• A maximum of 6 spine switches can be used.
• A maximum of 200 leaf switches can be used.

4. What is the use of the Application Policy Infrastructure Controller (APIC) in the ACI fabric?
This network controller is responsible for provisioning policies to the physical and virtual devices that belong to an ACI fabric. A cluster of at least three controllers is used. Following are the main APIC features.

  • Application and topology monitoring and troubleshooting
  • APIC shows the physical and logical topology (who is connected to whom)
  • Third-party integration (Layer 4 through Layer 7 [L4-L7] services & VMware vCenter/ vShield)
  • Image management (spine and leaf)
  • Cisco ACI inventory and configuration
  • Implementation on a distributed framework across a cluster of appliances
  • Health scores for critical managed objects (tenants, application profiles, switches, etc.)
  • Fault, event, and performance management
  • Cisco Application Virtual Switch (AVS), which can be used as a virtual leaf switch

5. How does Cisco ACI differ from other SDN controllers?
An open SDN architecture separates the control plane and data plane: the control plane resides on the central controller and the data plane resides on the switches. If the switches lose connection to the controller, they won't function for new connections or apply traffic policies. In the Cisco ACI architecture, the APIC is not the control plane; rather, the switches still hold the control plane and data plane and can function properly even if the controller is down.

6. What are the different object model implementations in ACI?
Within the ACI object model, there are essentially three stages of implementation: the Logical Model, the Resolved Model, and the Concrete Model.
Logical Model: the interface for the system. Administrators interact with the logical model through the API, CLI, or GUI. This is the policy layer, which includes the endpoint configuration on the controller. Changes to the logical model are pushed down to the concrete model, which becomes the hardware and software configuration.
Resolved Model: the abstract model expression that the APIC resolves from the logical model. This is essentially the set of elemental configuration components that are delivered to the physical infrastructure when the policy must be executed (such as when an endpoint connects to a leaf).
Concrete Model: the actual in-state configuration delivered to each individual fabric member, based on the resolved model and the endpoints attached to the fabric. This includes the actual device configuration and resides on the fabric (spines and leaves).

7. What are the policy layer and the concrete layer in the ACI model?
The concrete layer is the ACI fabric, and the policy layer is the controllers.

8. What do you mean by Tenant?
Basically, a Tenant (fvTenant) is a logical container for application policies, used to isolate switching and routing functions. A tenant represents a unit of isolation from a policy perspective, but it does not represent a private network. Tenants can represent a customer in a service provider setting, an organisation or domain in an enterprise setting, or just a convenient grouping of policies.
Four types of tenant are available:

  1. User
  2. Common
  3. Management
  4. Infra

9. What is the difference between the management tenant and the infrastructure tenant?
Management Tenant: used for infrastructure discovery and for all communication/integration with virtual machine controllers. It has a separate out-of-band (OOB) address space for APIC-to-fabric communication, which is used to connect all fabric management interfaces.
Infrastructure Tenant: governs the operation of fabric resources, such as allocating VXLAN overlays, and allows the fabric administrator to deploy selected shared services to tenants.

10. What do you mean by Context/VRF in ACI?
The top-level network construct within an ACI tenant is the VRF, or Context. It is called the tenant network and appears as ‘private network’ in the ACI GUI. Following are the important points about VRFs:
• A VRF defines a Layer 3 address domain.
• One or more bridge domains can be associated with a VRF.
• All of the endpoints within the Layer 3 domain (VRF) must have unique IP addresses, because it is possible to forward packets directly between these devices if the policy allows it.
• A tenant can contain multiple VRFs.

Below are some of the additional questions available in the PDF:

  • How are ARP and broadcast handled by ACI?
  • Why and when do you require a contract in the ACI fabric?
  • How to perform unicast routing in ACI?
  • In the fabric, which switch will act as the default gateway for a particular subnet?
  • How does Cisco ACI differentiate Layer 2 traffic and Layer 3 traffic?
  • How do VLANs work in Cisco ACI?
  • How can you configure trunk and access ports in ACI?
  • What is micro-segmentation and how do you configure it?
  • How to configure inter-VRF and inter-tenant communication?
  • How can you integrate Cisco ACI with VMware?
  • Explain the ACI fabric discovery process.
  • Explain traffic flow lookup on the ACI fabric.

Interested in the detailed answers to the above questions, along with 30 other exclusive, commonly asked interview questions? You can download a PDF copy of the 50 interview Q&A from here by contributing small perks to support our efforts. Please send an email to ‘[email protected]‘ for the PayPal payment option.


Hope you have enjoyed reading. Kindly share your feedback/suggestions in the comments section. For Q&A posts on other topics, please click here.

 

Ref:
https://www.sdxcentral.com/data-center/definitions/what-is-cisco-aci/

https://www.cisco.com/c/en_in/solutions/data-center-virtualization/application-centric-infrastructure/index.html

 

Introducing Beginner’s Forum TV

Introducing our YouTube channel, Beginner’s Forum TV: a new platform from the team, as promised, to share more of our content from the technology world.

We have been posting content here on the blog in different categories around the data center, including networking, servers, storage, cloud and a bit of programming. Our channel will have content from all these areas, but will not be limited to them. We will be adding more tech stuff: electronics and gadgets, application and web development, and anything technical of interest to you.


Subscribe (by clicking the above button) to Beginner’s Forum TV so that you will not miss any updates on our latest content.

 
Thank you for all your support, and keep it coming. Follow/subscribe to us, share our content, and keep sharing feedback (comments) as you always have.

Happy New Year

We wish all our readers, a very happy New Year 2019..!!

We had a wonderful year 2018, a year we achieved many milestones and were able to do some amazing stuff on and off our page here.

Now we are into this New Year and we hope to go even further. We assure you excellent and innovative content; we have ‘things’ planned for this New Year. 🙂

Once again we wish all our readers a Happy New Year, 2019..!

Keep reading..!
