Azure certifications are in high demand in the industry right now, and Azure Fundamentals (AZ-900) is the right starting point. You can see here how to get free Azure training and an exam voucher you can use for the certification.
In this series of posts, we are sharing our certification preparation notes with you. Instead of going through the detailed content available on the internet, you can refer to these short notes during your exam preparation.
[ Disclaimer : This is not complete training material for the certification. These are just short notes we captured from course curricula, intended to help readers with a final revision before appearing for the exam. We do not guarantee that you will pass the exam with this content alone. ]
Virtual machines : Emulate a computer system without dedicated hardware, running a guest operating system on shared hardware. Consumers can deploy multiple virtual machines on the same physical hardware as they need (subject to hardware limitations).
Containers : Containers serve as execution environments for applications without a guest operating system. A container packages the application and all of its dependencies. Example : Docker.
Serverless computing : Lets you build and run applications without worrying about the underlying server/host. The cloud provider runs the servers for you.
Cloud computing benefits
Cost-effective : The consumer doesn't have to buy and maintain hardware and infrastructure; the cloud provider offers pay-as-you-go pricing.
Scalable : Lets the consumer scale their environment (both scaling up and scaling out) as per demand.
Elastic : Based on need, the cloud can automatically allocate more resources and de-allocate them once the requirement is over.
Global : You can provision your resources in any region across the globe, with redundancy across regions.
Reliable : Reliability via redundancy, backups and disaster recovery solutions, all built in.
Secure : Both physical security (for the physical infrastructure) and digital security (proper authentication for data access) are assured.
CapEx and OpEx
CapEx : All the expenditure in (initially) setting up the environment; an upfront expense.
Examples include servers, storage, networking, data center infrastructure, technical staffing costs etc.
Benefits : A fixed expense, so the consumer can plan the budget in advance.
OpEx : With cloud computing, the consumer only has to manage operational expenses (the billing for the infra and services), which involve little or no upfront payment.
Benefits : You do not have to pay the full amount upfront.
Cloud deployment models
Private Cloud : Cloud environment within your data center. Complete control on the hardware/physical infrastructure and the physical security.
Public Cloud : Hardware is being managed completely by the cloud provider and the consumers use the required infra and services.
Hybrid Cloud : A combined model of private and public cloud models, adding the benefits of both the models to the consumer.
Types of cloud services
IaaS (Infrastructure as a Service) : Computing infrastructure for the consumer without owning any hardware. The consumer has maximum control of the infra in this model compared to the other service types.
PaaS (Platform as a Service) – For running/testing an application on the required platform without worrying about the underlying infrastructure.
SaaS (Software as a Service) – The consumer can avail software services from the cloud without being concerned about the infra or the platform running it. Office 365 is an example.
Hope this section will help you in your certification journey. You can find the next section in this series here. For the complete series click here.
We are going through such a difficult situation right now, and the numbers that keep coming in are horrible. COVID-19, which started in a small district in China, has now spread almost everywhere (6 continents) around the globe.
Image courtesy : WHO
Let's not panic, but let's be more vigilant and careful. We have to fight this pandemic, together.
VMware vSAN is one of the leading enterprise-class software-defined storage solutions, from VMware. It helps leverage server-based storage for enterprise applications. The advantages, as you might already know: cost reduction, ease of administration and more.
In this post we are discussing one of the characteristics of vSAN: Fault Domains.
What ?
Fault Domains help an administrator design for the failure scenarios that may occur in a vSAN cluster. If a customer wants to avoid data inaccessibility during a chassis failure, a power failure in a rack etc., they can do so by setting up the right fault domains.
A minimum of 3 fault domains is required to enable this on a cluster.
How ?
In a vSAN cluster, writes are sent to multiple hosts/drives depending on the storage policy and its Failures To Tolerate (FTT) setting. With FTT=1, each write is sent to 2 hosts at the same time. Even if one of the hosts fails, the data remains accessible because a replica is available on the other host, and IO operations continue. We will discuss IO operations in vSAN in a separate post.
With a Fault Domain configuration, the replicas are placed in different fault domains. We can define all the hosts in the same rack as one fault domain, so that data and its replica are never on (hosts in) the same rack. The administrator can then plan maintenance activities at the rack level without any disruption to the services running on vSAN.
The same applies to chassis-level or any other level of protection: define the fault domains at the chassis level, and replicas will not reside in the same chassis.
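The sizing rule behind this can be sketched in a small shell helper. This is our own illustration of the standard vSAN mirroring arithmetic (FTT+1 replicas plus FTT witness components), not a vSAN tool:

```shell
# Minimum fault domains needed for RAID-1 mirroring with a given FTT:
# FTT+1 data replicas plus FTT witness components = 2*FTT + 1 domains.
min_fault_domains() {
  ftt=$1
  echo $(( 2 * ftt + 1 ))
}

min_fault_domains 1   # FTT=1 -> 3 fault domains (the minimum mentioned above)
min_fault_domains 2   # FTT=2 -> 5 fault domains
```

So a cluster tolerating one rack failure needs at least three racks defined as fault domains.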
This write-up focuses on the basic troubleshooting commands used on Brocade SAN switches. For a better understanding of the commands, let us first look at the day-to-day operational challenges faced in a SAN fabric. Listed below are a few of the common operational errors:
Alias/port went offline
Bottlenecks
Port error
Hanging zones
Rx Tx Voltage/Power Issue
Let's read in brief about how to identify these errors and how to troubleshoot them.
Alias/port went offline
This error is recorded due to the following reasons:
Reboot/ Shutdown of the host
Faulty cable
Issue in the HBA card.
Thus, when a 'WWN/Alias went offline' event is recorded, use the below commands to identify which port went offline and when.
#fabriclog -s Lists the ports which went offline recently.
#fabriclog -s |grep -E "Port Index |GMT" Lists the ports which went offline earlier. Note: this will not help if a FOS upgrade or switch reboot was performed, as both activities clear the fabric log.
To find the zoning details from the WWN of a device, use the below command:
#alishow | grep wwn -B2 This lists the alias for the WWN.
Then use:
#zoneshow --alias Alias_Name This lists the zone name and its member aliases.
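When working offline from a saved switch session log, the same alias-to-zone lookup can be approximated with grep. A minimal sketch, assuming the zoneshow output was captured to a file; the zone and alias names below are made up:

```shell
# Sample zoneshow output captured from a switch session (hypothetical names)
cat > /tmp/zoneshow.txt <<'EOF'
 zone:  Z_hostA_array1
        hostA_hba0; array1_spa0
 zone:  Z_hostB_array1
        hostB_hba0; array1_spa1
EOF

# Find the zone that contains a given alias: print the matching member
# line plus the preceding "zone:" line.
grep -B1 'hostA_hba0' /tmp/zoneshow.txt
```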
Bottlenecks
There are many kinds of bottlenecks, but the ones prominent in a SAN fabric are latency bottlenecks and congestion bottlenecks.
A latency bottleneck occurs when a slow-drain device is connected to a port. Both initiator and target ports can report latency; no matter what kind of port it is, if a slow-drain device is attached there will be a bottleneck on that port.
A slow-drain device is a device with any (or all) of the below issues:
Unsupported firmware.
Hardware issues.
SFP which has a voltage or power issue.
A congestion bottleneck, on the other hand, occurs due to a high rate of data transfer on the port. In the next write-up we will discuss the causes of congestion bottlenecks in detail.
The commands used to identify latency as well as congestion bottleneck are:
#errdump
#mapsdb –show
If there is a latency or congestion bottleneck, it should be fixed by logging a support case with the server/storage hardware vendor.
Port errors
There are many kinds of port errors. Most of the time they are due to a bottleneck issue or a physical-layer issue. Bottleneck issues we have already addressed above; a physical-layer issue is either a cable issue or an SFP issue.
Below are the commands to identify the port errors:
#porterrshow This will list the error counters for all ports.
#porterrshow port_number
#porterrshow -i Port_Index Both of these will list the errors on a particular port.
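To spot problem ports quickly in a captured porterrshow output, a small awk filter helps. This is a sketch over a simplified, made-up sample; real porterrshow output has more columns:

```shell
# Simplified sample of porterrshow-style counters (hypothetical values)
cat > /tmp/porterr.txt <<'EOF'
port frames enc_out crc
0:   120344 0       0
1:   998221 2       15
2:   555120 0       0
EOF

# Flag any port whose crc counter (4th column) is non-zero
awk 'NR > 1 && $4 + 0 > 0 { print "port " $1 " crc errors: " $4 }' /tmp/porterr.txt
```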
In case errors are listed, clear the counters using the below commands before troubleshooting, and observe whether they increment again.
#statsclear
#slotstatsclear
#portstatsclear port_number
Apart from this, there are other commands to display the current data transfer rate of a port or all ports, such as:
#portperfshow
#portperfshow port_number
Hanging Zone
Hanging zones are purposeless zones residing in the zoning configuration. A zone in which all initiators or all targets are inactive is considered a hanging zone.
There is no specific command to list the hanging zones in a fabric; we have to use a SAN Health report to identify them. To check whether all the aliases of a zone are active, use the command below:
#zonevalidate “zonename”
In the output of the above command, there will be a '*' mark at the end of each active alias in the zone.
Rx Tx Voltage/Power Issue
The Rx & Tx voltage and power of an SFP can be validated only if there is connectivity on the SFP and its port is online.
The sfpshow command (#sfpshow port_number) displays the voltage, power and all the other details of the SFP.
Here we are going to discuss a login error seen where LDAP authentication is set up through the company's Active Directory server. After logging in to a server with an LDAP user id and password, we may sometimes get an error like "cannot find name for group ID 20103039".
This error can be easily resolved by clearing the SSSD cache on the client server.
What is SSSD Cache
SSSD caches the results of user and credential lookups from these remote providers, so that if the identity provider goes offline the credentials are still available and users can still log in. This improves performance and facilitates scalability, with a single user able to log in on many systems rather than using local accounts everywhere.
The cached results can be problematic if the stored records become stale and are no longer in sync with the identity provider. Hence, clearing the cache files will resolve such issues.
How to clear the Cache
Here we will discuss a couple of methods to clear the cache files.
1. The sss_cache Tool
The cache purge utility, sss_cache, invalidates records in the SSSD cache for a user, a domain, or a group. Invalidating the current records forces the cache to retrieve the updated records from the identity provider, so changes are picked up quickly. To invalidate all cached entries:
# sss_cache -E
2. Deleting the Cache Files
SSSD stores its cache files in the /var/lib/sss/db/ directory, so it is also possible to clear the cache by simply deleting the corresponding cache files.
Before deleting the files, it is important to stop the sssd service.
# systemctl stop sssd
After this, remove the cache files as below:
# rm -rf /var/lib/sss/db/*
Once removed, start the sssd service again:
# systemctl restart sssd
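The three steps above can be wrapped in a small helper. This is our own wrapper, not an sssd tool; with DRYRUN=1 it only prints the commands instead of running them (running for real needs root):

```shell
# Clear the SSSD cache: stop the service, remove cache files, start again.
# Set DRYRUN=1 to print the commands instead of executing them.
clear_sssd_cache() {
  run() {
    if [ "${DRYRUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi
  }
  run systemctl stop sssd
  run rm -rf /var/lib/sss/db/*
  run systemctl start sssd
}

DRYRUN=1
clear_sssd_cache
```

Stopping sssd before deleting the files matters: the daemon holds the cache databases open and may rewrite them on shutdown.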
SSSD should now start up correctly with an empty cache. Any user login will now go directly to LDAP for authentication and then be cached locally afterwards, so the login errors should be cleared. Hope this helps you. Please share your suggestions/feedback in the comments section.
Network File System (NFS) is a protocol which allows a system to share directories and files with others over a network. By using NFS, users and programs can access files on remote systems almost as if they were local files.
This post covers how to mount a network share on a local system, the common issues you may hit, and how to troubleshoot connectivity and configuration issues in general.
1. Install the required nfs packages, if not already installed on the server.
# rpm -qa | grep nfs-utils
# yum install nfs-utils
2. Use the mount command to mount exported file systems. Syntax for the command:
# mount -t nfs -o options host:/remote/export /local/directory
Example :
# mount -t nfs -o ro,nosuid remote_host:/home /remote_home
This example does the following:
– It mounts /home from the remote host (remote_host) on the local mount point /remote_home.
– The file system is mounted read-only, and users are prevented from running setuid programs (the -o ro,nosuid options).
3. Update /etc/fstab to mount NFS shares at boot time.
# vi /etc/fstab
remote_host:/home /remote_home nfs ro,nosuid 0 0
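The six whitespace-separated fields of that fstab entry can be broken down with a quick awk one-liner, a sketch that just labels each field of the example line:

```shell
# Label the fields of an NFS /etc/fstab entry (the example line from above)
line='remote_host:/home /remote_home nfs ro,nosuid 0 0'
echo "$line" | awk '{ printf "device=%s mountpoint=%s fstype=%s options=%s dump=%s fsck=%s\n", $1, $2, $3, $4, $5, $6 }'
# -> device=remote_host:/home mountpoint=/remote_home fstype=nfs options=ro,nosuid dump=0 fsck=0
```

The last two fields (dump flag and fsck pass number) are normally 0 0 for network file systems.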
Troubleshooting NFS connectivity issues
Depending on the client and the issue, a wide range of error messages can appear while trying to mount an NFS share; the mount might also take forever, or even succeed but leave an empty mount point. Below are the common errors we face on the client side while mounting NFS/NAS shares.
Error 1:
mount: mount to NFS server 'NFS-Server' failed: System Error: No route to host.
This can be caused by RPC messages being filtered by the host firewall, the client firewall, or a network switch. Verify whether a firewall is active and whether NFS traffic is allowed. Normally NFS uses port 2049.
Check the showmount output on the server to verify the filesystem has been exported for the client IP:
# showmount -e <NFS server IP> | grep -i <client IP>
Check port connectivity to the NFS server using telnet:
# telnet <NFS server IP> 2049
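If telnet is not installed on the client, bash's built-in /dev/tcp can perform the same reachability check. A minimal sketch; the server name below is a placeholder:

```shell
# check_port HOST PORT: reports whether a TCP connection can be opened.
check_port() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "port $2 on $1 is open"
  else
    echo "port $2 on $1 is closed or filtered"
  fi
}

check_port nfs-server.example.com 2049   # placeholder NFS server name
```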
Error 2:
mount_nfs: can't mount / from 1.2.3.4 onto /mnt: RPC prog. not avail
Error: “mount clntudp_create: RPC: Port mapper failure – RPC: Unable to receive”
The Linux NFS implementation requires that both the NFS service and the portmapper (RPC) service be running on both the client and the server. Check it like this:
May 18 12:39:15 nfsserver systemd[1]: Starting RPC bind service...
May 18 12:39:15 nfsserver systemd[1]: Started RPC bind service.
If not, start it with the command given below.
# systemctl start rpcbind
Error 3:
Error: “NFS Stale File Handle”
Unlike traditional Linux file systems that allow an application to access an open file even if the file has been deleted using unlink or rm, NFS does not support this feature. An NFS file is deleted immediately. Any program which attempts to do further I/O on the deleted file will receive the “NFS Stale File Handle” error. For example, if your current working directory is an NFS directory and is deleted, you will see this error at the next shell prompt.
To refresh the client's state with that of the server, you may lazily unmount the mount point and remount it:
# umount -l /mnt/mount_point
or kill the process, which references the mounted file system:
# fuser -k [mounted-filesystem]
Error 4:
Error: “Access Denied” or “Permission Denied”
Check the export permissions for the NFS file system. You can do this from the client:
# showmount -e server_name
Error 5:
Error: “rpc mount export: RPC: Timed out”
Unable to access file system at [NFS SERVER]: rpc mount export: RPC: Timed out
This is caused by a DNS name-resolution issue. NFS (RPC) needs reverse name resolution; if the NFS server or client cannot resolve its name, this error occurs. If you get this error message, check the DNS configuration and /etc/hosts.
Hope we have covered almost all the regular errors and the steps for solving them. Please share your thoughts in the comments section. If you want us to add any additional issue resolutions, kindly let us know.
Jenkins is one of the important tools in DevOps, and we often need to execute Jenkins jobs using remote REST API calls. Jobs can be either parameterized or non-parameterized; a parameterized job needs certain inputs from the user for execution. Here we will discuss how to call both types of jobs using the REST API:
Triggering a parameterized job by sending values in the URL
Triggering a parameterized job by sending values in a JSON file
Introduction to Jenkins API
[From Jenkins Documentation] “Jenkins is the market leading continuous integration system, originally created by Kohsuke Kawaguchi. This API makes Jenkins even easier to use by providing an easy to use conventional python interface.”
Jenkins provides a rich set of REST-based APIs.
Setting up a Jenkins Job to respond to the REST API
The REST API trigger can be enabled on a per-job basis. To enable it for a job, navigate to Your JobName -> Configure -> Build Triggers tab and check 'Trigger builds remotely'.
Find out the API URL for the Job
Once you enable the check box to trigger builds remotely, Jenkins shows the URL to access that particular job and gives an option to set an API token for the build. Say my Jenkins server URL is 10.10.10.100:8080 and my job name is 'test-job'; then the URLs will be as follows:
http://10.10.10.100:8080/job/test-job/build?token=MyTestAPIToken -> For a non-parameterized build
http://10.10.10.100:8080/job/test-job/buildWithParameters?token=MyTestAPIToken -> For a parameterized build
Handling Authentication
Jenkins uses a combination of user-credential-based authentication and API-token authentication. We can set a token for each job as shown above. For user-credential authentication, you can pass either username+password or username+token. To get a token for your username, log in with your user account, navigate to Manage Jenkins -> Manage Users -> select the user name -> click Add New Token -> give a token name -> click Generate. This displays the token for your user. Copy this token and save it somewhere safe, as it cannot be recovered later.
Building REST API request
There are two steps involved in making a successful API request. The first is to send an authentication request and get the CRUMB variable; this crumb has to be sent as a header on further API requests, as Jenkins uses it to prevent Cross-Site Request Forgery. The second is the actual job request, where you specify the job name and its parameters. Following is an example of getting the CRUMB data using a curl query.
Getting CRUMB data :
Format : crumb=$(curl -vvv -u "username:password" -s 'http://jenkinsurl/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
Triggering a non-parameterized job is easy, as there is no need to send any additional data for the build. Below is an example API request (assume we already have the crumb data from the step above).
curl -H "$crumb" --user apiuser:1104fbd9d00f4e9e0240365c20a358c2b7 -X POST http://10.10.10.100:8080/job/test-job/build?token=MyTestAPIToken
Here 'test-job' is the name of my job and 'MyTestAPIToken' is the token keyword I set manually on the job's Configure page. Refer to the 'Find out the API URL for the Job' section above for setting up the token keyword.
How to create a Parameterized Job:
Consider a Jenkins job that asks for user inputs before executing; these are called parameterized jobs. We can enable parameter requests by checking the 'This project is parameterized' option under the General tab. Here I am enabling the parameterized option for the job named 'test-job' and adding two string parameters, 'message1' and 'message2'.
Click on the job name -> Configure -> General tab -> enable 'This project is parameterized' -> click Add Parameter and select 'String Parameter' from the drop-down box. You will see multiple input parameter types which you can use as per your requirement.
In the window that appears, enter the name as message1 and give a description. Click 'Add Parameter' and repeat the steps for 'message2' as well.
Execute the job by selecting your job name and clicking 'Build with Parameters'. This will prompt for user input before initiating the build. You can use the data provided by the user inside your build bash script or pipeline steps using the '$message1' format.
Eg: echo Message 1 to the user $message1
echo Message 2 to the user $message2
Triggering a Parameterized Job:
You can trigger a parameterized job via an API call and pass values, either URL-encoded or as part of a JSON file if you have many parameters.
Passing parameters using URL encoded :
Step 1: Collect the CRUMB data
See the above section ‘Building REST API request’
Step 2: Send the parameters in URL
curl -v -H "$crumb" --user apiuser:apiuser -X POST 'http://10.10.10.100:8080/job/test-job/buildWithParameters?token=MyTestAPIToken&message1=hai&message2=hello'
Note : message1 and message2 are the names of the parameters defined above.
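Parameter values containing spaces or special characters must be percent-encoded before being placed in the query string. A minimal bash sketch of that encoding (the urlencode helper is our own, not a Jenkins tool):

```shell
# Percent-encode a string for safe use in a query string (bash-specific)
urlencode() {
  local s=$1 out= c i
  for (( i = 0; i < ${#s}; i++ )); do
    c=${s:i:1}
    case $c in
      [a-zA-Z0-9.~_-]) out+=$c ;;                 # unreserved characters pass through
      *) printf -v c '%%%02X' "'$c"; out+=$c ;;   # everything else becomes %XX
    esac
  done
  echo "$out"
}

msg=$(urlencode 'hai hello')
echo "message1=$msg"   # message1=hai%20hello
# The encoded value can then be appended to the buildWithParameters URL.
```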
Passing parameters using URL encoded JSON format:
Step 1: Collect the crumb data
See the above section ‘Building REST API request’
Step 2: Send the parameters in URL encoded Json format
Most organisations use Active Directory domain services for user administration and management. Like Windows machines, Linux servers can also authenticate and be managed via Active Directory. In this tutorial, we describe how to join a Linux server to an Active Directory domain.
The below packages need to be installed on the Linux host.
Samba (version 3):
samba3x
samba3x-client
samba3x-winbind
samba3x-common
And packages that might be needed to meet dependencies
Kerberos:
krb5-workstation
krb5-libs
And packages that might be needed to meet dependencies
PAM:
pam_krb5
NTP:
Configuration
This section describes the technical configuration for adding a Linux host as a member of a Microsoft Windows Active Directory domain. The technical steps are below.
1. Update the FQDN in /etc/hosts
It's highly recommended to update /etc/hosts with the Active Directory FQDN, so that if something happens to DNS the system can still resolve it.
2. Update the Host name – /etc/sysconfig/network
where "master" is the RHEL host name and "ADserver" is the ADDS (Active Directory Domain Services) server name.
3. Update the DNS – /etc/resolv.conf
Set the system’s search domain and point to the AD DNS server in /etc/resolv.conf
4. Synchronise the Time – /etc/ntp.conf
It's mandatory to have time synchronization between the domain server and its clients. To achieve this, edit the NTP server details in ntp.conf.
5. Update the Samba and krb configuration using authconfig-tui
Replace the OU names according to your environment (Datacenter-Fi/Linux_servers is based on my test environment). You should already have an admin ID created in AD to join the computer.
Cisco ACI is part of the Software Defined Networking (SDN) product portfolio from Cisco. Cisco ACI is an emerging technology for DC build-outs and a disruptive technology compared to traditional networking. This Q&A guide will help you understand Cisco ACI from the basics to an advanced level and give you the confidence to tackle interviews with a positive result. You can download a PDF of 50 Q&A from here by contributing a small amount of money for our effort.
1. What is Cisco ACI?
Cisco ACI, the industry-leading software-defined networking solution, facilitates application agility and data center automation with two important concepts from SDN: overlays and centralized control. ACI is a well-defined architecture with centralised automation and policy-driven application profiles. ACI uses a centralised controller called the Application Policy Infrastructure Controller (APIC); it is the controller that creates application policies for the data center infrastructure.
2. What are the three components of the ACI architecture?
Application Network Profile (ANP) – a collection of endpoint groups (EPGs), their connections, and the policies that define those connections.
Application Policy Infrastructure Controller (APIC) – a centralized software controller that manages the downstream switches and acts as the management plane.
ACI fabric – the interconnection of spine and leaf switches. In the ACI world the spines and leaves are Cisco Nexus 9000 Series switches (N9k); they act as the control and data planes of ACI, running a rewritten version of NX-OS in ACI mode.
3. Describe the ACI fabric connection terminology.
• One or more spine switches should be connected to each leaf; supported models are Cisco Nexus 9336PQ, 9504, 9508, or 9516 switches.
• One or more leaf switches should be connected to the endpoints and the APIC cluster; supported models are Cisco Nexus 93128TX, 9332PQ, 9372PX, 9372PX-E, 9372TX, 9396PX, or 9396TX etc.
• Spine switches can be connected to leaf switches but not to each other.
• Leaf switches can be connected only to spine switches and endpoint devices, including APIC devices; this means the APICs are connected only to leaf switches.
• ACI switches do not run spanning tree.
• A minimum of 3 APIC controllers is required in an ACI fabric.
• The maximum number of APICs is 5.
• The maximum number of spine switches is 6.
• The maximum number of leaf switches is 200.
4. What is the use of the Application Policy Infrastructure Controller (APIC) in the ACI fabric?
This is the network controller responsible for provisioning policies to the physical and virtual devices that belong to an ACI fabric. A cluster of a minimum of three controllers is used. Following are the main APIC features:
Application and topology monitoring and troubleshooting
APIC shows the physical and logical topology (who is connected to whom)
Implementation on a distributed framework across a cluster of appliances
Health scores for critical managed objects (tenants, application profiles, switches, etc.)
Fault, event, and performance management
Cisco Application Virtual Switch (AVS), which can be used as a virtual leaf switch
5. How does Cisco ACI differ from other SDN controllers?
An open SDN architecture separates the control plane and data plane: the control plane resides on the central controller and the data plane resides on the switches. If the switches lose connection to the controller, they cannot serve new connections or apply traffic policies. In the Cisco ACI architecture, the APIC is not the control plane; the switches still hold both the control plane and data plane, and can function properly if the controller is down.
6. What are the different object-model implementations in ACI?
Within the ACI object model, there are essentially three stages of implementation: the Logical Model, the Resolved Model, and the Concrete Model.
Logical Model : The logical model is the interface to the system. Administrators interact with the logical model through the API, CLI, or GUI. This is a policy layer which includes endpoint configuration on the controller. Changes to the logical model are pushed down to the concrete model, which becomes the hardware and software configuration.
Resolved Model : The resolved model is the abstract model expression that the APIC resolves from the logical model. It is essentially the elemental configuration components that are delivered to the physical infrastructure when the policy must be executed (such as when an endpoint connects to a leaf).
Concrete Model : The concrete model is the actual in-state configuration delivered to each individual fabric member, based on the resolved model and the endpoints attached to the fabric. It includes the actual device configuration and resides on the fabric (spines and leaves).
7. What are the policy layer and concrete layer in the ACI model?
The concrete layer is the ACI fabric, and the policy layer is the controllers.
8. What do you mean by a Tenant?
Basically a tenant (fvTenant) is a logical container for application policies, used to isolate switching and routing functions. A tenant represents a unit of isolation from a policy perspective, but it does not represent a private network. Tenants can represent a customer in a service-provider setting, an organisation or domain in an enterprise setting, or just a convenient grouping of policies. Four types of tenant are available:
User
Common
Management
Infra
9. What is the difference between the management tenant and the infrastructure tenant?
Management tenant : Used for infrastructure discovery and also for all communication/integration with virtual machine controllers. It has a separate Out Of Band (OOB) address space for APIC-to-fabric communication, used to reach all fabric management interfaces.
Infrastructure tenant : It governs the operation of fabric resources, such as allocating VXLAN overlays, and allows the fabric administrator to deploy selected shared services to tenants.
10. What do you mean by a Context/VRF in ACI?
The top-level network construct within an ACI tenant is the VRF, or context. It is also called the tenant network and appears as 'private network' in the ACI GUI. Following are the important points about VRFs:
• A VRF defines a Layer 3 address domain.
• One or more bridge domains can be associated with a VRF.
• All of the endpoints within the Layer 3 domain (VRF) must have unique IP addresses, because it is possible to forward packets directly between these devices if policy allows it.
• A tenant can contain multiple VRFs.
Below are some of the additional questions available on PDF
How are ARP and broadcast handled by ACI?
Why and when do you require a contract in the ACI fabric?
How do you perform unicast routing in ACI?
In the fabric, which switch will act as the default gateway for a particular subnet?
How does Cisco ACI differentiate Layer 2 traffic and Layer 3 traffic?
How does VLAN work in Cisco ACI?
How can you configure trunk and access ports in ACI?
What is micro-segmentation and how do you configure it?
How do you configure inter-VRF and inter-tenant communication?
How can you integrate Cisco ACI with VMware?
Explain the ACI fabric discovery process.
Explain traffic-flow lookup on the ACI fabric.
Interested in the detailed answers to the above questions, along with 30 other exclusive, commonly asked interview questions? You can download a PDF copy of the 50 interview Q&A from here by contributing a small perk to support our efforts. Please send an email to 'beginnersforum@gmail.com' for the PayPal payment option.
Hope you have enjoyed reading. Kindly share your feedback/suggestions in the comments section. For Q&A posts on other topics, please click here.
Introducing our YouTube channel – Beginner's Forum TV: a new platform from the team, as promised, to share more content from the technology world.
We have been posting content here on the blog in different categories around the data center, including networking, servers, storage, cloud and a bit of programming. Our channel will have content from all these areas, but will not be limited to them. We will be adding more tech stuff: electronics and gadgets, application and web development, and anything technical of interest to you.
Subscribe (by clicking the button above) to Beginner's Forum TV so that you will not miss any updates on our latest content.
Thank you for all your support, and keep it coming. Follow/subscribe, share our content and keep sharing feedback (comments) as you always have.