Beginner's Forum - Here to help the beginners (http://beginnersforum.net/blog)

Vembu BDR Suite v4.0 is now GA
http://beginnersforum.net/blog/2018/12/12/vembu-bdr-suite-v4-0-is-now-ga/
Wed, 12 Dec 2018 13:59:56 +0000

Vembu’s BDR Suite v4.0 is now GA.

Vembu recently announced the latest version, 4.0, and it is now Generally Available for customers. Vembu BDR 4.0 is currently available only for fresh installations; an upgrade package will soon be available for existing customers to move their environments to 4.0.

 

There are a lot of exciting new features in 4.0, including backup of VMs running on a Hyper-V cluster. You can click here to read more about the new features in Vembu BDR 4.0.

You can download the installer here for your environment today.

Also, Vembu is giving up to a 40% discount on their products until the 24th of December. Hurry and enjoy the Thanksgiving-Christmas discount from Vembu and grab your piece of software. Please refer to the Vembu blog here for more details.

IT Blog Awards by Cisco - Vote now!
http://beginnersforum.net/blog/2018/12/12/it-blog-awards-by-cisco-vote-now/
Wed, 12 Dec 2018 13:21:29 +0000

Hurry, vote now for the best IT blogs in the IT Blog Awards hosted by Cisco!

About the program:

This is the first-ever IT Blog Awards from Cisco, recognizing the contributions of the blogger community in various categories.

About the program, from the Cisco website:

The first-ever IT Blog Awards, hosted by Cisco, is our way of recognizing the great community of independent tech bloggers for the passion, creativity, and expertise shared throughout the year. We appreciate your impact on the tech community.
Voting is now open through January 4, 2019.  Winners will receive a Cisco Live US pass.

You can vote for blogs in different categories; voting ends on January 4, 2019. Make sure to consider the value, credibility and consistency of the content when you select a blog as the best in its category.

It is your opportunity to recognize the bloggers and blogs who are helping the community with excellent content. Do not wait, vote now.

We are proud to announce that we have been chosen as one of the finalists in the Best Group Effort category. If you feel our content was of quality, helped the community and met the program guidelines, you can select our blog in the Best Group Effort category.

Azure cloud provisioning using Ansible
http://beginnersforum.net/blog/2018/11/19/azure-cloud-provisioning-using-ansible/
Mon, 19 Nov 2018 06:46:31 +0000

Automating IT infrastructure is one of the major focus areas of all organizations today. It reduces cost and manual workload. When you plan to automate your infrastructure, start with the provisioning of resources; this makes managing the resources very easy. Many businesses have adopted cloud computing in recent years because of its flexibility and high scalability. When you integrate cloud infrastructure with the open-source DevOps tools available in the market today, handling huge environments becomes much easier.

I would suggest Ansible as the configuration management tool because of its simplicity and straightforward operation. It came to the market late, but it gained solid footing and has been adopted by many DevOps professionals because of its unique features. Ansible offers a huge number of modules for managing cloud operations across all major cloud providers, such as Azure, AWS and GCP.

The Ansible playbooks I refer to below will help you provision cloud resources in an Azure environment: they create a Windows VM and configure it to connect with the Ansible host for any post-provision activities. The playbook performs the following tasks.

  1. Create the resource group and network infrastructure
  2. Provision the Windows VM
  3. Add the new host to a dynamic inventory for any post-provision activities
  4. Enable the PowerShell execution policy to connect over WinRM
  5. Install a Firefox package on the newly created VM using Ansible

The playbook contains 3 roles, which create the network infrastructure, provision a Windows VM and install the Firefox package on it.
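The roles below reference several variables (rg_name, location, virtual_network, CIDR, subnet, subnet_CIDR, vm_name, admin_user, admin_passwd). A minimal vars file might look like the following; every value here is illustrative only, not taken from the original post:

```yaml
# group_vars/all.yml -- example values only
rg_name: demo-rg
location: eastus
virtual_network: demo-vnet
CIDR: 10.0.0.0/16
subnet: demo-subnet
subnet_CIDR: 10.0.0.0/24
vm_name: demo-win-vm
admin_user: azureadmin
admin_passwd: "ChangeMe123!"   # use Ansible Vault for real credentials
```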
Let's go through the main playbook first, which includes the 3 roles. The first two run against localhost and create the network infrastructure and the virtual machine respectively. The third role, which installs the Firefox package, runs against a host group azure_vms that is created dynamically after provisioning the server.

Now let's go through the first role, common, which creates the resource group and network infrastructure.

 

- name: Create a resource group
  azure_rm_resourcegroup:
    name: "{{ rg_name }}"
    location: "{{ location }}"
    state: present

- name: Create a virtual network
  azure_rm_virtualnetwork:
    name: "{{ virtual_network }}"
    resource_group: "{{ rg_name }}"
    address_prefixes_cidr:
      - "{{ CIDR }}"

- name: Create network windows_base security group
  azure_rm_securitygroup:
    resource_group: "{{ rg_name }}"
    name: windows_base
    purge_rules: yes
    rules:
      - name: 'AllowRDP'
        protocol: Tcp
        source_address_prefix: 0.0.0.0/0
        destination_port_range: 3389
        access: Allow
        priority: 100
        direction: Inbound
      - name: 'AllowWinRM'
        protocol: Tcp
        source_address_prefix: 0.0.0.0/0
        destination_port_range: 5986
        access: Allow
        priority: 102
        direction: Inbound
      - name: 'DenyAll'
        protocol: Tcp
        source_address_prefix: 0.0.0.0/0
        destination_port_range: 0-65535
        access: Deny        # without this the rule would default to Allow
        priority: 103
        direction: Inbound

- name: Create a subnet and attach the windows_base security group to it
  azure_rm_subnet:
    name: "{{ subnet }}"
    virtual_network_name: "{{ virtual_network }}"
    resource_group: "{{ rg_name }}"
    address_prefix_cidr: "{{ subnet_CIDR }}"
    security_group_name: windows_base


Here it creates a resource group, a virtual network and a security group which allows incoming RDP and WinRM traffic. You can attach the security group either to the NIC or to the subnet where we create the virtual machine. Azure will create a NIC and allocate it to the VM by default if you do not supply a custom NIC while provisioning. Here I am not creating any custom NICs for the server; instead I attach the security group to the subnet.

Let’s go through the second role which creates the Virtual machine.


- name: Create a VM
  azure_rm_virtualmachine:
    os_type: Windows
    resource_group: "{{ rg_name }}"
    virtual_network_name: "{{ virtual_network }}"
    name: "{{ vm_name }}"
    admin_username: "{{ admin_user }}"
    admin_password: "{{ admin_passwd }}"
    vm_size: Standard_F2s_v2
    image:
      offer: WindowsServer
      publisher: MicrosoftWindowsServer
      sku: '2016-Datacenter'
      version: latest
  register: output

- name: Add new instance to the host group
  add_host:
    hostname: "{{ vm_name }}"
    ansible_host: "{{ azure_vm.properties.networkProfile.networkInterfaces[0].properties.ipConfigurations[0].properties.publicIPAddress.properties.ipAddress }}"
    ansible_user: "{{ admin_user }}"
    ansible_password: "{{ admin_passwd }}"
    ansible_connection: winrm
    ansible_port: 5986
    ansible_winrm_server_cert_validation: ignore
    ansible_winrm_transport: ssl
    groupname: azure_vms
  with_items: output.instances

- name: Create Azure VM extension to enable HTTPS WinRM listener
  azure_rm_virtualmachine_extension:
    name: winrm-extension
    resource_group: "{{ rg_name }}"
    virtual_machine_name: "{{ vm_name }}"
    publisher: Microsoft.Compute
    virtual_machine_extension_type: CustomScriptExtension
    type_handler_version: 1.9
    settings: '{"commandToExecute": "powershell.exe -ExecutionPolicy ByPass -EncodedCommand {{ winrm_enable_script }}"}'
    auto_upgrade_minor_version: true
  with_items: output.instances

- name: Wait for the WinRM port to come online
  wait_for:
    port: 5986
    host: '{{ azure_vm.properties.networkProfile.networkInterfaces[0].properties.ipConfigurations[0].properties.publicIPAddress.properties.ipAddress }}'
    timeout: 600
  with_items: output.instances
As you can see in the second task of the role, the newly created server is added to a host group azure_vms using the Ansible add_host module. The third and fourth tasks enable the HTTPS WinRM listener for Ansible communication.
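The CustomScriptExtension above passes the script through PowerShell's -EncodedCommand switch, which expects the script base64-encoded over its UTF-16LE bytes. A small sketch of how such a winrm_enable_script value could be produced (the PowerShell one-liner here is only a placeholder; the post does not show the contents of its own variable):

```python
import base64

# Placeholder PowerShell; the post's actual {{ winrm_enable_script }}
# contents are not shown, so this one-liner is only illustrative.
ps = "Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force"

# -EncodedCommand expects base64 over the UTF-16LE encoding of the script.
winrm_enable_script = base64.b64encode(ps.encode("utf-16-le")).decode("ascii")
print(winrm_enable_script)
```

The resulting string can then be supplied to the playbook as a variable.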

The third and final role in the playbook installs the Firefox browser on the newly provisioned VM using the Ansible win_chocolatey module.

- name: Install Firefox
  win_chocolatey:
    name: firefox
    state: present

Here is the main playbook which calls all 3 roles:

---
- hosts: localhost
  gather_facts: yes
  roles:
   - common
   - vm

- hosts: azure_vms
  gather_facts: no
  roles:
   - install_firefox

Hope this post helped you. Please share your feedback/suggestions in the comments below.

Update service-now ticket using a Python script
http://beginnersforum.net/blog/2018/10/16/update-service-now-incident-using-python/
Tue, 16 Oct 2018 03:49:34 +0000

How cool would it be if you could upload the output of your script into Service now incident notes or task notes automatically? This Python script helps you run a set of commands against Cisco switches and routers, and the output of the commands is uploaded to the service now incident automatically. This will help improve the response time of the NOC L1 team in troubleshooting tasks.

Service-now, a cloud-based IT Service Management (ITSM) tool, provides end-to-end transformation of IT services. Service Now provides a REST API to communicate with a SNOW instance; we will use this REST API in our program to interact with the service now instance.
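As a sketch of what such a Table API call looks like: the stdlib-only helper below just builds the endpoint URL, and the commented lines show how it would typically be used with the third-party requests package. The instance name, user and incident number are placeholders, not values from this post.

```python
def incident_api_url(instance: str, sys_id: str = "") -> str:
    """Build the ServiceNow Table API URL for the incident table (sketch)."""
    base = f"https://{instance}.service-now.com/api/now/table/incident"
    return f"{base}/{sys_id}" if sys_id else base

# Typical flow with the `requests` package (not executed here):
#   r = requests.get(incident_api_url("dev12345"),
#                    auth=("apiuser", "secret"),
#                    params={"sysparm_query": "number=INC0010001"})
#   sys_id = r.json()["result"][0]["sys_id"]
#   requests.patch(incident_api_url("dev12345", sys_id),
#                  auth=("apiuser", "secret"),
#                  json={"work_notes": "command output..."})
print(incident_api_url("dev12345"))
```

The pysnow package used later in this post wraps exactly this kind of request.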

We explain the step-by-step procedure to achieve this below.

Following are the components required:

  1. Service now developer account
  2. Service now instance
  3. Python with the Service now API package installed


Create service now developer account and instance

Please refer to our post 'Create service now developer account and instance' and create a new user for API calls.

Setup environment

We require the 'netmiko' package to SSH to devices. Please read part 1 and part 2 of our post for details about installing Python and running your first program. Please read part 4 if you want to know how to SSH to a switch.

Install python service-now API package

We also require the 'pysnow' package, which is used to interact with service now via REST API calls. Please click here if you would like to know more about the 'pysnow' package.

Install 'pysnow' using the following command:

pip install pysnow

Please click here if you do not know how to install a Python package using pip.

Script Definition:

The script asks for the service-now information and device credentials initially. It then keeps running on the server, so the user can update multiple incidents by running the commands against multiple devices. All the required commands have to be saved in the 'command.txt' file.


It uses the class 'cls_incident' to gather information and update service-now. Inside the class, the function 'collectdata' is used to SSH to the device and capture the output of the commands, and the function 'inc_update' is used to update the service now incident with that output.

Following is the script. It is easy to understand, and we have put inline comments to make it easier.

import getpass

import pysnow
from netmiko import ConnectHandler

print("=============================")
print("Program to update service now incident notes")
print("=============================\n")


# class to connect to a device and update the incident
class cls_incident:
    def __init__(self, uname, password):
        # initialising variables
        self.uname = uname
        self.password = password
        self.secret = password
        self.dev_type = 'cisco_ios'
        self.ip = ''
        self.output = ''

        # creating the dictionary for netmiko
        self.dict_device = {
            'device_type': self.dev_type,
            'ip': self.ip,
            'username': self.uname,
            'password': self.password,
            'secret': self.secret,
            'global_delay_factor': 1,
        }

    # function to log in to the device and collect the output of commands
    def collectdata(self, ipaddress):
        self.dict_device['ip'] = ipaddress
        self.net_connect = ConnectHandler(**self.dict_device)
        # opening the command file
        cmd_file = open('command.txt')
        self.output = ''
        # loop for reading the commands one by one
        for line in cmd_file:
            cmd = line.lstrip()
            self.output += "\nOutput of command " + cmd + "\n"
            self.output += self.net_connect.send_command(cmd)
        cmd_file.close()

        print(self.output)
        print("\nCommand output collected")

    # function to update service now
    def inc_update(self, inc_number, s_uname, s_password, s_instance):
        # connecting with service now
        snow = pysnow.Client(instance=s_instance, user=s_uname, password=s_password)
        incident = snow.resource(api_path='/table/incident')
        update = {'work_notes': self.output, 'state': 5}
        # updating the incident record
        incident.update(query={'number': inc_number}, payload=update)
        print("Incident note updated")


def main():
    # collecting service now details
    instance = input("Enter service now instance name in the format 'company.service-now.com': ")
    # keep only the instance part (str.rstrip strips characters, not a suffix,
    # so slice the suffix off instead)
    if instance.endswith('.service-now.com'):
        instance = instance[:-len('.service-now.com')]
    s_uname = input("Enter service now user name: ")
    s_password = getpass.getpass("Password: ")

    # collecting device credentials
    dev_uname = input("\nEnter device user name: ")
    dev_passwd = getpass.getpass("Password: ")

    # creating the class object
    objDev = cls_incident(dev_uname, dev_passwd)

    while True:
        try:
            inc_number = input("Enter incident number: ")
            ip_address = input("Enter IP address of device: ")
            print("Connecting to device and collecting data")
            objDev.collectdata(ip_address)

            print("Updating service now")
            objDev.inc_update(inc_number, s_uname, s_password, instance)
            print("\nThis program will keep on running, press Ctrl+C to exit")
            print("Enter details for the next incident\n")
        except Exception as e:
            print("Error on execution:", e)


if __name__ == "__main__":
    main()


How to run:

Download 'command.txt' and 'incident-update.txt' into the same folder on your system, and rename 'incident-update.txt' to 'incident-update.py'. Open 'command.txt' and add the commands which need to be run on the networking device. Run the program from the command prompt using 'python incident-update.py', provide your input and test. Please ensure you have reachability to the service-now instance and the network devices from your machine.

Program screen shot


Service-now screen shot

You can see the service now incident notes updated with the command output automatically.

Hope this will ease your life a bit.. 🙂

Please comment below if you require a customized script for your own requirements, supporting multiple device models like Cisco ASA, Juniper, Palo Alto, Checkpoint etc.

Create service-now developer account and REST API user
http://beginnersforum.net/blog/2018/10/16/create-service-now-dev-account-api-user/
Tue, 16 Oct 2018 02:51:16 +0000

Here we explain how to create a service-now developer account and a new user with the REST API privilege, which is required to update your service now tasks or notes using API calls. Please refer to our post 'Updating service-now incident using python' to see a sample usage.

Create Service now Developer account :

We are using the demo service now instance available with a service now developer account. Here we explain how to create the developer account and make it ready for REST API calls. Log in to service now here (https://developer.servicenow.com) and register for a developer account by clicking the REGISTER tab available on the right side.

Launch Instance:

Service Now will provide a demo instance with all functionalities. This acts as the service-now platform of your company, where you can create, delete and update incidents and use all other service now options.

Once you have logged in to the developer account, click Manage -> Instance -> Request Instance to launch an instance.


Click London in the next window, as this is the latest version.


Service now will give you a new instance. Log in to the new instance using the URL shown below; you can see the default user name and password on the same page. Copy the URL and open it in your favorite web browser. Log in using the user name admin and the password shown in the picture below. It will prompt for a password reset.

Once you are authenticated, we have to create a new user account which has the privilege to update service now incident notes using the REST API. We assign the privilege using role assignment.


Create a new user with the name 'apiuser' and give a password for the user. To create the new user account, type 'user' in the search box available on the left hand side. It will list Users and Groups under the 'System Security' category. Click Users and click 'New'.

Enter the user details in the new window and click Submit.


Next, we have to assign roles. Navigate to the 'Roles' tab on the same page, which will be available after you submit. Click 'edit' under the 'Roles' tab.

Type 'web' in the search box, select 'web_service_admin' and click 'add'.

 


Now we have created the user with the required roles. You can use this user name for REST API calls from external scripts.

This user can be used for updating Service-Now tickets or notes using API calls. Hope this helps. Please add your suggestions and queries in the comments section.

Linux Swap Space Creation and Monitoring
http://beginnersforum.net/blog/2018/07/09/linux-swap-space-creation-and-monitoring/
Mon, 09 Jul 2018 05:10:39 +0000

Overview

This post is intended to help you understand swap creation, monitoring and extension in Red Hat Linux.

Swap space is disk space set aside for use by the operating system when physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory.

Recommended System Swap Space
In years past, the recommended amount of swap space increased linearly with the amount of RAM in the system. But because the amount of memory in modern systems has increased into the hundreds of gigabytes, it is now recognized that the amount of swap space a system needs is a function of the memory workload running on that system. However, given that swap space is usually designated at install time, and that it can be difficult to determine beforehand the memory workload of a system, Red Hat recommends determining system swap using the following table.

Amount of RAM in the system    Recommended amount of swap space
4 GB of RAM or less            a minimum of 2 GB of swap space
4 GB to 16 GB of RAM           a minimum of 4 GB of swap space
16 GB to 64 GB of RAM          a minimum of 8 GB of swap space
64 GB to 256 GB of RAM         a minimum of 16 GB of swap space
256 GB to 512 GB of RAM        a minimum of 32 GB of swap space
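The sizing table above can be expressed as a small helper function (a sketch; the thresholds follow the Red Hat table, and boundary values are assumed to fall in the lower band):

```python
def recommended_swap_gb(ram_gb: float) -> int:
    """Minimum recommended swap (GB) for a given amount of RAM,
    following the Red Hat sizing table above."""
    if ram_gb <= 4:
        return 2
    if ram_gb <= 16:
        return 4
    if ram_gb <= 64:
        return 8
    if ram_gb <= 256:
        return 16
    return 32  # the 256 GB to 512 GB band

print(recommended_swap_gb(8))
```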

Note: On most distributions of Linux, it is recommended that you set up swap space while installing the operating system.

 

How to Monitor Swap Space

We shall look at different commands and tools that can help you monitor swap space usage on your Linux systems.

Using the swapon Command

To view all devices marked as swap in the /etc/fstab file, you can use the --all option (devices that are already active as swap space are skipped).

If you want to view a summary of swap space usage by device, use the --summary (swapon -s) option.

[root@nfsserver ~]# swapon --summary
Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       2097148 0       -1
[root@nfsserver ~]#
[root@nfsserver ~]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       2097148 0       -1
Note: Use the --help option to view more options and information.
Using /proc/swaps

The /proc filesystem is a process information pseudo-filesystem. It does not contain 'real' files but runtime system information, for example system memory, mounted devices, hardware configuration and more.

[root@nfsserver ~]# cat /proc/swaps

Filename                                Type            Size    Used    Priority

/dev/dm-1                               partition       2097148 0       -1

[root@nfsserver ~]#
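If you want to consume /proc/swaps from a script, its fixed five-column layout is easy to parse. A small sketch (the dictionary field names are my own, not part of the file format):

```python
def parse_proc_swaps(text: str):
    """Parse the /proc/swaps table into a list of dicts (sketch)."""
    entries = []
    for line in text.strip().splitlines()[1:]:  # first line is the header
        if not line.strip():
            continue
        name, swap_type, size, used, priority = line.split()
        entries.append({
            "filename": name,
            "type": swap_type,
            "size_kb": int(size),
            "used_kb": int(used),
            "priority": int(priority),
        })
    return entries

# The sample output shown above:
sample = """Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       2097148 0       -1
"""
print(parse_proc_swaps(sample))
```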

Using the 'free' Command
The free command displays the amount of free and used system memory. Using it with the -h option displays output in a human-readable format.
[root@nfsserver ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.6G        674M        6.5G        9.8M        507M        6.7G
Swap:          2.0G          0B        2.0G
[root@nfsserver ~]#
Using the top Command
You can also check swap space usage with the 'top' command.
Using the vmstat Command
This command displays information about virtual memory statistics.
[root@nfsserver ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 6791708   2784 516484    0    0     7     0   24   23  0  0 100  0  0
[root@nfsserver ~]#
ADDING SWAP SPACE
Sometimes it is necessary to add more swap space after installation. You have three options: create a new swap partition, create a new swap file, or extend swap on an existing LVM2 logical volume. It is recommended that you extend an existing logical volume.
Extending Swap on an LVM2 Logical Volume
To extend an LVM2 swap logical volume (suppose /dev/mapper/centos-swap is our swap volume):
1. Disable swapping for the associated logical volume:
[root@nfsserver ~]# swapoff -v /dev/mapper/centos-swap
swapoff /dev/mapper/centos-swap
[root@nfsserver ~]# swapon -s
[root@nfsserver ~]#
2. Resize the LVM2 logical volume by 256 MB
 [root@nfsserver ~]# lvresize /dev/mapper/centos-swap -L +256M
  Size of logical volume centos/swap changed from 2.00 GiB (512 extents) to 2.25 GiB (576 extents).
  Logical volume centos/swap successfully resized.
 [root@nfsserver ~]#
3. Format the new swap space
[root@nfsserver ~]# mkswap /dev/centos/swap
mkswap: /dev/centos/swap: warning: wiping old swap signature.
Setting up swapspace version 1, size = 2359292 KiB
no label, UUID=5e487401-9ae0-4e1d-adff-2346edfc6244
[root@nfsserver ~]#
4. Enable the extended logical volume
[root@nfsserver ~]# swapon -va
swapon /dev/mapper/centos-swap
swapon: /dev/mapper/centos-swap: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/mapper/centos-swap: pagesize=4096, swapsize=2415919104, devsize=2415919104
[root@nfsserver ~]#
5. Test that the logical volume has been extended properly
[root@nfsserver ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.6G        677M        6.5G        9.8M        507M        6.7G
Swap:          2.2G          0B        2.2G
[root@nfsserver ~]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       2359292 0       -1
[root@nfsserver ~]#
Creating an LVM2 Logical Volume for Swap
To add a swap volume group (suppose /dev/centos/swap2 is the new volume)
1. Create the LVM2 logical volume of size 256 MB
[root@nfsserver ~]# lvcreate centos -n swap2 -L 256M
  Logical volume “swap2” created.
[root@nfsserver ~]#
2. Format the new swap space
[root@nfsserver ~]# mkswap /dev/centos/swap2
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=6ea40455-47a0-46bf-844e-ec0ebd4a4e6a
[root@nfsserver ~]#
3. Add the following entry to the /etc/fstab file
/dev/mapper/centos-swap2 swap                    swap    defaults        0 0
4. Enable the extended logical volume
[root@nfsserver ~]# swapon -va
swapon /dev/mapper/centos-swap2
swapon: /dev/mapper/centos-swap2: found swap signature: version 1, page-size 4, same byte order
swapon: /dev/mapper/centos-swap2: pagesize=4096, swapsize=268435456, devsize=268435456
[root@nfsserver ~]#
5. Verify the swap space
[root@nfsserver ~]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       2097148 0       -1
/dev/dm-3                               partition       262140  0       -2
Creating a Swap File
To Add a swap file
1. Determine the size of the new swap file in megabytes and multiply by 1024 to determine the number of blocks. For example, the block size of a 64 MB swap file is 65536.
2. At a shell prompt as root, type the following command with count being equal to the desired block size:
[root@nfsserver ~]# dd if=/dev/zero of=/swapfile bs=1024 count=65536
65536+0 records in
65536+0 records out
67108864 bytes (67 MB) copied, 0.0893063 s, 751 MB/s
[root@nfsserver ~]#
[root@nfsserver ~]# ls -ld /swapfile
-rw-r--r--. 1 root root 67108864 May 17 16:38 /swapfile
[root@nfsserver ~]# du -sh /swapfile
64M     /swapfile
[root@nfsserver ~]#
3. Change the permissions of the newly created file
[root@nfsserver ~]# chmod 0600 /swapfile
[root@nfsserver ~]#
4. Setup the swap file with the command
[root@nfsserver ~]# mkswap /swapfile
Setting up swapspace version 1, size = 65532 KiB
no label, UUID=8a404550-e8a3-4f2b-9daf-137fc34f7b6d
[root@nfsserver ~]#
5. Edit /etc/fstab and enable the newly added swap space
/swapfile          swap            swap    defaults        0 0
[root@nfsserver ~]# swapon -va
swapon /swapfile
swapon: /swapfile: found swap signature: version 1, page-size 4, same byte order
swapon: /swapfile: pagesize=4096, swapsize=67108864, devsize=67108864
[root@nfsserver ~]#
6. Verify the swap space created.
[root@nfsserver ~]# swapon -s
Filename                                Type            Size    Used    Priority
/dev/dm-1                               partition       2097148 0       -1
/dev/dm-3                               partition       262140  0       -2
/swapfile                               file    65532   0       -3
[root@nfsserver ~]#
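The block-count arithmetic from step 1 of the swap-file procedure (size in megabytes multiplied by 1024 gives the number of 1 KiB dd blocks) can be sketched as a tiny helper:

```python
def swapfile_block_count(size_mb: int, block_size: int = 1024) -> int:
    """Number of dd blocks (bs=1024) needed for a swap file of size_mb megabytes."""
    return size_mb * 1024 * 1024 // block_size

# A 64 MB swap file needs 65536 blocks, matching the dd example above.
print(swapfile_block_count(64))
```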
Hope this has helped you.
Thanks!

EMC ISILON Interview questions http://beginnersforum.net/blog/2018/02/15/emc-isilon-interview-questions/ http://beginnersforum.net/blog/2018/02/15/emc-isilon-interview-questions/#respond Thu, 15 Feb 2018 11:05:35 +0000 http://beginnersforum.net/blog/?p=846 Adding one more post to our interview questions post category, this time for ISILON. We are trying to cover some of the frequently asked questions from the ISILON architecture and configuration areas.  Node and drive types supported :  ISILON supported 3 different types of nodes S-Series, X-Series and NL-Series. S-Series (S210) is high performance node type supports SSD drives. X-Series

The post EMC ISILON Interview questions appeared first on .

]]>
Adding one more post to our interview questions post category, this time for ISILON. We are trying to cover some of the frequently asked questions from the ISILON architecture and configuration areas.

  •  Node and drive types supported :  ISILON supports three different types of nodes: S-Series, X-Series and NL-Series. The S-Series (S210) is a high-performance node type that supports SSD drives. X-Series nodes (X210 and X410) support up to 6 SSDs and can have the remaining slots filled with HDDs. NL-Series (NL-410) nodes support only one SSD in the system, with SATA drives in the remaining slots. This node type is intended for archiving requirements.

The system, with recent OneFS versions, also supports All-Flash nodes, Hybrid nodes, Archive nodes and IsilonSD nodes. ISILON All-Flash nodes (F800) can have up to 924 TB in a single 4U node and can grow up to 33 PB in one cluster; one node can house up to 60 drives. Hybrid nodes (H400, H500 and H600) support a mix of SSDs and HDDs: the H400 and H500 can have SATA drives and SSDs, while the H600 supports SSDs and SAS drives. Archive nodes (A200 and A2000) are intended for archiving solutions. A2000 nodes have 80 slots and support only 10 TB drives; this node is for high-density archiving. The A200 is for near-primary archiving storage solutions and supports 2 TB, 4 TB or 8 TB SATA HDDs, with a maximum of 60 drives.

IsilonSD is the software-only node type, which can be installed on customer hardware.

  •  Scale-Out and Scale-Up architecture : The first thing that comes up with ISILON is its architecture: Scale-Out. With a Scale-Out architecture, processing and capacity are increased in parallel; as we add a node, both the capacity and the processing power of the system increase. Let's take VNX as an example of Scale-Up architecture. Here, the processing power (i.e. the storage processors) cannot be increased, as the system limit is 2 SPs, but we can grow the overall system capacity by adding more DAEs (and disks) up to the system's supported limit.
  •  Infiniband switches and types : ISILON makes use of IB switches for the internal communication between the nodes. With the Gen-6 hardware, ISILON now also supports 40 GbE back-end switches in addition to the 10 GbE IB switches.
  •  SmartConnect and SSIP : [definition from the ISILON SmartConnect whitepaper] SmartConnect is a licensable software module of the EMC ISILON OneFS operating system that optimizes performance and availability by enabling intelligent client connection load balancing and failover support. Through a single host name, SmartConnect enables client connection load balancing and dynamic NFS failover and failback of client connections across storage nodes to provide optimal utilization of the cluster resources. SmartConnect eliminates the need to install client-side drivers, enabling the IT administrator to easily manage large numbers of clients with confidence. And in the event of a system failure, file system stability and availability are maintained.

For every SmartConnect zone there will be one SSIP (SmartConnect Service IP), which will be used for the client connections. The SSIP and its associated hostname will have a DNS entry, and client requests will come to the cluster/zone via the SSIP. The zone then redirects the request to the nodes for completion.
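As an illustration only (the zone name and IPs below are made up, not from the whitepaper), the DNS side is typically a delegation of the SmartConnect zone name to the SSIP, so that the cluster itself answers the lookups:

```
; hypothetical BIND-style delegation for a SmartConnect zone
isilon.example.com.    IN NS  ssip.example.com.
ssip.example.com.      IN A   10.20.10.30   ; SmartConnect Service IP (SSIP)
```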

  •  SmartPool : SmartPool enables effective tiering of storage nodes within the filesystem. Data will be moved across the tiers within the filesystem automatically, based on utilization, with seamless application and end-user access. Customers can define data-movement policies for different workflows and node types.
  •  Protection types in ISILON : An ISILON cluster can have the protection type N+M (where N is the number of data blocks and M is the number of node/drive failures the system can tolerate) or N+M:B (where N is the number of data blocks, M is the number of drive failures the system can tolerate and B is the number of node failures that can be tolerated), where N>M. A 3-node system can have +1 (i.e. 2+1) protection; here the system can tolerate 1 drive/node failure without any data loss.
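To make the N+M idea concrete, here is a small worked example (our own illustration, not from ISILON documentation) of the protection overhead, i.e. the fraction of blocks consumed by protection rather than data:

```python
# Our own illustration: protection overhead for an N+M protection layout
def protection_overhead(n, m):
    """Fraction of capacity consumed by protection with N data blocks and M protection blocks."""
    return m / (n + m)

# A 3-node cluster at 2+1 gives up one third of its capacity to protection
print(round(protection_overhead(2, 1), 2))   # 0.33
```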
  •  Steps to create an NFS export : Here we have listed the commands to create and list/view the NFS export.

To create the NFS export :
isi nfs exports create --clients=10.20.10.31,10.20.10.32 --root-clients=10.20.10.33,10.20.10.34 --description="Beginnersforum Test NFS Share" --paths=/ifs/BForum/Test --security-flavors=unix

To list the NFS exports :
isi nfs exports list

To view the NFS export :
isi nfs exports view <export_number>

You can also create an NFS export alias and quotas for the NFS export.

Hope this helped you in learning some ISILON stuff. We will have more questions and details in upcoming posts. For more interview questions posts please click here. Please add any queries and suggestions in comments.

The post EMC ISILON Interview questions appeared first on .

]]>
http://beginnersforum.net/blog/2018/02/15/emc-isilon-interview-questions/feed/ 0 846
Palo Alto Interview Questions and Answers – Part II http://beginnersforum.net/blog/2017/10/31/palo-alto-interview-questions-and-answers-part-2/ http://beginnersforum.net/blog/2017/10/31/palo-alto-interview-questions-and-answers-part-2/#comments Tue, 31 Oct 2017 07:03:26 +0000 http://beginnersforum.net/blog/?p=807 Palo Alto Interview Questions and Answers This post is a continuation to one of our recent post where we discussed a few questions and answers on Palo Alto firewall. Here we are adding another set of Q&A based on our readers interest. Hope this will help you in improving your knowledge of the PA firewall. 1. How to publish internal website to internet.

The post Palo Alto Interview Questions and Answers – Part II appeared first on .

]]>
Palo Alto Interview Questions and Answers

This post is a continuation of one of our recent posts, where we discussed a few questions and answers on the Palo Alto firewall. Here we are adding another set of Q&A based on our readers' interest. Hope this will help you in improving your knowledge of the PA firewall.

1. How do you publish an internal website to the internet, i.e. how do you perform destination NAT?

To publish an internal website to the outside world, we require a destination NAT and a security policy configuration. The NAT translates the internal private IP address into an external public IP address, and the firewall policy needs to allow access to the internal server on the HTTP service from outside. We can see how to perform the NAT and policy configuration with the following scenario:

Provide access to 192.168.10.100 through the public IP address 64.10.11.10 from the internet.

The following NAT and policy rules need to be created.

NAT rule -> Here we need to use the pre-NAT configuration to identify the zones. Both the source and destination zone should be Untrust-L3, as the (pre-NAT) source and destination addresses are part of the untrust zone.


Policy rule -> Here we need to use the post-NAT configuration to identify the zones. The source zone will be Untrust-L3, as the source address is still the same (12.67.5.2), and the destination zone will be Trust-L3, as the translated IP address belongs to the Trust-L3 zone.

We have to use the pre-NAT IP addresses for the source and destination address part of the policy configuration. According to the packet flow, the actual translation has not yet happened at this point; only the egress zone identification and route lookup have happened for the packet. The actual translation happens after policy lookup. Please click here to understand the detailed packet flow in a PA firewall. Just remember the following technique, so it will be easy to understand:

In the firewall rule:

Zone: Post-NAT

IP address: Pre-NAT

In the NAT rule:

Zone: Pre-NAT

IP address: Pre-NAT

Final Configuration looks like below:

2. What is GlobalProtect?


GlobalProtect provides a transparent agent that extends the enterprise security policy to all users regardless of their location. The agent can also act as a remote-access VPN client. Following are the components:

Gateway: One or more interfaces on the Palo Alto firewall which provide access and security enforcement for traffic from GlobalProtect agents.

Portal: The centralized control which manages the gateways, certificates, user authentication and end-host checklists.

Agent: The software on the laptop that is configured to connect to the GlobalProtect deployment.

3. Explain virtual systems.

A virtual system specifies a collection of physical and logical firewall interfaces and security zones. Virtual systems allow segmentation of security policy functions like ACLs, NAT and QoS. Networking functions, including static and dynamic routing, are not controlled by virtual systems; if routing segmentation is desired for each virtual system, we should have an additional virtual router.


4. Explain the various links used to establish HA (HA introduction).

PA firewalls use HA links to synchronize data and maintain state information. Some models of the firewall have dedicated HA ports, Control link (HA1) and Data link (HA2), while others require you to use the in-band ports as HA links.

Control Link: The HA1 link is used to exchange hellos, heartbeats and HA state information, management-plane sync for routing and User-ID information, and to synchronize configuration. HA1 should be a Layer 3 interface, which requires an IP address.

Data Link: The HA2 link is used to synchronize sessions, forwarding tables, IPSec security associations and ARP tables between firewalls in an HA pair. HA2 is a Layer 2 link.

Backup Links: Provide redundancy for the HA1 and HA2 links. In-band ports are used as backup links for both HA1 and HA2. The HA backup link IP addresses must be on a different subnet from the primary HA links.

Packet-Forwarding Link: In addition to the HA1 and HA2 links, an active/active deployment also requires a dedicated HA3 link. The firewalls use this link for forwarding packets to the peer during session setup and asymmetric traffic flow.

5. What protocol is used to exchange heartbeats between HA peers?

ICMP

6. What are the various port numbers used in HA?

HA1: TCP/28769 and TCP/28260 for clear-text communication, TCP/28 for encrypted communication

HA2: Uses IP protocol number 99, or UDP/29281

7. What are the scenarios that trigger a failover?

-> If one or more monitored interfaces fail

-> If one or more specified destinations cannot be pinged by the active firewall

-> If the active device does not respond to heartbeat polls (loss of three consecutive heartbeats over a period of 1000 milliseconds)

8. How do you troubleshoot HA using the CLI?

>show high-availability state : shows the HA state of the firewall

>show high-availability state-synchronization : checks the sync status

>show high-availability path-monitoring : shows the status of path monitoring

>request high-availability state suspend : suspends the active box and makes the current passive box active

9. Which command checks the firewall policy match for a particular destination?

>test security-policy-match from trust to untrust destination <IP>

10. Which command checks the NAT rules?

>test nat-policy-match

11. Which command checks the system details?

>show system info  // shows the management IP, software version and serial number

12. How do you perform a debug (packet capture) in PA?

Following are the steps.

Clear all packet capture settings:

>debug dataplane packet-diag clear all

Set the traffic matching condition:

> debug dataplane packet-diag set filter match source 192.168.9.40 destination 4.2.2.2
> debug dataplane packet-diag set filter on

Enable packet capture:

> debug dataplane packet-diag set capture stage receive file rx.pcap
> debug dataplane packet-diag set capture stage transmit file tx.pcap
> debug dataplane packet-diag set capture stage drop file dp.pcap
> debug dataplane packet-diag set capture stage firewall file fw.pcap
> debug dataplane packet-diag set capture on

View the captured file:

> view-pcap filter-pcap rx.pcap

13. What do you mean by Device Group and Device Template?

Device Group: A device group allows you to group firewalls which require a similar set of policies, such as firewalls that manage a group of branch offices or individual departments in a company. Panorama treats each group as a single unit when applying policies. A firewall can belong to only one device group. Only Objects and Policies are part of a device group.

Device Template: Device templates enable you to deploy a common base configuration, like network and device-specific settings, to multiple firewalls that require similar settings. Templates are available in the Device and Network tabs on Panorama.

14. Why do you use Security Profiles?

Security Profiles are used to scan allowed applications for threats, such as viruses, malware, spyware and DoS attacks. Security profiles are not used in the match criteria of a traffic flow; a profile is applied to scan traffic after the application or category is allowed by the security policy. You can add security profiles that are commonly applied together to a Security Profile Group.

Following are the Security Profiles available:
Antivirus Profiles
Anti-Spyware Profiles
Vulnerability Protection Profiles
URL Filtering Profiles
Data Filtering Profiles
File Blocking Profiles
WildFire Analysis Profiles
DoS Protection Profiles

Thanks for reading. Hope this helped in improving your Palo Alto knowledge, or clearing some of your doubts. Please let us know if you have any queries/comments.

The post Palo Alto Interview Questions and Answers – Part II appeared first on .

]]>
http://beginnersforum.net/blog/2017/10/31/palo-alto-interview-questions-and-answers-part-2/feed/ 2 807
Palo Alto Interview Questions and Answers – Part I http://beginnersforum.net/blog/2017/10/31/palo-alto-interview-questions-answers/ http://beginnersforum.net/blog/2017/10/31/palo-alto-interview-questions-answers/#comments Tue, 31 Oct 2017 06:13:28 +0000 http://beginnersforum.net/blog/?p=787 Palo Alto Interview Questions and Answers Some of our readers had requested for a post with some of the common questions and answers for the Palo Alto Firewall, after reading our post on PA Firewall. Following are some of the questions normally asked for PA interview. Please use the comment section if you have any questions to add . 1.

The post Palo Alto Interview Questions and Answers – Part I appeared first on .

]]>
Palo Alto Interview Questions and Answers

Some of our readers had requested a post with some of the common questions and answers for the Palo Alto firewall, after reading our post on the PA firewall. Following are some of the questions normally asked in a PA interview. Please use the comment section if you have any questions to add.

1. Why is Palo Alto called a next-generation firewall?

Ans: Next-generation firewalls include enterprise firewall capabilities, an intrusion prevention system (IPS) and application control features. Palo Alto Networks delivers all the next-generation firewall features using a single platform, parallel processing and a single management system, unlike other vendors who use different modules or multiple management systems to offer NGFW features. The Palo Alto NGFW differs from other vendors' in terms of platform, process and architecture.

2. What is the difference between the Palo Alto NGFW and the Checkpoint UTM?

PA follows a single-pass parallel processing architecture, while the UTM follows a multi-pass processing architecture.

3. Describe the Palo Alto architecture and its advantages.

Architecture: Single Pass Parallel Processing (SP3) architecture.

Advantage: Single Pass traffic processing enables very high throughput and low latency, with all security functions active. It also offers a single, fully integrated policy, which makes management of the firewall policy simpler and easier.


4. Explain the Single Pass and Parallel Processing architecture.

Single Pass: The single-pass software performs operations once per packet. As a packet is processed, networking functions, policy lookup, application identification and decoding, and signature matching for any and all threats and content are all performed just once. Instead of using separate engines and signature sets (requiring multi-pass scanning), and instead of using file proxies (requiring file download prior to scanning), the single-pass software in next-generation firewalls scans content once, in a stream-based fashion, to avoid introducing latency.

Parallel Processing: PA firewalls are designed with separate data and control planes to support parallel processing. The second important element of the parallel-processing hardware is the use of discrete, specialized processing groups to perform several critical functions:

  • Networking: routing, flow lookup, stats counting, NAT and similar functions are performed on network-specific hardware.
  • User-ID, App-ID and policy all occur on a multi-core security engine with hardware acceleration for encryption, decryption and decompression.
  • Content-ID content analysis uses a dedicated, specialized content-scanning engine.
  • On the control plane, a dedicated management processor (with dedicated disk and RAM) drives configuration management, logging and reporting without touching the data-processing hardware.

5. What is the difference between the PA-200/PA-500 and higher models?

In the PA-200 and PA-500, signature processing and network processing are implemented in software, while higher models have dedicated hardware processors.

6. What are the four deployment modes? Explain.
  1. Tap mode: Tap mode allows you to passively monitor traffic flows across a network by way of a tap or switch SPAN/mirror port.
  2. Virtual wire: In a virtual wire deployment, the firewall is installed transparently on a network segment by binding two interfaces together.
  3. Layer 2 mode: Multiple interfaces can be configured into a "virtual switch" or VLAN in L2 mode.
  4. Layer 3 deployment: In a Layer 3 deployment, the firewall routes traffic between multiple interfaces. An IP address must be assigned to each interface and a virtual router must be defined to route the traffic.

7. What do you mean by a Zone Protection profile?

Zone Protection profiles offer protection against the most common flood, reconnaissance and other packet-based attacks. For each security zone, you can define a zone protection profile that specifies how the security gateway responds to attacks from that zone. The following types of protection are supported:

-Flood Protection—Protects against SYN, ICMP, UDP, and other IP-based flooding attacks.

-Reconnaissance detection—Allows you to detect and block commonly used port scans and IP address sweeps that attackers run to find potential attack targets.

-Packet-based attack protection—Protects against large ICMP packets and ICMP fragment attacks.

Configured under Network tab -> Network Profiles -> Zone protection.

8. What is U-turn NAT and how do you configure it?

U-turn NAT is applicable when internal resources in the trust zone need to access DMZ resources using the public IP addresses of the untrust zone.


Let's explain based on the below scenario.

 

In the above example, the website company.com (192.168.10.20) is statically NAT'ed with the public IP address 81.23.7.22 on the untrust zone. Users in the corporate office on the 192.168.1.0/24 segment need to access the company webpage. Their DNS lookup will resolve to the public IP in the internet zone. The basic destination NAT rules that provide internet users access to the web server will not work for internal users browsing to the public IP.

Following are the NAT rule and policy definitions.


 

Okay, we are not making this post too long to read. We will be adding another set of questions in our next post soon.

Thanks for reading. Hope this helped in improving your Palo Alto knowledge, or clearing some of your doubts. Please let us know if you have any queries/comments.

Click Here for Part 2 of this post, another set of questions for you.

 

The post Palo Alto Interview Questions and Answers – Part I appeared first on .

]]>
http://beginnersforum.net/blog/2017/10/31/palo-alto-interview-questions-answers/feed/ 5 787
Network Automation using Python – Part VII – SSL certificate status validation and alert configuration http://beginnersforum.net/blog/2017/10/27/network-automation-python-check-ssl-certificate-expiry-date-send-mail/ http://beginnersforum.net/blog/2017/10/27/network-automation-python-check-ssl-certificate-expiry-date-send-mail/#respond Thu, 26 Oct 2017 19:33:14 +0000 http://beginnersforum.net/blog/?p=736 Python SSL Certificate Checker  Continuing our Networking Automation using Python blog series, here is the Part 7. In this part we are explaining python script which will check the expiry date of a SSL certificate from a list of IP address and send an e-mail automatically if the certificate expiry date is nearing. The IP addresses can be of your load balancer VIP or Server IP

The post Network Automation using Python – Part VII – SSL certificate status validation and alert configuration appeared first on .

]]>
Python SSL Certificate Checker 

Continuing our Networking Automation using Python blog series, here is the Part 7.

In this part we are explaining a Python script which checks the expiry dates of the SSL certificates for a list of IP addresses and automatically sends an e-mail if a certificate's expiry date is nearing. The IP addresses can be your load balancer VIPs, server IP addresses or any device IP addresses. You can use the same script to check the SSL certificate on any port number, like 443, 587, 993, 995, 465 etc.

Basic Requirements

  1. Python 3.6
  2. server_ip.txt, a text file which contains all the device IP addresses
  3. An e-mail account on www.outlook.com. You can use any other mail account by editing the SMTP server details in the script. Please let us know if you want a customised script which sends mail from your corporate mail account or Microsoft Outlook.

Please read part 1 and part 2 to get started with python and how to run your first program.

This script has two files:

  1. server_ip.txt -> this file stores all the device IP addresses
  2. sslcheck.py -> this is the Python script


How to run :

Step 1. Download the sslcheck and server_ip files to the same folder.

Step 2. Rename sslcheck.txt to sslcheck.py.

Step 3. Open server_ip.txt and save it with all of your device IP addresses, including the port numbers whose SSL certificates need to be checked.

Step 4. Open the command prompt ("CMD") and navigate to the folder where you have saved the script and 'server_ip.txt'.

Step 5. Run the script by typing "python sslcheck.py" at the command prompt.

Step 6. It will ask for the threshold days, the from mail ID, the to mail ID and the credentials. Please provide the same.

Step 7. The script will go through each device's SSL certificate and send a mail if anything is going to expire within the given number of days.
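For reference, the script expects each line of server_ip.txt in host:port form (the entries below are just examples):

```
www.example.com:443
mail.example.com:587
10.20.10.31:443
```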

 

Script Details

import socket
import getpass
import smtplib
from datetime import datetime
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

import OpenSSL

print("Program to check SSL certificate validity\n")

## opening the file with the device list
ipfile = open('server_ip.txt')
cur_date = datetime.utcnow()
mailbody = ""
expcount = 0

## getting details
expday = input("Please provide threshold expiry days : ")
from_mail = input("Your mail id : ")
passwd = getpass.getpass("password : ")
to_mail = input("Target mail id : ")

## checking certificate validity; the for loop goes through each IP in server_ip.txt
for ip in ipfile:
    try:
        host = ip.strip().split(":")[0]
        port = ip.strip().split(":")[1]
        print("\nChecking certificate for server ", host)
        # note: use the pyOpenSSL method constant here (ssl.PROTOCOL_TLSv1 belongs to a different module)
        ctx = OpenSSL.SSL.Context(OpenSSL.SSL.TLSv1_METHOD)
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((host, int(port)))
        cnx = OpenSSL.SSL.Connection(ctx, s)
        cnx.set_connect_state()
        cnx.do_handshake()
        cert = cnx.get_peer_certificate()
        s.close()
        server_name = cert.get_subject().commonName
        print(server_name)

        ## checking the expiry date
        edate = cert.get_notAfter().decode()

        ## converting into a datetime object
        exp_date = datetime.strptime(edate, '%Y%m%d%H%M%SZ')
        days_to_expire = int((exp_date - cur_date).days)
        print("days to expire", days_to_expire)

        ## preparing the mail body
        if days_to_expire < int(expday):
            expcount = expcount + 1
            mailbody = mailbody + "\n Server name =" + server_name + ", Days to expire:" + str(days_to_expire)
    except Exception:
        print("error on connection to server,", host)

print(mailbody)

## sending mail if any certificate is going to expire within the threshold days
if expcount >= 1:
    try:
        print("\nCertificate alert for " + str(expcount) + " servers, sending mail")
        body = "Following certificates are going to expire, please take action\n" + mailbody
        s = smtplib.SMTP(host='smtp-mail.outlook.com', port=587)  # change here if you want to use another SMTP server
        s.starttls()
        s.login(from_mail, passwd)

        msg = MIMEMultipart()  # create a message
        msg['From'] = from_mail
        msg['To'] = to_mail
        msg['Subject'] = "Certificate Expire alert"
        # add in the message body
        msg.attach(MIMEText(str(body), 'plain'))

        # send the message via the server set up earlier
        s.send_message(msg)
        print("Mail sent")
        s.close()
    except Exception:
        print("Sending mail failed")
else:
    print("All certificates are below the threshold date")

print('\nCert check completed')

 

Sample Output

The below images show a sample run of the script and a sample e-mail alert.

Sample e-mail alert

Hope this post helped you. You can read more posts on Network Automation using Python here. Please use the comments section for any queries/suggestions.

Reference :

https://www.python.org/

http://www.tutorialspoint.com/python/ 

The post Network Automation using Python – Part VII – SSL certificate status validation and alert configuration appeared first on .

]]>
http://beginnersforum.net/blog/2017/10/27/network-automation-python-check-ssl-certificate-expiry-date-send-mail/feed/ 0 736