Getting familiar with Boto3 – AWS SDK for Python
Let's discuss Boto3, the AWS Software Development Kit (SDK) for Python, in this post. Boto3 enables you to perform most tasks in almost all AWS services from Python scripts. For more details on the available services and actions, I would recommend referring to the Boto3 documentation.
In this post we will have a brief introduction to the SDK, covering the installation and a few examples.
Installing Boto3
Yes, as you expected, it is the pip install command. pip install boto3 will take care of the installation, and if you want a specific version you can use the syntax below (where version is the required version, 1.20.30 for example).
pip install boto3==version
C:\Users\bforum>python -m pip install boto3
Downloading boto3-1.20.31-py3-none-any.whl (131 kB)
|████████████████████████████████| 131 kB 168 kB/s
Downloading botocore-1.23.31-py3-none-any.whl (8.5 MB)
|████████████████████████████████| 8.5 MB 726 kB/s
==== truncated output ====
Successfully installed boto3-1.20.31 botocore-1.23.31
Setting up Boto3
Now that you have installed the module, you have to import it in your program. import boto3 would do it.
For accessing the AWS services, you have to provide credentials for Boto3 to connect to the environment. You can use shared credentials (aws_access_key_id and aws_secret_access_key) stored in the .aws/credentials file. If you are using an EC2 instance, you can use IAM roles for accessing the AWS environment.
Not at all recommended, but here I am keeping my credentials inside the script itself.
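For reference, the shared credentials file usually looks like the sketch below (the key values are placeholders; the file lives at ~/.aws/credentials on Linux/macOS or %UserProfile%\.aws\credentials on Windows):

```ini
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```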
Basic S3 tasks
s3 = boto3.client(service_name="s3", aws_access_key_id="test_key", aws_secret_access_key="test_secret")
You can replace the above line with just s3 = boto3.client('s3') if you have your credentials defined in the credentials file or handled by an IAM role.
You can now invoke the command below to create an S3 bucket.
s3.create_bucket(Bucket=bucket_name)
# Where bucket_name is the desired bucket name; it should be globally unique and must meet the naming requirements (lowercase letters, numbers, periods and dashes only, 3-63 characters in length, etc.)
You will get an HTTP 200 response with the bucket name and creation details.
Now let’s see how we can list S3 buckets.
s3 = boto3.client('s3')
The above list_buckets command will list the existing buckets, but the result is an HTTP response in JSON format. To filter out only the names, let's use the commands below (basically a for loop).
out = s3.list_buckets()
for bucket in out['Buckets']:
    print(bucket['Name'])
The output will be just the bucket names.
You can download a file from S3 using the commands below.
bucket_name = 'beginnersforum-bf1'
keyname = 'Test Text file1.txt'
output_filename = 'test1.txt'
s3.download_file(bucket_name, keyname, output_filename)
Few EC2 examples
ec2 = boto3.client(‘ec2’) #will create an EC2 client which can be used in the upcoming commands.
Start an instance : ec2.start_instances(InstanceIds=[instance_id])
Reboot an instance : ec2.reboot_instances(InstanceIds=[instance_id])
Stop an instance : ec2.stop_instances(InstanceIds=[instance_id])
That was just a very basic introduction to this Python module. Based on your needs, you may refer to the specific service and action details in the SDK documentation. I hope this post helped you in understanding the module; please feel free to share your thoughts and feedback in the comments section.