---
author: Sean Cavanaugh
date: 2022-03-07 00:00 UTC
description: Ansible Automation Platform can automate deployments, migrations and operational tasks for your public cloud.
lang: en-us
title: How to Migrate your Ansible Playbooks to Support AWS boto3
---

# How to Migrate your Ansible Playbooks to Support AWS boto3

Red Hat Ansible Automation Platform is known for automating Linux, Windows and networking infrastructure. While both the community version of Ansible and our enterprise offering, Red Hat Ansible Automation Platform, are prominently known for configuration management, this is just a small piece of what you can really achieve with Ansible's automation. There are many other use cases that Ansible Automation Platform is great at automating, such as your AWS, Azure or Google public cloud.

![diagram of Ansible on public clouds](/images/posts/archive/ansible-public-clouds.png)

Ansible Automation Platform can automate deployments, migrations and operational tasks for your public cloud. This is extremely powerful because you can orchestrate your entire infrastructure [workflow](https://docs.ansible.com/automation-controller/latest/html/userguide/workflows.html), from cloud deployment, to instance configuration, to retirement, rather than requiring a point tool for each separate use case. It also allows IT administrators to concentrate on automating business outcomes rather than individual technology silos.

Specifically for this blog, I wanted to cover converting your Ansible Playbooks for provisioning an instance on AWS from the unsupported ec2 module to the fully supported ec2_instance module. Amazon has deprecated its Software Development Kit (SDK) Boto in favor of the newer, fully supported SDK Boto3. Alina Buzachis announced "What's New: The Ansible AWS Collection 2.0 Release" back in October 2021, which includes full support in our Red Hat Ansible Certified Content Collection for the amazon.aws.ec2_instance module, which uses Python 3 and Boto3.
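Before converting any playbooks, make sure the amazon.aws collection (2.0 or later) and the Boto3 SDK are available on your control node. Here is a minimal sketch of the prerequisites; the version pin and file name are illustrative, not taken from our workshop repos:

```yaml
# requirements.yml -- install with: ansible-galaxy collection install -r requirements.yml
# The ec2_instance module ships in the amazon.aws collection; version 2.0+ uses Boto3,
# so the control node also needs the SDK itself (e.g. pip install boto3 botocore).
collections:
  - name: amazon.aws
    version: ">=2.0.0"
```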
The supported ec2_instance module has existed for some time, but I had not adopted it for my use case yet because we needed one last feature for parity with the older ec2 module. Specifically, for demos and workshops, I required the exact_count parameter. This allows me to boot as many identical instances as I specify. For example, if I specify exact_count: 50, it will spin up 50 identical Red Hat Enterprise Linux 8 instances. Using exact_count can save hours of time versus using a loop, and I don't need a massive declarative file to represent my 50 servers; it's just a tweak of a single parameter to make identical copies. [Luckily, we now have that parameter](https://github.com/ansible-collections/amazon.aws/pull/539), so I started converting all the workshops and demos that the technical marketing team uses to Boto3.

Let's look at an older version of a task file from our technical workshops so I can show you how to convert from ec2 to [ec2_instance](https://docs.ansible.com/ansible/latest/collections/amazon/aws/ec2_instance_module.html#ansible-collections-amazon-aws-ec2-instance-module):

```yaml
---
- name: Create EC2 instances for RHEL8
  ec2:
    assign_public_ip: true
    key_name: "{{ ec2_name_prefix }}-key"
    group: "{{ ec2_security_group }}"
    instance_type: "{{ ec2_info[rhel8].size }}"
    image: "{{ node_ami_rhel.image_id }}"
    region: "{{ ec2_region }}"
    exact_count: "{{ student_total }}"
    count_tag:
      Workshop_node1: "{{ ec2_name_prefix }}-node1"
    instance_tags:
      Workshop_node1: "{{ ec2_name_prefix }}-node1"
      Workshop: "{{ ec2_name_prefix }}"
      Workshop_type: "{{ workshop_type }}"
    wait: "{{ ec2_wait }}"
    vpc_subnet_id: "{{ ec2_vpc_subnet_id }}"
    volumes:
      - device_name: /dev/sda1
        volume_type: gp2
        volume_size: "{{ ec2_info[control_type].disk_space }}"
        delete_on_termination: true
  register: control_output
```

For booting an instance into AWS, there are only six required parameters. You need to specify a key (i.e.
the SSH key to access the image), a security group (the virtual firewall for your EC2 instances), an instance_type (e.g. t2.medium), a region (i.e. us-east-1), an image (e.g. an AMI for RHEL8) and a network interface or VPC subnet ID (vpc_subnet_id).

The rest of the parameters in my task above are for:

- tweaking the instance: adding a public IP address, increasing storage
- changing the module's behavior: wait refers to waiting for the instance to reach the running state, while exact_count refers to provisioning multiple instances in parallel
- tagging: adding key/value tags to the instance so we can filter on them in subsequent automation, or just sort easily in the AWS web console

To convert this to ec2_instance, there are only a few small tweaks you need to make!
| ec2 | ec2_instance |
|---|---|
| `assign_public_ip: true` | moved under `network:` as `assign_public_ip: true` |
| `key_name` | `key_name` (unchanged) |
| `group` | `security_group` |
| `instance_type` | `instance_type` (unchanged) |
| `image` | `image_id` |
| `region` | `region` (unchanged) |
| `exact_count` | `exact_count` (unchanged) |
| `count_tag` | `filters` |
| `instance_tags` | `tags` |
| `wait` | `wait` (unchanged) |
| `vpc_subnet_id` | `vpc_subnet_id` (unchanged) |
| flat `volumes` keys | `volumes` with an `ebs:` sub-dictionary |
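Putting those tweaks together, a converted version of the task might look like the sketch below. The variable names are carried over from the ec2 example above, and the `filters` block assumes we want to match existing instances on the same `Workshop_node1` tag that `count_tag` used; adjust both to your environment:

```yaml
---
- name: Create EC2 instances for RHEL8
  amazon.aws.ec2_instance:
    key_name: "{{ ec2_name_prefix }}-key"
    security_group: "{{ ec2_security_group }}"   # was: group
    instance_type: "{{ ec2_info[rhel8].size }}"
    image_id: "{{ node_ami_rhel.image_id }}"     # was: image
    region: "{{ ec2_region }}"
    exact_count: "{{ student_total }}"
    network:
      assign_public_ip: true                     # was a top-level parameter
    filters:                                     # was: count_tag
      "tag:Workshop_node1": "{{ ec2_name_prefix }}-node1"
    tags:                                        # was: instance_tags
      Workshop_node1: "{{ ec2_name_prefix }}-node1"
      Workshop: "{{ ec2_name_prefix }}"
      Workshop_type: "{{ workshop_type }}"
    wait: "{{ ec2_wait }}"
    vpc_subnet_id: "{{ ec2_vpc_subnet_id }}"
    volumes:
      - device_name: /dev/sda1
        ebs:                                     # EBS options now nest under ebs:
          volume_type: gp2
          volume_size: "{{ ec2_info[control_type].disk_space }}"
          delete_on_termination: true
  register: control_output
```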