Tag Archives: Cloud

DevOps on AWS – Building an AMI Bakery

There’s a relatively new concept in the IT world called immutable infrastructure.  This is the idea that once you create a server you should never change its running configuration.  The advantages of this approach include: avoidance of configuration drift; no need to patch running systems; and no need for privileged access to running systems.

Configuration drift is where, over time, administrators log on to running systems and make changes.  Unfortunately these changes are often undocumented and in some cases not persisted, so they aren’t applied on reboot.  This leads to lots of unique servers which are impossible to manage at scale.

Everyone should be familiar with the idea of patching running servers.  In my experience patching live systems never goes smoothly, often due to the aforementioned configuration drift.  If we don’t need to change the configuration of a running server, nor to patch it, then we’ve reached the point where there’s no need to log on as root or administrator.  This is great news for tightly regulated organisations, which often have to worry about privileged insider threats and spend vast sums of money building systems to monitor what their administrators are doing.

The way to create immutable infrastructure, and to achieve these benefits, is to create a master image and use this to instantiate all of your servers.  If you want to modify a server, changing its configuration or patching it, then you update your master image and redeploy your servers in a rolling upgrade.  This may sound like a lot of work, but by adopting the processes and tooling of DevOps it’s actually quite simple to get up and running.

I’m doing a lot of work with Amazon Web Services (AWS) at the moment and their master images are called Amazon Machine Images (AMI).  AWS also provides a number of DevOps tools that we can use to automate the process of creating AMIs.

Building an AMI with Packer

I started out by creating an AMI manually using the Packer tool from HashiCorp.  Packer is an open source application written in Go that is designed to automate the production of machine images.  The images are generated by taking a base image and then customising it based on a configuration file.  For the purposes of my proof of concept I used the following Packer configuration file:

{
    "builders": [{
        "type": "amazon-ebs",
        "region": "eu-west-1",
        "vpc_id": "vpc-4925e72e",
        "subnet_id": "subnet-4d13d12a",
        "source_ami": "ami-01ccc867",
        "instance_type": "t2.micro",
        "ssh_username": "ec2-user",
        "ami_name": "yum-upgrade {{timestamp}}"
    }],
    "provisioners": [{
        "type": "shell",
        "inline": [
            "sleep 30",
            "sudo yum update -y"
        ]
    }]
}

The first part of the file, the builder, describes how the image will be built.  In this example I am building an “amazon-ebs” image, i.e. an AMI backed with an Elastic Block Store filesystem.  The other values specify things like the AWS region, VPC, and EC2 instance type that will be used for the build process.  One of the key fields is “source_ami”: it specifies the base AMI to use; here I am using the latest Amazon Linux AMI available at the time of writing.

The second part of the file, the provisioner, describes how the base image should be customised.  In this example all I am doing is running YUM to apply all of the available package updates using an inline shell provisioner.  There are lots of other provisioners described in the Packer documentation that may be more useful for complex configurations.
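For example, the shell provisioner can also run a script file from your repository instead of inline commands; the script name below is purely a placeholder of my own invention:

    "provisioners": [{
        "type": "shell",
        "script": "scripts/customise.sh"
    }]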

The other prerequisite that you need is a set of valid AWS credentials.  Check the AWS documentation on how to set these up.

Once you’ve got your credentials configured you should save the configuration file as packer.json, and you can then check its validity by running:

packer validate packer.json

Assuming there are no syntax errors, building an AMI is as simple as:

packer build packer.json

The build might take a while to run, but once it’s finished you should be able to look at the AMIs section of the EC2 web console and see your newly baked image!
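If you prefer the command line, you can also list the AMIs you own with something like the following, assuming your credentials and default region are configured:

aws ec2 describe-images --owners self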

Automating the Process

The source code for my proof of concept AMI bakery is available from my GitHub account.

The automated process works by creating an AWS CodePipeline that is triggered by changes to an AWS CodeCommit Git repository.  The pipeline has two stages: a source stage that monitors the Git repository and a build stage which is an AWS CodeBuild process that runs the Packer command that will produce our new AMI.  For simplicity I’ve written AWS CloudFormation templates to deploy all of these services and their supporting AWS IAM roles.  For the steps to do this, see the README in the GitHub repository.

AWS CodeCommit

AWS CodeCommit is a managed Git service, similar to GitHub.  The service isn’t as feature rich as GitHub, but it has the advantages of being tightly integrated with the other AWS services and of using AWS IAM roles to control access.  AWS CodePipeline supports GitHub repositories as well, though there are a couple of extra integration steps needed to set up access.

To create the AWS CodeCommit repository, deploy the codecommit.yaml AWS CloudFormation template using either the AWS web console or the CLI.
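If you go the CLI route, something like the following should work (the stack name here is just an example I’ve made up):

aws cloudformation create-stack --stack-name ami-bakery-codecommit --template-body file://codecommit.yaml

The same pattern applies to the other templates mentioned below; note that the role templates create IAM resources, so those will also need the --capabilities CAPABILITY_IAM flag.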

AWS CodeBuild

AWS CodeBuild is a fully managed build service that covers all of the steps necessary to create software packages that are ready to be installed – compilation, testing, and packaging.  AWS CodeBuild works by processing a build specification YAML file that describes the build environment and the build steps.  Build environments are supplied as Docker containers; AWS provides a number of pre-built containers for common languages and platforms such as Java, Python, and Ruby.

Unfortunately, Packer is not one of the supplied build containers; fortunately, AWS CodeBuild lets you supply your own.  This is the Dockerfile I put together to run Packer on the AWS CodeBuild service:

FROM ubuntu

RUN apt-get update && apt-get -y install curl unzip jq && \
    curl -o packer.zip https://releases.hashicorp.com/packer/1.0.0/packer_1.0.0_linux_amd64.zip && \
    unzip packer.zip

CMD ["/packer"]

Normally I would have built a minimal Packer container, but AWS CodeBuild requires a bunch of other commands to function and I couldn’t find these listed in the documentation, so I went with the quick solution of copying what Amazon do themselves!

AWS CodeBuild needs to pull the container from a registry.  You can use the Docker Hub container registry, but I chose to use the AWS Elastic Container Registry because it integrates with AWS CodeBuild using IAM roles which makes configuring security simpler.  To create the AWS Elastic Container Registry, deploy the ecr-repository.yaml AWS CloudFormation template using either the AWS web console or the CLI.

With the registry created, building and uploading the Packer container is simple:

docker build --rm -t /packer:latest .
aws ecr get-login --region AWSREGION

Run the docker login command that’s output by aws ecr ..., then:

docker tag /packer:latest AWSACCOUNT.dkr.ecr.AWSREGION.amazonaws.com/packer:latest
docker push AWSACCOUNT.dkr.ecr.AWSREGION.amazonaws.com/packer:latest

The final piece of configuration for AWS CodeBuild is the buildspec.yml file.  Normally, I would just need a single phase, build, which would invoke Packer.  However, there was a bug in the AWS Go SDK which meant that you had to manually set up the security credentials for Packer to be able to access EC2.  This bug has since been fixed, so the next version of Packer should pick it up and the install phase can be removed.
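By way of illustration, a buildspec.yml with only the build phase might look something like the sketch below.  It assumes Packer sits at /packer inside the container, as in the Dockerfile above; the actual file in the repository also contains the install phase credential workaround.

version: 0.2

phases:
  build:
    commands:
      # Validate the template first, then bake the AMI
      - /packer validate packer.json
      - /packer build packer.json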

To create the AWS CodeBuild project, deploy the codebuild-role.yaml AWS CloudFormation template and then the codebuild-project.yaml AWS CloudFormation template using either the AWS web console or the CLI.  Note that you will need to edit the codebuild-project.yaml template to reflect your own values for the container image and the source location.

AWS CodePipeline

AWS CodePipeline is the glue that connects the AWS CodeCommit Git repository to the AWS CodeBuild project that invokes Packer to create an AMI.  The pipeline I used has two stages: a source stage and a build stage.  The source stage watches the Git repository for new commits and then invokes the build stage.  The build stage kicks off the AWS CodeBuild project which uses the Packer container I created to build my new AMI.

To create the AWS CodePipeline pipeline, deploy the codepipeline-role.yaml AWS CloudFormation template and then the codepipeline.yaml AWS CloudFormation template using either the AWS web console or the CLI.

Building an AMI

At this point, all you need to do to make the pipeline work is commit the files packer.json and buildspec.yml to the root of the AWS CodeCommit Git repository.  Within a few seconds the source stage of the pipeline will notice the commit, package up the files into an S3 bucket, and invoke the build stage to actually create the AMI.
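If you want to keep an eye on progress from the command line, you can poll the pipeline state; the pipeline name below is a placeholder, so use whatever name your CloudFormation stack gave the pipeline:

aws codepipeline get-pipeline-state --name YOUR-PIPELINE-NAME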

Note that you will need to edit the packer.json file to reflect the AWS Region you are using and the base AMI.  You can omit the “vpc_id” field if the region you are using still has its default VPC.  If, like me, you don’t have a default VPC anymore then you can deploy the vpc.yaml AWS CloudFormation template to create a VPC and use the VPC ID of your new VPC in packer.json.

Extra Credit

Once the basic AMI Bakery pipeline is up and running there are lots of enhancements you could make; here are some ideas:

  1. If you are creating a VPC just for Packer, you will end up paying for the Internet Gateway.  To avoid this you could create two additional pipeline stages, one to create the VPC and one to tear it down.
  2. Pipelines can be configured to send messages to an AWS SNS topic when they complete.  You could write an AWS Lambda function to listen for these messages and then trigger another pipeline or build project (in a different account) that bakes another AMI based on your newly created AMI.  We’re looking at doing this to allow one team to manage the base operating system AMI that is then used by application teams to build their own AMIs.
  3. You could create extra stages in the pipeline to perform automated testing of your newly baked AMI, to add a manual approval stage, or to perform a rolling upgrade of EC2 instances using older AMIs.

Building a VPC with AWS Cloudformation

One of the advantages of Amazon Web Services is the ability to quickly create complex infrastructures for development and testing, and then, when you’re done, to tear the infrastructure down.  The simplest way to reproducibly provision infrastructure is through the use of Cloudformation Templates.  These templates allow you to describe your infrastructure in JSON or YAML which AWS will then provision for you.

Amazon provides detailed documentation for Cloudformation; unfortunately the documentation is very strong on the reference side, but not so good at worked examples.  There are also lots of examples on blogs around the Internet, but these tend to be of the variety that states “Here’s our template to do X”, again with little or no explanation of how the templates work.

I’ve been doing a lot of work with distributed systems recently and I wanted to be able to create a Virtual Private Cloud (VPC) containing a number of servers where I could install the software I was experimenting with.  An AWS VPC was the perfect solution but has a larger number of moving parts than you might think, especially if you want to limit the exposure your instances have to the Internet.  This blog post describes the Cloudformation template I created to spin up a VPC in a single availability zone.  Mainly for my own benefit the rest of the post explains how the template works; hopefully it will be of use to other people as well.

VPC Template

I’ve chosen to write my template in YAML.  This is a relatively new feature of Cloudformation; previously templates had to be written in JSON.  YAML has a number of advantages, including the ability to have inline comments and (I think) a cleaner syntax.  I’m also seeing YAML being used in many other projects so it seemed to be a useful thing to learn.

All Cloudformation templates have the following structure:

---
AWSTemplateFormatVersion: "2010-09-09"
Description: My VPC Template

Resources:
    ...

The three hyphens on the first line are part of the YAML specification, indicating the start of a document.  It used to be that the AWSTemplateFormatVersion section was mandatory, but the latest documentation says that it’s now optional; I tend to include it anyway.  The Description section is also optional, but I would say that it’s best practice to include it.

Before we get to the Resources section I should point out that you can do many complex things to make your templates very flexible.  A lot of this flexibility is driven by the Parameters and Mappings sections.  I’ve chosen not to use these two features to keep this template as simple as possible; I may do a follow-up post showing how to use these sections to do deployment-time customisation.  One other section I’ve not included but should mention is the Outputs section.  This allows you to output values when AWS is deploying the stack based on your template.  You can use this to get values relating to the created resources, such as public IP addresses.
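Purely as an illustration (my template doesn’t include one), an Outputs section that returns the ID of the VPC declared below might look like this:

Outputs:
    VpcId:
        Description: The ID of the created VPC
        Value:
            Ref: VPC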

The Resources section is where you specify the items that you want AWS to create when deploying a stack based on the template.  I’ll now go through each of these in turn, explaining the parameters I’ve chosen.  The full template is available from GitHub.

VPC

VPC:
    Type: AWS::EC2::VPC
    Properties:
        CidrBlock: 10.1.0.0/16
        EnableDnsSupport: true
        EnableDnsHostnames: true
        InstanceTenancy: default
        Tags:
          - Key: Name
            Value: Cloudformation Test VPC

All resources have the same basic structure: the logical ID of the resource, a Type, and then a Properties section.  In this example the logical ID of the resource is VPC and the type is AWS::EC2::VPC.  The properties I’m setting are:

  • CidrBlock: The subnet for the VPC.  You need to make sure your VPC has enough IP address space to carve out all the subnets you need.  I’m using an RFC1918 range, though AWS supports any address range you want.
  • EnableDnsSupport: If set to true the AWS DNS server resolves hostnames for instances in the VPC.
  • EnableDnsHostnames: If set to true instances get allocated DNS hostnames, you need to have EnableDnsSupport set to true as well for this to work.
  • InstanceTenancy: You can have your instances run on dedicated hardware assigned to only you if this is set to dedicated.  Understandably setting this to dedicated costs more!

You can optionally set tags on resources to make it easier to manage your AWS account.  Setting the Name tag makes resources identifiable in the AWS web interface.

Internet Gateway

An Internet Gateway is an Amazon managed device that allows resources in your VPC to connect to the Internet.  As I want to connect to my VPC over the Internet and I want instances in my VPC to be able to download from the Internet I need to create an Internet Gateway.

InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
        Tags:
          - Key: Name
            Value: Internet Gateway

You only create a single Internet Gateway per VPC, even if your VPC spans multiple availability zones.  Amazon take care of making the Internet Gateway highly available.  As you can see the Internet Gateway doesn’t need any extra properties.

Creating an Internet Gateway is a two stage operation.  First, as above, you declare the gateway, then you need to attach it to your VPC:

AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
        VpcId:
            Ref: VPC
        InternetGatewayId:
            Ref: InternetGateway

The AttachGateway section has two properties, these are both references to other sections in your Cloudformation template.  Here we are referencing the VPC and the InternetGateway that we have already declared in the template.

Note that these types of attach sections (we’ll see more shortly) don’t have Tag properties.  Adding a Tag section will cause your template to fail.

Bastion Host Subnet

My VPC design has three subnets: one for the bastion host that will allow SSH access to my VPC from the Internet, one for the NAT Gateway that will allow my instances access to the Internet, and one for my worker instances that shouldn’t be reachable directly from the Internet.

We’ll start by creating the subnet for the bastion host:

BastionHostSubnet:
    Type: AWS::EC2::Subnet
    Properties:
        VpcId:
            Ref: VPC
        CidrBlock: 10.1.1.0/28
        MapPublicIpOnLaunch: true
        Tags:
          - Key: Name
            Value: Bastion Host Subnet

As well as the Type, we need to define the following properties for the subnet:

  • VpcId: This is a reference to the VPC which will contain the subnet.
  • CidrBlock: The IP address range for the subnet in CIDR notation.  Note that AWS reserves 5 addresses from the range, that’s why I’ve chosen a /28 subnet mask leaving me with 11 usable addresses.
  • MapPublicIpOnLaunch: With this set to true, instances launched into the subnet will be allocated a public IP address by default.  This means that any instances in this subnet will be reachable from the Internet, subject to Security Groups and Network ACLs.

Once the subnet has been declared we need to configure routing.  By default a VPC is created with a main route table which allows instances to send traffic to each other even if they are in different subnets.  However, we want instances on this subnet to be able to communicate across the Internet so we need to create a subnet specific route table that will route Internet traffic via the Internet Gateway we declared previously.

It’s a three step process to declare and configure the subnet route table.  Step one is declaration of the route table:

BastionHostSubnetRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
        VpcId:
            Ref: VPC
        Tags:
          - Key: Name
            Value: Bastion Host Subnet Route Table

The route table is a very simple object, all it contains is a Type, a reference to the VPC and a tag giving it a name.

Step two is to declare the route entry that will send Internet bound traffic to our Internet Gateway:

BastionHostInternetRoute:
    Type: AWS::EC2::Route
    DependsOn: InternetGateway
    Properties:
        DestinationCidrBlock: 0.0.0.0/0
        GatewayId:
            Ref: InternetGateway
        RouteTableId:
            Ref: BastionHostSubnetRouteTable

This section introduces a new item, DependsOn, which instructs AWS not to create this resource until the InternetGateway has been created.  This is how we ensure that resources are created in the correct order.  The DestinationCidrBlock describes which traffic we want this route to apply to; a value of 0.0.0.0/0 means all traffic.  It’s important to note that routes operate on a most-specific-match-first basis and 0.0.0.0/0 is the least specific of all routes.  This means that the VPC’s local route entry for 10.1.0.0/16 will match first, ensuring that traffic between instances does not leak out of the VPC.

The GatewayId reference specifies where traffic matching this route should be sent, in this case it’s to the Internet Gateway we previously declared.  The RouteTableId reference connects this route to the route table.

Finally, step three is to associate the route table with the subnet:

BastionHostSubnetRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
        RouteTableId:
            Ref: BastionHostSubnetRouteTable
        SubnetId:
            Ref: BastionHostSubnet

The route table association connects the route table (and route) we’ve just declared with the subnet we declared earlier.  With this in place any instances created within the bastion host subnet should be able to route traffic to and from the Internet (security groups and network ACLs permitting).

Bastion Host Security Group

Before we deploy a bastion host we need to declare a security group.  By default instances are firewalled off from all network traffic so the security group needs to describe what traffic to let in and out of the instance.

BastionHostSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
        GroupDescription: Allow SSH to Bastion Host
        VpcId:
            Ref: VPC
        SecurityGroupIngress:
          - IpProtocol: tcp
            FromPort: '22'
            ToPort: '22'
            CidrIp: 0.0.0.0/0
        SecurityGroupEgress:
          - IpProtocol: -1
            CidrIp: 0.0.0.0/0
        Tags:
          - Key: Name
            Value: Bastion Host Security Group

As you can see a security group has a number of properties you can configure:

  • GroupDescription: A free text field that you can use to describe what the security group allows.
  • VpcId: The VPC where we will be using the security group.
  • SecurityGroupIngress: This property describes the traffic we should allow through the security group to the instance.
    • IpProtocol: This should be pretty obvious: we’re interested in TCP traffic.
    • FromPort and ToPort: Combined, these describe the range of ports to which traffic should be allowed.
    • CidrIp: The range of IP addresses from which we should allow traffic that matches the IpProtocol and FromPort/ToPort range.
  • SecurityGroupEgress: This property describes traffic we should allow from the instance into the VPC and beyond.
    • IpProtocol: Same as before but this time we are using the value -1 which means all traffic types on all ports.
    • CidrIp: The range of IP addresses to which we should allow traffic that matches the IpProtocol setting (all traffic in this case).

Bastion Host Launch Configuration and Autoscaling

At this stage we could just create our bastion host instance and start using it; however, it’s better to expend the extra effort and create a launch configuration and autoscaling group.  By launching our bastion host instance from within an autoscaling group we benefit from AWS automatically replacing the instance should it die for any reason.

BastionHostLaunchConfig:
    Type: "AWS::AutoScaling::LaunchConfiguration"
    Properties:
        AssociatePublicIpAddress: true
        ImageId: ami-9398d3e0 # Amazon Linux in eu-west-1
        InstanceMonitoring: false
        InstanceType: t2.micro
        KeyName: TestStack
        PlacementTenancy: default
        SecurityGroups:
          - Ref: BastionHostSecurityGroup

Launch configurations have a large number of properties that you can configure, we’re only using a small subset here:

  • AssociatePublicIpAddress: Makes sure the instance has a public IP address when it launches; we don’t strictly need this as we configured the subnet to do this by default.
  • ImageId: The AMI to use to create the instance.  Note that I am using the Amazon Linux AMI in the eu-west-1 region.  If you’re in a different region, want to use a different operating system, or Amazon have released an updated version of their Linux, then you’ll need to change this value.
  • InstanceMonitoring: Setting this to true enables detailed monitoring for your instance; this costs extra so I don’t use it in throwaway environments.
  • InstanceType: The type (size) of instance you want.  I’m just using the small and cheap t2.micro.
  • KeyName: The SSH key pair to use to access this instance.  You need to have previously created the key pair, obviously your key pair will probably be called something different.
  • PlacementTenancy: Same as for the VPC, we don’t want dedicated hardware for our instance.
  • SecurityGroups: A reference to the security group we declared previously.

After we’ve declared our launch configuration we need to create the autoscaling group:

BastionHostScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
        LaunchConfigurationName:
            Ref: BastionHostLaunchConfig
        MinSize: '1'
        MaxSize: '1'
        VPCZoneIdentifier:
          - Ref: BastionHostSubnet
        Tags:
          - Key: Name
            Value: Bastion Host
            PropagateAtLaunch: true

The autoscaling group references the launch configuration we declared using the LaunchConfigurationName reference.  The other properties are:

  • MinSize and MaxSize: By setting these both to one the autoscaling group will create a single instance and replace it if it fails.
  • VPCZoneIdentifier: A list of references to subnets into which the instances will be launched.  If we had multiple subnets spread across multiple availability zones we could reference them here to create a highly available system.

NAT Gateway, Subnet, and Elastic IP

The bastion host is configured to allow SSH traffic into our VPC from the Internet; however, we also want our worker instances to be able to access the Internet, mainly so that they can download software updates.  To do this we need to create a NAT gateway, a public subnet to host it, and an elastic IP address.

NatGatewaySubnet:
    Type: AWS::EC2::Subnet
    Properties:
        VpcId:
            Ref: VPC
        CidrBlock: 10.1.1.16/28
        MapPublicIpOnLaunch: true
        Tags:
          - Key: Name
            Value: NAT Gateway Host Subnet

NatGatewaySubnetRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
        VpcId:
            Ref: VPC
        Tags:
          - Key: Name
            Value: NAT Gateway Subnet Route Table

NatGatewayInternetRoute:
    Type: AWS::EC2::Route
    DependsOn: InternetGateway
    Properties:
        DestinationCidrBlock: 0.0.0.0/0
        GatewayId:
            Ref: InternetGateway
        RouteTableId:
            Ref: NatGatewaySubnetRouteTable

NatGatewaySubnetRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
        RouteTableId:
            Ref: NatGatewaySubnetRouteTable
        SubnetId:
            Ref: NatGatewaySubnet

We declare the subnet for the NAT gateway in exactly the same way we did for the bastion host subnet.  The only differences are in the logical IDs for the resources we declare and the subnet IP address range we are using.

With the subnet declared we can declare the elastic IP address that we’ll assign to the NAT gateway:

NatGatewayEIP:
    Type: AWS::EC2::EIP
    Properties:
        Domain: vpc

The Domain property needs to be set to vpc as we are working within a VPC.

Finally we can create the NAT gateway:

NatGateway:
    Type: AWS::EC2::NatGateway
    DependsOn: AttachGateway
    Properties:
        AllocationId:
            Fn::GetAtt:
              - NatGatewayEIP
              - AllocationId
        SubnetId:
            Ref: NatGatewaySubnet

The AllocationId property is interesting: Fn::GetAtt is an intrinsic function.  Basically, what it does is get the AllocationId attribute of the NatGatewayEIP resource that we declared previously.  The SubnetId property is a reference to the subnet where the NAT gateway should be deployed.

Private Subnet

The declarations for the private subnet are much the same as the previous subnets.

PrivateSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
        VpcId:
            Ref: VPC
        CidrBlock: 10.1.2.0/24
        MapPublicIpOnLaunch: false
        Tags:
          - Key: Name
            Value: Private Subnet A

Note that MapPublicIpOnLaunch is set to false so that instances in this subnet don’t get a public IP address.

PrivateSubnetARouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
        VpcId:
            Ref: VPC
        Tags:
          - Key: Name
            Value: Private Subnet A Route Table

PrivateSubnetANatInternetRoute:
    Type: AWS::EC2::Route
    DependsOn: NatGateway
    Properties:
        DestinationCidrBlock: 0.0.0.0/0
        NatGatewayId:
            Ref: NatGateway
        RouteTableId:
            Ref: PrivateSubnetARouteTable

The Internet route for the private subnet is a little different: we declare the NatGatewayId property as a reference to the NAT gateway we declared previously.

PrivateSubnetARouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
        RouteTableId:
            Ref: PrivateSubnetARouteTable
        SubnetId:
            Ref: PrivateSubnetA

Private Subnet Security Group

The private subnet security group is pretty much the same as the bastion host security group:

PrivateSubnetASecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
        GroupDescription: Allow SSH from Bastion Host
        VpcId:
            Ref: VPC
        SecurityGroupIngress:
          - IpProtocol: tcp
            FromPort: '22'
            ToPort: '22'
            SourceSecurityGroupId:
                Fn::GetAtt:
                  - BastionHostSecurityGroup
                  - GroupId
        SecurityGroupEgress:
          - IpProtocol: -1
            CidrIp: 0.0.0.0/0
        Tags:
          - Key: Name
            Value: Private Subnet A Security Group

The difference is that in the SecurityGroupIngress property we don’t use the CidrIp property; instead we use SourceSecurityGroupId to link this security group to the bastion host security group, allowing SSH traffic only from instances in the bastion host security group.

Private Subnet Launch Configuration and Autoscaling Group

The final part of the puzzle is the launch configuration and autoscaling group for launching worker instances into the private subnet.

PrivateSubnetALaunchConfig:
    Type: "AWS::AutoScaling::LaunchConfiguration"
    Properties:
        AssociatePublicIpAddress: false
        ImageId: ami-9398d3e0 # Amazon Linux in eu-west-1
        InstanceMonitoring: false
        InstanceType: t2.micro
        KeyName: TestStack
        PlacementTenancy: default
        SecurityGroups:
          - Ref: PrivateSubnetASecurityGroup

PrivateSubnetAScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
        LaunchConfigurationName:
            Ref: PrivateSubnetALaunchConfig
        MinSize: '1'
        MaxSize: '1'
        VPCZoneIdentifier:
          - Ref: PrivateSubnetA
        Tags:
          - Key: Name
            Value: Worker Host
            PropagateAtLaunch: true

These are pretty much identical to the bastion host launch configuration and autoscaling group.  The changes are that the AssociatePublicIpAddress is set to false so that instances don’t get public IP addresses (they use the NAT gateway to access the Internet) and the logical IDs for various resources point to those in the private subnet.

Network Access Control Lists

If you’re familiar with AWS VPCs you’ll have noticed that I am not declaring any network access control lists (ACLs).  If we don’t declare any ACLs, AWS will create a default ACL that allows all inbound and outbound traffic, and our subnets will be associated with it.  In a production deployment you might want to use ACLs to limit traffic flows between subnets; a rough starting point is sketched below, but I’ll leave the details as an exercise for the reader.
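As a rough, hypothetical starting point, the sketch below declares an ACL for the private subnet that only allows SSH in from the bastion host subnet.  Remember that ACLs are stateless, so a real deployment would also need egress rules and entries for ephemeral return ports, which I’ve left out.

PrivateSubnetANetworkAcl:
    Type: AWS::EC2::NetworkAcl
    Properties:
        VpcId:
            Ref: VPC
        Tags:
          - Key: Name
            Value: Private Subnet A ACL

# Allow SSH into the private subnet from the bastion host subnet only
PrivateSubnetAAclEntrySsh:
    Type: AWS::EC2::NetworkAclEntry
    Properties:
        NetworkAclId:
            Ref: PrivateSubnetANetworkAcl
        RuleNumber: 100
        Protocol: 6
        RuleAction: allow
        CidrBlock: 10.1.1.0/28
        PortRange:
            From: 22
            To: 22

PrivateSubnetAAclAssociation:
    Type: AWS::EC2::SubnetNetworkAclAssociation
    Properties:
        SubnetId:
            Ref: PrivateSubnetA
        NetworkAclId:
            Ref: PrivateSubnetANetworkAcl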

Final Thoughts

Well that turned out far longer than I thought (just over 2800 words!).

Cloudformation templates have a huge amount of flexibility, so I’m sure that there are different, and probably better, ways to achieve what I’ve created here.  If you’ve got any tips please feel free to leave a comment.

The full template is available from my GitHub.

Installing OpenStack Keystone on Fedora

I have been playing a bit with cloud services, in particular Amazon Web Services, but I recently wanted to install OpenStack to see what all the hype was about and to better understand the underlying components and technologies.  It’s possible to do a full OpenStack install on a single server or virtual machine running Fedora using the RDO instructions; however, I wanted to do the build by hand.

I started with a minimal install of Fedora using the standard file system layout, ran yum update, and rebooted.  Once the system was back up I installed the RDO release RPM as per the RDO quickstart instructions:

sudo yum install http://rdo.fedorapeople.org/openstack-havana/rdo-release-havana.rpm

This gives access to pre-built RPMs for all of the OpenStack Havana components.  RDO makes use of a number of other components to provide a DevOps style approach to installation.  I didn’t want this extra functionality so I edited /etc/yum.repos.d/foreman.repo and /etc/yum.repos.d/puppetlabs.repo to disable both of those repositories.
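If you’d rather not edit the files by hand, flipping the enabled flag with sed achieves the same thing (assuming the repository definitions use enabled=1):

sudo sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/foreman.repo /etc/yum.repos.d/puppetlabs.repo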

OpenStack supports a variety of database backends, but the simplest and best documented seems to be MySQL.  Fedora has switched to using the MariaDB fork of MySQL so that’s what I installed, along with the MySQL module for Python:

sudo yum install mariadb mariadb-server MySQL-python
sudo systemctl start mariadb.service
sudo systemctl enable mariadb.service
sudo mysql_secure_installation

Note that only the database server needs the mariadb-server package.  Next I installed the OpenStack utils package:

sudo yum install openstack-utils

As well as the database, the other piece of infrastructure that OpenStack needs is a messaging service that provides AMQP.  The two main implementations of this are RabbitMQ and Qpid.  I’ve chosen to use Qpid:

sudo yum install qpid-cpp-server memcached

For simplicity I turned off authentication in Qpid by adding auth=no to /etc/qpidd.conf; you probably wouldn’t do this in a production deployment!  Start and enable qpidd:

sudo systemctl start qpidd.service
sudo systemctl enable qpidd.service

Keystone is the identity component of OpenStack, similar to IAM in AWS terms.  Install the Keystone packages:

sudo yum install openstack-keystone python-keystoneclient

Keystone needs to be configured to use the database we installed.  The openstack-config command allows us to set values in various config files without firing up vi.

sudo openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:keystone_db_pass@controller/keystone

The arguments to this command are the --set option indicating you want to set a value; the file that contains the value we want to set; the section within the file (if you edit the file with vi you can search for [sql]); the parameter we want to set: connection; and the value for the parameter.  In this case we’re setting the SQL Alchemy connection string which is an RFC1738 URL.

Now that the database connection is configured it can be initialised.  Note that you need to pass the same password value (in my case “keystone_db_pass”) as you configured in the SQL Alchemy connection string/URL:

sudo openstack-db --init --service keystone --password keystone_db_pass

You’ll be prompted for the database root password you set when you ran the mysql_secure_installation command.

Set up the admin token; the first command creates a random token and stores it in a shell environment variable so you can use it in subsequent commands:

ADMIN_TOKEN=$(openssl rand -hex 10)
sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
sudo keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

The second command initialises the certificates that Keystone uses to create the cryptographically strong authentication tokens that we will use later when accessing the service via the command line or API.  There’s a fuller discussion in the OpenStack Keystone documentation.

Now we can start the service:

sudo chown -R keystone:keystone /etc/keystone/ /var/log/keystone/keystone.log
sudo systemctl start openstack-keystone.service
sudo systemctl enable openstack-keystone.service

We need to set up a couple of environment variables so that we can use the command line tools.  OS_SERVICE_TOKEN is the admin token we created with the openssl command earlier (your value will differ from the example below).  OS_SERVICE_ENDPOINT is the URL for the Keystone API; I’m using the IP address 10.0.0.29, but you should use the appropriate hostname or IP address for your environment:

export OS_SERVICE_TOKEN="763237339bc02dd92bfb"
export OS_SERVICE_ENDPOINT="http://10.0.0.29:35357/v2.0"

With all of that done we can now start using the keystone command to actually create tenants, users, and services:

keystone tenant-create --name=admin --description="Admin Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |           Admin Tenant           |
|   enabled   |               True               |
|      id     | 4b7e1355bb4d4afb960da724a9dfa0fc |
|     name    |              admin               |
+-------------+----------------------------------+
keystone tenant-create --name=service --description="Service Tenant"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |          Service Tenant          |
|   enabled   |               True               |
|      id     | c2e553ac9d164c74aff6d1a130f0f099 |
|     name    |             service              |
+-------------+----------------------------------+

These two commands create our first two tenants.  In OpenStack, tenants can be thought of as groups that hold users and the other resources that clouds provide.  For example, in a public cloud a tenant might represent a customer of that cloud service, while in a private cloud it might represent a department or business line.  The admin tenant will hold the admin users for the cloud and the service tenant will hold the services that the cloud provides.  The names aren’t special; you could call them anything.

We can also use the keystone command to list the tenants:

keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 4b7e1355bb4d4afb960da724a9dfa0fc |  admin  |   True  |
| c2e553ac9d164c74aff6d1a130f0f099 | service |   True  |
+----------------------------------+---------+---------+

The next step is to create an admin user; you should give this user a better password than I’ve chosen here!

keystone user-create --name=admin --pass=admin --email=admin@example.org
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |        admin@example.org         |
| enabled  |               True               |
|    id    | 22f1020799b7425cabbf22837934d510 |
|   name   |              admin               |
+----------+----------------------------------+

Privileges in OpenStack are assigned to users through roles: the privileges are associated with the role, and the role is then associated with a user.  We’ve got the admin user, so the next step is to create the admin role.  In this case the role name is important as it needs to match the role name in the policy.json file that controls rights and access.

keystone role-create --name=admin
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|    id    | 676f70baed8e430799138acf75a3f8b3 |
|   name   |              admin               |
+----------+----------------------------------+

The final step is to tie the tenant, user, and role together:

keystone user-role-add --user=admin --tenant=admin --role=admin

To summarise, creating a user consists of four steps (there’s a worked example after the list):

  1. If necessary create a new tenant – keystone tenant-create.
  2. Create the new user – keystone user-create.
  3. If necessary create the new role (remember the role name must match that in the policy.json file) – keystone role-create.
  4. Tie the tenant, role, and user together – keystone user-role-add.
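For example, a hypothetical demo tenant and user (the names and password here are purely illustrative, and the role name must match one referenced in policy.json) could be created like this:

keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-create --name=demo --pass=demo_pass --email=demo@example.org
keystone role-create --name=Member
keystone user-role-add --user=demo --tenant=demo --role=Member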

In OpenStack all of the cloud resources are presented as services; this includes Keystone itself.  Our next step is to create the Keystone service and then make it available:

keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |    Keystone Identity Service     |
|      id     | dbb075345d404db5a64e33918a8e96f4 |
|     name    |             keystone             |
|     type    |             identity             |
+-------------+----------------------------------+

Having created the service we need to create endpoints for consumers to access the service.  Note that there are three different endpoint URLs; this is to support the common deployment scenario where the server hosting Keystone has three network interface cards – one for public access (i.e. users of the cloud), one for internal access (i.e. other services within the cloud), and one for admin access.  In this test deployment they’re all on the same interface card.  The --service-id parameter is the UUID that was returned as the id field by the keystone service-create command above.

keystone endpoint-create  --service-id=dbb075345d404db5a64e33918a8e96f4 --publicurl=http://10.0.0.29:5000/v2.0 --internalurl=http://10.0.0.29:5000/v2.0 --adminurl=http://10.0.0.29:35357/v2.0
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://10.0.0.29:35357/v2.0    |
|      id     | 35bfb42f44194228a66ec8a70b44493e |
| internalurl |    http://10.0.0.29:5000/v2.0    |
|  publicurl  |    http://10.0.0.29:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | dbb075345d404db5a64e33918a8e96f4 |
+-------------+----------------------------------+

We can now verify that the tenant, user, and service we’ve created are all working.  To do this we first need to clear the credentials and service endpoint we’ve been using so far:

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

We can now use the keystone command with the username, password, and service endpoint that we just created:

keystone --os-username=admin  --os-password=admin --os-auth-url=http://10.0.0.29:35357/v2.0 token-get

We can do the same thing, but additionally specifying the tenant name:

keystone --os-username=admin  --os-password=admin --os-tenant-name=admin --os-auth-url=http://10.0.0.29:35357/v2.0 token-get

OpenStack authentication works on the principle that you supply valid credentials to a service endpoint and in return you get a token which you present to the service when you make subsequent requests.  The previous two commands are using the token-get parameter to request a token.

It can get tedious to have to type in the username, password, tenant name, and endpoint parameters for each command, so OpenStack allows you to set these as environment variables:

export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.0.0.29:35357/v2.0

Which then allows you to shorten commands:

keystone user-list
+----------------------------------+-------+---------+-------------------+
|                id                |  name | enabled |       email       |
+----------------------------------+-------+---------+-------------------+
| 22f1020799b7425cabbf22837934d510 | admin |   True  | admin@example.org |
+----------------------------------+-------+---------+-------------------+

Note that putting passwords, especially admin ones, into environment variables probably isn’t best practice!

Now that command line access is working we can do exactly the same things using the Keystone API.  In the following example we make an HTTP POST request to the tokens URL passing our credentials as a JSON document in the request payload.

In response we get a token that we can use in further API calls, a service catalog detailing the service endpoints, and information about our user, role, and tenant.

curl -k -X 'POST' -v http://10.0.0.29:35357/v2.0/tokens -d '{"auth":{"passwordCredentials":{"username": "admin", "password":"admin"}, "tenantId":"4b7e1355bb4d4afb960da724a9dfa0fc"}}' -H 'Content-type: application/json'
* About to connect() to 10.0.0.29 port 35357 (#0)
*   Trying 10.0.0.29...
* Connected to 10.0.0.29 (10.0.0.29) port 35357 (#0)
> POST /v2.0/tokens HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.0.0.29:35357
> Accept: */*
> Content-type: application/json
> Content-Length: 121
> 
* upload completely sent off: 121 out of 121 bytes
< HTTP/1.1 200 OK
< Vary: X-Auth-Token
< Content-Type: application/json
< Content-Length: 2347
< Date: Sun, 03 Nov 2013 17:22:39 GMT
< 
{
  "access": {
    "token": {
      "issued_at": "2013-11-03T17:22:39.311048", 
      "expires": "2013-11-04T17:22:39Z", 
      "id": "MIIErwYJKoZIhvcNAQcCoIIEoDCCBJwCAQExCTAHBgUrDgMCGjCCAwUGCSqGSIb3DQEHAaCCAvYEggLyeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMS0wM1QxNzoyMjozOS4zMTEwNDgiLCAiZXhwaXJlcyI6ICIyMDEzLTExLTA0VDE3OjIyOjM5WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIkFkbWluIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogIjRiN2UxMzU1YmI0ZDRhZmI5NjBkYTcyNGE5ZGZhMGZjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4wLjAuMjk6MzUzNTcvdjIuMCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMC4wLjAuMjk6NTAwMC92Mi4wIiwgImlkIjogIjIxMjhiOWExMDc0OTQ3ZDU4NDI0YWQwOTJmNTM3MTdhIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMC4wLjI5OjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogIjIyZjEwMjA3OTliNzQyNWNhYmJmMjI4Mzc5MzRkNTEwIiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiNjc2ZjcwYmFlZDhlNDMwNzk5MTM4YWNmNzVhM2Y4YjMiXX19fTGCAYEwggF9AgEBMFwwVzELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBVVuc2V0MQ4wDAYDVQQHDAVVbnNldDEOMAwGA1UECgwFVW5zZXQxGDAWBgNVBAMMD3d3dy5leGFtcGxlLmNvbQIBATAHBgUrDgMCGjANBgkqhkiG9w0BAQEFAASCAQCUcTFJU550veZlBYtXQos0Q24BJVbw2acBSZ2p42Ifw2itZxHRa6RpYKyPhltTE93v8zbLbNLVS+KI-+U-SP3zsTzWrrFxS2Bt7AWh2qPhPossGqmxmv3DnFZPk5bOXk3fMWMRnYydsH5hFknmhilbPX4EwJNV6qLyZvDjpg4szIc8YBVludPiy-6aGrv7eWNZUhMi7zz3b7SSYJ0gTTB7brTzmtcH946ayY33a0lx8fSlcfUWV22Ey7BWPFHzVQxzF+2Ho46uIqPDs3ohV9q5I-XSOwTvA+lWvI35VbFHnBKnhjpYGrGAjexhQyTD7InCGYejKCu6H1yedr2c0aci", 
      "tenant": {
        "description": "Admin Tenant", 
        "enabled": true, 
        "id": "4b7e1355bb4d4afb960da724a9dfa0fc", 
        "name": "admin"
      }
    }, 
    "serviceCatalog": [{
      "endpoints": [{
        "adminURL": "http://10.0.0.29:35357/v2.0",
        "region": "regionOne",
        "internalURL": "http://10.0.0.29:5000/v2.0",
        "id": "2128b9a1074947d58424ad092f53717a",
        "publicURL": "http://10.0.* Connection #0 to host 10.0.0.29 left intact
0.29:5000/v2.0"
      }],
      "endpoints_links": [],
      "type": "identity",
      "name": "keystone"
    }],
    "user": {
      "username": "admin",
      "roles_links": [],
      "id": "22f1020799b7425cabbf22837934d510",
      "roles": [{
        "name": "admin"
      }],
      "name": "admin"
    },
    "metadata": {
      "is_admin": 0,
      "roles": ["676f70baed8e430799138acf75a3f8b3"]
    }
  }
}

The next example uses the authentication token we’ve just received to make an API call listing the extensions that are available in this OpenStack instance; note that this is an HTTP GET request so there’s no payload this time:

curl -k -D - -H "X-Auth-Token: MIIErwYJKoZIhvcNAQcCoIIEoDCCBJwCAQExCTAHBgUrDgMCGjCCAwUGCSqGSIb3DQEHAaCCAvYEggLyeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMS0wM1QxNzoyMjozOS4zMTEwNDgiLCAiZXhwaXJlcyI6ICIyMDEzLTExLTA0VDE3OjIyOjM5WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIkFkbWluIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogIjRiN2UxMzU1YmI0ZDRhZmI5NjBkYTcyNGE5ZGZhMGZjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xMC4wLjAuMjk6MzUzNTcvdjIuMCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xMC4wLjAuMjk6NTAwMC92Mi4wIiwgImlkIjogIjIxMjhiOWExMDc0OTQ3ZDU4NDI0YWQwOTJmNTM3MTdhIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTAuMC4wLjI5OjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiYWRtaW4iLCAicm9sZXNfbGlua3MiOiBbXSwgImlkIjogIjIyZjEwMjA3OTliNzQyNWNhYmJmMjI4Mzc5MzRkNTEwIiwgInJvbGVzIjogW3sibmFtZSI6ICJhZG1pbiJ9XSwgIm5hbWUiOiAiYWRtaW4ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsiNjc2ZjcwYmFlZDhlNDMwNzk5MTM4YWNmNzVhM2Y4YjMiXX19fTGCAYEwggF9AgEBMFwwVzELMAkGA1UEBhMCVVMxDjAMBgNVBAgMBVVuc2V0MQ4wDAYDVQQHDAVVbnNldDEOMAwGA1UECgwFVW5zZXQxGDAWBgNVBAMMD3d3dy5leGFtcGxlLmNvbQIBATAHBgUrDgMCGjANBgkqhkiG9w0BAQEFAASCAQCUcTFJU550veZlBYtXQos0Q24BJVbw2acBSZ2p42Ifw2itZxHRa6RpYKyPhltTE93v8zbLbNLVS+KI-+U-SP3zsTzWrrFxS2Bt7AWh2qPhPossGqmxmv3DnFZPk5bOXk3fMWMRnYydsH5hFknmhilbPX4EwJNV6qLyZvDjpg4szIc8YBVludPiy-6aGrv7eWNZUhMi7zz3b7SSYJ0gTTB7brTzmtcH946ayY33a0lx8fSlcfUWV22Ey7BWPFHzVQxzF+2Ho46uIqPDs3ohV9q5I-XSOwTvA+lWvI35VbFHnBKnhjpYGrGAjexhQyTD7InCGYejKCu6H1yedr2c0aci" -X 'GET' -v http://10.0.0.29:35357/v2.0/extensions  -H 'Content-type: application/json'

In response we get an HTTP 200 OK from the server and a JSON document that lists the available extensions:

{
  "extensions": {
    "values": [{
      "updated": "2013-07-07T12:00:0-00:00", 
      "name": "OpenStack S3 API", 
      "links": [{
        "href": "https://github.com/openstack/identity-api", 
        "type": "text/html", 
        "rel": "described by"
      }], 
      "namespace": "http://docs.openstack.org/identity/api/ext/s3tokens/v1.0", 
      "alias": "s3tokens", 
      "description": "OpenStack S3 API."
    }, {
      "updated": "2013-07-11T17:14:00-00:00", 
      "name": "OpenStack Keystone Admin", 
      "links": [{
        "href": "https://github.com/openstack/identity-api", 
        "type": "text/html", 
        "rel": "described by"
      }], 
      "namespace": "http://docs.openstack.org/identity/api/ext/OS-KSADM/v1.0", 
      "alias": "OS-KSADM", 
      "description": "OpenStack extensions to Keystone v2.0 API enabling Administrative Operations."
    }, {
      "updated": "2013-07-07T12:00:0-00:00", 
      "name": "OpenStack EC2 API", 
      "links": [{
        "href": "https://github.com/openstack/identity-api",
        "type": "text/html",
        "rel": "described by"
      }],
      "namespace": "http://docs.openstack.org/identity/api/ext/OS-EC2/v1.0", 
      "alias": "OS-EC2",
      "description": "OpenStack EC2 Credentials ba* Connection #0 to host 10.0.0.29 left intact
ckend."
    }, {
      "updated": "2013-07-23T12:00:0-00:00",
      "name": "Openstack Keystone Endpoint Filter API",
      "links": [{
        "href": "https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3-os-ep-filter-ext.md",
        "type": "text/html",
        "rel": "described by"
      }],
      "namespace": "http://docs.openstack.org/identity/api/ext/OS-EP-FILTER/v1.0",
      "alias": "OS-EP-FILTER",
      "description": "Openstack Keystone Endpoint Filter API."
    }]
  }
}

At this point we’ve got Keystone up and running and demonstrated that we can use the service both through the command line tools and the API.  The next step is to add additional OpenStack services that will make use of Keystone.

One thing I found confusing the first time I went through a Keystone deployment was the variety of users and passwords that I needed to create, so here’s a summary:

  1. The root or admin user for the database you are using.  In my case this was the MySQL root user password, set when I ran the mysql_secure_installation command.
  2. The password that Keystone uses when accessing its own database, set when configuring and initialising the Keystone database.
  3. The Keystone admin token.  This is effectively the root password for Keystone and is stored unencrypted in the /etc/keystone/keystone.conf file.  It should only be used during initial Keystone deployment and configuration.
  4. Finally, the Keystone service admin user that you should create as soon as the service is up and running.  This is the account that you’ll use to perform all ongoing admin tasks.  Best practice would be to create individual accounts for all users that need admin privileges and assign them to the admin role.

AWS Command Line Tools for Mac OS X

Just a quick guide to get the Amazon Web Services (AWS) command line tools installed and configured on an Apple Mac running Mountain Lion.

The first task was to get PIP installed:

sudo easy_install pip

Then it’s a simple case of using PIP to install the AWS CLI:

sudo pip install awscli

After a few minutes you should have the CLI tools installed.  The final task is to set up your credentials.  Create the file $HOME/.aws/config; it should contain something like the following:

[default]
aws_access_key_id = YOURKEYHERE
aws_secret_access_key = YOURSECRETACCESSKEYHERE
region = eu-west-1

You should replace these values with your own access keys and preferred region.  For extra credit, if you’re a Bash shell user, you can enable command completion:

complete -C aws_completer aws
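To confirm that the tools and your credentials are working, you could try a simple read-only call such as:

aws ec2 describe-regions --output table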

There’s much more information on the AWS CLI available from Amazon.