Cloud Security Fitness Guide – Exercise #10: Watch World-Readable and Listable S3 Bucket Policies

S3 has been around for quite some time. It may be the oldest service in Amazon’s ever-expanding collection of web services. As a result, it carries some legacy security controls that can cause confusion when you try to secure it.

S3’s maturity has also made it a target for people trolling for keys, passwords, and data they should not have access to. There have been several recent examples where an AWS key was compromised through publicly accessible S3 content.

If you store intellectual property, source code, or other data that is important to your business in S3, it is important to understand how access to it is controlled.

S3 has a default deny rule, so if you do nothing, only the account owner will ever be able to use S3. However, a quick review reveals three places where you can grant additional access: IAM policies, S3 bucket policies, and S3 Access Control Lists (ACLs).

Each one of these, or any combination of them, can be used to control access to S3, but as your use of the service grows over time, it is easy to lose track of where access is being granted and by what. That can open up security holes you did not know existed and put your data at risk of loss or compromise.

The risk is compounded when multiple people have managed your S3 service over time. Each individual may have had their own strategy for how best to secure access. When overlaid, these methods become hard to manage, difficult to visualize, and create opportunities for missed security checks against the latest best practices.

As an example, S3 predates IAM. The security control in place before IAM was, and still is, the ACL. Because ACLs predate IAM, they do not evaluate IAM users, and you can inadvertently allow access where you thought the default deny rule kicked in.

AWS has a good article that helps explain this, titled “IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources),” so we won’t repeat the details here; the goal is simply to make sure you are aware of the interaction.

More importantly, that post includes a graphic that helps explain how these controls are evaluated together.

Given that you can control access to S3 in three different places, we highly recommend that you choose one and use it consistently across your AWS account. This will go a long way toward isolating the security of your S3 environment, and it simplifies where you go to audit your S3 access as well.

So, as a security best practice, avoid S3 ACLs if you can. While they do offer an easy way to configure access, they should be considered a legacy security control and not used.

This leaves S3 bucket policies and IAM Policies. Which one of these you choose going forward should be based on a couple of things. First, which are you more experienced in managing based on your current workflow today?

If you are leveraging one more than the other, and it is working, it is okay to stick with that one. If you haven’t made a decision yet, I am going to reiterate what AWS includes in the blog post above:

  • If you’re more interested in “What can this user do in AWS?” then IAM policies are probably the way to go. You can easily answer this by looking up an IAM user and then examining their IAM policies to see what rights they have
  • If you’re more interested in “Who can access this S3 bucket?” then S3 bucket policies will likely suit you better. You can easily answer this by looking up a bucket and examining the bucket policy

Pick one, and stick with it. Make it a policy going forward and try to stay away from mixing them. It will be easier in the long run, both for you and for the people who follow in your footsteps.

It may also be a valuable use of time, once you have chosen a method, to quickly audit your current policies and look for any place where you may have doubled up permissions.
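
For example, a quick audit like that can be scripted with the AWS CLI. This is only a sketch, and the bucket name below is a placeholder:

# List your buckets, then inspect the policy and ACL on each one of interest.
aws s3api list-buckets --query 'Buckets[].Name' --output text

# Returns an error if no bucket policy is attached, otherwise shows the policy.
aws s3api get-bucket-policy --bucket example-bucket

# Look for grants to "AllUsers" or "AuthenticatedUsers" in the ACL output.
aws s3api get-bucket-acl --bucket example-bucket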

Remember, S3 has a default deny rule, but once you allow an action, unless you specifically deny someone (or everyone), you may have opened up S3 wider than you had intended.

While an extreme example, this may help. The Hosting a Website on S3 documentation includes a bucket policy that ensures an anonymous user (a web browser) can read any content in a bucket. If you put this policy on a bucket, it has the effect of a global allow rule for anyone not specifically denied access to an object.
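
That policy looks roughly like the following (the bucket name is a placeholder). Note the wildcard principal: every object in the bucket becomes readable by anonymous GET requests.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}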

If you were to put an IAM policy in place that denied access to that bucket, it would only affect the authenticated IAM users listed in your IAM policy. Everyone else would still be able to GET objects in that bucket.

Granted, they could not list them because that permission is not there, but if they had the path, nothing would prevent them from getting the object since this rule allows it.

S3 is a mature service and does offer very good security. Take the time to make sure the resultant set of policies does what you expect it to do. Ideally, choose a single type of policy, either IAM policies or S3 bucket policies, to simplify where you audit this, and keep it that way.

This will help ensure your data is kept under your control.

A quick recap of our past AWS Best Practice posts:

  1. Disable Root API Access Key and Secret Key
  2. Enable MFA Tokens Everywhere
  3. Reduce Number of IAM Users with Admin Rights
  4. Use Roles for EC2
  5. Least Privilege: Limit what IAM Entities Can Do with Strong Policies
  6. Rotate all the Keys Regularly
  7. Use IAM Roles with STS AssumeRole Where Possible
  8. Use AutoScaling to Dampen DDoS Effects
  9. Do Not Allow 0.0.0.0/0 Unless You Mean It
  10. Watch World-Readable and Listable S3 Bucket Policies

Cloud Security Fitness Guide – Exercise #9: Do Not Allow 0.0.0.0/0 Unless You Mean It

In the last post, John Martinez wrote about how Autoscaling can help an application deployed on AWS survive an attack.  While that is great, there is an actual fiscal cost to that type of mitigation.

Granted, the cost is most likely worth the mitigation, but what else could be done before making that investment?  In this post we will review a less costly way to mitigate an attack before we scale up to absorb it.

Basically, we want to show you how to block traffic before it triggers a need to scale.

AWS provides the tools necessary to help control what traffic is allowed, and in this blog, we are going to talk about Security Groups and their relatives, Network Access Control Lists (NACLs).

While these groups are not traditional firewalls, they are very effective at controlling network and port traffic. If you do anything on AWS other than store data on S3 or use Route53 for DNS, you are more than likely to run into the need to configure Security Groups.

Much like a basic firewall, the principal purpose of a Security Group is to allow only the traffic you want, in the direction you want. Rather than be negative, it may help to focus on what these groups allow.

An old network sage once told me, “Block everything, only allow in what you need,” and in many cases he was spot on. If you try to figure out what to block all the time, that may be your full time job and a pretty negative one at that.

It is much brighter to focus on what to allow, assuming that if you don’t allow it, it is blocked. Yes, the glass is half full.

So, do you need to allow all traffic from 0.0.0.0/0?  In AWS Network terms, that means everyone, every machine, everywhere has the ability to make a connection to your AWS resources.

Everyone and everything on the Internet can establish a connection to your resources! Oh, and by default, this is also the access you are granting all of your systems for outbound connections: everyone, everything, everywhere.

Another way to look at it is as if each zero were a wildcard, so instead of 0.0.0.0/0, see it as *.*.*.*/*. In some cases, this may be exactly what you want.  But in many, it may be exactly what you don’t.

Let’s break this down into two directions.

For the sake of this article, inbound connections are those allowed into your application.  Customers would be an example of inbound connections to a web server, but you may have other applications that need to make inbound connections, such as the web server connecting to your database.

The question here is, who are you inviting into your house when they ring the doorbell?  Everyone?  If a delivery person arrives with a package at your front door for someone who is in the other room, do they just open the door and invite themselves in?  Insert 0.0.0.0/0 and you have basically opened the door to the world.

For your basic web application, you may actually want to allow everyone access; blocking that traffic could be detrimental to your business.  In this case, you should consider allowing access only on the ports your application will respond on.

The most common would be port 80 for unencrypted web traffic and port 443 for the encrypted traffic.  This would cut the allow rule down to two lines.  This one change has effectively reduced the threat surface from thousands of ports down to two.

Not bad.  In this case, you may also want to question if you want to leave port 80 open.  In today’s security landscape, more and more applications are trending toward always encrypting traffic, so only allowing in port 443 is completely acceptable in these cases.

Now that your customers can gain access to just your application, you want to be able to administer the web server, right?  Well, some would say absolutely not, you must deploy infrastructure as code and never actually admin a box manually.

Ok, for those that can, I completely agree.  There is no need for administrative access to the instance.  However, for many, at some point in time, they will need to do something on the instance and need remote access.  This is where you should do two things.

  • Only allow access from the origin IP where you will administer your instance, and only on the specific port.  Be very explicit here.  http://checkip.amazonaws.com/ is one way to determine your origin IP.  For example, by default, SSH would be 22 and RDP would be 3389.  Again, we discourage the defaults, but this would be an allow rule something like 64.79.144.10/32 to that one port.
  • The other recommended security best practice is to only turn this access on when needed and remove it when not.  Yes, this adds an additional step and removes some convenience, but if you only allow access when access is needed, you have not only reduced the attack surface with the explicit rule above, you have also limited the time it is exposed.  This can all be scripted (see the sketch after this list), and if you are going through the steps to administer an instance, you should factor in turning remote access on and off so it is open only when needed.
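
As one illustration, and only a sketch (the security group ID and admin IP below are placeholders), the AWS CLI can add a narrowly scoped SSH rule before a maintenance window and remove it afterward:

# Open SSH from a single admin IP just before you need it...
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.10/32

# ...and remove the same rule as soon as you are done.
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.10/32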

So, where should 0.0.0.0/0 be configured?

Yes, that was a trick question.  A good answer would be, “Do not allow 0.0.0.0/0 unless you mean it!”  However, my question was actually about where in AWS  you can configure access.  There are a total of four places.

Security Groups and Network Access Control Lists (NACL) both have inbound and outbound rules.  They are like defensive layers in getting to your application.

Here is a graphic (security diagram) to help visualize these two similar features in a VPC. (Keep in mind that if you’re in EC2-Classic with no VPC, you only have Security Groups, no NACLs.)

As you can see, Security Groups are closest to your application, while NACLs are closest to the inbound traffic. So, from a defense-in-depth perspective, you want to limit the broadest amount of traffic furthest away from your application.  With each layer in, you become more and more granular in access.

This is actually not different than many security implementations you may encounter on a daily basis.

The next time you walk into a building, notice how there are broad security boundaries on the exterior and as you proceed through, security becomes more and more granular with some rooms you can freely access, while others are more controlled, and yet others you are not allowed entry.  Inbound access into your application is similar.

In the context above, where should 0.0.0.0/0 be configured?  If you must allow world access to your application, you will need to configure it both on the NACL and the Security Group.  However, you can limit the ports the world has access to with the Security Group.

If you only need to allow specific network access to your application, you can limit it on the NACL, thus preventing anything you did not define from making it into your Security Group.

Think of a port scanner: if denied at the NACL, the scanner cannot discover which ports you have open, since those are configured on the Security Group.  The NACL can also be thought of as the perimeter boundary, or your first checkpoint.

While we are talking about NACLs, this is the one place where you can configure an explicit deny rule.  Security Groups are very focused on what you allow, denying everything that is not allowed.

However, NACLs can be configured to deny traffic, and contrary to the first paragraph, that is a good thing.  Why?

Well, first, if you discover you are being attacked and the origin IPs are very specific, you can quickly and easily block them at the NACL.  This is for a DoS where the origin IPs are known.
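
As a sketch of what that might look like (the NACL ID, rule number, and attacker IP are placeholders), a deny entry can be added with the AWS CLI; entries with lower rule numbers are evaluated first:

# Deny all traffic from a known attacking IP before the broader allow rules are reached.
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 90 \
    --protocol -1 \
    --rule-action deny \
    --cidr-block 198.51.100.23/32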

For a DDoS, this is not so easy, but maybe the sources are not evenly distributed and you could put up some deny rules to help.  In any event, this is where familiarity with your logs, and partnering with AWS Support to identify the attack, can help.

So much of the discussion thus far has been about inbound rules, but what about outbound?  Configuring outbound allow rules can be done both with Security Groups and NACLs.

Deciding how to configure them is no different than inbound.  The difference here is that you own the application, so you know what it needs to do.

Both Security Groups and NACLs allow all outbound traffic by default. This means that your application servers will have no problem browsing the web or making any connections needed out of the box.  

But the question you should ask yourself is, should they?  Again, there are no limits on what your applications can do: they can browse the internet, they can send email, they can, well, they can do anything.

In many cases, as a step to prevent the applications from doing things you may not want them to, you can limit the outbound communications.

Remember that once you configure a Security Group or NACL, anything not configured will be denied.  Another aspect to carefully consider here is that Security Groups are stateful while NACLs are not.

This means that if you configure outbound rules on your Security Groups, they will not impact inbound sessions, but if you configure outbound rules on a NACL, you will need to allow the outbound traffic back to the origin IP to establish a session.

As a best practice, it may be easier to think of using Security Groups to control ports, while NACLs are used to limit networks.  In this example, ask which ports your application needs to make connections to (and why).

Configure this per application group on the Security Group, while leaving NACLs open for public-facing web applications and implementing explicit network rules where needed.

For a two-tiered web application, with web servers in one group and the database in another, security could be implemented with an inbound NACL that allows connections to the web servers from the world, while the database only allows inbound connections from the web servers.

For inbound Security Groups, the web servers would only allow port 443 connections, while the database would only allow inbound 3306 (for MySQL) from the web server Security Group (yes, you can use the security groups themselves so you don’t need to keep track of the instance IPs!)
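
As a sketch of that database rule (the group IDs are placeholders), the AWS CLI lets you name the web server Security Group as the source instead of tracking instance IPs:

# Allow MySQL only from instances that belong to the web tier Security Group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaa1111bbbb2222c \
    --protocol tcp --port 3306 \
    --source-group sg-0ddd3333eeee4444f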

For outbound connections, you could remove the default outbound rules from both the web server and database Security Groups to prevent those instances from initiating connections to the internet.  On the NACLs, the web server subnet would allow all outbound traffic to ensure sessions are able to be established, while the database subnet would only allow outbound connections to the web server private subnet IP range.

More examples can be found on the AWS web site for Security in your VPC.

So, the question is now, when should 0.0.0.0/0 be configured?

A quick recap of our past AWS Best Practice posts:

  1. Disable Root API Access Key and Secret Key
  2. Enable MFA Tokens Everywhere
  3. Reduce Number of IAM Users with Admin Rights
  4. Use Roles for EC2
  5. Least Privilege: Limit what IAM Entities Can Do with Strong Policies
  6. Rotate all the Keys Regularly
  7. Use IAM Roles with STS AssumeRole Where Possible
  8. Use AutoScaling to Dampen DDoS Effects
  9. Do Not Allow 0.0.0.0/0 Unless You Mean It
  10. Watch World-Readable and Listable S3 Bucket Policies

Cloud Security Fitness Guide – Exercise #8: Use AutoScaling to Dampen DDoS Effects

We’re switching the series up a little bit and going to pay some attention to the network layer for a couple of posts. There are important configuration best practices we should follow.

As part of the Shared Security Responsibility model, AWS is committed to Security in the AWS cloud. This means they secure the foundation upon which applications on AWS are built.

However, you are responsible for securing the layers on top of that foundation, including how applications react to DoS and DDoS attacks.

Amazon Web Services Shared Responsibility Model

DoS and DDoS attacks are attempts to make an application or website inoperable by overwhelming it. These attacks come either from a single attacker (DoS) or are coordinated across a group of collaborating attackers (DDoS).

The very public nature of AWS makes any deployed resource a prime target for DDoS attacks. These include Elastic Load Balancers (ELBs) and EC2 instances. AWS has an excellent white paper on how to mitigate DDoS attacks.

The easiest approach to take when trying to prevent a service interruption  is to absorb the attack. There are other more complicated and costly approaches such as deploying advanced and/or application firewalls, and in some cases that’s the approach needed.

However, there’s a relatively lower-cost and effective solution to absorb DDoS attacks: AutoScaling.

Most of the time, a publicly-available site’s traffic will be directed by an ELB. The underlying compute instances that make up the ELB are managed by AWS directly, and are built to scale horizontally and vertically without intervention or advance planning.

Meaning, as traffic to your site increases, so scales the ELB. ELBs also only direct TCP traffic. This means that attack types that use protocols other than TCP will not reach your underlying applications.

However, all of that TCP traffic needs to be directed at something that can process the data contained therein. Those are the EC2 compute instances running the web or application servers you manage. When the ELB scales, in most cases the instances behind it need to scale in proportion.

As described in the AWS DDoS white paper and AutoScaling service documentation, events can be triggered  to automatically launch new EC2 instances running applications in reaction to an increase in network traffic.

Running application instances in an AutoScaling group is good AWS practice anyway, since doing so can automatically give applications resiliency and availability if configured appropriately.

For example, let’s say we set a condition for AutoScaling to launch two new application EC2 instances when the amount of network activity crosses a certain threshold.
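
A minimal sketch of that condition with the AWS CLI might look like the following; the group name, policy name, and threshold are placeholders you would tune to your own traffic patterns:

# Scale out by two instances whenever the policy is triggered.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name scale-out-on-traffic \
    --adjustment-type ChangeInCapacity \
    --scaling-adjustment 2

# Fire that policy when average NetworkIn stays above the threshold for two periods.
aws cloudwatch put-metric-alarm \
    --alarm-name web-asg-high-network-in \
    --namespace AWS/EC2 --metric-name NetworkIn \
    --dimensions Name=AutoScalingGroupName,Value=web-asg \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold 50000000 --comparison-operator GreaterThanThreshold \
    --alarm-actions <policy ARN returned by put-scaling-policy>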

This trigger would already allow your site to scale based on normal, legitimate demand. However, if abnormal, attack traffic were to come in to the site, AutoScaling would also trigger a scale up event, launching new EC2 instances to meet demand and process requests.

This means your service remains operational during the attack.  Business continues as normal. Because the attention span of most attackers is short, most of them will move on to their next target.

And once the attack is over, AutoScaling will automatically scale down the number of EC2 instances if configured to do so.

The price to pay for the increase in instance hours to cover the attack is well-justified so that business as usual continues. Think of it as one of the cheapest and most effective insurance policies on the AWS cloud!

  1. Disable Root API Access Key and Secret Key
  2. Enable MFA Tokens Everywhere
  3. Reduce Number of IAM Users with Admin Rights
  4. Use Roles for EC2
  5. Least Privilege: Limit what IAM Entities Can Do with Strong Policies
  6. Rotate all the Keys Regularly
  7. Use IAM Roles with STS AssumeRole Where Possible
  8. Use AutoScaling to Dampen DDoS Effects
  9. Do Not Allow 0.0.0.0/0 Unless You Mean It
  10. Watch World-Readable and Listable S3 Bucket Policies

Cloud Security Fitness Guide – Exercise #6: Rotate all the Keys Regularly

In the previous article, we had a pretty deep discussion on how and why to limit privilege in the AWS IAM service. This time, I’ll continue down the IAM path and talk a little about key management for IAM.

I’ve already discussed why EC2 instances having API access keys is a bad thing. Next time, I’ll talk about how users and other automated processes can use roles to get away from the whole key management business. However, there are some places where API keys still have to be used.

For example, if you manage a Continuous Integration tool like Jenkins outside of AWS and in your on-premise environment, there is no way you could use Roles for EC2.  You’d have to create an IAM user and generate an API Access Key and Secret key to place on that Jenkins server.

AWS recommends as a best practice that all credentials, passwords and API Access Keys alike, be rotated on a regular basis. If a credential is compromised, this limits the amount of time that the key is valid.

One best practice I followed was that API Access Keys were rotated every 90 days. My process was simple, but burdensome: 1) An operator tracked the age of an Access Key; 2) The operator created a new Access Key; 3) The operator then supplied the new Access Key to the automation process; 4) After testing and deploying, the old Access Key was deactivated. Eventually, or at the next rotation, the old Access Key was deleted. This process can be made easier with encrypted data snippet mechanisms like Chef’s Encrypted Data Bags.
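
Each of those steps maps to an AWS CLI call. This is only a sketch; the user name and key ID below are placeholders:

# 2) Create a new Access Key for the automation user.
aws iam create-access-key --user-name jenkins-deploy

# 4) After the new key is tested and deployed, deactivate the old one...
aws iam update-access-key --user-name jenkins-deploy \
    --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive

# ...and eventually delete it.
aws iam delete-access-key --user-name jenkins-deploy \
    --access-key-id AKIAIOSFODNN7EXAMPLE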

The AWS Security Blog has a great article outlining a very similar process, and in that process they describe how to get the age of API Access Keys using the AWS CLI iam list-access-keys command.

I find another pair of AWS CLI commands useful for getting the age of all users’ (including the root account user’s) API Access Keys and other credential data: iam generate-credential-report and iam get-credential-report.

The Credentials Report page describes all of the fields that come out of the generated CSV file. The CSV data is base64 encoded, so you’ll have to do some Linux command-line wizardry to get the data out:
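
One way to do that, as a sketch, is to pull just the report content out of the response and decode it:

aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 --decode > credential-report.csv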

Sample Credential Report output

As you can see, the IAM Credential Report gives me a different view of API Access Key age across all IAM users in my AWS account.

A quick recap of our past AWS Best Practice posts:

  1. Disable Root API Access Key and Secret Key
  2. Enable MFA Tokens Everywhere
  3. Reduce Number of IAM Users with Admin Rights
  4. Use Roles for EC2
  5. Least Privilege: Limit what IAM Entities Can Do with Strong Policies
  6. Rotate all the Keys Regularly
  7. Use IAM Roles with STS AssumeRole Where Possible
  8. Use AutoScaling to Dampen DDoS Effects
  9. Do Not Allow 0.0.0.0/0 Unless You Mean It
  10. Watch World-Readable and Listable S3 Bucket Policies

Cloud Security Fitness Guide – Exercise #7: Use IAM Roles with STS AssumeRole

We are more than halfway through the top ten, so let’s finish up the IAM discussion before jumping into some of the top AWS configuration areas! This post will wrap up our discussion of controls for individual people in the IAM service by using roles to simplify management and create a more secure environment.

Earlier in the series, we covered how and why you should use roles for EC2 instances.  The premise for this was to make it easier for your resources to communicate securely and reduce the management burden by leveraging the AWS Security Token Service.

How often is it that you are able to both become more secure and simplify management?  While the two sound like opposites, they are actually what we strive for.

Any time you can make it easier for users to be more secure, you are more likely to get adoption.  Whereas, if you make security too complicated, it can actually result in less security, in practice.

As an example, if you forced all users to use a randomly-generated 24-character password that was impossible to memorize, how many would revert to writing their password down or storing it someplace that may not meet security guidelines?

Sure, the password itself may seem to increase security, but in practice, it actually creates bad habits and reduces security.

Today, we are going to extend the roles for EC2 and talk about using roles for your IAM users. Again, this is to make it more secure and easier to maintain that security.  In this example, I’ll use the Evident.io Demo AWS accounts.

When Evident was just starting out as a company, there was a single AWS account used to demonstrate the Evident Security Platform (ESP) and its ability to create custom validations, security checks and integrations. A single AWS account with a couple of engineers; that should be pretty easy to secure, right?

We disabled root API access and ensured there were no secret keys. Then for the two admins, we enabled MFA tokens. For some of the sales folks, we even created IAM users with read-only permissions and specific policies, so they wouldn’t get in trouble.  Leveraging the built-in ESP IAM Credential Rotation security check, we stayed on top of any keys that needed to be rotated.

Now, ESP was designed to be an enterprise platform and support customers with tens, hundreds and even thousands of AWS accounts (there’s a blog post about the segregation of duties on ESP).

To extend our demo, we added a handful of AWS accounts and, by this time, the number of people that had access to the demo account was growing. We could go through each and every AWS account and create new users, generate a password, and restrict that user’s permissions much like we had done in the beginning, but that seemed like a lot of repetitive, manual steps.

A better option was to leverage the users we already created and secured, and just enable them to have access to the additional accounts. It sounds pretty easy, and in practice, it is.

AWS provides a quick walkthrough to help you get started in delegating access to AWS accounts from IAM users in another account. Now, in many cases, you may even have a master AWS account that has no resources running in it that is just used for administrative control and billing access.

In our case, we extended the one AWS account to allow the engineers and sales folks the same access to the other accounts with a few mouse clicks in the console.

If you have more than one AWS Account, it is worth the time to go through the steps outlined by AWS to get a good feel for what you can accomplish by leveraging roles with your IAM users.
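
As a rough sketch of what that walkthrough sets up (the account IDs and role name here are placeholders), the role in the target account carries a trust policy that lets IAM users from your primary account assume it, optionally requiring MFA:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}

An IAM user in the primary account can then pick up temporary credentials for the target account with a single call:

aws sts assume-role \
    --role-arn arn:aws:iam::222222222222:role/DemoReadOnly \
    --role-session-name demo-session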

A quick recap of our past AWS Best Practice posts:

  1. Disable Root API Access Key and Secret Key
  2. Enable MFA Tokens Everywhere
  3. Reduce Number of IAM Users with Admin Rights
  4. Use Roles for EC2
  5. Least Privilege: Limit what IAM Entities Can Do with Strong Policies
  6. Rotate all the Keys Regularly
  7. Use IAM Roles with STS AssumeRole Where Possible
  8. Use AutoScaling to Dampen DDoS Effects
  9. Do Not Allow 0.0.0.0/0 Unless You Mean It
  10. Watch World-Readable and Listable S3 Bucket Policies

Cloud Security Fitness Guide – Exercise #4: Use Roles for EC2

By now, you’re getting the theme that security on AWS is all about being proactive. The point of proactive security is to reduce the attack surface area for people who desire to do you harm. If there’s less area for an attacker, the damage will be smaller. Less damage means more sleep, or more time on the game console, or on the beach, or … you get the idea!

We’ve covered proactive measures like making sure the root account is disabled, we’ve enabled multi-factor authentication on all of our AWS users, and we’ve also recommended that you reduce the number of administrative users to only those that really need it.

Done? Awesome!

We’re now getting into the meat of proactive security practices.  For this installment, we’re going to go a bit deeper into the DevOps world of deploying applications on AWS.

If you’re deploying an application on AWS that requires more than a simple web server, you’re going to quickly want to take advantage of AWS’ giant list of services. After all, we use AWS because we love building awesome things with these services.

In order for an application on an EC2 instance to store objects in S3, process messages from an SQS queue or any number of other AWS services, it will require permission to access the service’s API. The only way to communicate with the API is with an authentication token.

AWS uses API Access and Secret keys to get that authentication token, and yes, your application running on an EC2 instance will need that key pair to get to S3.

One approach is to create an IAM user, generate an Access Key for that user, and place it in a config file for the application to read.

This presents a huge problem, though. That file is now readable, and depending on the permissions given to the IAM user this key belongs to, this can be a massive security problem. Remember practice #3 (about Admin users)? Well, if that key belonged to an Admin user, it’d have access to all AWS services and resources. Imagine if an EC2 instance with this Access Key were compromised.

At Evident.io, one of the top security issues we see is people’s IAM credentials getting compromised. Most of those incidents are due to accidental, non-malicious leaking of API access keys in code committed to a GitHub repo or placed in a config file on a world-readable S3 bucket (more on that in a later tip).

In the past, we mitigated this by using a combination of configuration management, file encryption, EC2 instance metadata, or some other sort of trickery in order to make it harder for someone to read the Access Key. None of these were ever really that great.

Enter IAM Roles for EC2.

IAM allows for the creation of a Role entity. One of the things you can do with a Role is assume it using the AWS Security Token Service. In other words, an IAM user can assume a Role to increase their level of privilege. Roles can also be used in combination with external identity providers, such as SAML, to enable identity federation to your corporate directory.

Finally, Roles can also be used to allow 3rd parties, such as Evident.io, to access resources on your behalf. In fact, we at Evident.io propose that all 3rd party access should be given via Roles only, so that the only keys you manage are the ones to your home.

AWS also allows EC2 instances the ability to get Role credentials. So how does my application get to those credentials?

If you’re using any of the official AWS SDKs or the AWS CLI, you don’t have to do anything. The SDKs know to look for the temporary credentials that STS has generated for the EC2 instance. If you’re writing an application that isn’t using one of the AWS SDKs, you can also get to the credentials by looking them up in the EC2 instance metadata service.

From the documentation:

$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access

Results in

{
"Code" : "Success",
"LastUpdated" : "2012-04-26T16:39:16Z",
"Type" : "AWS-HMAC",
"AccessKeyId" : "AKIAIOSFODNN7EXAMPLE",
"SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"Token" : "token",
"Expiration" : "2012-04-27T22:39:16Z"
}

That Access and Secret Key pair can be used in any code that requires AWS credentials.

From a security perspective, this is an awesome feature to have because we’ve taken care of the following security best practices:

1. Reduced the surface area of attack

EC2 Role credentials are unique to an EC2 instance; if an instance is compromised, terminate it and let AutoScaling take care of launching a new one. There is no need to rotate keys as there is when an IAM Access Key is compromised.

2. Temporary authentication credentials

STS automatically rotates the credentials when the token expires, and the SDKs and CLI know how to handle this automatically.

3. Auditable activity

The AWS CloudTrail service allows you to examine activity from Roles.

4. Automatically generated authentication credentials

The Access key is not statically assigned to an IAM user, so there is no need to store them in a configuration file.

5. Limited privilege

Roles can be assigned IAM policies, so you can create Roles with very specific access to AWS services and resources. If a group of instances should send messages to a specific SNS topic, then you can restrict it to that topic ARN in the policy.
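
For instance, a minimal sketch of such a policy attached to the instance Role might look like this (the topic ARN is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:us-east-1:123456789012:app-events"
    }
  ]
}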

One other big benefit to DevOps shops is that you no longer have to worry about providing Access Keys to deployment scripts or devise a way to decrypt data bags if you’re running your deployment toolchain on EC2. You have a simple, built-in way to get secure AWS Access Keys to your application deployments.

A quick recap of our past AWS Best Practice posts:

  1. Disable Root API Access Key and Secret Key
  2. Enable MFA Tokens Everywhere
  3. Reduce Number of IAM Users with Admin Rights
  4. Use Roles for EC2
  5. Least Privilege: Limit what IAM Entities Can Do with Strong Policies
  6. Rotate all the Keys Regularly
  7. Use IAM Roles with STS AssumeRole Where Possible
  8. Use AutoScaling to Dampen DDoS Effects
  9. Do Not Allow 0.0.0.0/0 Unless You Mean It
  10. Watch World-Readable and Listable S3 Bucket Policies

Cloud Security Fitness Guide – Exercise #3: Reduce IAM Users with Admin Rights

Based on the last two posts, you have disabled your AWS root user: removed any root keys, assigned an MFA device to that user, and then either destroyed or intentionally lost them.  Root has no access to your AWS environment, only the IAM users you have created.  Correct?

In a nutshell, that’s the first two blog posts in this series of top ten security best practices.  So, in this week’s release, we simply have one question.  When you leave your home for vacation and lock the door on the way out, how many people have access while you’re away?  Who has a key to your house?

Really, how many people have access while you’re out on vacation?  This is the same question that drives the subject here, and thus this should be a very short post.

When you leave your home, you more than likely have a very good idea of how many keys there are to get back into the house AND who has them, right?  Those keys were handed out with a lot of consideration.

For example, your housekeeper may have full access, but you ran an extensive background check and even they do not have access to the safe hidden in your closet.

Then there are close family or friends; you may have given them a spare key, just in case you lock yourself out, or to feed the fish.  But in reality, the number of keys is strictly controlled, and for specific reasons.  The same logic should be applied when considering who or what should have Admin privileges to your investment in AWS infrastructure.

Let’s consider the postal service in the US.  They want to deliver your mail while you’re away.  Do they have a key to your home?  Odds are that they have access to your post office box, but only your post office box.

You have not trusted your local carrier to put the mail on your kitchen counter, but rather have given them access to a very specific place.  If your mailbox keys are compromised while you’re enjoying time away, you have a good understanding of the risk.  However, if they had the keys to your home, and they were lost or compromised, your vacation might be cut short.

This same methodology should be used when giving out access on AWS.  Each key you give out should be reviewed from the perspective of least privilege:

  • How much access does this user or application need in order to perform the task?  What is the risk if the key is lost or compromised?
  • Is there intellectual property or financial data somewhere in that equation?
  • Could the result impact my revenue or reputation?

The more granular you are with access, the more you help protect your business if and when something is compromised. Here are some examples:

If your EC2 application needs to post data to S3, should that same application have the ability to launch more EC2 instances like itself?

No, and AWS IAM has provided you the ability to assign a “Role” to your EC2 instance that will grant that access, but only that access, while removing the ability of any application on that instance to do things you have not specifically authorized.

There is more detail about setting that up here, “Using EC2 Instance Roles.”  The nice thing is, you no longer have to integrate keys into your instances or application; you can simply give your instance a role that has the permissions it needs.  More will be covered specific to this in an upcoming post titled, “Use Roles for EC2.”

In the same type of situation, do you want every IAM user to be able to delete data in your S3 buckets?

While the most common response is no, in many cases, IAM users are given full access to your AWS environment, which includes both creating and deleting resources in all of the services. With the low cost of storage in the cloud, it is a recommended best practice to limit the users and applications that can permanently delete information.

In this case, you can very easily assign a policy to all of your IAM users limiting their ability to remove information.  Again, AWS has provided a good walk-through on setting this up, “Using IAM Policies to Control Bucket Access.”  We will also dive deeper into this with the upcoming post, “Watch world-readable/listable S3 bucket policies.”
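
As an illustrative sketch of such a limiting policy (the bucket name is a placeholder), the following, attached to a user or group, explicitly blocks permanent deletes even if another policy would allow them:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": "arn:aws:s3:::critical-data-bucket/*"
    }
  ]
}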

AWS has provided you the ability to implement the least-privilege methodology in many ways that may not be possible, or would be challenging to implement, in a traditional on-premises infrastructure.

While limiting access is a good best practice, there is always the need to do things that require increased privileges.  You may want to allow a user to delete S3 objects, or you may want to give someone Admin privileges while you are away.

In the vacation scenario, you may want to give someone access to your home while you are gone, but then revoke that access once you are back.  This is a temporary key, and AWS provides you the ability to grant a temporary key to your users and or applications so they can do what they need to, and then destroy that key so it can no longer be used.

The Evident.io Security Platform (ESP) actually leverages this via the Security Token Service to make read-only API calls on your behalf.  The credentials expire and are all controlled via AWS, so you do not need to give out static keys.  More examples of this service in action can be found in the AWS documentation on “Granting Temporary Access.”

So, limiting access has sparked your interest and you would like some help to see where your environment might be at risk?  All of the most common scenarios discussed here and in AWS Security Best Practices are checked via the Evident.io Security Platform.

The nice thing is that we offer a free 14-day trial, so you can evaluate your security risks, like too many IAM Admin users, but also over 130 other checks.

You are also welcome and encouraged to come chat with us tonight, February 26th, at the AWS Pop-Up Loft in San Francisco and again March 10 to dive deeper into the series of security presentations and workshops!

A quick recap of our past AWS Best Practice posts:

  1. Disable Root API Access Key and Secret Key
  2. Enable MFA Tokens Everywhere
  3. Reduce Number of IAM Users with Admin Rights
  4. Use Roles for EC2
  5. Least Privilege: Limit what IAM Entities Can Do with Strong Policies
  6. Rotate all the Keys Regularly
  7. Use IAM Roles with STS AssumeRole Where Possible
  8. Use AutoScaling to Dampen DDoS Effects
  9. Do Not Allow 0.0.0.0/0 Unless You Mean It
  10. Watch World-Readable and Listable S3 Bucket Policies

Cloud Security Fitness Guide – Exercise #2: Enable MFA Tokens Everywhere

Here we are, a week later, following up with the second installment of our recommended Top Ten Security Best Practices for AWS.  In last week’s Account API Access Key blog post, we recommended that you create at least two, but no more than three, IAM users to replace the use of the root AWS user.

A minimum of two administrative users helps prevent a Single Point of Failure (SPOF), and limiting it to three keeps it from getting out of control, but that will be covered in more depth next week.  This week we are going to talk about Multi-Factor Authentication (MFA): what it is, why you want it, where you need it, and why your business is at increased risk without it.

So, now that you have IAM users, let’s talk about Multi-Factor Authentication (MFA).  What is MFA?  Well, “…a simple best practice that adds an extra layer of protection on top of your username and password” is how it is described on the AWS MFA page.  While that is where you can learn more about how to configure it on AWS, what is it?  Simply, it is a layer of security that requires more than one form of authentication.  So, your password could be one form of authentication, and anything in addition to that would qualify as an MFA.  Security, simplified.

MFA is commonly provided by the addition of a second physical or virtual device that is separate from your username/password combo.  These devices generate random values to supplement the basic username and password combination, thus helping to ensure you are you.  While it is pretty common to see physical tokens dangling on key chains in places where technology companies flourish, now many people also have an app on their mobile devices (probably because they are more likely to have it nearby and may be less likely to lose it).  In addition to alphanumeric values, biometrics continues to gain popularity, with many devices scanning various aspects that uniquely identify us such as our retina, fingerprints, etc.

Why the extra layer of security?  Ever been to an event where you sat in theater style seating with people behind you?  Ever watched the person in front of you thumb their password into a mobile device while ordering a latte?  What about those video cameras, seemingly everywhere, watching us now?  How long before your local traffic camera can read the VIN number as you drive by, if not already?  Look around you, there is a good chance someone else can see and record what is happening.  This is not just for the chance you might use the local library to check email anymore.

Basically, usernames and passwords in and of themselves are not that hard to compromise.  Just look at the recent headlines of large and small companies that were impacted by the loss or compromise of a username and password.  This extends to everyone, not just people with elevated privileges, but nearly anyone or anything that has access to data.  As an example, consider what your username and password allow you to do or see.  Any risk to you or your company if that were to find its way to the top of a popular search engine?  There are also numerous accounts of both user and application credentials being caught in popular public source code repositories.

Keep in mind that AWS Identity and Access Controls may provide access to not just the infrastructure, but the applications installed and the data being used.

There is also a nearly combative challenge with some more traditional security practices, where the powers that be keep increasing the minimum password length, complexity requirements, shortening the time between password changes, or some combination.  While these practices look good on paper, and may get a compliance check box filled, in reality they may drive actual users to the opposite behaviors.  Some popular examples are storing passwords in an email or text message (today’s version of writing it on the keyboard or monitor), using a rotation pattern with a series of similar passwords, incrementing a password by just adding a number, or bracketing an easily remembered word with special characters.

These behaviors are most likely not what those powers that be had in mind.  In these examples, increasing security requirements actually resulted in more security gaps, gaps that are challenging for you to identify and close before they are exploited.  How many of your corporate passwords might be floating in a user’s personal email account, outside of your business controls?

Given the potential risk, and undesirable outcome, adding another layer of security here just makes good business sense.  Let’s be honest, a realistic password policy with an added extra layer of security on top of it helps the business, the users, and you stay secure, right?   So how do you do it on AWS?

In the world of AWS friendly MFA, first you need a physical or virtual device that is supported.  Again, the AWS MFA page is a good starting point for a list of compatible devices.  Not every MFA is supported by AWS, so check first.  When choosing an MFA, consider both how it will integrate into your workflow and how to recover from its loss, as they do get lost and mobile devices do get replaced.

AWS has provided a great three-part series on setting up and using MFA.

Please review these guides; with each layer of security you embed in your people and processes, you help prevent your business name from making tomorrow’s headlines as an example.  You also want to make sure you have a disaster recovery plan that includes provisions for the security options you have chosen to implement.
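
As one hedged sketch of the IAM-user flow (the user name and authentication codes below are placeholders), a virtual MFA device can be created and attached with the AWS CLI:

# Create a virtual MFA device and save the QR code to scan with an authenticator app.
aws iam create-virtual-mfa-device \
    --virtual-mfa-device-name alice \
    --outfile alice-mfa-qr.png \
    --bootstrap-method QRCodePNG

# Attach the device to the IAM user using two consecutive codes from the app.
aws iam enable-mfa-device \
    --user-name alice \
    --serial-number arn:aws:iam::123456789012:mfa/alice \
    --authentication-code1 123456 \
    --authentication-code2 789012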

If you have any questions about implementing MFA, please do not hesitate to ask for help.

A quick recap of our past AWS Best Practice posts:

 

 

Cloud Security Fitness Guide – Exercise #1: Disable Root Account API Access Key

Today, we kick off a series on the top 10 security best practices we’ve come across based on our own experiences. As AWS and Security practitioners on large-scale AWS deployments, we’ve about seen it all. Most of these are very easy to implement and will go a very long way to ensuring your success on AWS.

In AWS parlance, the “root” user is the login credential you used to create your AWS account. This user was originally required for some very important aspects of your access to AWS services. Today, the best practice is to use it only to create your initial administrative accounts in IAM; all future administration should then be done with those newly created IAM accounts.

The root user also has a default generated API access key. Because of this change in root-user recommendations and the addition of IAM, it is recommended that you disable, or even better, delete the AWS root API access keys.

Our recommended order of steps to ensure access is maintained (a minimal scripted sketch of steps 1 and 3 follows the list of benefits below):

  1. Create IAM admin users (2-3):
    Create 2-3 IAM users with administrative policies via a group.  It is highly recommended that you create at least 2, but no more than 3 IAM administrators.  This provides redundancy in case credentials are lost but limits the number of users with unlimited access to your AWS resources.  Evident Security Platform (ESP) will verify these conditions and generate alerts if there are too few or too many IAM administrators.
  2. Grant access to billing information and tools:
    While still logged in as the root user, go to My Account and fill in the following sections: Alternate Contacts; Security Challenge Questions; and IAM User Access to Billing Information.
  3. Disable/Remove the default AWS root user API access keys:
    While still logged in as the root user, go to the Security Credentials page.  Under the Access Keys section, disable and/or remove all API keys attached to the root AWS account.

It is highly recommended that you complete steps #1 and #2 prior to deleting the root API access keys (particularly if you are using the CLI tools to perform the above operations). Step #2 in particular will grant you some important long-term benefits, including:

  • You will be able to route support, billing, and security-related announcements from AWS to internal email distribution lists, separately from the email address you used to sign up for AWS.
  • You will be able to recover your account if you happen to lose your root account credentials or, worse, if there has been a compromise of your root account or IAM credentials.
  • You will be able to access the billing analytics data via IAM users, which allows you to work with 3rd-party cost management platforms.
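
Here is the minimal scripted sketch of steps 1 and 3 mentioned above, using boto3. The group and user names are placeholders, and the final block assumes it is run with the root user's credentials, since list_access_keys returns the calling identity's keys when no user name is given:

    import boto3

    iam = boto3.client("iam")

    # Step 1: create an admin group with the AWS-managed AdministratorAccess
    # policy and add a small number of named administrators to it.
    iam.create_group(GroupName="admins")
    iam.attach_group_policy(
        GroupName="admins",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
    )
    for name in ["alice-admin", "bob-admin"]:
        iam.create_user(UserName=name)
        iam.add_user_to_group(GroupName="admins", UserName=name)

    # Step 3: while authenticated as the root user, list and delete the root
    # API access keys.
    for key in iam.list_access_keys()["AccessKeyMetadata"]:
        iam.delete_access_key(AccessKeyId=key["AccessKeyId"])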


The post Cloud Security Fitness Guide – Exercise #1: Disable Root Account API Access Key appeared first on Cloud Sentry Blog.

Cloud Security Fitness Guide – Exercise #11: CloudTrail and Encryption ../201631aws-security-best-practice-0-cloudtrail-encryption/ Wed, 19 Jul 2017 23:00:00 +0000

Moving your architecture to AWS, in whole or in part, also means that your team reaps the rewards of new changes and services that are sometimes deployed very rapidly. This is a distinguishing feature of cloud operations, and it is actually a good thing.

But let's not quibble over the merits of change; rather, we need to insert a recommendation into the Top Ten AWS Security Best Practices. After going over the list at several AWS Loft events, a couple of local venues, and even a webinar, we realized the need to highlight CloudTrail logging before you do much of anything else.

At the same time, we also want you to start thinking about encryption, all the time.  Both of these should be in place before moving on to #1.  Why first?  Well, let’s talk a little about CloudTrail.

What is AWS CloudTrail? It is a service AWS makes available so that, much like in the story of Hansel and Gretel, you always have a trail of breadcrumbs to follow back and see the details of changes to your AWS environment. Without it, much like in the story, you may get lost: there is no way to retroactively generate CloudTrail logs for activity that happened before logging was enabled.

Remember that AWS is an API-driven environment. Even if you use the AWS console, behind the scenes API calls are being made on your behalf. All of these API calls can be logged via CloudTrail, including the call made, the time of the call, who made the call (even if it is AWS or a third party), the source IP address, the details of the request, and whether the result was a success, an error, or something in between. All of this detail is available to you, but only if you enable it.
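
To give a feel for those breadcrumbs, here is a small sketch of pulling recent API events back out of CloudTrail with boto3; the event name is just an example, and any recorded call can be queried the same way:

    import boto3
    from datetime import datetime, timedelta

    cloudtrail = boto3.client("cloudtrail")

    # Look up console logins from the past week as an example.
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
        StartTime=datetime.utcnow() - timedelta(days=7),
        EndTime=datetime.utcnow(),
    )

    for event in events["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username"))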

Don’t wait until you need the logs to enable them.  That will only make the situation worse.

Today, AWS has made it easy to enable CloudTrail logs in all your regions at once with a few mouse clicks in the console. The AWS documentation outlines the steps for setting up CloudTrail logs, so we won't duplicate them here. The main point is to enable them in all regions.
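
If you prefer to script it, a minimal sketch with boto3 might look like the following. The trail and bucket names are placeholders, and the S3 bucket is assumed to already exist with a bucket policy that allows CloudTrail to write to it:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # One multi-region trail that also captures global service events (e.g. IAM).
    cloudtrail.create_trail(
        Name="example-org-trail",
        S3BucketName="example-cloudtrail-logs",
        IsMultiRegionTrail=True,
        IncludeGlobalServiceEvents=True,
        # KmsKeyId="alias/example-cloudtrail-key",  # optional: SSE-KMS for the log files
    )
    cloudtrail.start_logging(Name="example-org-trail")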

Now, there are a couple of security recommendations we want to ensure you consider when enabling CloudTrail. First and foremost is access control. Last year Tim published a blog on Protecting CloudTrail Data; now would be an excellent time to review it, as it is still applicable. Make sure the S3 bucket you designate for CloudTrail logs is encrypted and secured.

Another call-out is who can delete the logs. Tim covers that too, but at the same time, consider how long you want to keep them. They are log files and thus compress well, so storage costs are minimized; however, they will accumulate over time.

To that end, AWS again provides the tools necessary to manage this and keep costs down. Set up S3 lifecycle policies on the bucket used for CloudTrail data; these policies automatically transition or purge older files. A basic rule of thumb is to keep 30-90 days of logs in S3 and move older files into Glacier for longer-term storage. Check your data retention policies to ensure compliance.
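
As a rough sketch of that rule of thumb with boto3, the bucket name, prefix, and day counts below are illustrative placeholders to adjust against your own retention policy:

    import boto3

    s3 = boto3.client("s3")

    # Move CloudTrail log objects to Glacier after 90 days and expire them after a year.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-cloudtrail-logs",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "cloudtrail-log-retention",
                    "Filter": {"Prefix": "AWSLogs/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                }
            ]
        },
    )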

Enable it in all regions, even those that you do not plan to use, and when a new region comes online, enable it there as soon as possible. Why? If you don't enable it, then when you need it, you won't have it. Also consider that if a region truly goes unused, there won't be any CloudTrail activity in it, and thus no storage costs.

In fact, CloudTrail data starting to build up in regions you are not using can be an indication of subversive behavior. Overall, the benefits far outweigh the costs; just remember to enable lifecycle management on the bucket to optimize storage costs.

Make sure you are logging global API calls in only one region. You may have a region you use more than the others; that is a good one to designate for global API calls. Services that are not tied to a specific region, such as IAM, will log their calls in this one region.

At the same time, remember which region you chose, so you don't search through another region's log files for a global change only to realize your global changes are not logged there. As a best practice, do not enable global event logging in more than one region, or you will start seeing duplicate entries.

And then there is encryption. Encryption is now a prevalent and easy-to-use feature that you want to consider enabling everywhere, all the time. CloudTrail logs are now encrypted when stored, leveraging S3 server-side encryption, but you can also use AWS KMS to handle the heavy lifting of key management.

The bottom line is to make sure your data is encrypted from the start. It is much more challenging to go back and sort through data to try to re-encrypt it after the fact. Much like enabling the service itself, doing this early will help keep your data secure.

Now is also a good time to start considering encryption overall. AWS now provides encryption for most data types, both in flight and at rest. As your usage of AWS continues, enable encryption. One of the last hold-outs for encryption was the boot volume.

Now that encrypted EBS boot volumes can be deployed, the recommendation is to enable encryption everywhere, all the time. Ideally, data should be decrypted only briefly in memory for processing; in all other respects, encrypt it. It just makes good security sense.
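
One way to get an encrypted boot volume, sketched below with boto3, is to copy an existing AMI with encryption enabled so that instances launched from the copy boot from encrypted EBS volumes; the AMI ID, region, and KMS alias are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

    # Copy an existing AMI with encryption turned on; instances launched from the
    # new AMI will have encrypted EBS boot volumes.
    copy = ec2.copy_image(
        Name="example-app-ami-encrypted",
        SourceImageId="ami-0123456789abcdef0",
        SourceRegion="us-east-1",
        Encrypted=True,
        KmsKeyId="alias/example-ebs-key",
    )
    print("Encrypted AMI:", copy["ImageId"])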

The top ten security best practices referenced below still apply. This one is just to make sure you have the breadcrumbs in place to track your progress on them.

A quick recap of our past AWS Best Practice posts:

  1. Disable Root API Access Key and Secret Key
  2. Enable MFA Tokens Everywhere
  3. Reduce Number of IAM Users with Admin Rights
  4. Use Roles for EC2
  5. Least Privilege: Limit what IAM Entities Can Do with Strong Policies
  6. Rotate all the Keys Regularly
  7. Use IAM Roles with STS AssumeRole Where Possible
  8. Use AutoScaling to Dampen DDoS Effects
  9. Do Not Allow 0.0.0.0/0 Unless You Mean It
  10. Watch World-Readable and Listable S3 Bucket Policies

The post Cloud Security Fitness Guide – Exercise #11: CloudTrail and Encryption appeared first on Cloud Sentry Blog.
