Wednesday, May 24, 2017

0.0.0.0/0 in AWS Route Tables and Network Rules

Public Safety Announcement:

0.0.0.0/0 should be used sparingly. It means any host on any IP address (any IPv4 address, to be precise) on the Internet can use this route to connect to things on the other side. In other words, any host in the associated subnet can contact any host on the Internet and vice versa.

Here's a random sampling of traffic that hit my Firebox Cloud as soon as I set it up on the Internet, and why you might want to open up this type of traffic sparingly. As soon as you do, various nefarious (and accidental) traffic will start hitting any host reachable from the Internet.

If you do open up to the Internet, you might want security appliances inspecting traffic to and from your hosts to prevent malicious traffic from reaching them. For example, a properly configured WatchGuard Firebox would have prevented anyone behind it from being infected by the WannaCry ransomware. You can also use NACLs and Security Groups to limit access, as will be noted in upcoming blog posts.

Route Tables: Protecting Your Network

When you set up your network on AWS, it is very important to understand how route tables work and how they can open up access to your network in unintended ways. Route tables define where traffic can flow on your network. They provide the routes, or "roads," traffic can take to get from point A to point B. The following example architecture for a Firebox Cloud with a management network will be used to explain route tables. These routes are also explained in the AWS NAT documentation.

Let's say you want the resources in a subnet to have Internet access. In this case you create a PUBLIC subnet and add a route to an Internet Gateway in that subnet's route table. The Internet Gateway allows resources in that subnet to get to the Internet, and hosts on the Internet to connect to resources in the subnet (unless blocked by other security controls such as security groups, NACLs, or a Firebox Cloud):



If we look at the route table we can see that there are two routes.

The first route is added by default for the local VPC IP range, or CIDR. Since I created my VPC with the CIDR 10.0.0.0/16, the local route allows any host in the associated subnet(s) to send data to or receive data from any host in the IP range 10.0.0.0/16, i.e. 10.0.0.0 to 10.0.255.255. Again, this can be further restricted by other controls.

The second route allows any host in the associated subnet(s) to send data to or receive data from anywhere on the Internet (0.0.0.0/0) via the AWS Internet Gateway (igw-xxxxxxxx).
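If you define your route tables in CloudFormation, a public route table like this one can be expressed roughly as follows. This is a minimal sketch; the VPC and Internet Gateway resource names are hypothetical, and the 10.0.0.0/16 local route is added by AWS automatically so it does not appear in the template:

Resources:
  PublicRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC                      # hypothetical VPC resource
  PublicDefaultRoute:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: "0.0.0.0/0"    # any IPv4 address on the Internet
      GatewayId: !Ref InternetGateway      # hypothetical Internet Gateway resource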



If you want to keep a subnet PRIVATE (meaning the hosts in the subnet cannot directly access the Internet), you need to ensure the subnet does not have a route to an Internet Gateway. In addition, you need to ensure that any routes in that subnet do not in turn route to something that can ultimately route or proxy that traffic to the Internet.

Here's a private subnet:


If we click on the Route Table tab we can see the following:

The local VPC route is again added by default.

In addition we allow any host in this subnet to send traffic to the PRIVATE (trusted) ENI of a WatchGuard Firebox Cloud. That means nothing can get out of our Private Subnet to the Internet without going through our Firebox Cloud if these are the only two subnets in our VPC.

There is NO route to the Internet Gateway. Therefore there is no way for hosts in this subnet to get directly to the Internet from this particular subnet (see caveats at the end of this article).

In other words, any host in this subnet trying to send traffic, and looking for a route to get it from point A to point B, has two options: send it to something else in the VPC, or send it to the Firebox Cloud.
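In CloudFormation, the private route table's default route can point at the Firebox's trusted ENI instead of an Internet Gateway. Here's a hedged sketch; both export names below are hypothetical, not the actual exports from the automation templates:

  FireboxPrivateRouteTable:
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !ImportValue FireboxVPC                 # hypothetical export name
  FireboxPrivateDefaultRoute:
    Type: "AWS::EC2::Route"
    Properties:
      RouteTableId: !Ref FireboxPrivateRouteTable
      DestinationCidrBlock: "0.0.0.0/0"
      # all non-local traffic goes to the Firebox trusted (private) ENI, not an Internet Gateway
      NetworkInterfaceId: !ImportValue FireboxPrivateNetworkInterface   # hypothetical export name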


In addition, we add a route for an S3 endpoint, as explained previously, to ensure all access to the S3 bucket used to manage the Firebox stays in the PRIVATE network, never on the Internet, and only from our management network, to protect our Firebox SSH key:
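As a rough sketch (the FireboxVPC export name is hypothetical), a Gateway VPC Endpoint for S3 attached to the private route table looks something like this in CloudFormation:

  FireboxS3Endpoint:
    Type: "AWS::EC2::VPCEndpoint"
    Properties:
      ServiceName: !Sub "com.amazonaws.${AWS::Region}.s3"
      VpcId: !ImportValue FireboxVPC                  # hypothetical export name
      RouteTableIds:
        # the endpoint adds S3 prefix-list routes to the private route table
        - !ImportValue FireboxPrivateRouteTable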


If I go check out my elastic network interfaces I can verify that the Firebox Cloud private (trusted) ENI is in this private subnet. That means any traffic to this private ENI must come from within the VPC, and this private ENI cannot send data to or receive data from the Internet directly. Traffic can route through the Firebox to the public ENI, which is how we inspect all the traffic going to and coming from the Internet.


I can look at the details of the ENIs to make sure they are configured correctly. According to the WatchGuard Firebox Cloud documentation, the public Interface should be on eth0 and the private should be on eth1.

eth0 - Public

eth1 - Private



By architecting the network this way, we ensure that access to the Firebox management key and the Firebox CLI is only possible from the private network (provided the rest of our network doesn't have extraneous holes in it). We can also add security groups to whitelist and further restrict access, as will be discussed in an upcoming blog post.

Now imagine you add a route to the Internet Gateway in one of the subnets that are currently private. You have just allowed hosts in that subnet to bypass the Firebox and get to the Internet without inspection. You may also be sending management traffic for the Firebox over the Internet to the public ENI instead of to the private (trusted) ENI.

Additionally, as shown in the diagram in the AWS NAT documentation, traffic to hosts placed inside the public subnet can bypass the NAT, which in this case is our Firebox Cloud. There are other possible configurations, but this is the one we are considering at the moment, so if you are using this configuration be aware of this fact and don't put hosts you want protected by the Firebox Cloud in the Firebox public subnet.

Here's another scenario: you add a route from this VPC to another VPC and another subnet, in your account or another account, using VPC Peering. If that other VPC subnet has Internet access, you have potentially just allowed traffic to bypass the Firebox and get to the Internet.

You may be thinking that, due to the local route, anything in the private subnet can route to the public subnet and then to the Internet. This is true. A host in the private subnet, without any other controls, could connect to something in the local VPC public subnet. The host in the public subnet with Internet access could then be used as a proxy to send data to the Internet (including malware stealing data), so it's best to limit what is in the public subnet and use Security Groups and NACLs to further restrict the traffic, as will be explained.

Understand your network routes. Data on your network is like water. If you open a hole...it will flow there.

Manually Activating WatchGuard Firebox Cloud Marketplace AMI

In order to use the WatchGuard Firebox Cloud AMI in the automated deployment scripts, you will first need to activate the AMI in your account. Unfortunately, I think you will need to take this manual step before you can use the automation. Perhaps that is a good thing, since it ensures people cannot instantiate things you don't want them using in your account.

To manually instantiate the WatchGuard Firebox Cloud, navigate to the EC2 service in your AWS account and launch an instance. This is basically going to create a virtual machine with WatchGuard Firebox Cloud running on it.



Choose Marketplace and search for WatchGuard. Select the desired option. At the time of this writing the options are "Pay As You Go" and "BYOL" (Bring Your Own License). If you would like a license, reach out to a WatchGuard partner, who can also help you set this up if needed. Pay As You Go will, as it sounds, bill you based on what you use. BYOL allows you to use a license for a flat fee. The Pay As You Go option currently supports T2.Micro instances. Both options support C4.Large and up. Pay As You Go does not include Threat Detection and Response (endpoint protection). Check the WatchGuard web site as these things may change over time.


Complete the steps to instantiate the WatchGuard instance just as you would for any other AWS EC2 instance. If you are using BYOL you will need to manually activate the license.

Once you have instantiated the instance, the AMI will be registered in your account, at which point you can run the automated scripts to complete the automated deployment of a WatchGuard Firebox Cloud. You can terminate the instance you launched manually.

"ResourceStatusReason": "The image id '[ami-a844d4c8]' does not exist"

If you're getting this error when creating a CloudFormation stack that means your code is trying to create a stack with an AMI that doesn't exist in your account and/or region, or perhaps it is referencing an old AMI that has been deprecated in favor of a new one.

 "ResourceStatusReason": "The image id '[ami-a844d4c8]' does not exist"

In the case of the WatchGuard Firebox Cloud AMI, it currently has a different ID in each region. There are also separate AMIs for BYOL (Bring Your Own License) and Pay As You Go.

You can find these AMIs by manually creating a WatchGuard Firebox Cloud from the AWS Marketplace and then looking at the details of the instance you launched to get the AMI ID. I am looking for a better way to get this information but at this time that will help you determine which AMI to use.


In the case of the WatchGuard Cloud Automation templates the AMI ID is used in this template:


The script is being revised to let you select the AMI when you run it and instructions in the readme will be updated.
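In the meantime, one way to handle per-region AMI IDs in CloudFormation is a Mappings section combined with FindInMap. This is just a sketch; the AMI IDs below are placeholders, not real WatchGuard AMI IDs, so substitute the ones you find in the Marketplace:

Mappings:
  FireboxAmiByRegion:
    us-east-1:
      BYOL: "ami-00000000"     # placeholder
      PAYG: "ami-11111111"     # placeholder
    us-west-2:
      BYOL: "ami-22222222"     # placeholder
      PAYG: "ami-33333333"     # placeholder

Resources:
  FireboxInstance:
    Type: "AWS::EC2::Instance"
    Properties:
      # pick the BYOL AMI for whatever region the stack is launched in
      ImageId: !FindInMap [FireboxAmiByRegion, !Ref "AWS::Region", BYOL]
      InstanceType: c4.large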

I was trying to create a script to look up the latest AMI for a particular vendor in the AWS Marketplace to programmatically get around this issue but this is the closest I could come. Maybe a feature request?

aws ec2 describe-images --filters "Name=description,Values=firebox*" | grep 'ImageId\|Description' | sed 's/ *\"ImageId\": "//;s/",//' | sed 's/ *\"Description\": "//;s/"//'

"ResourceStatusReason": "No export named [x] found. Rollback requested by user.",

I had someone help me test the FireboxCloud automation scripts and they got this error.

"ResourceStatusReason": "No export named [x] found. Rollback requested by user.",

What does this mean?

At the bottom of the templates that create the resources there is a list of outputs. For example, in this template:

https://github.com/tradichel/FireboxCloudAutomation/blob/master/code/resources/firebox-nat/subnets.yaml

There are outputs at the bottom that look like this:

Outputs:
  FireboxPrivateSubnet:
    Value: !Ref FireboxPrivateSubnet
    Export:
      Name: "FireboxPrivateSubnet"
  FireboxPublicSubnet:
    Value: !Ref FireboxPublicSubnet
    Export:
      Name: "FireboxPublicSubnet"
  FireboxPublicRouteTable:
    Value: !Ref FireboxPublicRouteTable
    Export:
      Name: "FireboxPublicRouteTable"
  FireboxPrivateRouteTable:
    Value: !Ref FireboxPrivateRouteTable
    Export:
      Name: "FireboxPrivateRouteTable"
  FireboxPublicNacl:
    Value: !Ref FireboxPublicSubnetNacl
    Export:
      Name: "FireboxPublicNacl"

There is some fairly new (and awesome) functionality in CloudFormation that allows you to easily import the output of one template into another using ImportValue.

When I create my FireboxCloud I want to put the ENIs (one public and one private) into their respective subnets. I can reference the subnets using !ImportValue as shown below:

Resources:
  FireboxPublicNetworkInterface:
    Type: "AWS::EC2::NetworkInterface"
    Properties:
      Description: Firebox Public Network Interface
      GroupSet:
        - !ImportValue FireboxPublicSecurityGroup
      SubnetId: !ImportValue FireboxPublicSubnet
      SourceDestCheck: false

When the person helping me ran my scripts they got this error:

"ResourceStatusReason": "No export named [x] found. Rollback requested by user.",

Typically this error occurs when the ImportValue references a typo'd export name, i.e. a name that doesn't match the Export of any OutputValue.

What was odd in this case: I wasn't getting the error myself. That's because I had added a resource after the fact that used ImportValue to reference an export produced by the very stack it was defined in, and that export already existed from my earlier run. On a fresh account the export didn't exist yet, so the stack failed. This seems like it shouldn't be possible and is not correct. When referencing a value created in the same template, I should have referenced the resource's logical name using Ref.

The subsequent problem was that I could not delete the CloudFormation stack, because it said the stack was creating an output that was referenced by something else (itself!). I got around this by updating the stack, removing the incorrect ImportValue reference to a resource in the same stack, and replacing it with Ref. After that was fixed I could delete the stack.
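To illustrate the fix (the association resource below is hypothetical, but the subnet and route table names match the exports shown above): within the template that creates a resource, reference it with Ref rather than importing its own export.

  # Incorrect: !ImportValue FireboxPrivateSubnet imports an export produced by this same template
  # Correct: reference the resources created in this template by their logical names with Ref
  FireboxPrivateSubnetRouteTableAssociation:
    Type: "AWS::EC2::SubnetRouteTableAssociation"
    Properties:
      SubnetId: !Ref FireboxPrivateSubnet
      RouteTableId: !Ref FireboxPrivateRouteTable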

Tuesday, May 23, 2017

Firebox Cloud Automation

As noted in this blog post, security automation can help prevent errors that lead to security problems.

For this reason I want to completely automate the deployment of security devices in my AWS account. Over the next few blog posts we will be automating the use of a WatchGuard Firebox Cloud on AWS. I also want to only allow configuration from within the private network.

Here's a picture of what we are going to create.


The code can be found in this GitHub repository for automation of a WatchGuard Firebox Cloud on AWS.

The goal of this code is to completely configure our Firebox Cloud without leaving the private network by deploying only with code from source control. 

Note that the Firebox Cloud by default opens up the required route to the subnet it lives in for management access. In order to access the CLI from a Lambda or EC2 instance we will need to put it in this same subnet. It is advisable to lock down management ports and/or create separate network interfaces and subnets for other resources that should not have access to the management interface and port.

Notice that in our diagram above the public ENI is in an Internet-accessible subnet connected to the Internet Gateway. The private ENI is in a private subnet with no access outside of our VPC.

Along the way I'll explain some security best practices including those already explained in previous blog posts on secure access from Lambda to an S3 bucket.

For step by step instructions to run the code check out my Secplicity blog post on How to Automate Deployment of a WatchGuard Firebox Cloud on AWS.

Follow me on Twitter @TeriRadichel and at Secplicity for updates!

Saturday, May 20, 2017

Creating Paramiko and Cryptography Virtual Environment to Run On AWS Lambda

A prior blog post explained how to obtain the dependencies to successfully build Paramiko and Cryptography on an AWS EC2 instance in a virtual environment. This post will show how to package up those dependencies for a Lambda function using EC2.

I've created some networking in order to automate deployment of a WatchGuard Firebox Cloud which I am using for my EC2 instance below. If you are unsure how to set up the networking that securely allows Internet access you could run the CloudFormation templates in my FireboxCloudAutomation Github repo and use that networking to complete the steps below. Stay tuned for more networking information here and on Secplicity. It is highly recommended that you strictly limit any SSH access to instances in your VPC and ideally remove that access over the network when not in use. You can also create a bastion host.

For now I will manually create an EC2 instance. I might automate this later. I am not deploying production systems here, simply testing. I would automate all of this if actually using it in a production environment.

First instantiate the EC2 instance.


Choose the AWS Linux AMI which has the necessary dependencies matching what is on a Lambda function.


Choose your instance type. Smallest is probably fine.


Configure networking. This is where I am using the networking I created for the WatchGuard Firebox Cloud as noted above, so I will have SSH access to my new instance without having wide-open networking. Choose the Firebox VPC, the public subnet which allows Internet access, and auto-assign a public IP. You can't connect without that.



Tag your instance with a name which is helpful for finding it in the console.

Create or use a restrictive SSH security group. The ONLY port we need open for this is SSH port 22 and I only need to be able to access it from My IP address as selected below. Then in theory the only way someone could get to this instance would be to get onto my network (which could be done of course, but we are limiting the attack vector as much as possible). Also I haven't thoroughly reviewed these software packages. If for some reason they had some malware that would reach out to a C2 server, it wouldn't be able to reach that server due to my network rules so I feel a bit safer with this configuration.

Select a key that will be used to SSH into this instance. KEYS ARE PASSWORDS. Protect them.


Wait for the EC2 instance status checks to pass and the indicator below to turn green.


Right click on the instance to get the command to connect to the instance. 


Follow the instructions to connect to the instance. If having problems read this blog post on connecting to EC2 instances using SSH.

Once connected to EC2 instance in terminal:


Run the commands from my post that explains how to build Paramiko and Cryptography.

Note that you will likely want to use Python 2.7 due to inconsistencies between the EC2 instance (Python 3.4 or 3.5) and the Lambda runtime (Python 3.6). You can probably make it work, but this will get you up and running faster:

sudo yum update -y
sudo pip install virtualenv --upgrade
cd /tmp
virtualenv -p /usr/bin/python2.7 python27
source python27/bin/activate
sudo yum install gcc -y
#probably don't need these but just in case libs are missing
#sudo yum install python27-devel.x86_64 -y
#sudo yum install openssl-devel.x86_64 -y
#sudo yum install libffi-devel.x86_64 -y
pip install --upgrade pip
pip install paramiko

The commands to create the virtual environment can also be found here with comments:

Following these directions, I zip up the files in the lib/python2.7/site-packages directory.

Zip the files as explained on this page:

#change to the site-packages directory
cd python27/lib/python2.7/site-packages
zip -r9 /tmp/lambda.zip . -x \*__pycache__\*

We also need the files in the lib64 directory:

cd ../../..
cd lib64/python2.7/site-packages
zip -g -r9 /tmp/lambda.zip . -x \*__pycache__\*

Now we should have a lambda.zip file in the tmp directory on the EC2 instance:


Now I can deactivate the virtual environment because I'm done with it.

deactivate

Run this from your local machine to copy down the zip we just created to the folder where the Python file lives that you want to include in the zip file for the Lambda function.

#scp username@ipaddress:pathtofile localsystempath
#change the key and IP address below
scp -i "[yourkey].pem" ec2-user@[ip-address]:/tmp/lambda.zip lambda.zip

Now I can add my own Python files to the zip for use in a Lambda function, again as explained on this page about running SSH commands from a Lambda function:


Here's the code I use to add my fireboxconfig.py to the lambda.zip file I downloaded. I actually copy lambda.zip to a new file, add my Python file, and upload it to an S3 bucket so the zip file can be used in my Lambda CloudFormation template.

For more on that, check out my next blog post where I'll explain how I use it in the rest of the code on GitHub to automate configuration of a WatchGuard Firebox Cloud. I'm using the Lambda function with Paramiko to connect to a WatchGuard Firebox Cloud and configure it via the Firebox CLI (command line interface). For more on the WatchGuard CLI check out the latest version of the WatchGuard Firebox Cloud CLI Commands.

Questions or Suggestions - DM me on twitter @teriradichel

Friday, May 19, 2017

invalid ELF header - Import Error

If you see this error when running an AWS lambda function:
{
  "errorMessage": "/var/task/cryptography/hazmat/bindings/_constant_time.abi3.so: invalid ELF header",
  "errorType": "ImportError"
}
...then the compiled libraries packaged with your Lambda function were built for a platform that is incompatible with the Lambda environment.

The problem arises when you package up libraries on the OS where you are developing, and the OS you are deploying to has different binary requirements.

For example, the libraries required on Windows are different from the libraries required by an AWS Linux EC2 instance when dealing with C libraries.

The solution is to do this packaging with virtualenv on an EC2 instance, which will produce libraries compatible with your Lambda function.

---

Trying to set up a Python virtual environment that has the correct libraries and/or works on AWS Lambda? Check out these blog posts which are specific to Paramiko and Cryptography but explain how to determine which libraries to use and package up a virtual environment that has the correct underlying libraries:

http://websitenotebook.blogspot.com/2017/05/installing-paramiko-and-crytography-in.html

http://websitenotebook.blogspot.com/2017/05/creating-paramiko-and-cryptography.html

Installing Paramiko and Cryptography in a Python Virtual Environment

This blog post describes how to run SSH jobs from an AWS Lambda function:
https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/

It seemed like that would be the solution for running SSH in an AWS Lambda function for the purposes of automating configuration of a WatchGuard Firebox Cloud.

The only issue was that when attempting to run the code, I realized additional libraries are required. I started with Python 3.6 because why not? It's the most up-to-date version of Python on Lambda, and it sounds like Paramiko will work with that version. It turns out Paramiko must be packaged up with the Lambda code. That in turn requires the cryptography package, which in turn uses some C libraries. Packaging this up on a Mac or Windows machine would include OS-specific libraries that wouldn't work in a Lambda function, which presumably runs on something like an AWS EC2 Linux instance.

Always looking for a quick fix, I reached out to my friend, Google. There I found some recommendations suggesting creating the virtual environment on an EC2 instance. However, that wasn't as straightforward as one might hope. The required libraries were not all installed by default, and the names of the libraries are different from those in the documentation and various blog posts on the topic. Basically, in order to create a Python virtual environment you'll need to install gcc, as well as specific versions of python-devel and openssl-devel. I describe how to find and install those libraries in a bit more detail in my previous posts.

Here's what I came up with. It looks so simple now... and by the way, by the time I wrote this something had changed, so make sure you check which packages are available to install, as noted in my recent blog posts. I also show how to use the list command below to find all the packages with python3 in the name.

#update the ec2 instance
sudo yum update -y

#see what versions of python 3 are available
sudo yum list | grep python3

#install the one we want
sudo yum install python35.x86_64 -y

#switch to temp directory
cd /tmp

#create virtual environment
virtualenv -p /usr/bin/python3.5 python35

#activate virtual environment
source python35/bin/activate

#install dependencies
sudo yum install gcc -y
sudo yum install python35-devel.x86_64 -y
sudo yum install openssl-devel.x86_64 -y
sudo yum install libffi-devel.x86_64 -y

#install cryptography and paramiko
pip install cryptography
pip install paramiko

And finally - it works.

Successfully installed asn1crypto-0.22.0 cffi-1.10.0 cryptography-1.8.1 idna-2.5 packaging-16.8 paramiko-2.1.2 pyasn1-0.2.3 six-1.10.0

Great, but guess what. I tried running this on Lambda and got missing library errors.

Digging further I figured out how to find out what versions of Python are available on Lambda by using this blog post:

https://www.linkedin.com/pulse/running-python-3-aws-lambda-lyndon-swan

Ran this code in my lambda function:

import subprocess

# find out which python3 binaries are available in the Lambda environment
args = ("whereis", "python3")
popen = subprocess.Popen(args, stdout=subprocess.PIPE)
popen.wait()
output = popen.stdout.read()
print(output)

Looks like only Python 3.4 and Python 3.6 are available and all of the above is based on 3.5.

b'python3: /usr/bin/python3 /usr/bin/python3.4m /usr/bin/python3.4 /usr/lib/python3.4 /usr/lib64/python3.4 /usr/local/lib/python3.4 /usr/include/python3.4m /var/lang/bin/python3.6m /var/lang/bin/python3.6-config /var/lang/bin/python3 /var/lang/bin/python3.6m-config /var/lang/bin/python3.6 /usr/share/man/man1/python3.1.gz'

The options would be to go back to 2.7 or to try 3.4, since 3.6 doesn't appear to be available on EC2 instances. *sigh* Let's see if we can build a 3.4 virtual environment.

#see what versions of python 3 are available on EC2 instance
sudo yum list | grep python3

#output gives us python34.x86_64

#install the one we want
sudo yum install python34.x86_64 -y

#create virtual environment
virtualenv -p /usr/bin/python3.4 python34

#activate virtual environment
source python34/bin/activate

#install dependencies
sudo yum install gcc -y
sudo yum install python34-devel.x86_64 -y
sudo yum install openssl-devel.x86_64 -y
sudo yum install libffi-devel.x86_64 -y

#install paramiko (which pulls in cryptography)
pip install paramiko

Installing collected packages: pyasn1, paramiko

Successfully installed paramiko-2.1.2 pyasn1-0.2.3

Great. But it didn't run on Lambda either.

{ "errorMessage": "No module named '_cffi_backend'", "errorType": "ModuleNotFoundError"}

Presumably I need to set up my Lambda function to use 3.4 as noted above, but let's roll back to 2.7 and see if that works. Since EC2 instances use 2.7 by default, we hopefully won't need all the extra packages.

#update the ec2 instance
sudo yum update -y

#switch to temp directory
cd /tmp

#create virtual environment
virtualenv -p /usr/bin/python2.7 python27

#activate virtual environment
source python27/bin/activate

#install dependencies
#sudo yum install gcc -y
#sudo yum install openssl-devel.x86_64 -y
#sudo yum install libffi-devel.x86_64 -y

#install paramiko (which pulls in cryptography)
pip install paramiko

Successfully installed asn1crypto-0.22.0 cffi-1.10.0 cryptography-1.8.1 enum34-1.1.6 idna-2.5 ipaddress-1.0.18 paramiko-2.1.2 pyasn1-0.2.3 pycparser-2.17

And... testing it on a 2.7 Lambda function, it works. No missing libraries.

Read on if you want to see how the Lambda function is set up to use Paramiko and Cryptography to connect to configure a WatchGuard Firebox Cloud via the Command Line Interface and SSH.

No such file or directory #include <openssl/opensslv.h>

Similar to my last post about the missing pyconfig.h include on AWS EC2 instances, when attempting to run this command in a virtual environment to create a Lambda install package:

pip install cryptography

The next error is:

build/temp.linux-x86_64-3.5/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory
     #include <openssl/opensslv.h>
                                  ^
    compilation terminated.
    error: command 'gcc' failed with exit status 1

Once again run the yum list command to find the correct library. In this case the name is not quite so apparent. I found a number of libraries using this command:

sudo yum list | grep openssl

Such as the following:

openssl.x86_64                        1:1.0.1k-15.99.amzn1          installed   
apr-util-openssl.x86_64               1.4.1-4.17.amzn1              amzn-main   
krb5-pkinit-openssl.x86_64            1.14.1-27.41.amzn1            amzn-main   
openssl.i686                          1:1.0.1k-15.99.amzn1          amzn-main   
openssl-devel.x86_64                  1:1.0.1k-15.99.amzn1          amzn-main   
openssl-perl.x86_64                   1:1.0.1k-15.99.amzn1          amzn-main   
openssl-static.x86_64                 1:1.0.1k-15.99.amzn1          amzn-main   
openssl097a.i686                      0.9.7a-12.1.9.amzn1           amzn-main   
openssl097a.x86_64                    0.9.7a-12.1.9.amzn1           amzn-main   
openssl098e.i686                      0.9.8e-29.19.amzn1            amzn-main   
openssl098e.x86_64                    0.9.8e-29.19.amzn1            amzn-main   
xmlsec1-openssl.i686                  1.2.20-5.3.amzn1              amzn-main   
xmlsec1-openssl.x86_64                1.2.20-5.3.amzn1              amzn-main   
xmlsec1-openssl-devel.x86_64          1.2.20-5.3.amzn1              amzn-main 

Looks like we want this one based on a little research: openssl-devel.x86_64

sudo yum install openssl-devel.x86_64 

Yep, that seems to do the trick. See my next post for the complete list of commands to install the Python SSH library Paramiko, which requires the cryptography library, on an AWS EC2 instance in a virtual environment.


No such file or directory include <pyconfig.h>


If you get an error that looks like this when trying to run pip install (such as pip install cryptography or pip install paramiko) on an AWS EC2 instance:

build/temp.linux-x86_64-3.5/_openssl.c:12:24: fatal error: pyconfig.h: No such file or directory
  include <pyconfig.h>
  compilation terminated.
  error: command 'gcc' failed with exit status 1

...then you need to install the python development tools. Many blog posts explain this with answers like this for python 2 or python 3:

install python-dev

install python3-dev

On AWS however the libraries have different names. First run this command to list the available libraries that can be installed:

sudo yum list | grep python3

In my case, I see that the library I need on this particular instance is not python3-dev but rather python35-devel.x86_64, which means to get this library I will instead run this command:

sudo yum install python35-devel.x86_64 -y

Note that you will need to install the version of the library that is compatible with the version of Python you are using.



unable to execute 'gcc': No such file or directory error: command 'gcc' failed with exit status 1

If you receive this error trying to run an installation script:

 unable to execute 'gcc': No such file or directory
 error: command 'gcc' failed with exit status 1

Install gcc for compiling C code

sudo yum install gcc

Note, however, that it is not recommended to run this on production systems. Only run it on development systems where code needs to be compiled, and on systems used to build and deploy software in a very well controlled and audited environment. If you leave this on production systems, anyone who gets onto the machine can write or download code and compile it there. This poses an additional attack vector.


Thursday, May 18, 2017

Permission denied (publickey). or Timeout trying to SSH to an AWS EC2 Instance

If you are trying to SSH into an AWS EC2 instance and having problems, here are some things to check. Although the screen shots are specific to AWS, the same principles apply to SSH problems on other networks as well.

Permission denied (publickey).

Make sure you are using the correct EC2 key that was assigned to the instance or created when the instance was launched. You should have downloaded this key to your local machine. The key name is listed on the summary page for the EC2 instance:



Ensure that you have not changed the contents of the file in any way. Renaming it should be fine.

Change the permissions of the key file so it is read-only for the owner by running this command:

chmod 400 your_key_file.pem

After you run this command you can type this command to verify the permissions of your file: 

ls -al

If set correctly the permissions will look like this:

-r--------@  1 tradichel  1426935984  1692 May 18 21:00 your_key_file.pem 

Make sure you have navigated to the directory where the key file is located or are using the correct path to the key in your ssh command:

ssh -i your_key_file.pem ec2-user@54.191.224.43

Make sure you have included the user name in your ssh command. The default username for an AWS linux instance is: ec2-user

ssh -i your_key_file.pem ec2-user@54.191.224.43

Check that you are using the correct public IP address.


If you have connected repeatedly to the same IP address, and that address has been used by instances with different host keys over time, you may need to delete the existing entry for that IP address from your known hosts file. You will see the location of your known hosts file if you run the ssh command with -vvv (verbose):

ssh -vvv -i your_key_file.pem ec2-user@54.191.224.43

The known hosts file location will look like this on a mac:

debug1: Found key in /Users/username/.ssh/known_hosts:2

You can simply delete the entire file or the offending entry.

Timeout

Make sure you have the following network configuration which will allow SSH traffic to reach your instance on port 22 and send responses back to the SSH client on ephemeral ports:
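In CloudFormation terms, a hedged sketch of that configuration might look like the following. The CIDR block is a placeholder for your own IP address, and the VPC and NACL references are hypothetical. Note that security groups are stateful, so return traffic is allowed automatically, while NACLs are stateless and need an explicit ephemeral-port rule for the responses:

  SshSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: Allow SSH from my IP only
      VpcId: !Ref VPC                        # hypothetical
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: "203.0.113.10/32"          # placeholder: your IP address
  SshIngressNaclEntry:
    Type: "AWS::EC2::NetworkAclEntry"
    Properties:
      NetworkAclId: !Ref PublicNacl          # hypothetical
      RuleNumber: 100
      Protocol: 6                            # TCP
      RuleAction: allow
      CidrBlock: "203.0.113.10/32"
      PortRange: { From: 22, To: 22 }
  EphemeralEgressNaclEntry:
    Type: "AWS::EC2::NetworkAclEntry"
    Properties:
      NetworkAclId: !Ref PublicNacl
      RuleNumber: 100
      Egress: true
      Protocol: 6
      RuleAction: allow
      CidrBlock: "203.0.113.10/32"
      PortRange: { From: 1024, To: 65535 }
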
Random Failures with Active Directory and SSH

If you are using Active Directory as a means of connecting to an EC2 instance, there are a myriad of issues that may be occurring, often related to network ports. Active Directory requires a number of ports to work correctly, and these will differ depending on configuration. AD can dynamically determine which address to use based on DNS settings. If you find connections randomly failing, there is likely something wrong with the network rules: some of the IP addresses behind the corresponding DNS entries have been left out. When the connection works, the client randomly picked an address that has the rules set up properly. When the connection fails, the client randomly picked an address that was not set up properly. Additionally, backhauling connections to a data center may introduce latency or other network problems along the way that cause failures. There are a variety of ways to architect Active Directory logins to overcome these problems, but in general, check that ALL the required addresses are allowed in your networking rules, not just a subset.

Wednesday, May 17, 2017

Manual AWS Console Updates When Using CloudFormation

Manual vs. CloudFormation Updates

Consider the following scenario:
  1. A DevOps person runs a CloudFormation template to create a stack. Let's say it's a network stack that defines all the traffic that can go in and out of your network.
  2. A nefarious or ill-advised person logs into the AWS console and manually changes the networking rules and opens ports to allow evil traffic. For example perhaps the person creates rules that open ports for WanaCryptor intentionally or unintentionally (though hopefully no one is running SMBv1 on AWS!)
  3. DevOps person re-runs the networking stack via CloudFormation to restore the network to the desired state.
Does this work?

No.

How Does CloudFormation Know When To Make a Change?

CloudFormation seems to only know about the changes it made and the differences between the template it is running and the last template it ran. CloudFormation will compare the old, unchanged template with the new template and go "Cool, everything's all good. Move on."

I just manually changed an S3 endpoint policy, bucket policies and IAM policies in the console and then re-ran the original CloudFormation stacks and my policy changes remained intact.

How Does This Impact Security of Your AWS Account?

If you have a critical, security-related stack and you want to maintain that stack in a secure state, you should structure your account to ONLY allow changes through CloudFormation (if that is your tool of choice) and write security policies that allow the appropriate people to update these stacks only through your well-audited deployment mechanism. You might also be able to use Config rules and other event triggers, but that seems more complicated and error prone than a straightforward process of locking down how things are deployed in your account. If you only find out about a problem AFTER it has happened and then fix it, it might be too late. I explain this in more detail in this white paper on Event Driven Security Automation.
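As a hedged example of what "only through your deployment mechanism" could look like, the following CloudFormation snippet attaches a deny policy for common network-modifying API calls to a group of console users (the group name is hypothetical and the action list is abbreviated); only the deployment pipeline role would be granted these actions:

  DenyManualNetworkChangesPolicy:
    Type: "AWS::IAM::ManagedPolicy"
    Properties:
      Description: Deny direct network changes for console users
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: DenyManualNetworkChanges
            Effect: Deny
            Action:
              - "ec2:CreateRoute"
              - "ec2:ReplaceRoute"
              - "ec2:AuthorizeSecurityGroupIngress"
              - "ec2:AuthorizeSecurityGroupEgress"
              - "ec2:CreateNetworkAclEntry"
            Resource: "*"
      Groups:
        - ConsoleUsers                       # hypothetical IAM group of human users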

How Can Manual Problems Be Fixed?

In order to fix this problem, a change can be made to the template that forces an update. In the case of my policies, I can alter the policies in my template to force them to be updated, for example by changing the Sid. Deleting something from a template, running the template, and then recreating it can work in some cases. Manually deleting things created outside CloudFormation in the console is an option. However, deleting resources is not an option when you have existing systems running that cannot be taken offline and that are using the network you are trying to delete. In fact, if you try to do this your CloudFormation stack may end up in a state that is difficult, if not impossible, to fix, though some new features have been added to CloudFormation to override certain problems. You could create a new network and move resources to the new network so you can delete the old one, but that also can be very complicated.
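As a hedged illustration of forcing an update by changing a Sid (the bucket and endpoint names are hypothetical, and the policy content is abbreviated), bumping the Sid from V1 to V2 is enough for CloudFormation to see the resource as changed and push the policy out again:

  FireboxBucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    Properties:
      Bucket: !Ref FireboxBucket             # hypothetical bucket resource
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: AllowAccessFromS3EndpointV2 # was ...V1; changing the Sid forces an update
            Effect: Allow
            Principal: "*"
            Action: "s3:GetObject"
            Resource: !Sub "arn:aws:s3:::${FireboxBucket}/*"
            Condition:
              StringEquals:
                "aws:SourceVpce": !ImportValue FireboxS3Endpoint   # hypothetical export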

Recommendation for Deploying Security Controls on AWS

For this reason...I highly recommend that if you use CloudFormation, for critical stacks such as security appliances and networking, make that the only option for deployment and create appropriate stack policies so those stacks cannot be altered to an undesirable state. In fact, I would recommend that in production accounts, only automated processes should be used for deployments. In QA, using only automated deployments ensures your testing is accurate. Using only automated mechanisms in your development environment will ensure your automation works. If you MUST provide manual access, create a sandbox for testing and clicking buttons. You could also find something other than CloudFormation to automate and control how resources are deployed. CloudFormation is not the only way to do this, however it offers a lot of built in security benefits.

CloudFormation Can Improve Security In Your AWS Account, When Used Properly