Creating a Paramiko and Cryptography Virtual Environment to Run on AWS Lambda

A prior blog post explained how to obtain the dependencies to successfully build Paramiko and Cryptography on an AWS EC2 instance in a virtual environment. This post will show how to package up those dependencies for a Lambda function using EC2.

I've created some networking to automate deployment of a WatchGuard Firebox Cloud, which I am using for the EC2 instance below. If you are unsure how to set up networking that securely allows Internet access, you can run the CloudFormation templates in my FireboxCloudAutomation GitHub repo and use that networking to complete the steps below. Stay tuned for more networking information here and on Secplicity. It is highly recommended that you strictly limit any SSH access to instances in your VPC and ideally remove that access over the network when not in use. You can also create a bastion host.

For now I will manually create an EC2 instance; I might automate this later. I am not deploying production systems here, simply testing. I would automate all of this if actually using it in a production environment.

First, launch the EC2 instance.


Choose the Amazon Linux AMI, which has the necessary dependencies matching what is on a Lambda function.


Choose your instance type. The smallest is probably fine.


Configure networking. This is where I use the networking I created for the WatchGuard Firebox Cloud as noted above, so I have SSH access to my new instance without wide-open networking. Choose the Firebox VPC and the public subnet that allows Internet access, and auto-assign a public IP; you can't connect without one.



Tag your instance with a name, which is helpful for finding it in the console.

Create or use a restrictive SSH security group. The ONLY port we need open for this is SSH port 22, and I only need to access it from My IP address, as selected below. Then, in theory, the only way someone could get to this instance would be to get onto my network (which could be done, of course, but we are limiting the attack vector as much as possible). Also, I haven't thoroughly reviewed these software packages. If for some reason they contained malware that reached out to a C2 server, it wouldn't be able to reach that server due to my network rules, so I feel a bit safer with this configuration.

Select a key that will be used to SSH into this instance. KEYS ARE PASSWORDS. Protect them.


Wait for the EC2 instance status checks to pass and the indicator below to turn green.


Right click on the instance to get the command to connect to it.


Follow the instructions to connect to the instance. If you have problems, read this blog post on connecting to EC2 instances using SSH.

Once connected to the EC2 instance in a terminal:


Run the commands from my post that explains how to build Paramiko and Cryptography.

Note that you will likely want to use Python 2.7 due to inconsistencies between the EC2 instance (Python 3.4 or 3.5) and the Lambda runtime (Python 3.6). You can probably make another version work, but this will get you up and running faster:

sudo yum update -y
sudo pip install virtualenv --upgrade
cd /tmp
virtualenv -p /usr/bin/python2.7 python27
source python27/bin/activate
sudo yum install gcc -y
#probably don't need these but just in case libs are missing
#sudo yum install python27-devel.x86_64 -y
#sudo yum install openssl-devel.x86_64 -y
#sudo yum install libffi-devel.x86_64 -y
pip install --upgrade pip
pip install paramiko
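
If you want to sanity check the install before zipping anything up, you can confirm Paramiko imports cleanly inside the virtual environment:

#confirm paramiko imports cleanly in the virtualenv
python -c "import paramiko; print(paramiko.__version__)"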

The commands to create the virtual environment, with comments, can also be found in my earlier post on installing Paramiko and Cryptography in a virtual environment.

Following those directions, zip up the files in the lib/python2.7/site-packages directory:

#change to the site-packages directory
cd python27/lib/python2.7/site-packages
zip -r9 /tmp/lambda.zip . -x \*__pycache__\*

We also need the files in the lib64 directory:

cd ../../..
cd lib64/python2.7/site-packages
zip -g -r9 /tmp/lambda.zip . -x \*__pycache__\*

Now we should have a lambda.zip file in the /tmp directory on the EC2 instance:
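
You can verify the file exists and spot-check its contents before downloading it (assuming unzip is installed on the instance):

#check the zip exists and spot-check its contents
ls -lh /tmp/lambda.zip
unzip -l /tmp/lambda.zip | grep paramiko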


Now I can deactivate the virtual environment because I'm done with it.

deactivate

Run this from your local machine to copy down the zip we just created, placing it in the folder containing the Python file you want to include in the zip for the Lambda function.

#scp username@ipaddress:pathtofile localsystempath
#change the key and IP address below
scp -i "[yourkey].pem" ec2-user@[ip-address]:/tmp/lambda.zip lambda.zip

Now I can add my own Python files to the zip for use in a Lambda function that runs SSH commands, again as explained in the AWS blog post on scheduling SSH jobs using AWS Lambda:


Here's the code I use to add my fireboxconfig.py to the lambda.zip file I downloaded. I actually copy lambda.zip to a new file, add my Python file, and upload it to an S3 bucket so the zip file can be used in my Lambda CloudFormation template.
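
A sketch of those steps (the bucket name is a placeholder; the zip file name matches the S3Key used in my Lambda CloudFormation template):

#copy the zip, add the python file, and upload to S3
cp lambda.zip fireboxconfig.zip
zip -g fireboxconfig.zip fireboxconfig.py
aws s3 cp fireboxconfig.zip s3://[your-bucket]/fireboxconfig.zip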

For more on that, check out my next blog post, where I'll explain how I use it with the rest of the code on GitHub to automate configuration of a WatchGuard Firebox Cloud. I'm using the Lambda function with Paramiko to connect to a WatchGuard Firebox Cloud and configure it via the Firebox CLI (command line interface). For more on the WatchGuard CLI, check out the latest version of the WatchGuard Firebox Cloud CLI Commands.

Questions or suggestions? DM me on Twitter @teriradichel
invalid ELF header - Import Error

If you see this error when running an AWS Lambda function:
{
  "errorMessage": "/var/task/cryptography/hazmat/bindings/_constant_time.abi3.so: invalid ELF header",
  "errorType": "ImportError"
}
...then you need to include compatible versions of the libraries used by your Lambda function.

The problem arises when you package up libraries on the OS where you are developing, while the OS to which you are deploying has different dependency requirements.

For example, when C libraries are involved, the libraries required on Windows are different from the libraries required by an Amazon Linux EC2 instance.

The solution is to do this packaging with virtualenv on an EC2 instance, which will package up compatible libraries for your Lambda function.
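
Condensed from the posts linked below, the fix looks roughly like this on an Amazon Linux EC2 instance:

#build in a virtualenv on Amazon Linux so the compiled libraries match Lambda
virtualenv -p /usr/bin/python2.7 python27
source python27/bin/activate
pip install paramiko
cd python27/lib/python2.7/site-packages
zip -r9 /tmp/lambda.zip . -x \*__pycache__\*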

---

Trying to set up a Python virtual environment that has the correct libraries and/or works on AWS Lambda? Check out these blog posts which are specific to Paramiko and Cryptography but explain how to determine which libraries to use and package up a virtual environment that has the correct underlying libraries:

http://websitenotebook.blogspot.com/2017/05/installing-paramiko-and-crytography-in.html

http://websitenotebook.blogspot.com/2017/05/creating-paramiko-and-cryptography.html

Installing Paramiko and Cryptography in a Python Virtual Environment

This blog post describes how to run SSH jobs from an AWS Lambda function:
https://aws.amazon.com/blogs/compute/scheduling-ssh-jobs-using-aws-lambda/

It seemed like that would be the solution for running SSH in an AWS Lambda function for the purpose of automating configuration of a WatchGuard Firebox Cloud.

The only issue was that when attempting to run the code, I realized additional libraries are required. I started with Python 3.6 because why not? It's the most up-to-date version of Python on Lambda, and it sounds like Paramiko will work with that version. It turns out Paramiko must be packaged up with the Lambda code. That in turn requires the cryptography package, which in turn uses some C libraries. Packaging this up on a Mac or Windows machine would include OS-specific libraries that wouldn't work in a Lambda function, which presumably runs on something like an Amazon Linux EC2 instance.

Always looking for a quick fix, I reached out to my friend, Google. There I found some recommendations suggesting creation of the virtual environment on an EC2 instance. However, that wasn't as straightforward as one might hope. The required libraries were not all installed by default, and the names of the libraries differ from those in the documentation and various blog posts on the topic. Basically, in order to create a Python virtual environment you'll need to install gcc, as well as specific versions of python-devel and openssl-devel. I describe how to find and install those libraries in a bit more detail in my previous posts.

Here's what I came up with. It looks so simple now... and by the way, by the time I wrote this something had changed, so make sure you check which packages are available to install, as noted in my recent blog posts. I also show how to use the list command below to find all the packages with python3 in the name.

#update the ec2 instance
sudo yum update -y

#see what versions of python 3 are available
sudo yum list | grep python3

#install the one we want
sudo yum install python35.x86_64 -y

#switch to temp directory
cd /tmp

#create virtual environment
virtualenv -p /usr/bin/python3.5 python35

#activate virtual environment
source python35/bin/activate

#install dependencies
sudo yum install gcc -y
sudo yum install python35-devel.x86_64 -y
sudo yum install openssl-devel.x86_64 -y
sudo yum install libffi-devel.x86_64 -y

#install cryptography and paramiko
pip install cryptography
pip install paramiko

And finally - it works.

Successfully installed asn1crypto-0.22.0 cffi-1.10.0 cryptography-1.8.1 idna-2.5 packaging-16.8 paramiko-2.1.2 pyasn1-0.2.3 six-1.10.0

Great, but guess what: I tried running this on Lambda and got missing library errors.

Digging further, I figured out how to find which versions of Python are available on Lambda using this blog post:

https://www.linkedin.com/pulse/running-python-3-aws-lambda-lyndon-swan

I ran this code in my Lambda function:

import subprocess

args = ("whereis","python3")
popen = subprocess.Popen(args, stdout=subprocess.PIPE)
popen.wait()
output = popen.stdout.read()
print(output)

Looks like only Python 3.4 and Python 3.6 are available and all of the above is based on 3.5.

b'python3: /usr/bin/python3 /usr/bin/python3.4m /usr/bin/python3.4 /usr/lib/python3.4 /usr/lib64/python3.4 /usr/local/lib/python3.4 /usr/include/python3.4m /var/lang/bin/python3.6m /var/lang/bin/python3.6-config /var/lang/bin/python3 /var/lang/bin/python3.6m-config /var/lang/bin/python3.6 /usr/share/man/man1/python3.1.gz'

The options would be to go back to 2.7 or to try 3.4, since 3.6 doesn't appear to be available on EC2 instances. *sigh* Let's see if we can build a 3.4 virtual environment.

#see what versions of python 3 are available on EC2 instance
sudo yum list | grep python3

#output gives us python34.x86_64

#install the one we want
sudo yum install python34.x86_64 -y

#create virtual environment
virtualenv -p /usr/bin/python3.4 python34

#activate virtual environment
source python34/bin/activate

#install dependencies
sudo yum install gcc -y
sudo yum install python34-devel.x86_64 -y
sudo yum install openssl-devel.x86_64 -y
sudo yum install libffi-devel.x86_64 -y

#install cryptography and paramiko
pip install paramiko

Installing collected packages: pyasn1, paramiko

Successfully installed paramiko-2.1.2 pyasn1-0.2.3

Great. But it didn't run on Lambda either.

{ "errorMessage": "No module named '_cffi_backend'", "errorType": "ModuleNotFoundError"}

Presumably I need to set up my Lambda function to use 3.4 as noted above, but let's roll back to 2.7 and see if that works. Since EC2 instances use 2.7 by default, we hopefully won't need all the extra packages.

#update the ec2 instance
sudo yum update -y

#switch to temp directory
cd /tmp

#create virtual environment
virtualenv -p /usr/bin/python2.7 python27

#activate virtual environment
source python27/bin/activate

#install dependencies
#sudo yum install gcc -y
#sudo yum install openssl-devel.x86_64 -y
#sudo yum install libffi-devel.x86_64 -y

#install cryptography and paramiko
pip install paramiko

Successfully installed asn1crypto-0.22.0 cffi-1.10.0 cryptography-1.8.1 enum34-1.1.6 idna-2.5 ipaddress-1.0.18 paramiko-2.1.2 pyasn1-0.2.3 pycparser-2.17

And... testing it on a 2.7 Lambda function, it works. No missing libraries.

Read on if you want to see how the Lambda function is set up to use Paramiko and Cryptography to connect to and configure a WatchGuard Firebox Cloud via the command line interface and SSH.

No such file or directory #include <openssl/opensslv.h>

Similar to the pyconfig.h error from my last post on AWS EC2 instances, this error appears when attempting to run the following command in a virtual environment to create a Lambda install package:

pip install cryptography

The next error is:

build/temp.linux-x86_64-3.5/_openssl.c:434:30: fatal error: openssl/opensslv.h: No such file or directory
     #include <openssl/opensslv.h>
                                  ^
    compilation terminated.
    error: command 'gcc' failed with exit status 1

Once again run the yum list command to find the correct library. In this case the name is not quite so apparent. I found a number of libraries using this command:

sudo yum list | grep openssl

Such as the following:

openssl.x86_64                        1:1.0.1k-15.99.amzn1          installed   
apr-util-openssl.x86_64               1.4.1-4.17.amzn1              amzn-main   
krb5-pkinit-openssl.x86_64            1.14.1-27.41.amzn1            amzn-main   
openssl.i686                          1:1.0.1k-15.99.amzn1          amzn-main   
openssl-devel.x86_64                  1:1.0.1k-15.99.amzn1          amzn-main   
openssl-perl.x86_64                   1:1.0.1k-15.99.amzn1          amzn-main   
openssl-static.x86_64                 1:1.0.1k-15.99.amzn1          amzn-main   
openssl097a.i686                      0.9.7a-12.1.9.amzn1           amzn-main   
openssl097a.x86_64                    0.9.7a-12.1.9.amzn1           amzn-main   
openssl098e.i686                      0.9.8e-29.19.amzn1            amzn-main   
openssl098e.x86_64                    0.9.8e-29.19.amzn1            amzn-main   
xmlsec1-openssl.i686                  1.2.20-5.3.amzn1              amzn-main   
xmlsec1-openssl.x86_64                1.2.20-5.3.amzn1              amzn-main   
xmlsec1-openssl-devel.x86_64          1.2.20-5.3.amzn1              amzn-main 

Looks like we want this one based on a little research: openssl-devel.x86_64

sudo yum install openssl-devel.x86_64 

Yep, that seems to do the trick. See my next post for the complete list of commands to install the Python SSH library Paramiko, which requires the cryptography library, on an AWS EC2 instance in a virtual environment.

---


No such file or directory include <pyconfig.h>


If you get an error that looks like this when trying to run pip install (such as pip install cryptography or pip install paramiko) on an AWS EC2 instance:

build/temp.linux-x86_64-3.5/_openssl.c:12:24: fatal error: pyconfig.h: No such file or directory
  #include <pyconfig.h>
  compilation terminated.
  error: command 'gcc' failed with exit status 1

...then you need to install the Python development tools. Many blog posts explain this with answers like the following for Python 2 or Python 3:

install python-dev

install python3-dev

On AWS, however, the libraries have different names. First, run this command to list the available libraries that can be installed:

sudo yum list | grep python3

In my case, I see that the library I need on this particular instance is not python3-dev but rather python35-devel.x86_64, which means to get this library I will instead run this command:

sudo yum install python35-devel.x86_64 -y

Note that you will need to install the version of the library that is compatible with the version of Python you are using.
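
For example, matching the package names used in these posts:

#match the -devel package to the interpreter used by your virtualenv
sudo yum install python27-devel.x86_64 -y   #for /usr/bin/python2.7
sudo yum install python35-devel.x86_64 -y   #for /usr/bin/python3.5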


---


unable to execute 'gcc': No such file or directory error: command 'gcc' failed with exit status 1

If you receive this error trying to run an installation script:

 unable to execute 'gcc': No such file or directory
 error: command 'gcc' failed with exit status 1

Install gcc to compile C code:

sudo yum install gcc

Note, however, that it is not recommended to leave this on production systems. Only run it on development systems where code needs to be compiled, and on systems used to build and deploy software in a well-controlled and audited environment. If you leave gcc on production systems, anyone who gets onto the machine can write or download code and compile it. This poses an additional attack vector.
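
If you prefer not to leave gcc on a build instance, you can remove it once the build is done:

#remove the compiler when you no longer need it
sudo yum remove gcc -y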

---


Permission denied (publickey) or Timeout Trying to SSH to an AWS EC2 Instance

If you are trying to SSH into an AWS EC2 instance and having problems, here are some things to check. Although the screen shots are specific to AWS, the same principles apply to SSH problems on other networks as well.

Permission denied (publickey).

Make sure you are using the correct EC2 key that was assigned to the instance or created when the instance was launched. You should have downloaded this key to your local machine. The key name is listed on the summary page for the EC2 instance:



Ensure that you have not changed the contents of the file in any way. Renaming it should be fine.

Change the permissions on the key file so it is readable only by the owner by running this command:

chmod 400 your_key_file.pem

After you run this command you can type this command to verify the permissions of your file: 

ls -al

If set correctly the permissions will look like this:

-r--------@  1 tradichel  1426935984  1692 May 18 21:00 your_key_file.pem

Make sure you have navigated to the directory where the key file is located or are using the correct path to the key in your ssh command:

ssh -i your_key_file.pem ec2-user@54.191.224.43

Make sure you have included the user name in your ssh command. The default username for an Amazon Linux instance is ec2-user:

ssh -i your_key_file.pem ec2-user@54.191.224.43

Check that you are using the correct public IP address.


If you have connected repeatedly to the same IP address using different EC2 keys over time, you may need to delete the existing key for that IP address from your known hosts file. You will see the location of your known hosts file if you run the ssh command with -vvv (verbose):

ssh -vvv -i your_key_file.pem ec2-user@54.191.224.43

The known hosts file location will look like this on a Mac:

debug1: Found key in /Users/username/.ssh/known_hosts:2

You can simply delete the entire file or the offending entry.
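
Rather than editing the file by hand, you can also remove the single offending entry with ssh-keygen:

#remove the stale host key for one IP address
ssh-keygen -R 54.191.224.43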

Timeout

Make sure you have a network configuration that allows SSH traffic to reach your instance on port 22 and allows responses back to the SSH client on ephemeral ports:
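
If you manage the rules with the AWS CLI rather than the console, the SSH ingress rule might look like this (security group ID and IP address are placeholders):

#allow SSH only from your IP address
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr [your-ip]/32
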
Random Failures with Active Directory and SSH

If you are using Active Directory as a means of connecting to an EC2 instance, there are a myriad of issues that may occur, often related to network ports. Active Directory requires a number of ports to work correctly, and these will differ depending on configuration. AD can dynamically determine which address to use based on DNS settings. If you find connections randomly failing, likely something is wrong with the network rules and some of the IP addresses in the corresponding DNS records have been left out. When the connection works, it randomly picked an address that has the rules set up properly. When the connection fails, it randomly picked an address that was not set up properly. Additionally, backhauling connections to a data center may introduce latency or other network problems along the way that cause failures. There are a variety of ways to architect Active Directory logins to overcome these problems, but in general, check that ALL the required addresses are allowed in your networking rules, not just a subset.

Manual AWS Console Updates When Using CloudFormation

Manual vs. CloudFormation Updates

Consider the following scenario:
  1. A DevOps person runs a CloudFormation template to create a stack. Let's say it's a network stack that defines all the traffic that can go in and out of your network.
  2. A nefarious or ill-advised person logs into the AWS console and manually changes the networking rules and opens ports to allow evil traffic. For example perhaps the person creates rules that open ports for WanaCryptor intentionally or unintentionally (though hopefully no one is running SMBv1 on AWS!)
  3. DevOps person re-runs the networking stack via CloudFormation to restore the network to the desired state.
Does this work?

No.

How Does CloudFormation Know When To Make a Change?

CloudFormation seems to only know about the changes it made and the differences between the template it is running and the last template it ran. CloudFormation will compare the old, unchanged template with the new template and go, "Cool, everything's all good. Move on."

I just manually changed an S3 endpoint policy, bucket policies, and IAM policies in the console, then re-ran the original CloudFormation stacks, and my manual policy changes remained intact.

How Does This Impact Security of Your AWS Account?

If you have a critical, security-related stack and you want to maintain that stack in a secure state, you should structure your account to ONLY allow changes through CloudFormation (if that is your tool of choice) and write security policies that allow the appropriate people to update these stacks only through your well-audited deployment mechanism. You might also be able to use Config rules and other event triggers, but that seems more complicated and error-prone than a straightforward process of locking down how things are deployed in your account. If you only find out about a problem AFTER it happened and then fix it, it might be too late. I explain this in more detail in this white paper on Event Driven Security Automation.

How Can Manual Problems Be Fixed?

In order to fix this problem, a change can be made to the template that forces an update. In the case of my policies, I can alter the policies in my template to force them to be updated, for example by changing the SID. Deleting something from a template, running the template, and then recreating the resource can work in some cases. Manually deleting things created outside CloudFormation in the console is another option. However, deleting resources is not an option when you have existing systems running that cannot be taken offline and that are using the network you are trying to delete. In fact, if you try to do this your CloudFormation stack may end up in a state that is difficult, if not impossible, to fix, though some new features have been added to CloudFormation to override certain problems. You could create a new network and move resources to the new network so you can delete the old one, but that also can be very complicated.

Recommendation for Deploying Security Controls on AWS

For this reason, I highly recommend that if you use CloudFormation for critical stacks such as security appliances and networking, you make it the only option for deployment and create appropriate stack policies so those stacks cannot be altered into an undesirable state. In fact, I would recommend that in production accounts, only automated processes should be used for deployments. In QA, using only automated deployments ensures your testing is accurate. Using only automated mechanisms in your development environment will ensure your automation works. If you MUST provide manual access, create a sandbox for testing and clicking buttons. You could also use something other than CloudFormation to automate and control how resources are deployed; CloudFormation is not the only way to do this, but it offers a lot of built-in security benefits.

CloudFormation Can Improve Security In Your AWS Account, When Used Properly

Accessing Files in S3 via a Lambda Function in a VPC using an S3 Endpoint

This post explores creating a Lambda function inside a VPC that retrieves a file from an S3 bucket over an S3 endpoint. The Lambda function below is written in Python.

Why Use a VPC S3 Endpoint? 

Traffic to a VPC endpoint creates a private connection between the specified VPC and the AWS service. By creating the appropriate policies on our bucket and on the role used by our Lambda function, we can force any requests for files in the bucket from the Lambda function to use the S3 endpoint and remain within the Amazon network. If we only allow GetObject via the endpoint, any requests for files must come from within our VPC. By putting a policy on the VPC endpoint, we can limit what S3 actions the Lambda role can take on our bucket over the network. By putting additional restrictions on the bucket policy, we can limit who can upload to the bucket and enforce MFA and specific IP addresses. All these things work together to protect the data in the bucket. Of course, you have to remember that anyone who has permissions to change the policies would be able to remove these restrictions and get to the data in the bucket, so grant permissions to change permissions sparingly and consider segregation of duties.

CloudFormation Templates

The following resources need to be created before we can write a lambda and run our test:

VPC: 

Subnet and Security group for Lambda:

S3 bucket: 

Lambda Role:

S3 Bucket Policy that allows our Lambda role to access the bucket:

An S3 endpoint, with a route added to the subnet route table and an S3 endpoint policy that grants access to the bucket:

Lambda Function:

Resources:

The S3 Endpoint was created and assigned to the desired route table. The Route Tables tab displays the reference to the subnet and route table where the route has been added for the S3 VPC Endpoint.


A look at the route table for the subnet shows the route listed for the S3 endpoint. The route table has a route that allows access to S3 via a Prefix List. 

Wait... what's a Prefix List? A service (S3) in a VPC endpoint is identified by a prefix list: the name and ID of a service for a region. A prefix list ID uses the form pl-xxxxxxx, and that ID needs to be added to the outbound rules of the security group to allow resources in that security group to access the service (in this case S3 in the Oregon, or us-west-2, region). Basically it allows traffic to be routed to that service within the AWS network.
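
If you want to look up the prefix list ID for S3 in your region, the AWS CLI can list it (us-west-2 assumed, as above):

#find the prefix list ID (pl-xxxxxxx) for S3 in the region
aws ec2 describe-prefix-lists --region us-west-2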



A security group was created, and an egress (outbound) rule was added to it that allows access to S3 via the Prefix List.



The Lambda function shows that it has been created inside a VPC, using our subnet and specified security group. You'll notice one thing that's a bit odd - the destination for our S3 VPC Endpoint rule is blank. But will it still work?


Lambda Python Code

The Lambda Python code should allow retrieving a file from the bucket - in my case retrieving a key for use in administration purposes:

from __future__ import print_function
import boto3
import os
import subprocess

def configure_firebox(event, context):

    #create an S3 client
    s3=boto3.client('s3')

    #the bucket name is passed in as a Lambda environment variable
    bucket=os.environ['Bucket']
    key="firebox-cli-ec2-key.pem"

    #retrieve the file from the bucket over the S3 endpoint
    response = s3.get_object(Bucket=bucket, Key=key)
   ...


Success

If you have successfully created your networking and policies, you will be able to access the files in your bucket over the S3 endpoint (and only over the S3 endpoint, if you desire). In fact, you can restrict access to the bucket to the S3 endpoint only using the following policy:

{
   "Version": "2012-10-17",
   "Id": "Policy1415115909152",
   "Statement": [
     {
       "Sid": "Access-to-specific-VPCE-only",
       "Action": "s3:*",
       "Effect": "Deny",
       "Resource": ["arn:aws:s3:::examplebucket",
                    "arn:aws:s3:::examplebucket/*"],
       "Condition": {
         "StringNotEquals": {
           "aws:sourceVpce": "vpce-1a2b3c4d"
         }
       },
       "Principal": "*"
     }
   ]
}


Time Out

If you get a timeout error, likely one of the networking rules is not set up correctly. 

  • Make sure the route is in the route table associated with the subnet used by the Lambda function.
  • Make sure the Security Group used by the Lambda function has the Prefix List egress rule.
  • Make sure the Lambda function is assigned to the correct subnet and security group that show the rules above.
  • Make sure the S3 endpoint policy allows access to the bucket by the Lambda role.
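
One way to double-check the first two items is with the AWS CLI (IDs are placeholders):

#confirm the S3 endpoint route is in the subnet's route table
aws ec2 describe-route-tables --route-table-ids rtb-xxxxxxxx

#confirm the security group egress rule references the prefix list
aws ec2 describe-security-groups --group-ids sg-xxxxxxxx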

Access Denied

If this error occurs:
An error occurred (AccessDenied) when calling the GetObject operation: Access Denied: ClientError
Check the following:
  • The bucket policy allows s3:GetObject on the individual keys (files): "arn:aws:s3:::MyExampleBucket/*"
  • The bucket policy allows s3:ListBucket for the Lambda role on the bucket itself: "arn:aws:s3:::MyExampleBucket"
  • The Lambda role allows access to the S3 bucket.
  • Make sure the file name and the bucket name are correct.
  • Make sure you have a principal in your S3 Endpoint Policy. At the time of this writing, CloudFormation allows creating a policy without a principal, which results in this error.

errorMessage: Bad handler

When trying to create an AWS Lambda function if you get this error message:

errorMessage: Bad handler 

Make sure when specifying the handler you use the format [file name].[function name].

For example, if you have the function configure_firebox and the Python file name is fireboxconfig.py, then specify the handler as fireboxconfig.configure_firebox.

If you are specifying the name correctly and still getting the error, take a look at your zip file and make sure the Python file you are referencing is at the root of the zip file, not in any folders, when the code is unzipped.

If you are using the zip command to add a file to the archive without its directory path, use the -j switch:

zip -j ./resources/firebox-lambda/fireboxconfig.zip ./resources/firebox-lambda/python/fireboxconfig.py 
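
You can then confirm the file sits at the root of the archive (assuming unzip is available):

unzip -l ./resources/firebox-lambda/fireboxconfig.zip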


Also check for any spelling errors in your file or handler name.

In CloudFormation:

FireboxConfigurationLambda:
    Type: "AWS::Lambda::Function"
    Properties: 
      Code:
        S3Bucket: !ImportValue FireboxPrivateBucket
        S3Key: fireboxconfig.zip
      Description: Firebox Lambda to Execute CLI Commands
      Environment:
        Variables:
          Test: Value
      FunctionName: ConfigureFirebox
      Handler: fireboxconfig.configure_firebox
      KmsKeyArn: !ImportValue FireboxKmsKeyArn
      MemorySize: 128
      Role: !ImportValue FireboxLambdaCLIRoleArn
      Runtime: python3.6
      Timeout: 3
      VpcConfig:
        SecurityGroupIds:
          - !ImportValue FireboxManagementSecurityGroup
        SubnetIds:
          - !ImportValue FireboxCLISubnet

More:

http://docs.aws.amazon.com/lambda/latest/dg/python-programming-model-handler-types.html

https://github.com/tradichel/FireboxCloudAutomation/blob/master/code/resources/firebox-lambda/lambda.yaml

The provided execution role does not have permissions to call CreateNetworkInterface on EC2

If you get this error when attempting to create an AWS Lambda function:

The provided execution role does not have permissions to call CreateNetworkInterface on EC2

You need to grant Lambda some additional permissions:

ec2:CreateNetworkInterface
ec2:DescribeNetworkInterfaces
ec2:DeleteNetworkInterface

There is an existing managed policy provided by AWS, named AWSLambdaVPCAccessExecutionRole, which has all the permissions required by a Lambda function running in a VPC.

You can attach a managed policy to a role using CloudFormation by using the ManagedPolicyArns Property of an IAM role.
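
If you ever need to attach the same managed policy outside of CloudFormation, the CLI equivalent looks like this (role name taken from the example below):

aws iam attach-role-policy --role-name FireboxLambdaRole --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole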

Type: "AWS::IAM::Role"
Properties: 
  AssumeRolePolicyDocument:
    JSON object
  ManagedPolicyArns:
    - String
  Path: String
  Policies:
    - Policies
  RoleName: String

For example:

FireboxRole: 
    Type: "AWS::IAM::Role"
    Properties: 
      RoleName: "FireboxLambdaRole"
      AssumeRolePolicyDocument: 
        Version: "2012-10-17"
        Statement: 
          - 
            Effect: "Allow"
            Principal: 
              Service: 
                - "lambda.amazonaws.com"
            Action: 
              - "sts:AssumeRole"
      Path: "/"
      ManagedPolicyArns: 
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"

For more information see: 

http://docs.aws.amazon.com/lambda/latest/dg/vpc.html

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html

https://github.com/tradichel/FireboxCloudAutomation/blob/master/code/resources/firebox-cli/clirole.yaml