Takes CloudTrail logs from S3 and puts them into Elasticsearch.
You need to create some policies and attach them to a role so that your Lambda function has permission to execute. The easiest way to do this is to create a single IAM policy that grants all the permissions you need, and then attach that one policy to the Lambda role. (You could instead split it into a few policies, one for Elasticsearch, one for S3, one for CloudWatch Logs, and attach all three policies to the one role.)
The IAM policy allows three things: reading your S3 bucket to fetch the CloudTrail logs, posting records to your Elasticsearch cluster, and writing errors and other logging to CloudWatch Logs.
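As a sketch, a policy covering those three permission sets can look like the following; the bucket name, domain, region, and account ID are placeholders, and the `lambda-iam-policy.json` in this repo is the authoritative version:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadCloudTrailFromS3",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::YOUR-CLOUDTRAIL-BUCKET/*"
    },
    {
      "Sid": "PostToElasticsearch",
      "Effect": "Allow",
      "Action": ["es:ESHttpPost"],
      "Resource": "arn:aws:es:REGION:ACCOUNT-ID:domain/YOUR-DOMAIN/*"
    },
    {
      "Sid": "WriteLogs",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```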
- Edit the `lambda-iam-policy.json` file:
  - Add in the bucket name for your bucket.
  - Add in the domain name you assigned to your Elasticsearch domain.
- Create an IAM policy, name it something like `cloudtrail_efk`, and set its contents to be the `lambda-iam-policy.json` file.
- Create an AWS IAM role:
  - Choose Lambda as the service for the role.
  - Attach the policy you created.
  - Attach `AWSLambdaVPCAccessExecutionRole` as well.
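If you'd rather script this than click through the console, the equivalent AWS CLI calls look roughly like this (the role name and account ID are example placeholders):

```sh
# Create the policy from the edited JSON file.
aws iam create-policy \
  --policy-name cloudtrail_efk \
  --policy-document file://lambda-iam-policy.json

# Create the role with Lambda as the trusted service.
aws iam create-role \
  --role-name cloudtrail_efk_lambda \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach your policy plus the AWS-managed VPC execution policy.
aws iam attach-role-policy \
  --role-name cloudtrail_efk_lambda \
  --policy-arn arn:aws:iam::ACCOUNT-ID:policy/cloudtrail_efk

aws iam attach-role-policy \
  --role-name cloudtrail_efk_lambda \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
```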
- Pull this repo.
- `pip install requests -t .` (this installs the `requests` dependency into the working tree so it gets zipped into the package).
- Make any changes you need.
- Tag appropriately (use semver).
- `zip -r cloudtrail_efk.zip cloudtrail2ES.py *`
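End to end, the packaging flow looks roughly like this (the repo URL, directory, and tag are placeholders):

```sh
git clone <repo-url> && cd <repo-dir>   # this repository
pip install requests -t .               # vendor requests into the package directory
git tag v1.0.0                          # example semver tag
zip -r cloudtrail_efk.zip cloudtrail2ES.py *
```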
- Create a new Lambda function in the AWS console with a Python 3 runtime.
- Set the handler to `cloudtrail2ES.lambda_handler`.
- Fill in your environment variables (example below).
- Test the Lambda function:
  - Edit `test-cloudtrail-event.json` to use the correct bucket and a real key (a filename in S3); a sketch of the event shape follows this list.
  - Run the test and make sure your data is showing up in Elasticsearch.
- Publish a Lambda version that matches your Git tag.
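The test event has to mimic the S3 "object created" notification that will eventually trigger the function. The file in this repo may differ in detail, but the shape it needs is roughly the following (the bucket name and key are placeholders):

```json
{
  "Records": [
    {
      "eventSource": "aws:s3",
      "s3": {
        "bucket": { "name": "YOUR-CLOUDTRAIL-BUCKET" },
        "object": { "key": "AWSLogs/123456789012/CloudTrail/us-east-1/2024/01/01/example.json.gz" }
      }
    }
  ]
}
```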
Example environment variables:

```
ES_INDEX: cloudtrail
ES_HOST: foo.example.com:9200
ES_USER: cloudtrail_lambda
ES_PASS: very_good_password
```
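For orientation, here is a minimal sketch of the handler's logic, assuming the environment variables above, HTTPS, and basic auth; the actual `cloudtrail2ES.py` may differ (error handling, bulk indexing, etc.). `boto3` is available in the Lambda runtime, and `requests` is the dependency installed above.

```python
import gzip
import json
import os

import boto3
import requests

s3 = boto3.client("s3")


def lambda_handler(event, context):
    # Each S3 notification record points at one gzipped CloudTrail log file.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download and decompress the log file; CloudTrail wraps the
        # individual events in a top-level "Records" array.
        obj = s3.get_object(Bucket=bucket, Key=key)
        trail = json.loads(gzip.decompress(obj["Body"].read()))

        # Index each event into Elasticsearch using the env vars above.
        url = "https://{}/{}/_doc".format(os.environ["ES_HOST"], os.environ["ES_INDEX"])
        auth = (os.environ["ES_USER"], os.environ["ES_PASS"])
        for ev in trail["Records"]:
            requests.post(url, json=ev, auth=auth, timeout=10).raise_for_status()
```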
- Go to your S3 bucket in the console.
- Click on Properties.
- Click on Events.
- Click "+ Add Notification".
- Name the event.
- For Events, select "All object create events".
- For Prefix, enter an appropriate prefix, probably `AWSLogs/`.
- For Send to, select the Lambda function you created.
- Click Save.
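The same notification can be configured from the CLI, as sketched below with placeholder ARNs. Note that outside the console you also have to grant S3 permission to invoke the function yourself (`aws lambda add-permission`); the console does that for you when you save the notification.

```sh
aws s3api put-bucket-notification-configuration \
  --bucket YOUR-CLOUDTRAIL-BUCKET \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "Id": "cloudtrail-to-es",
      "LambdaFunctionArn": "arn:aws:lambda:REGION:ACCOUNT-ID:function:cloudtrail_efk",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {"Key": {"FilterRules": [{"Name": "prefix", "Value": "AWSLogs/"}]}}
    }]
  }'
```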