[![Build Status][travis-image]][travis] [![Release][release-image]][releases] [![License][license-image]][license] ![Built with Grunt][grunt-image]
This is an example [AWS Lambda][aws-lambda] application for processing a [Kinesis][aws-kinesis] stream of events ([introductory blog post][blog-post]). It reads the stream of simple JSON events generated by our event generator. Our AWS Lambda function aggregates and buckets events and stores them in [DynamoDB][aws-dynamodb].
This was built by the Data Science team at [Snowplow Analytics][snowplow], who use AWS Lambda in their projects.
Running this requires an Amazon AWS account, and will incur charges.
See also: Spark Streaming Example Project | [Spark Example Project][spark-example-project]
We have implemented a super-simple analytics-on-write stream processing job using AWS Lambda. Our AWS Lambda function, written in JavaScript, reads a Kinesis stream containing events in a JSON format:
{
"timestamp": "2015-06-05T12:54:43.064528",
"type": "Green",
"id": "4ec80fb1-0963-4e35-8f54-ce760499d974"
}
Our job counts the events by type and aggregates these counts into 1-minute buckets. The job then takes these aggregates and saves them into a table in DynamoDB.
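The heart of the job is a small Kinesis-triggered handler. Below is a minimal sketch of that counting-and-bucketing logic, not the project's actual source: it assumes the standard Node.js Lambda handler signature and simply tallies records per (minute bucket, event type) pair before they would be written to DynamoDB.

```javascript
// Minimal sketch of the counting/bucketing logic (not the project's actual handler)
exports.handler = function(event, context) {
  var counts = {}; // e.g. { "2015-06-05T12:54:00.000Z|Green": 3 }

  event.Records.forEach(function(record) {
    // Kinesis delivers each record payload base64-encoded
    var payload = new Buffer(record.kinesis.data, 'base64').toString('utf8');
    var e = JSON.parse(payload);

    // Truncate the event timestamp to the start of its minute
    var bucketStart = new Date(e.timestamp);
    bucketStart.setSeconds(0, 0);

    var key = bucketStart.toISOString() + '|' + e.type;
    counts[key] = (counts[key] || 0) + 1;
  });

  // The real function would now upsert these per-bucket counts into DynamoDB
  console.log(JSON.stringify(counts));
  context.succeed('Processed ' + event.Records.length + ' records');
};
```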
Assuming you have git, [Vagrant][vagrant-install] and [VirtualBox][virtualbox-install] installed:
host$ git clone https://github.com/snowplow/aws-lambda-nodejs-example-project.git
host$ cd aws-lambda-nodejs-example-project
host$ vagrant up && vagrant ssh
guest$ cd /vagrant
guest$ npm install grunt
guest$ npm install
guest$ grunt --help
You can follow along in [the release blog post][blog-post] to get the project up and running yourself.
The following steps assume that you are running inside Vagrant, as per the Developer Quickstart above.
First we need to configure a default AWS profile:
$ aws configure
AWS Access Key ID [None]: ...
AWS Secret Access Key [None]: ...
Default region name [None]: us-east-1
Default output format [None]: json
Now we can create our DynamoDB table, Kinesis stream, and IAM role. We will be using [CloudFormation](http://aws.amazon.com/cloudformation) to create our new role. Using Grunt, we can create all of these like so:
$ grunt init
Running "dynamo:default" (dynamo) task
{ TableDescription:
{ AttributeDefinitions: [ [Object], [Object], [Object] ],
CreationDateTime: Sun Jun 28 2015 13:04:02 GMT-0700 (PDT),
ItemCount: 0,
KeySchema: [ [Object], [Object] ],
LocalSecondaryIndexes: [ [Object] ],
ProvisionedThroughput:
{ NumberOfDecreasesToday: 0,
ReadCapacityUnits: 20,
WriteCapacityUnits: 20 },
TableName: 'my-table',
TableSizeBytes: 0,
TableStatus: 'CREATING' } }
Running "createRole:default" (createRole) task
{ ResponseMetadata: { RequestId: 'd29asdff0-1dd0-11e5-984e-35a24700edda' },
StackId: 'arn:aws:cloudformation:us-east-1:84asdf429716:stack/kinesisDynamo/d2af8730-1dd0-11e5-854a-50d5017c76e0' }
Running "kinesis:default" (kinesis) task
{}
Done, without errors.
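Under the hood, these Grunt tasks call the AWS SDK for Node.js. The following is only a rough sketch of what the stream and table creation might look like: the key schema is an assumption based on the BucketStart/EventType columns described later, and the names mirror this walkthrough.

```javascript
// Sketch of stream/table creation with the aws-sdk package
// (the key schema is an assumption; the real table also defines a secondary index)
var AWS = require('aws-sdk');
AWS.config.update({ region: 'us-east-1' });

var kinesis = new AWS.Kinesis();
var dynamodb = new AWS.DynamoDB();

// Create the Kinesis stream that the event generator will write to
kinesis.createStream({ StreamName: 'my-stream', ShardCount: 1 }, function(err) {
  if (err) return console.error(err);
  console.log('Creating Kinesis stream my-stream');
});

// Create the DynamoDB table that the Lambda function will write counts into
dynamodb.createTable({
  TableName: 'my-table',
  AttributeDefinitions: [
    { AttributeName: 'BucketStart', AttributeType: 'S' },
    { AttributeName: 'EventType', AttributeType: 'S' }
  ],
  KeySchema: [
    { AttributeName: 'BucketStart', KeyType: 'HASH' },
    { AttributeName: 'EventType', KeyType: 'RANGE' }
  ],
  ProvisionedThroughput: { ReadCapacityUnits: 20, WriteCapacityUnits: 20 }
}, function(err, data) {
  if (err) return console.error(err);
  console.log('Table status:', data.TableDescription.TableStatus);
});
```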
Wait a minute to ensure our IAM service role gets created. Next we need to give the new service role access to Kinesis, CloudWatch, Lambda, and DynamoDB; to keep things simple, we attach an admin policy to the Lambda execution role. Using Grunt, we attach the service role and assemble our AWS Lambda function into a zip file for upload to the AWS Lambda service:
$ grunt role
Running "attachRole:default" (attachRole) task
{ ResponseMetadata: { RequestId: '36ac7877-1dca-11e5-b439-d1da60d122be' } }
Running "packaging:default" (packaging) task
[email protected] ../../../../var/folders/3t/7nlz8rzs2mq5fg_sf3x4j7_m0000gn/T/1435519004662.0046/node_modules/aws-lambda-example-project
├── [email protected]
├── [email protected]
├── [email protected] ([email protected])
├── [email protected] ([email protected])
├── [email protected] ([email protected], [email protected], [email protected], [email protected])
├── [email protected]
├── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected], [email protected])
└── [email protected] ([email protected], [email protected], [email protected])
Created package at dist/aws-lambda-example-project_0-1-0_latest.zip
...
Now deploy this project to AWS Lambda with the `grunt deploy` command:
$ grunt deploy
Running "deployLambda:default" (deployLambda) task
Trying to create AWS Lambda Function...
Created AWS Lambda Function...
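A `deployLambda`-style task essentially boils down to a single `createFunction` call against the AWS SDK. The sketch below is illustrative only: the handler name, role ARN, timeout and memory size are placeholders, not the project's actual settings.

```javascript
// Sketch of deploying the zipped function with the aws-sdk package
// (handler name, role ARN and other settings are placeholders)
var AWS = require('aws-sdk');
var fs = require('fs');

var lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.createFunction({
  FunctionName: 'ProcessKinesisRecordsDynamo',
  Runtime: 'nodejs',                                         // the Node.js runtime available in 2015
  Handler: 'index.handler',                                  // placeholder module.handler
  Role: 'arn:aws:iam::123456789012:role/lambda_exec_role',   // placeholder role ARN
  Code: { ZipFile: fs.readFileSync('dist/aws-lambda-example-project_0-1-0_latest.zip') },
  Timeout: 60,
  MemorySize: 128
}, function(err, data) {
  if (err) return console.error(err);
  console.log('Created AWS Lambda function:', data.FunctionArn);
});
```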
The final step in getting this project ready to start processing events is to associate our Kinesis stream with the Lambda function using this command:
$ grunt connect
Running "associateStream:default" (associateStream) task
arn:aws:kinesis:us-east-1:844709429716:stream/my-stream
{ BatchSize: 100,
EventSourceArn: 'arn:aws:kinesis:us-east-1:2349429716:stream/my-stream',
FunctionArn: 'arn:aws:lambda:us-east-1:2349429716:function:ProcessKinesisRecordsDynamo',
LastModified: Sun Jun 28 2015 12:38:37 GMT-0700 (PDT),
LastProcessingResult: 'No records processed',
State: 'Creating',
StateTransitionReason: 'User action',
UUID: 'f4efc-fe72-4337-9907-89d4e64c' }
Done, without errors.
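The `associateStream` task is essentially a `createEventSourceMapping` call, which tells AWS Lambda to poll the stream and invoke our function with batches of records. A hedged sketch (the stream ARN is a placeholder):

```javascript
// Sketch of wiring the Kinesis stream to the function with the aws-sdk package
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.createEventSourceMapping({
  EventSourceArn: 'arn:aws:kinesis:us-east-1:123456789012:stream/my-stream', // placeholder ARN
  FunctionName: 'ProcessKinesisRecordsDynamo',
  StartingPosition: 'TRIM_HORIZON',   // start from the oldest record available in the stream
  BatchSize: 100,
  Enabled: true
}, function(err, data) {
  if (err) return console.error(err);
  console.log('Event source mapping state:', data.State);
});
```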
We need to start sending events to our new Kinesis stream. We have created a helper method to do this - run the command below and leave it running in a separate tab:
$ grunt events
Writing Kinesis Event: {"timestamp":"2015-06-29T20:12:21.625Z","type":"Red"}
{ SequenceNumber: '49552099319153062484931809176874704852938278389141209090',
ShardId: 'shardId-000000000000' }
Writing Kinesis Event: {"timestamp":"2015-06-29T20:12:22.200Z","type":"Red"}
{ SequenceNumber: '49552099319153062484931809176875913778757893018315915266',
ShardId: 'shardId-000000000000' }
Writing Kinesis Event: {"timestamp":"2015-06-29T20:12:22.708Z","type":"Green"}
{ SequenceNumber: '49552099319153062484931809176877122704577507716210098178',
ShardId: 'shardId-000000000000' }
...
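If you'd like to see roughly what such a helper does, the sketch below writes a random event to the stream every half second using the AWS SDK; the five colour types are an assumption based on the Red/Green events above, not the generator's actual configuration.

```javascript
// Sketch of a simple event generator using the aws-sdk package
// (the list of event types is an assumption for illustration)
var AWS = require('aws-sdk');
var kinesis = new AWS.Kinesis({ region: 'us-east-1' });

var types = ['Red', 'Orange', 'Yellow', 'Green', 'Blue'];

setInterval(function() {
  var event = {
    timestamp: new Date().toISOString(),
    type: types[Math.floor(Math.random() * types.length)]
  };

  kinesis.putRecord({
    StreamName: 'my-stream',
    Data: JSON.stringify(event),
    PartitionKey: event.type   // records with the same type land on the same shard
  }, function(err, data) {
    if (err) return console.error(err);
    console.log('Writing Kinesis Event: ' + JSON.stringify(event));
    console.log(data);
  });
}, 500);
```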
To check that events are being processed, first head over to the AWS Lambda service console, then review the function's logs in CloudWatch.
Finally, let's check the data in our DynamoDB table. Make sure you are in the correct AWS region, then click on `my-table` and hit the **Explore Table** button.
For each BucketStart and EventType pair, we see a Count, plus some CreatedAt and UpdatedAt metadata for debugging purposes. Our bucket size is 1 minute, and we have 5 discrete event types, hence the matrix of rows that we see.
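Each of those cells can be maintained with an atomic `UpdateItem` call, so further records arriving in the same minute simply increment the existing Count. A rough sketch (the attribute names follow the columns above; the key layout is an assumption):

```javascript
// Sketch of incrementing one (BucketStart, EventType) counter with the aws-sdk package
var AWS = require('aws-sdk');
var dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

dynamodb.updateItem({
  TableName: 'my-table',
  Key: {
    BucketStart: { S: '2015-06-05T12:54:00.000Z' },
    EventType:   { S: 'Green' }
  },
  // SET refreshes the UpdatedAt debugging timestamp;
  // ADD creates the counter at 0 if it is missing, then increments it
  UpdateExpression: 'SET #updated = :now ADD #count :one',
  ExpressionAttributeNames: { '#count': 'Count', '#updated': 'UpdatedAt' },
  ExpressionAttributeValues: {
    ':one': { N: '1' },
    ':now': { S: new Date().toISOString() }
  }
}, function(err) {
  if (err) return console.error(err);
  console.log('Incremented Green count for the 12:54 bucket');
});
```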
- Various improvements for the [0.2.0 release][020-milestone]
- Expanding our analytics-on-write thinking into our new [Icebucket][icebucket] project
- [Tim Bell][tim-b] for his blog post [Writing Functions for AWS Lambda Using NPM and Grunt][tim-b-post]
- Ian Meyers and his Amazon Kinesis Aggregators project, a true inspiration for streaming analytics-on-write
AWS Lambda Example Project is copyright 2015 Snowplow Analytics Ltd.
Licensed under the [Apache License, Version 2.0][license] (the "License"); you may not use this software except in compliance with the License.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.