To submit a template to the Serverless Patterns Collection, submit an issue with the following information.
IMPORTANT
Patterns are intended to be primarily IaC-focused implementations of 2-4 AWS services, with minimum custom code. They should be commonly used combinations that help developers get started quickly. If you have a utility, demo, or application, submit these to the Serverless Repos Collection instead.
ONLY SUBMIT ONE PATTERN CHANGE PER PR. Multiple patterns or files spanning multiple pattern directories will be automatically rejected.
Patterns may take up to 4-6 weeks to review, test, and merge but there is no SLA and can take significantly longer due to other work the team has.
THIS PROCESS HAS BEEN SIMPLIFIED. All the information below must be provided in the "example-pattern.json" file cloned from the model.
Note the following information for the model:
Description (intro.text) should be a 300-500 word explanation of how the pattern works.
This pattern sets up an Amazon CloudWatch metric stream and associates it with an Amazon Kinesis Data Firehose delivery stream. With this setup you can continuously stream metrics to a destination of your choice with near-real-time, low-latency delivery. Several destinations are supported, including Amazon Simple Storage Service (Amazon S3) and third-party providers such as Datadog, New Relic, Splunk, and Sumo Logic; in this pattern we use S3. You can stream all CloudWatch metrics, or use filters to stream only specified metrics. Each metric stream can include up to 1,000 filters that include or exclude namespaces or specific metrics; however, a single stream can use either include filters or exclude filters, not both. If new metrics matching the filters in place are added later, an existing metric stream automatically includes them.
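As a rough illustration of how the pieces fit together, here is a minimal sketch using the AWS SDK for Python (boto3); the stream name, Firehose ARN, role ARN, and namespaces below are placeholders for illustration, not values taken from the pattern's template:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Create (or update) a metric stream that delivers to an existing
# Kinesis Data Firehose delivery stream, which in turn writes to S3.
cloudwatch.put_metric_stream(
    Name="example-metric-stream",  # placeholder name
    FirehoseArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/example-delivery-stream",
    RoleArn="arn:aws:iam::123456789012:role/example-metric-stream-role",
    OutputFormat="json",
    # Include filters: stream only these namespaces. A stream can use
    # IncludeFilters or ExcludeFilters (up to 1,000 entries), but not both.
    IncludeFilters=[
        {"Namespace": "AWS/EC2"},
        {"Namespace": "AWS/Lambda"},
    ],
)
```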
Traditionally, AWS customers polled CloudWatch metrics through APIs, an approach used by many monitoring, alerting, and cost-management tools. With metric streams, customers can instead create low-latency, scalable streams of metrics and filter them at the namespace level, for example to include or exclude entire namespaces. Where more granular filtering is required, metric name filtering in metric streams provides that more precise control.
A useful feature of metric streams is that you can create metric name filters for metrics that do not yet exist in your AWS account. For example, you can define filters for the AWS/EC2 namespace if you know an application will produce metrics in that namespace, even though the application has not yet been deployed and those metrics will not appear until the service is provisioned.
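For example, a sketch of an include filter narrowed to specific metric names (the stream name, ARNs, and metric names are placeholders); CloudWatch accepts the filter even if the AWS/EC2 namespace is not yet publishing metrics in the account, and matching metrics are streamed automatically once they appear:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Narrow an include filter down to specific metric names within a namespace.
cloudwatch.put_metric_stream(
    Name="example-metric-stream",
    FirehoseArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/example-delivery-stream",
    RoleArn="arn:aws:iam::123456789012:role/example-metric-stream-role",
    OutputFormat="json",
    IncludeFilters=[
        {"Namespace": "AWS/EC2", "MetricNames": ["CPUUtilization", "NetworkIn"]},
    ],
)
```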
This pattern also creates the roles and policies the services need, scoped to the minimum permissions required in line with the principle of least privilege. The roles and policies can be extended if additional services are introduced.
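As a rough illustration of what such a role looks like (the role, policy, and resource names below are placeholders, not taken from the actual template), the metric stream role trusts the CloudWatch metric streams service and is limited to the PutRecord calls it needs on the single delivery stream:

```python
import json
import boto3

iam = boto3.client("iam")

# Role assumed by the CloudWatch metric streams service.
iam.create_role(
    RoleName="example-metric-stream-role",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "streams.metrics.cloudwatch.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# Least-privilege inline policy: only the PutRecord calls the stream needs,
# scoped to the single Firehose delivery stream used by the pattern.
iam.put_role_policy(
    RoleName="example-metric-stream-role",
    PolicyName="example-firehose-put",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
            "Resource": "arn:aws:firehose:us-east-1:123456789012:deliverystream/example-delivery-stream",
        }],
    }),
)
```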
Resources should link to AWS documentation and AWS blogs related to the post (1-5 maximum).
Author bio may include a LinkedIn and/or Twitter reference and a 1-sentence bio.
Name: Kiran Ramamurthy
LinkedIn handle: www.linkedin.com/in/kiran-ramamurthy
Description (up to 255 chars): I am a Senior Partner Solutions Architect for Enterprise Transformation. I work predominantly with partners and specialize in migrations and modernization.
You must ensure that the sections of the model README.md are completed in full.
To learn more about submitting a pattern, read the publishing guidelines page.
Use the model template located at https://github.com/aws-samples/serverless-patterns/tree/main/_pattern-model to set up a README, template and any associated code.
GitHub PR for template:
#2461
Additional Info: