github.com/mweagle/Sparta@v1.15.0/docs/index.json (about)

     1  [
     2  {
     3  	"uri": "/getting_started/",
     4  	"title": "Getting Started",
     5  	"tags": [],
     6  	"description": "Getting started with Sparta",
     7  	"content": "To build a Sparta application, follow these steps:\n  go get -u -v github.com/mweagle/Sparta/...\n  Configure your AWS Credentials according to the Go SDK docs. The most reliable approach is to use environment variables as in:\n$ env | grep AWS AWS_DEFAULT_REGION=us-xxxx-x AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx AWS_REGION=us-xxxx-x AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxxxxxxxx   Create a sample main.go file as in:\npackage main import ( \u0026#34;context\u0026#34; \u0026#34;fmt\u0026#34; \u0026#34;os\u0026#34; \u0026#34;github.com/aws/aws-sdk-go/aws/session\u0026#34; sparta \u0026#34;github.com/mweagle/Sparta\u0026#34; spartaCF \u0026#34;github.com/mweagle/Sparta/aws/cloudformation\u0026#34; \u0026#34;github.com/sirupsen/logrus\u0026#34; ) // Standard AWS λ function  func helloWorld(ctx context.Context) (string, error) { logger, loggerOk := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) if loggerOk { logger.Info(\u0026#34;Accessing structured logger 🙌\u0026#34;) } return \u0026#34;Hello World 👋. Welcome to AWS Lambda! 
🙌🎉🍾\u0026#34;, nil } //////////////////////////////////////////////////////////////////////////////// // Main func main() { lambdaFn, _ := sparta.NewAWSLambda(\u0026#34;Hello World\u0026#34;, helloWorld, sparta.IAMRoleDefinition{}) sess := session.Must(session.NewSession()) awsName, awsNameErr := spartaCF.UserAccountScopedStackName(\u0026#34;MyHelloWorldStack\u0026#34;, sess) if awsNameErr != nil { fmt.Print(\u0026#34;Failed to create stack name\\n\u0026#34;) os.Exit(1) } var lambdaFunctions []*sparta.LambdaAWSInfo lambdaFunctions = append(lambdaFunctions, lambdaFn) err := sparta.Main(awsName, \u0026#34;Simple Sparta HelloWorld application\u0026#34;, lambdaFunctions, nil, nil) if err != nil { os.Exit(1) } }   Build and deploy with go run main.go provision --s3Bucket YOUR_S3_BUCKET_NAME where YOUR_S3_BUCKET_NAME is an S3 bucket to which your account has write privileges.\n  The following Example Service section provides more details regarding how Sparta transforms your application into a self-deploying service.\n"
     8  },
     9  {
    10  	"uri": "/reference/operations/magefile/",
    11  	"title": "Magefiles",
    12  	"tags": [],
    13  	"description": "",
    14  	"content": "Magefile To support cross-platform development and usage, Sparta uses magefiles rather than Makefiles. Most projects can start with the magefile.go sample below. The Magefiles provide a discoverable CLI, but are entirely optional. The go run main.go XXXX style invocation remains available as well.\nDefault Sparta magefile.go This magefile.go can be used, unchanged, for most standard Sparta projects.\n// +build mage  // File: magefile.go  package main import ( spartaMage \u0026#34;github.com/mweagle/Sparta/magefile\u0026#34; ) // Provision the service func Provision() error { return spartaMage.Provision() } // Describe the stack by producing an HTML representation of the CloudFormation // template func Describe() error { return spartaMage.Describe() } // Delete the service, iff it exists func Delete() error { return spartaMage.Delete() } // Status report if the stack has been provisioned func Status() error { return spartaMage.Status() } // Version information func Version() error { return spartaMage.Version() } $ mage -l Targets: delete the service, iff it exists describe the stack by producing an HTML representation of the CloudFormation template provision the service status report if the stack has been provisioned version information Sparta Magefile Helpers There are several magefile helpers available in the Sparta package. These are in addition to, and often delegate to, the core mage libraries.\n"
    15  },
    16  {
    17  	"uri": "/reference/archetypes/event_bridge/",
    18  	"title": "Event Bridge",
    19  	"tags": [],
    20  	"description": "",
    21  	"content": "The EventBridge lambda event source allows you to trigger lambda functions in response to either cron schedules or account events. There are two different archetype functions available.\nScheduled Scheduled Lambdas execute either at fixed times or periodically depending on the schedule expression.\nTo create a scheduled function use a constructor as in:\nimport ( spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // EventBridge reactor function func echoEventBridgeEvent(ctx context.Context, msg json.RawMessage) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) var eventData map[string]interface{} err := json.Unmarshal(msg, \u0026amp;eventData) logger.WithFields(logrus.Fields{ \u0026#34;error\u0026#34;: err, \u0026#34;message\u0026#34;: eventData, }).Info(\u0026#34;EventBridge event\u0026#34;) return nil, err } func main() { // ...  eventBridgeReactorFunc := spartaArchetype.EventBridgeReactorFunc(echoEventBridgeEvent) lambdaFn, lambdaFnErr := spartaArchetype.NewEventBridgeScheduledReactor(eventBridgeReactorFunc, \u0026#34;rate(1 minute)\u0026#34;, nil) // ... 
} When the scheduled event is triggered, the log statement outputs the full payload:\n{ \u0026#34;error\u0026#34;: null, \u0026#34;level\u0026#34;: \u0026#34;info\u0026#34;, \u0026#34;message\u0026#34;: { \u0026#34;account\u0026#34;: \u0026#34;123412341234\u0026#34;, \u0026#34;detail\u0026#34;: {}, \u0026#34;detail-type\u0026#34;: \u0026#34;Scheduled Event\u0026#34;, \u0026#34;id\u0026#34;: \u0026#34;f453bd1e-ccea-9df4-4e40-938097e82869\u0026#34;, \u0026#34;region\u0026#34;: \u0026#34;us-west-2\u0026#34;, \u0026#34;resources\u0026#34;: [ \u0026#34;arn:aws:events:us-west-2:123412341234:rule/SpartaEventBridge-0271594-EventBridgexmainechoEven-2WMCXA1LWGZY\u0026#34; ], \u0026#34;source\u0026#34;: \u0026#34;aws.events\u0026#34;, \u0026#34;time\u0026#34;: \u0026#34;2020-02-23T00:47:31Z\u0026#34;, \u0026#34;version\u0026#34;: \u0026#34;0\u0026#34; }, \u0026#34;msg\u0026#34;: \u0026#34;EventBridge event\u0026#34;, \u0026#34;time\u0026#34;: \u0026#34;2020-02-23T00:48:08Z\u0026#34; } See the scheduled event payload documentation.\nEvents Lambda functions can also be triggered via EventBridge by providing an event pattern to select which events should trigger your function\u0026rsquo;s execution.\nTo create an event subscriber, use a constructor as in:\nfunc echoEventBridgeEvent(ctx context.Context, msg json.RawMessage) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) var eventData map[string]interface{} err := json.Unmarshal(msg, \u0026amp;eventData) logger.WithFields(logrus.Fields{ \u0026#34;error\u0026#34;: err, \u0026#34;message\u0026#34;: eventData, }).Info(\u0026#34;EventBridge event\u0026#34;) return nil, err } func main() { // ...  eventBridgeReactorFunc := spartaArchetype.EventBridgeReactorFunc(echoEventBridgeEvent) lambdaFn, lambdaFnErr := spartaArchetype.NewEventBridgeEventReactor(eventBridgeReactorFunc, map[string]interface{}{ \u0026#34;source\u0026#34;: []string{\u0026#34;aws.ec2\u0026#34;}, }, nil) // ... 
} The event payload data depends on what event is being subscribed to. See the event patterns documentation for more information.\n"
    22  },
    23  {
    24  	"uri": "/reference/decorators/application_load_balancer/",
    25  	"title": "Application Load Balancer",
    26  	"tags": [],
    27  	"description": "",
    28  	"content": "The ApplicationLoadBalancerDecorator allows you to expose lambda functions as Application Load Balancer targets.\nThis can be useful to provide HTTP(S) access to one or more Lambda functions without requiring an API Gateway service.\nLambda Function Application Load Balancer (ALB) lambda targets must satisfy a prescribed Lambda signature:\nimport ( awsEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; ) func(context.Context, awsEvents.ALBTargetGroupRequest) (awsEvents.ALBTargetGroupResponse, error) See the ALBTargetGroupRequest and ALBTargetGroupResponse godoc entries for more information.\nAn example ALB-eligible target function might look like:\n// ALB eligible lambda function func helloNewWorld(ctx context.Context, albEvent awsEvents.ALBTargetGroupRequest) (awsEvents.ALBTargetGroupResponse, error) { return awsEvents.ALBTargetGroupResponse{ StatusCode: 200, StatusDescription: fmt.Sprintf(\u0026#34;200 OK\u0026#34;), Body: \u0026#34;Some other handler\u0026#34;, IsBase64Encoded: false, Headers: map[string]string{}, }, nil } Once you\u0026rsquo;ve defined your ALB-compatible functions, the next step is to register them with the decorator responsible for configuring them as ALB listener targets.\nDefinition The ApplicationLoadBalancerDecorator satisfies the ServiceDecoratorHookHandler interface and adds a set of CloudFormation Resources to support properly publishing your Lambda functions.\nSince this access path requires an Application Load Balancer, the first step is to define the SecurityGroup associated with the ALB so that incoming requests can be accepted.\nThe following definition will create a Security Group that accepts public traffic on port 80:\nsgResName := sparta.CloudFormationResourceName(\u0026#34;ALBSecurityGroup\u0026#34;, \u0026#34;ALBSecurityGroup\u0026#34;) sgRes := \u0026amp;gocf.EC2SecurityGroup{ GroupDescription: gocf.String(\u0026#34;ALB Security Group\u0026#34;), SecurityGroupIngress: 
\u0026amp;gocf.EC2SecurityGroupIngressPropertyList{ gocf.EC2SecurityGroupIngressProperty{ IPProtocol: gocf.String(\u0026#34;tcp\u0026#34;), FromPort: gocf.Integer(80), ToPort: gocf.Integer(80), CidrIP: gocf.String(\u0026#34;0.0.0.0/0\u0026#34;), }, }, } The subnets for our Application Load Balancer are supplied as an environment variable (TEST_SUBNETS) of the form id1,id2:\nsubnetList := strings.Split(os.Getenv(\u0026#34;TEST_SUBNETS\u0026#34;), \u0026#34;,\u0026#34;) subnetIDs := make([]gocf.Stringable, len(subnetList)) for eachIndex, eachSubnet := range subnetList { subnetIDs[eachIndex] = gocf.String(eachSubnet) } The next step is to define the ALB and associate it with both the account Subnets and SecurityGroup we just defined:\nalb := \u0026amp;gocf.ElasticLoadBalancingV2LoadBalancer{ Subnets: gocf.StringList(subnetIDs...), SecurityGroups: gocf.StringList(gocf.GetAtt(sgResName, \u0026#34;GroupId\u0026#34;)), } This ElasticLoadBalancingV2LoadBalancer instance is provided to NewApplicationLoadBalancerDecorator to create the decorator that will annotate the CloudFormation template with the required resources.\nalbDecorator, albDecoratorErr := spartaDecorators.NewApplicationLoadBalancerDecorator(alb, 80, \u0026#34;HTTP\u0026#34;, lambdaFn) The NewApplicationLoadBalancerDecorator accepts four arguments:\n The ElasticLoadBalancingV2LoadBalancer that handles this service\u0026rsquo;s incoming requests The port (80) on which incoming requests will be accepted The protocol (HTTP) for incoming requests The default *sparta.LambdaAWSInfo instance (lambdaFn) to use as the ALB\u0026rsquo;s DefaultAction handler in case no other conditional target matches the incoming request.  
Conditional Targets Services may expose more than one Lambda function on the same port using multiple ListenerRule entries.\nFor instance, to register a second lambda function lambdaFn2 with the same Application Load Balancer at the /newhello path, add a ConditionalEntry as in:\nalbDecorator.AddConditionalEntry(gocf.ElasticLoadBalancingV2ListenerRuleRuleCondition{ Field: gocf.String(\u0026#34;path-pattern\u0026#34;), PathPatternConfig: \u0026amp;gocf.ElasticLoadBalancingV2ListenerRulePathPatternConfig{ Values: gocf.StringList(gocf.String(\u0026#34;/newhello*\u0026#34;)), }, }, lambdaFn2) This will create a rule that associates the _/newhello*_ path with lambdaFn2. Requests that do not match the incoming path will fall back to the default handler (lambdaFn). See the RuleCondition documentation for the full set of conditions that can be expressed.\nAdditional Resources The next step is to ensure that the Security Group that we associated with our ALB is included in the final template. This is done by including it in the ApplicationLoadBalancerDecorator.Resources map, which allows you to provide additional CloudFormation resources that should be included in the final template:\n// Finally, tell the ALB decorator we have some additional resources that need to be // included in the CloudFormation template albDecorator.Resources[sgResName] = sgRes Workflow Hooks With the decorator fully configured, the final step is to provide it as part of the WorkflowHooks struct:\n// Supply it to the WorkflowHooks and get going...  
workflowHooks := \u0026amp;sparta.WorkflowHooks{ ServiceDecorators: []sparta.ServiceDecoratorHookHandler{ albDecorator, }, } err := sparta.MainEx(awsName, \u0026#34;Simple Sparta application that demonstrates how to make Lambda functions an ALB Target\u0026#34;, lambdaFunctions, nil, nil, workflowHooks, false) Output As part of the provisioning workflow, the ApplicationLoadBalancerDecorator will include the Application Load Balancer discovery information in the Outputs section as in:\nINFO[0056] Stack Outputs ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ INFO[0056] ApplicationLoadBalancerDNS80 Description=\u0026#34;ALB DNSName (port: 80, protocol: HTTP)\u0026#34; Value=MyALB-ELBv2-44R3J0MV1D37-943334334.us-west-2.elb.amazonaws.com INFO[0056] ApplicationLoadBalancerName80 Description=\u0026#34;ALB Name (port: 80, protocol: HTTP)\u0026#34; Value=MyALB-ELBv2-44R3J0MV1D37 INFO[0056] ApplicationLoadBalancerURL80 Description=\u0026#34;ALB URL (port: 80, protocol: HTTP)\u0026#34; Value=\u0026#34;http://MyALB-ELBv2-44R3J0MV1D37-943334334.us-west-2.elb.amazonaws.com:80\u0026#34; Testing Using curl we can verify the newly provisioned Application Load Balancer behavior. 
The default lambda function echoes the incoming request and is available at the ALB URL basepath:\ncurl http://MyALB-ELBv2-44R3J0MV1D37-943334334.us-west-2.elb.amazonaws.com returns\n{ \u0026#34;httpMethod\u0026#34;: \u0026#34;GET\u0026#34;, \u0026#34;path\u0026#34;: \u0026#34;/\u0026#34;, \u0026#34;headers\u0026#34;: { \u0026#34;accept\u0026#34;: \u0026#34;*/*\u0026#34;, \u0026#34;host\u0026#34;: \u0026#34;MyALB-ELBv2-44R3J0MV1D37-943334334.us-west-2.elb.amazonaws.com\u0026#34;, \u0026#34;user-agent\u0026#34;: \u0026#34;curl/7.54.0\u0026#34;, \u0026#34;x-amzn-trace-id\u0026#34;: \u0026#34;Root=1-5d507bf4-ca98d1ad44ac0fe56ec6a9ae\u0026#34;, \u0026#34;x-forwarded-for\u0026#34;: \u0026#34;24.17.9.178\u0026#34;, \u0026#34;x-forwarded-port\u0026#34;: \u0026#34;80\u0026#34;, \u0026#34;x-forwarded-proto\u0026#34;: \u0026#34;http\u0026#34; }, \u0026#34;requestContext\u0026#34;: { \u0026#34;elb\u0026#34;: { \u0026#34;targetGroupArn\u0026#34;: \u0026#34;arn:aws:elasticloadbalancing:us-west-2:123412341234:targetgroup/MyALB-ALBDe-1OJX6J3VGX369/1dab61286efaebb6\u0026#34; } }, \u0026#34;isBase64Encoded\u0026#34;: false, \u0026#34;body\u0026#34;: \u0026#34;\u0026#34; } The conditional lambda function behavior is exposed at /newhello as in:\ncurl http://MyALB-ELBv2-44R3J0MV1D37-943334334.us-west-2.elb.amazonaws.com/newhello Some other handler "
    29  },
    30  {
    31  	"uri": "/reference/decorators/cloudmap/",
    32  	"title": "CloudMap Service Discovery",
    33  	"tags": [],
    34  	"description": "",
    35  	"content": "The CloudMapServiceDecorator allows your service to register a service instance for your application.\nFor example, an application that provisions an SQS queue and an AWS Lambda function that consumes messages from that queue may need a way for the Lambda function to discover the dynamically provisioned queue.\nSparta supports an environment-based discovery service, but that discovery is limited to a single Service.\nThe CloudMapServiceDecorator leverages the CloudMap service to support intra- and inter-service resource discovery.\nDefinition The first step is to create an instance of the CloudMapServiceDecorator type that can be used to register additional resources.\nimport ( spartaDecorators \u0026#34;github.com/mweagle/Sparta/decorator\u0026#34; ) func main() { ... cloudMapDecorator, cloudMapDecoratorErr := spartaDecorators.NewCloudMapServiceDecorator(gocf.String(\u0026#34;SpartaServices\u0026#34;), gocf.String(\u0026#34;SpartaSampleCloudMapService\u0026#34;)) ... } The first argument is the Cloud Map Namespace ID value to which the service (SpartaSampleCloudMapService) will publish.\nThe decorator satisfies the ServiceDecoratorHookHandler interface. The instance should be provided as a WorkflowHooks.ServiceDecorators element to MainEx as in:\nfunc main() { // ...  cloudMapDecorator, cloudMapDecoratorErr := spartaDecorators.NewCloudMapServiceDecorator(gocf.String(\u0026#34;SpartaServices\u0026#34;), gocf.String(\u0026#34;SpartaSampleCloudMapService\u0026#34;)) workflowHooks := \u0026amp;sparta.WorkflowHooks{ ServiceDecorators: []sparta.ServiceDecoratorHookHandler{ cloudMapDecorator, }, } // ...  err := sparta.MainEx(awsName, \u0026#34;Simple Sparta application that demonstrates core functionality\u0026#34;, lambdaFunctions, nil, nil, workflowHooks, false) } Registering The returned CloudMapServiceDecorator instance satisfies the ServiceDecoratorHookHandler interface. 
When invoked, it updates the content of your CloudFormation template with the resources and permissions as described below. CloudMapServiceDecorator implicitly creates a new AWS::ServiceDiscovery::Service resource to which your resources will be published.\nLambda Functions The CloudMapServiceDecorator.PublishLambda function publishes Lambda function information to the (NamespaceID, ServiceID) pair.\nlambdaFn, _ := sparta.NewAWSLambda(\u0026#34;Hello World\u0026#34;, helloWorld, sparta.IAMRoleDefinition{}) cloudMapDecorator.PublishLambda(\u0026#34;lambdaDiscoveryName\u0026#34;, lambdaFn, nil) The default properties published include the Lambda Outputs and Type information:\n{ \u0026#34;Id\u0026#34;: \u0026#34;CloudMapResbe2b7c536074312c-VuIPjfjuFaoc\u0026#34;, \u0026#34;Attributes\u0026#34;: { \u0026#34;Arn\u0026#34;: \u0026#34;arn:aws:lambda:us-west-2:123412341234:function:MyHelloWorldStack-123412341234_Hello_World\u0026#34;, \u0026#34;Name\u0026#34;: \u0026#34;lambdaDiscoveryName\u0026#34;, \u0026#34;Ref\u0026#34;: \u0026#34;MyHelloWorldStack-123412341234_Hello_World\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;AWS::Lambda::Function\u0026#34; } } Other Resources The CloudMapServiceDecorator.PublishResource function publishes arbitrary CloudFormation resource output information to the (NamespaceID, ServiceID) pair.\nFor instance, to publish SQS information in the context of a standard ServiceDecorator:\nfunc createSQSResourceDecorator(cloudMapDecorator *spartaDecorators.CloudMapServiceDecorator) sparta.ServiceDecoratorHookHandler { return sparta.ServiceDecoratorHookFunc(func(context map[string]interface{}, serviceName string, template *gocf.Template, S3Bucket string, S3Key string, buildID string, awsSession *session.Session, noop bool, logger *logrus.Logger) error { sqsResource := \u0026amp;gocf.SQSQueue{} template.AddResource(\u0026#34;SQSResource\u0026#34;, sqsResource) return cloudMapDecorator.PublishResource(\u0026#34;SQSResource\u0026#34;, 
\u0026#34;SQSResource\u0026#34;, sqsResource, nil) }) } The default properties published include the SQS Outputs and Type information:\n{ \u0026#34;Id\u0026#34;: \u0026#34;CloudMapRes21cf275e8bbbe136-CqWZ27gdLHf8\u0026#34;, \u0026#34;Attributes\u0026#34;: { \u0026#34;Arn\u0026#34;: \u0026#34;arn:aws:sqs:us-west-2:123412341234:MyHelloWorldStack-123412341234-SQSResource-S9DWKIFKP14U\u0026#34;, \u0026#34;Name\u0026#34;: \u0026#34;SQSResource\u0026#34;, \u0026#34;QueueName\u0026#34;: \u0026#34;MyHelloWorldStack-123412341234-SQSResource-S9DWKIFKP14U\u0026#34;, \u0026#34;Ref\u0026#34;: \u0026#34;https://sqs.us-west-2.amazonaws.com/123412341234/MyHelloWorldStack-123412341234-SQSResource-S9DWKIFKP14U\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;AWS::SQS::Queue\u0026#34; } } Enabling Publishing instances to CloudMap only makes them available for other services to discover. Call EnableDiscoverySupport with your *sparta.LambdaAWSInfo instance as the only argument. This function updates your Lambda function\u0026rsquo;s environment to include the provisioned ServiceInstance and also the IAM role privileges to authorize:\n servicediscovery:DiscoverInstances servicediscovery:GetNamespace servicediscovery:ListInstances servicediscovery:GetService  For instance:\nfunc main() { // ...  lambdaFn, _ := sparta.NewAWSLambda(\u0026#34;Hello World\u0026#34;, helloWorld, sparta.IAMRoleDefinition{}) // ...  cloudMapDecorator.EnableDiscoverySupport(lambdaFn) // ... } Invoking With the resources published and the lambda role properly updated, the last step is to dynamically discover the provisioned resources via CloudMap. Call DiscoverInstancesWithContext with the set of key-value pairs to use for discovery as below:\nfunc helloWorld(ctx context.Context) (string, error) { // ...  
props := map[string]string{ \u0026#34;Type\u0026#34;: \u0026#34;AWS::SQS::Queue\u0026#34;, } results, resultsErr := spartaDecorators.DiscoverInstancesWithContext(ctx, props, logger) logger.WithFields(logrus.Fields{ \u0026#34;Instances\u0026#34;: results, \u0026#34;Error\u0026#34;: resultsErr, }).Info(\u0026#34;Discovered instances!\u0026#34;) // ... } Given the previous example of a single lambda function and an SQS-queue provisioning decorator, the DiscoverInstancesWithContext would return the matching instance with data similar to:\n{ \u0026#34;Instances\u0026#34;: [ { \u0026#34;Attributes\u0026#34;: { \u0026#34;Arn\u0026#34;: \u0026#34;arn:aws:sqs:us-west-2:123412341234:MyHelloWorldStack-123412341234-SQSResource-S9DWKIFKP14U\u0026#34;, \u0026#34;Name\u0026#34;: \u0026#34;SQSResource\u0026#34;, \u0026#34;QueueName\u0026#34;: \u0026#34;MyHelloWorldStack-123412341234-SQSResource-S9DWKIFKP14U\u0026#34;, \u0026#34;Ref\u0026#34;: \u0026#34;https://sqs.us-west-2.amazonaws.com/123412341234/MyHelloWorldStack-123412341234-SQSResource-S9DWKIFKP14U\u0026#34;, \u0026#34;Type\u0026#34;: \u0026#34;AWS::SQS::Queue\u0026#34; }, \u0026#34;HealthStatus\u0026#34;: \u0026#34;HEALTHY\u0026#34;, \u0026#34;InstanceId\u0026#34;: \u0026#34;CloudMapResd1a507076543ccd0-Fln1ITi5cf0y\u0026#34;, \u0026#34;NamespaceName\u0026#34;: \u0026#34;SpartaServices\u0026#34;, \u0026#34;ServiceName\u0026#34;: \u0026#34;SpartaSampleCloudMapService\u0026#34; } ] } "
    36  },
    37  {
    38  	"uri": "/reference/archetypes/codecommit/",
    39  	"title": "CodeCommit",
    40  	"tags": [],
    41  	"description": "",
    42  	"content": "The CodeCommit Lambda event source allows you to trigger lambda functions in response to CodeCommit repository events.\nEvents Lambda functions triggered in response to CodeCommit events use a combination of events and branches to manage which state changes trigger your lambda function.\nTo create an event subscriber, use a constructor as in:\n// CodeCommit reactor function func reactorFunc(ctx context.Context, event awsLambdaEvents.CodeCommitEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: event, }).Info(\u0026#34;Event received\u0026#34;) return \u0026amp;event, nil } func main() { // ...  handler := spartaArchetype.CodeCommitReactorFunc(reactorFunc) reactor, reactorErr := spartaArchetype.NewCodeCommitReactor(handler, gocf.String(\u0026#34;MyRepositoryName\u0026#34;), nil, // Defaults to \u0026#39;all\u0026#39; branches  nil, // Defaults to \u0026#39;all\u0026#39; events  nil) // Additional IAM privileges  ... } "
    43  },
    44  {
    45  	"uri": "/reference/eventsources/codecommit/",
    46  	"title": "CodeCommit",
    47  	"tags": [],
    48  	"description": "",
    49  	"content": "In this section we\u0026rsquo;ll walk through how to trigger your lambda function in response to CodeCommit Events.\nGoal Assume that we\u0026rsquo;re supposed to write a Lambda function that is triggered in response to any event emitted by a CodeCommit repository.\nGetting Started Our lambda function is relatively short:\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; ) func echoCodeCommit(ctx context.Context, event awsLambdaEvents.CodeCommitEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: event, }).Info(\u0026#34;Event received\u0026#34;) return \u0026amp;event, nil } Our lambda function doesn\u0026rsquo;t need to do much with the repository message other than log and return it.\nSparta Integration With echoCodeCommit() defined, the next step is to integrate the Go function with your application.\nOur lambda function only needs logfile write privileges, and since these are enabled by default, we can use an empty sparta.IAMRoleDefinition value:\nfunc appendCodeCommitHandler(api *sparta.API, lambdaFunctions []*sparta.LambdaAWSInfo) []*sparta.LambdaAWSInfo { lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoCodeCommit), echoCodeCommit, sparta.IAMRoleDefinition{}) The next step is to add a CodeCommitPermission value that represents the notification settings.\nrepositoryName := gocf.String(\u0026#34;MyTestRepository\u0026#34;) codeCommitPermission := sparta.CodeCommitPermission{ BasePermission: sparta.BasePermission{ SourceArn: repositoryName, }, RepositoryName: repositoryName.String(), Branches: branches, // may be nil  Events: events, // may be nil } The sparta.CodeCommitPermission struct provides fields that proxy the RepositoryTrigger values.\nAdd Permission With the subscription information configured, the final step is to add the sparta.CodeCommitPermission to our sparta.LambdaAWSInfo 
value:\nlambdaFn.Permissions = append(lambdaFn.Permissions, codeCommitPermission) The entire function is therefore:\nfunc appendCodeCommitHandler(api *sparta.API, lambdaFunctions []*sparta.LambdaAWSInfo) []*sparta.LambdaAWSInfo { lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoCodeCommit), echoCodeCommit, sparta.IAMRoleDefinition{}) repositoryName := gocf.String(\u0026#34;MyTestRepository\u0026#34;) codeCommitPermission := sparta.CodeCommitPermission{ BasePermission: sparta.BasePermission{ SourceArn: repositoryName, }, RepositoryName: repositoryName.String(), } lambdaFn.Permissions = append(lambdaFn.Permissions, codeCommitPermission) return append(lambdaFunctions, lambdaFn) } Wrapping Up With the lambdaFn fully defined, we can provide it to sparta.Main() and deploy our service. The workflow below is shared by all CodeCommit-triggered lambda functions:\n Define the lambda function (echoCodeCommit). If needed, create the required IAMRoleDefinition with appropriate privileges. Provide the lambda function \u0026amp; IAMRoleDefinition to sparta.NewAWSLambda() Create a CodeCommitPermission value. Define the necessary permission fields. Append the CodeCommitPermission value to the lambda function\u0026rsquo;s Permissions slice. Include the reference in the call to sparta.Main().  
Other Resources   Consider the archetype package to encapsulate these steps.\n  Use the AWS CLI to inspect the configured triggers:\n$ aws codecommit get-repository-triggers --repository-name=TestCodeCommitRepo { \u0026#34;configurationId\u0026#34;: \u0026#34;7dd7933a-a26c-4514-9ab8-ad8cc133f874\u0026#34;, \u0026#34;triggers\u0026#34;: [ { \u0026#34;name\u0026#34;: \u0026#34;MyHelloWorldStack-mweagle_main_echoCodeCommit\u0026#34;, \u0026#34;destinationArn\u0026#34;: \u0026#34;arn:aws:lambda:us-west-2:123412341234:function:MyHelloWorldStack-mweagle_main_echoCodeCommit\u0026#34;, \u0026#34;branches\u0026#34;: [], \u0026#34;events\u0026#34;: [ \u0026#34;all\u0026#34; ] } ] }  Use the AWS CLI to test the configured trigger:  $ aws codecommit test-repository-triggers --repository-name TestCodeCommitRepo --triggers name=MyHelloWorldStack-mweagle-MyHelloWorldStack-mweagle_main_echoCodeCommit,destinationArn=arn:aws:lambda:us-west-2:123412341234:function:MyHelloWorldStack-mweagle_main_echoCodeCommit,branches=mainline,preprod,events=all { \u0026#34;successfulExecutions\u0026#34;: [ \u0026#34;MyHelloWorldStack-mweagle-MyHelloWorldStack-mweagle_main_echoCodeCommit\u0026#34; ], \u0026#34;failedExecutions\u0026#34;: [] }   "
    50  },
    51  {
    52  	"uri": "/reference/step/services/dynamodb/",
    53  	"title": "Amazon DynamoDb",
    54  	"tags": [],
    55  	"description": "",
    56  	"content": " TODO: Document Dynamo integration.\n "
    57  },
    58  {
    59  	"uri": "/reference/step/services/sagemaker/",
    60  	"title": "Amazon SageMaker",
    61  	"tags": [],
    62  	"description": "",
    63  	"content": " TODO: Document SageMaker integration.\n "
    64  },
    65  {
    66  	"uri": "/reference/step/services/sns/",
    67  	"title": "Amazon SNS",
    68  	"tags": [],
    69  	"description": "",
    70  	"content": " TODO: Document SNS integration.\n "
    71  },
    72  {
    73  	"uri": "/reference/step/services/sqs/",
    74  	"title": "Amazon SQS",
    75  	"tags": [],
    76  	"description": "",
    77  	"content": " TODO: Document SQS integration.\n "
    78  },
    79  {
    80  	"uri": "/reference/step/services/batch/",
    81  	"title": "AWS Batch",
    82  	"tags": [],
    83  	"description": "",
    84  	"content": " TODO: Document Batch integration.\n "
    85  },
    86  {
    87  	"uri": "/reference/step/services/fargate/",
    88  	"title": "AWS Fargate",
    89  	"tags": [],
    90  	"description": "",
    91  	"content": " TODO: Document Fargate integration.\n "
    92  },
    93  {
    94  	"uri": "/reference/step/services/glue/",
    95  	"title": "AWS Glue",
    96  	"tags": [],
    97  	"description": "",
    98  	"content": " TODO: Document Glue integration.\n "
    99  },
   100  {
   101  	"uri": "/reference/step/lambda/",
   102  	"title": "Lambda",
   103  	"tags": [],
   104  	"description": "",
   105  	"content": "Step Functions AWS Step Functions are a powerful way to express long-running, complex workflows comprised of Lambda functions. With Sparta 0.20.2, you can build a State Machine as part of your application. This section walks through the three steps necessary to provision a sample \u0026ldquo;Roll Die\u0026rdquo; state machine using a single Lambda function. See SpartaStep for the full source.\nLambda Functions The first step is to define the core Lambda function Task that will be our Step function\u0026rsquo;s core logic. In this example, we\u0026rsquo;ll define a rollDie function:\ntype lambdaRollResponse struct { Roll int `json:\u0026#34;roll\u0026#34;` } // Standard AWS λ function func lambdaRollDie(ctx context.Context) (lambdaRollResponse, error) { return lambdaRollResponse{ Roll: rand.Intn(5) + 1, }, nil } State Machine Our state machine is simple: we want to keep rolling the die until we get a \u0026ldquo;good\u0026rdquo; result, with a delay in between rolls:\nTo do this, we use the new github.com/mweagle/Sparta/aws/step functions to define the other states.\n// Make all the Step states lambdaTaskState := step.NewTaskState(\u0026#34;lambdaRollDie\u0026#34;, lambdaFn) successState := step.NewSuccessState(\u0026#34;success\u0026#34;) delayState := step.NewWaitDelayState(\u0026#34;tryAgainShortly\u0026#34;, 3*time.Second) lambdaChoices := []step.ChoiceBranch{ \u0026amp;step.Not{ Comparison: \u0026amp;step.NumericGreaterThan{ Variable: \u0026#34;$.roll\u0026#34;, Value: 3, }, Next: delayState, }, } choiceState := step.NewChoiceState(\u0026#34;checkRoll\u0026#34;, lambdaChoices...). 
WithDefault(successState) The Sparta state types correspond to their AWS States Spec equivalents:\n successState : SucceedState delayState : a specialized WaitState choiceState: ChoiceState  The choiceState is the most interesting state: based on the JSON response of the lambdaRollDie, it will either transition to a delay or the success end state.\nSee godoc for the complete set of types.\nThe lambdaTaskState uses a normal Sparta function as in:\nlambdaFn, _ := sparta.NewAWSLambda(\u0026#34;StepRollDie\u0026#34;, lambdaRollDie, sparta.IAMRoleDefinition{}) lambdaFn.Options.MemorySize = 128 lambdaFn.Options.Tags = map[string]string{ \u0026#34;myAccounting\u0026#34;: \u0026#34;tag\u0026#34;, } The final step is to hook up the state transitions for states that don\u0026rsquo;t implicitly include them, and create the State Machine:\n// Hook up the transitions lambdaTaskState.Next(choiceState) delayState.Next(lambdaTaskState) // Startup the machine with a user-scoped name for account uniqueness stateMachineName := spartaCF.UserScopedStackName(\u0026#34;StateMachine\u0026#34;) startMachine := step.NewStateMachine(stateMachineName, lambdaTaskState) At this point we have a potentially well-formed Lambda-powered State Machine. The final step is to attach this machine to the normal service definition.\nService Decorator The return type from step.NewStateMachine(...) is a *step.StateMachine instance that exposes a ServiceDecoratorHook. 
Adding the hook to your service\u0026rsquo;s Workflow Hooks (similar to provisioning a service-scoped CloudWatch Dashboard) will include it in the CloudFormation template serialization:\n// Setup the hook to annotate workflowHooks := \u0026amp;sparta.WorkflowHooks{ ServiceDecorator: startMachine.StateMachineDecorator(), } userStackName := spartaCF.UserScopedStackName(\u0026#34;SpartaStep\u0026#34;) err := sparta.MainEx(userStackName, \u0026#34;Simple Sparta application that demonstrates AWS Step functions\u0026#34;, lambdaFunctions, nil, nil, workflowHooks, false) With the decorator attached, the next service provision request will include the state machine as above.\n$ go run main.go provision --s3Bucket weagle INFO[0000] ════════════════════════════════════════════════ INFO[0000] ╔═╗┌─┐┌─┐┬─┐┌┬┐┌─┐ Version : 1.0.2 INFO[0000] ╚═╗├─┘├─┤├┬┘ │ ├─┤ SHA : b37b93e INFO[0000] ╚═╝┴ ┴ ┴┴└─ ┴ ┴ ┴ Go : go1.9.2 INFO[0000] ════════════════════════════════════════════════ INFO[0000] Service: SpartaStep-mweagle LinkFlags= Option=provision UTC=\u0026#34;2018-01-29T14:33:36Z\u0026#34; INFO[0000] ════════════════════════════════════════════════ INFO[0000] Provisioning service BuildID=f7ade93d3900ab4b01c468c1723dedac24cbfa93 CodePipelineTrigger= InPlaceUpdates=false NOOP=false Tags= INFO[0000] Verifying IAM Lambda execution roles INFO[0000] IAM roles verified Count=1 INFO[0000] Checking S3 versioning Bucket=weagle VersioningEnabled=true INFO[0000] Checking S3 region Bucket=weagle Region=us-west-2 INFO[0000] Running `go generate` INFO[0000] Compiling binary Name=Sparta.lambda.amd64 INFO[0010] Creating code ZIP archive for upload TempName=./.sparta/SpartaStep_mweagle-code.zip INFO[0010] Lambda code archive size Size=\u0026#34;13 MB\u0026#34; INFO[0010] Uploading local file to S3 Bucket=weagle Key=SpartaStep-mweagle/SpartaStep_mweagle-code.zip Path=./.sparta/SpartaStep_mweagle-code.zip Size=\u0026#34;13 MB\u0026#34; INFO[0020] Calling WorkflowHook 
ServiceDecoratorHook=\u0026#34;github.com/mweagle/Sparta/aws/step.(*StateMachine).StateMachineDecorator.func1\u0026#34; WorkflowHookContext=\u0026#34;map[]\u0026#34; INFO[0020] Uploading local file to S3 Bucket=weagle Key=SpartaStep-mweagle/SpartaStep_mweagle-cftemplate.json Path=./.sparta/SpartaStep_mweagle-cftemplate.json Size=\u0026#34;3.7 kB\u0026#34; INFO[0021] Creating stack StackID=\u0026#34;arn:aws:cloudformation:us-west-2:123412341234:stack/SpartaStep-mweagle/6ff65180-0501-11e8-935b-50a68d01a629\u0026#34; INFO[0094] CloudFormation provisioning metrics: INFO[0094] Operation duration Duration=54.73s Resource=SpartaStep-mweagle Type=\u0026#34;AWS::CloudFormation::Stack\u0026#34; INFO[0094] Operation duration Duration=19.02s Resource=IAMRole49969e8a894b9eeea02a4936fb9519f2bd67dbe6 Type=\u0026#34;AWS::IAM::Role\u0026#34; INFO[0094] Operation duration Duration=18.69s Resource=StatesIAMRolee00aa3484b0397c676887af695abfd160104318a Type=\u0026#34;AWS::IAM::Role\u0026#34; INFO[0094] Operation duration Duration=2.60s Resource=StateMachine59f153f18068faa0b7fb588350be79df422ba5ef Type=\u0026#34;AWS::StepFunctions::StateMachine\u0026#34; INFO[0094] Operation duration Duration=2.28s Resource=StepRollDieLambda7d9f8ab476995f16b91b154f68e5f5cc42601ebf Type=\u0026#34;AWS::Lambda::Function\u0026#34; INFO[0094] Stack provisioned CreationTime=\u0026#34;2018-01-29 14:33:56.7 +0000 UTC\u0026#34; StackId=\u0026#34;arn:aws:cloudformation:us-west-2:123412341234:stack/SpartaStep-mweagle/6ff65180-0501-11e8-935b-50a68d01a629\u0026#34; StackName=SpartaStep-mweagle INFO[0094] ════════════════════════════════════════════════ INFO[0094] SpartaStep-mweagle Summary INFO[0094] ════════════════════════════════════════════════ INFO[0094] Verifying IAM roles Duration (s)=0 INFO[0094] Verifying AWS preconditions Duration (s)=0 INFO[0094] Creating code bundle Duration (s)=10 INFO[0094] Uploading code Duration (s)=10 INFO[0094] Ensuring CloudFormation stack Duration (s)=73 INFO[0094] Total elapsed 
time Duration (s)=94 Testing With the stack provisioned, the final step is to test the State Machine and see how lucky our die roll is. Navigate to the Step Functions service dashboard in the AWS Console and find the State Machine that was provisioned. Click the New Execution button and accept the default JSON. This sample state machine doesn\u0026rsquo;t interrogate the incoming data, so the initial JSON is effectively ignored.\nFor this test the first roll was a 4, so there was only one path through the state machine. Depending on your die roll, you may see different state machine paths through the WaitState.\nWrapping Up AWS Step Functions are a powerful tool that allows you to orchestrate long-running workflows using AWS Lambda. Step Functions are useful for implementing the Saga pattern as in here and here. They can also be used to compose Lambda functions into more complex workflows that include parallel operations on shared data.\nNotes  Minimal State machine validation is done at this time. See Tim Bray for more information. Value interrogation is defined by JSONPath expressions  "
   106  },
   107  {
   108  	"uri": "/reference/interceptors/xray_interceptor/",
   109  	"title": "XRayInterceptor",
   110  	"tags": [],
   111  	"description": "",
   112  	"content": " TODO: Document the XRayInterceptor\n 🎉 Sparta v1.7.0: The Time Machine Edition 🕰 🎉 For those times when you wish you could go back in time and enable debug logging for a single request.https://t.co/BP60qQpKva#serverless #go\n\u0026mdash; Matt Weagle (@mweagle) November 12, 2018  Sparta v1.7.0 adds `Interceptors`: user defined hooks called during the lambda event handling flow to support cross-cutting concerns. The first interceptor is an XRay annotation and metadata interceptor. https://t.co/HaxPd6P4KE\n\u0026mdash; Matt Weagle (@mweagle) November 12, 2018  ☢️ The XRayInterceptor works with your existing logging calls and can optionally publish ring-buffered log messages to the XRay segment *only if* there was an error. No error, no worries. ☢️https://t.co/bqp8MXiCHt\n\u0026mdash; Matt Weagle (@mweagle) November 12, 2018  In case of an error, visit the Trace in the XRay console and dig into a state-preserving representation of the request, including the AWS Request ID. Your log statement can publish a PII-aware redacted event. pic.twitter.com/Tm5aFgUhcc\n\u0026mdash; Matt Weagle (@mweagle) November 12, 2018  An opportunity to make your Future-3am-Self much more rested in fewer than 6 lines of code. https://t.co/Y7D5rBM0k7 pic.twitter.com/zv3JQWi41P\n\u0026mdash; Matt Weagle (@mweagle) November 12, 2018  Last thing - you can also enable BuildID segment annotation so that you can use XRay filter expressions. Navigate your service\u0026#39;s evolution and failure profile over time. First rule of debug club: what\u0026#39;s running?\nPencils down until re:Invent...https://t.co/EtazcpzYTT\n\u0026mdash; Matt Weagle (@mweagle) November 12, 2018  Shout out to https://t.co/om0ljHBfDe for guidance.\n\u0026mdash; Matt Weagle (@mweagle) November 12, 2018  "
   113  },
   114  {
   115  	"uri": "/reference/archetypes/cloudwatch/",
   116  	"title": "CloudWatch",
   117  	"tags": [],
   118  	"description": "",
   119  	"content": "The CloudWatch Events Lambda event source allows you to trigger lambda functions in response to either cron schedules or account events. There are three different archetype functions available.\nScheduled Scheduled Lambdas execute either at fixed times or periodically depending on the schedule expression.\nTo create a scheduled function, use a constructor as in:\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // CloudWatch reactor function func reactorFunc(ctx context.Context, cwLogs awsLambdaEvents.CloudwatchLogsEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: cwLogs, }).Info(\u0026#34;Cron triggered\u0026#34;) return \u0026#34;Hello World 👋. Welcome to AWS Lambda! 🙌🎉🍾\u0026#34;, nil } func main() { // ...  handler := spartaArchetype.CloudWatchLogsReactorFunc(reactorFunc) subscriptions := map[string]string{ \u0026#34;every5Mins\u0026#34;: \u0026#34;rate(5 minutes)\u0026#34;, } lambdaFn, lambdaFnErr := spartaArchetype.NewCloudWatchScheduledReactor(handler, subscriptions, nil) } Events Lambda functions triggered in response to CloudWatch Events use event patterns to select which events should trigger your function\u0026rsquo;s execution.\nTo create an event subscriber, use a constructor as in:\n// CloudWatch reactor function func reactorFunc(ctx context.Context, cwLogs awsLambdaEvents.CloudwatchLogsEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: cwLogs, }).Info(\u0026#34;Event triggered\u0026#34;) return \u0026#34;Hello World 👋. Welcome to AWS Lambda! 🙌🎉🍾\u0026#34;, nil } func main() { // ...  
handler := spartaArchetype.CloudWatchLogsReactorFunc(reactorFunc) subscriptions := map[string]map[string]interface{}{ \u0026#34;ec2StateChanges\u0026#34;: map[string]interface{}{ \u0026#34;source\u0026#34;: []string{\u0026#34;aws.ec2\u0026#34;}, \u0026#34;detail-type\u0026#34;: []string{\u0026#34;EC2 Instance state change\u0026#34;}, }, } lambdaFn, lambdaFnErr := spartaArchetype.NewCloudWatchEventedReactor(handler, subscriptions, nil) } Generic Both NewCloudWatchScheduledReactor and NewCloudWatchEventedReactor are convenience functions for the generic NewCloudWatchReactor constructor. For example, it\u0026rsquo;s possible to create a scheduled lambda execution using the generic constructor as in:\nfunc main() { // ...  subscriptions := map[string]sparta.CloudWatchEventsRule{ \u0026#34;every5Mins\u0026#34;: sparta.CloudWatchEventsRule{ ScheduleExpression: \u0026#34;rate(5 minutes)\u0026#34;, }, } lambdaFn, lambdaFnErr := spartaArchetype.NewCloudWatchReactor(handler, subscriptions, nil) } "
   120  },
   121  {
   122  	"uri": "/reference/archetypes/dynamodb/",
   123  	"title": "DynamoDB",
   124  	"tags": [],
   125  	"description": "",
   126  	"content": "To create a DynamoDB reactor that subscribes via an EventSourceMapping, use the NewDynamoDBReactor constructor as in:\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // DynamoDB reactor function func reactorFunc(ctx context.Context, dynamoEvent awsLambdaEvents.DynamoDBEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: dynamoEvent, }).Info(\u0026#34;DynamoDB Event\u0026#34;) return \u0026#34;Hello World 👋. Welcome to AWS Lambda! 🙌🎉🍾\u0026#34;, nil } func main() { // ...  handler := spartaArchetype.DynamoDBReactorFunc(reactorFunc) lambdaFn, lambdaFnErr := spartaArchetype.NewDynamoDBReactor(handler, \u0026#34;DYNAMO_DB_ARN_OR_CLOUDFORMATION_REF_VALUE\u0026#34;, \u0026#34;TRIM_HORIZON\u0026#34;, 10, nil) } "
   127  },
   128  {
   129  	"uri": "/reference/archetypes/kinesis/",
   130  	"title": "Kinesis",
   131  	"tags": [],
   132  	"description": "",
   133  	"content": "To create a Kinesis Stream reactor that subscribes via an EventSourceMapping, use the NewKinesisReactor constructor as in:\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // KinesisStream reactor function func reactorFunc(ctx context.Context, kinesisEvent awsLambdaEvents.KinesisEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: kinesisEvent, }).Info(\u0026#34;Kinesis Event\u0026#34;) return \u0026#34;Hello World 👋. Welcome to AWS Lambda! 🙌🎉🍾\u0026#34;, nil } func main() { // ...  handler := spartaArchetype.KinesisReactorFunc(reactorFunc) lambdaFn, lambdaFnErr := spartaArchetype.NewKinesisReactor(handler, \u0026#34;KINESIS_STREAM_ARN_OR_CLOUDFORMATION_REF_VALUE\u0026#34;, \u0026#34;TRIM_HORIZON\u0026#34;, 10, nil) } "
   134  },
   135  {
   136  	"uri": "/reference/archetypes/kinesis_firehose/",
   137  	"title": "Kinesis Firehose",
   138  	"tags": [],
   139  	"description": "",
   140  	"content": "There are two ways to create a Firehose Transform reactor that transforms a KinesisFirehoseEventRecord with a Lambda function:\n NewKinesisFirehoseLambdaTransformer  Transform using a Lambda function   NewKinesisFirehoseTransformer  Transform using a go text/template declaration    NewKinesisFirehoseLambdaTransformer import ( awsEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // Kinesis Firehose transformer function func reactorFunc(ctx context.Context, record *awsEvents.KinesisFirehoseEventRecord) (*awsEvents.KinesisFirehoseResponseRecord, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Record\u0026#34;: record, }).Info(\u0026#34;Kinesis Firehose Event\u0026#34;) responseRecord := \u0026amp;awsEvents.KinesisFirehoseResponseRecord{ RecordID: record.RecordID, Result: awsEvents.KinesisFirehoseTransformedStateOk, Data: record.Data, } return responseRecord, nil } func main() { // ...  handler := spartaArchetype.KinesisFirehoseReactorFunc(reactorFunc) lambdaFn, lambdaFnErr := spartaArchetype.NewKinesisFirehoseLambdaTransformer(handler, 5*time.Minute /* Duration: recommended minimum of 1m */) // ... } This is the lowest-level transformation type supported, and it enables the most flexibility.\nNewKinesisFirehoseTransformer Another option for creating Kinesis Firehose Transformers is to leverage the text/template package to define a transformation template. 
For instance:\n{{/* file: transform.template */}} {{if eq (.Record.Data.JMESPathAsString \u0026#34;sector\u0026#34;) \u0026#34;TECHNOLOGY\u0026#34;}} { \u0026#34;region\u0026#34; : \u0026#34;{{ .Record.KinesisEventHeader.Region }}\u0026#34;, \u0026#34;ticker_symbol\u0026#34; : {{ .Record.Data.JMESPath \u0026#34;ticker_symbol\u0026#34;}} } {{else}} {{ KinesisFirehoseDrop }} {{end}} A new *sparta.LambdaAWSInfo instance can be created from transform.template as in:\nfunc main() { // ...  hooks := \u0026amp;sparta.WorkflowHooks{} reactorFunc, reactorFuncErr := archetype.NewKinesisFirehoseTransformer(\u0026#34;transform.template\u0026#34;, 5*time.Minute, hooks) // ...  var lambdaFunctions []*sparta.LambdaAWSInfo lambdaFunctions = append(lambdaFunctions, reactorFunc) err := sparta.MainEx(awsName, \u0026#34;Simple Sparta application that demonstrates core functionality\u0026#34;, lambdaFunctions, nil, nil, hooks, false) } The template execution context includes the following:\nData Model  Data (string)  The data available in the Kinesis Firehose Record. Values can be extracted from the Data content by either JMESPath expressions (JMESPath, JMESPathAsString, JMESPathAsFormattedString) or regexp capture groups (RegExpGroup, RegExpGroupAsString, RegExpGroupAsJSON). See for more information   RecordID (string)  The specific record ID being processed   Metadata (struct)  The metadata associated with the specific record being processed   ApproximateArrivalTimestamp (awsEvents.MilliSecondsEpochTime)  The time at which the record arrived   KinesisEventHeader (struct)  Metadata associated with the set of records being processed    Functions Functions available in the template\u0026rsquo;s FuncMap include:\n KinesisFirehoseDrop: indicates that the record should be marked as KinesisFirehoseTransformedStateDropped The set of masterminds/sprig functions available in TxtFuncMap  "
   141  },
   142  {
   143  	"uri": "/reference/archetypes/rest/",
   144  	"title": "REST Service",
   145  	"tags": [],
   146  	"description": "",
   147  	"content": "The rest package provides convenience functions to define a serverless REST-style service.\nThe package uses three concepts:\n Routes: URL paths that resolve to a single go struct. Resources: go structs that optionally define HTTP method handlers (GET, POST, etc.). ResourceDefinition: an interface that go structs must implement in order to support resource-based registration.  Routes Routes are similar to those in many HTTP-routing libraries. They support path parameters.\nResources Resources are the targets of Routes. There is a one-to-one mapping of URL Routes to go structs. These struct types must define one or more member functions that comply with the valid function signatures for AWS Lambda.\nFor example:\nimport ( spartaAPIGateway \u0026#34;github.com/mweagle/Sparta/aws/apigateway\u0026#34; ) // TodoItemResource is the /todo/{id} resource type TodoItemResource struct { spartaAccessor.S3Accessor } func (svc *TodoItemResource) Get(ctx context.Context, apigRequest TodoRequest) (interface{}, error) { // ...  return spartaAPIGateway.NewResponse(http.StatusOK, \u0026#34;All good!\u0026#34;), nil } As the resource will be exposed over API-Gateway, the return type must be a struct type created by spartaAPIGateway.NewResponse so that the API-Gateway integration mappings can properly transform the response.\nResource Definition The last component is to bind the Routes and Resources together by implementing the ResourceDefinition interface. This interface defines a single function that supplies the binding information.\nFor instance, the TodoItemResource type might define a REST resource like:\n// ResourceDefinition returns the Sparta REST definition for the Todo item func (svc *TodoItemResource) ResourceDefinition() (spartaREST.ResourceDefinition, error) { return spartaREST.ResourceDefinition{ URL: \u0026#34;/todo/{id}\u0026#34;, MethodHandlers: spartaREST.MethodHandlerMap{ // GET  http.MethodGet: spartaREST.NewMethodHandler(svc.Get, http.StatusOK). 
Options(\u0026amp;sparta.LambdaFunctionOptions{ MemorySize: 128, Timeout: 10, }). StatusCodes(http.StatusInternalServerError). Privileges(svc.S3Accessor.KeysPrivilege(\u0026#34;s3:GetObject\u0026#34;), svc.S3Accessor.BucketPrivilege(\u0026#34;s3:ListBucket\u0026#34;)), }, }, nil } The ResourceDefinition function returns a struct that defines the:\n Route (URL) MethodHandlers (GET, POST, etc.)  and for each MethodHandler, the:\n HTTP verb to struct function mapping Optional LambdaFunctionOptions Expected HTTP status codes (StatusCodes)  Limiting the number of allowed status codes reduces API Gateway creation time   Additional IAM Privileges needed for this method  Registration With the REST resource providing its API-Gateway binding information, the final step is to supply the ResourceDefinition implementing instance to RegisterResource and return the set of extracted *LambdaAWSInfo structs:\ntodoItemResource := \u0026amp;todoResources.TodoItemResource{} registeredFuncs, registeredFuncsErr := spartaREST.RegisterResource(api, todoItemResource) See the SpartaTodoBackend for a complete example that implements the TodoBackend API in a completely serverless way!\n"
   148  },
   149  {
   150  	"uri": "/reference/archetypes/s3/",
   151  	"title": "S3",
   152  	"tags": [],
   153  	"description": "",
   154  	"content": "There are two different S3-based constructors depending on whether your lambda function should use an Object Key Name filter. The S3 subscriber is preconfigured to be notified of both s3:ObjectCreated:* and s3:ObjectRemoved:* events.\nObject Key Name Filtering Object key name filtering only invokes a lambda function when objects with the given prefix are created.\nTo subscribe to events for objects with a given prefix, use the NewS3ScopedReactor constructor as in:\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // S3 reactor function func reactorFunc(ctx context.Context, s3Event awsLambdaEvents.S3Event) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: s3Event, }).Info(\u0026#34;S3 Event\u0026#34;) return \u0026#34;Hello World 👋. Welcome to AWS Lambda! 🙌🎉🍾\u0026#34;, nil } func main() { // ...  handler := spartaArchetype.S3ReactorFunc(reactorFunc) lambdaFn, lambdaFnErr := spartaArchetype.NewS3ScopedReactor(handler, \u0026#34;S3_BUCKET_ARN_OR_CLOUDFORMATION_REF\u0026#34;, \u0026#34;/my/special/prefix\u0026#34;, nil) } All Events To subscribe to all S3 bucket events, use the NewS3Reactor version:\nfunc main() { // ...  handler := spartaArchetype.S3ReactorFunc(reactorFunc) lambdaFn, lambdaFnErr := spartaArchetype.NewS3Reactor(handler, \u0026#34;S3_BUCKET_ARN_OR_CLOUDFORMATION_REF\u0026#34;, nil) } "
   155  },
   156  {
   157  	"uri": "/reference/archetypes/sns/",
   158  	"title": "SNS",
   159  	"tags": [],
   160  	"description": "",
   161  	"content": "To create an SNS reactor that subscribes via a subscription configuration, use the NewSNSReactor constructor as in:\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // SNS reactor function func reactorFunc(ctx context.Context, snsEvent awsLambdaEvents.SNSEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: snsEvent, }).Info(\u0026#34;SNS Event\u0026#34;) return \u0026#34;Hello World 👋. Welcome to AWS Lambda! 🙌🎉🍾\u0026#34;, nil } func main() { // ...  handler := spartaArchetype.SNSReactorFunc(reactorFunc) lambdaFn, lambdaFnErr := spartaArchetype.NewSNSReactor(handler, \u0026#34;SNS_ARN_OR_CLOUDFORMATION_REF_VALUE\u0026#34;, nil) } "
   162  },
   163  {
   164  	"uri": "/reference/decorators/cloudfront_distribution/",
   165  	"title": "CloudFront Distribution",
   166  	"tags": [],
   167  	"description": "",
   168  	"content": "The CloudFrontSiteDistributionDecorator associates a CloudFront Distribution with your S3-backed website. It is implemented as a ServiceDecoratorHookHandler because a single service can only provision one CloudFront distribution.\nSample usage:\n//////////////////////////////////////////////////////////////////////////////// // CloudFront settings const subdomain = \u0026#34;mySiteSubdomain\u0026#34; // The domain managed by Route53. const domainName = \u0026#34;myRoute53ManagedDomain.net\u0026#34; // The site will be available at // https://mySiteSubdomain.myRoute53ManagedDomain.net  // The S3 bucket name must match the subdomain.domain // name pattern to serve as a CloudFront Distribution target var bucketName = fmt.Sprintf(\u0026#34;%s.%s\u0026#34;, subdomain, domainName) func distroHooks(s3Site *sparta.S3Site) *sparta.WorkflowHooks { // Demonstration of how to front the site  // with a CloudFront distribution.  // Note that provisioning a distribution will incur additional  // costs  hooks := \u0026amp;sparta.WorkflowHooks{} siteHookDecorator := spartaDecorators.CloudFrontSiteDistributionDecorator(s3Site, subdomain, domainName, gocf.String(os.Getenv(\u0026#34;SPARTA_ACM_CLOUDFRONT_ARN\u0026#34;))) hooks.ServiceDecorators = []sparta.ServiceDecoratorHookHandler{ siteHookDecorator, } return hooks } "
   169  },
   170  {
   171  	"uri": "/reference/operations/cloudwatch_alarms/",
   172  	"title": "CloudWatch Alarms",
   173  	"tags": [],
   174  	"description": "",
   175  	"content": "The CloudWatchErrorAlarmDecorator associates a CloudWatch alarm and destination with your Lambda function.\nSample usage:\nlambdaFn, _ := sparta.NewAWSLambda(\u0026#34;Hello World\u0026#34;, helloWorld, sparta.IAMRoleDefinition{}) lambdaFn.Decorators = []sparta.TemplateDecoratorHandler{ spartaDecorators.CloudWatchErrorAlarmDecorator(1, 1, 1, gocf.String(\u0026#34;MY_SNS_ARN\u0026#34;)), } "
   176  },
   177  {
   178  	"uri": "/reference/operations/cloudwatch_dashboard/",
   179  	"title": "CloudWatch Dashboard",
   180  	"tags": [],
   181  	"description": "",
   182  	"content": "The DashboardDecorator creates a single CloudWatch Dashboard that summarizes your stack\u0026rsquo;s behavior.\nSample usage:\nfunc workflowHooks(connections *service.Connections, lambdaFunctions []*sparta.LambdaAWSInfo, websiteURL *gocf.StringExpr) *sparta.WorkflowHooks { // Setup the DashboardDecorator lambda hook  workflowHooks := \u0026amp;sparta.WorkflowHooks{ ServiceDecorators: []sparta.ServiceDecoratorHookHandler{ spartaDecorators.DashboardDecorator(lambdaFunctions, 60), serviceResourceDecorator(connections, websiteURL), }, } return workflowHooks } A sample dashboard for the SpartaGeekwire project is:\nRelated to this, see the recently announced AWS Lambda Application Dashboard.\n"
   183  },
   184  {
   185  	"uri": "/reference/operations/codedeploy_service_update/",
   186  	"title": "CodeDeploy Service Update",
   187  	"tags": [],
   188  	"description": "",
   189  	"content": " TODO: Document the CodeDeployServiceUpdateDecorator decorator. See also the Deployment Strategy page.\n "
   190  },
   191  {
   192  	"uri": "/reference/decorators/lambda_versioning/",
   193  	"title": "Lambda Versioning Decorator",
   194  	"tags": [],
   195  	"description": "",
   196  	"content": " TODO: LambdaVersioningDecorator\n "
   197  },
   198  {
   199  	"uri": "/reference/decorators/publish_outputs/",
   200  	"title": "Publishing Outputs",
   201  	"tags": [],
   202  	"description": "",
   203  	"content": "CloudFormation stack outputs can be used to advertise information about a service.\nSparta provides different publishing output decorators depending on the type of CloudFormation resource output:\n Ref: PublishRefOutputDecorator Fn::GetAtt: PublishAttOutputDecorator  Publishing Resource Ref Values For example, to publish the dynamically assigned Lambda resource name for a given AWS Lambda function, use PublishRefOutputDecorator such as:\nlambdaFunctionName := \u0026#34;Hello World\u0026#34; lambdaFn, _ := sparta.NewAWSLambda(lambdaFunctionName, helloWorld, sparta.IAMRoleDefinition{}) lambdaFn.Decorators = append(lambdaFn.Decorators, spartaDecorators.PublishRefOutputDecorator(fmt.Sprintf(\u0026#34;%s FunctionName\u0026#34;, lambdaFunctionName), fmt.Sprintf(\u0026#34;%s Lambda function name\u0026#34;, lambdaFunctionName))) Publishing Resource Att Values For example, to publish the dynamically determined ARN for a given AWS Lambda function, use PublishAttOutputDecorator such as:\nlambdaFunctionName := \u0026#34;Hello World\u0026#34; lambdaFn, _ := sparta.NewAWSLambda(lambdaFunctionName, helloWorld, sparta.IAMRoleDefinition{}) lambdaFn.Decorators = append(lambdaFn.Decorators, spartaDecorators.PublishAttOutputDecorator(fmt.Sprintf(\u0026#34;%s FunctionARN\u0026#34;, lambdaFunctionName), fmt.Sprintf(\u0026#34;%s Lambda ARN\u0026#34;, lambdaFunctionName), \u0026#34;Arn\u0026#34;)) "
   204  },
   205  {
   206  	"uri": "/reference/decorators/s3_artifact_publisher/",
   207  	"title": "S3 Artifact Publisher",
   208  	"tags": [],
   209  	"description": "",
   210  	"content": "The S3ArtifactPublisherDecorator enables a service to publish objects to S3 locations as part of the service lifecycle.\nThis decorator is implemented as a ServiceDecoratorHookHandler which is supplied to MainEx. For example:\nhooks := \u0026amp;sparta.WorkflowHooks{} payloadData := map[string]interface{}{ \u0026#34;SomeValue\u0026#34;: gocf.Ref(\u0026#34;AWS::StackName\u0026#34;), } serviceHook := spartaDecorators.S3ArtifactPublisherDecorator(gocf.String(\u0026#34;MY-S3-BUCKETNAME\u0026#34;), gocf.Join(\u0026#34;\u0026#34;, gocf.String(\u0026#34;metadata/\u0026#34;), gocf.Ref(\u0026#34;AWS::StackName\u0026#34;), gocf.String(\u0026#34;.json\u0026#34;)), payloadData) hooks.ServiceDecorators = []sparta.ServiceDecoratorHookHandler{serviceHook} "
   211  },
   212  {
   213  	"uri": "/example_service/step1/",
   214  	"title": "Overview",
   215  	"tags": [],
   216  	"description": "",
   217  	"content": "Sparta is a framework for developing and deploying Go-based AWS Lambda-backed microservices. To help understand what that means, we\u0026rsquo;ll begin with a \u0026ldquo;Hello World\u0026rdquo; lambda function and eventually deploy that to AWS. Note that we\u0026rsquo;re not going to handle all error cases to keep the example code to a minimum.\nPlease be aware that running Lambda functions may incur costs. Be sure to decommission Sparta stacks after you are finished using them (via the delete command line option) to avoid unwanted charges. It\u0026rsquo;s likely that you\u0026rsquo;ll be well under the free tier, but secondary AWS resources provisioned during development (e.g., Kinesis streams) are not pay-per-invocation.\n Preconditions Sparta uses the AWS SDK for Go to interact with AWS APIs. Before you get started, ensure that you\u0026rsquo;ve properly configured the SDK credentials.\nNote that you must use an AWS region that supports Lambda. Consult the Global Infrastructure page for the most up-to-date release information.\nLambda Definition The first place to start is with the lambda function definition.\n// Standard AWS λ function func helloWorld(ctx context.Context) (string, error) { return \u0026#34;Hello World!\u0026#34;, nil } The ctx parameter includes the following entries:\n The AWS LambdaContext A *logrus.Logger instance (sparta.ContextKeyLogger) A per-request annotated *logrus.Entry instance (sparta.ContextKeyRequestLogger)  Creation The next step is to create a Sparta-wrapped version of the helloWorld function.\nvar lambdaFunctions []*sparta.LambdaAWSInfo helloWorldFn, _ := sparta.NewAWSLambda(\u0026#34;Hello World\u0026#34;, helloWorld, sparta.IAMRoleDefinition{}) We first declare an empty slice lambdaFunctions to which all our service\u0026rsquo;s lambda functions will be appended. The next step is to register a new lambda target via NewAWSLambda. NewAWSLambda accepts three parameters:\n string: The function name. 
A sanitized version of this value is used as the FunctionName. func(...): The go function to execute. string|IAMRoleDefinition : Either a string literal that refers to a pre-existing IAM Role under which the lambda function will be executed, OR a sparta.IAMRoleDefinition value that will be provisioned as part of this deployment and used as the execution role for the lambda function.  In this example, we\u0026rsquo;re defining a new IAMRoleDefinition as part of the stack. This role definition will automatically include privileges for actions such as CloudWatch logging, and since our function doesn\u0026rsquo;t access any additional AWS services, that\u0026rsquo;s all we need.    Delegation The final step is to define a Sparta service under your application\u0026rsquo;s main package and provide the non-empty slice of lambda functions:\nsparta.Main(\u0026#34;MyHelloWorldStack\u0026#34;, \u0026#34;Simple Sparta application that demonstrates core functionality\u0026#34;, lambdaFunctions, nil, nil) sparta.Main accepts five parameters:\n serviceName : The string to use as the CloudFormation stackName. Note that there can be only a single stack with this name within a given AWS account and region.  The serviceName is used as the stable identifier to determine when updates should be applied rather than new stacks provisioned, as well as the target of a delete command line request. Consider using UserScopedStackName to generate unique, stable names across a team.   serviceDescription: An optional string used to describe the stack. []*LambdaAWSInfo : Slice of sparta.LambdaAWSInfo values that define the service *API : Optional pointer to data if you would like to provision and associate an API Gateway with the set of lambda functions.  We\u0026rsquo;ll walk through how to do that in another section, but for now our lambda function will only be accessible via the AWS SDK or Console.   
*S3Site : Optional pointer to data if you would like to provision a static website on S3, initialized with local resources.  We\u0026rsquo;ll walk through how to do that in another section, but for now our lambda function will only be accessible via the AWS SDK or Console.    Delegating main() to Sparta.Main() transforms the set of lambda functions into a standalone executable with several command line options. Run go run main.go --help to see the available options.\nPutting It Together Putting everything together, and including the necessary imports, we have:\n// File: main.go package main import ( \u0026#34;context\u0026#34; sparta \u0026#34;github.com/mweagle/Sparta\u0026#34; ) // Standard AWS λ function func helloWorld(ctx context.Context) (string, error) { return \u0026#34;Hello World!\u0026#34;, nil } func main() { var lambdaFunctions []*sparta.LambdaAWSInfo helloWorldFn, _ := sparta.NewAWSLambda(\u0026#34;Hello World\u0026#34;, helloWorld, sparta.IAMRoleDefinition{}) lambdaFunctions = append(lambdaFunctions, helloWorldFn) sparta.Main(\u0026#34;MyHelloWorldStack\u0026#34;, \u0026#34;Simple Sparta application that demonstrates core functionality\u0026#34;, lambdaFunctions, nil, nil) } Running It Next, download the Sparta dependencies via:\n go get ./...  in the directory where you saved main.go. 
Once the packages are downloaded, first get a view of what\u0026rsquo;s going on by the describe command (replacing $S3_BUCKET with an S3 bucket you own):\n$ go run main.go --level info describe --out ./graph.html --s3Bucket $S3_BUCKET INFO[0000] ════════════════════════════════════════════════ INFO[0000] ╔═╗╔═╗╔═╗╦═╗╔╦╗╔═╗ Version : 1.13.0 INFO[0000] ╚═╗╠═╝╠═╣╠╦╝ ║ ╠═╣ SHA : 03cdb90 INFO[0000] ╚═╝╩ ╩ ╩╩╚═ ╩ ╩ ╩ Go : go1.13.3 INFO[0000] ════════════════════════════════════════════════ INFO[0000] Service: MyHelloWorldStack-123412341234 LinkFlags= Option=describe UTC=\u0026quot;2019-12-07T20:01:48Z\u0026quot; INFO[0000] ════════════════════════════════════════════════ INFO[0000] Provisioning service BuildID=none CodePipelineTrigger= InPlaceUpdates=false NOOP=true Tags= INFO[0000] Verifying IAM Lambda execution roles INFO[0000] IAM roles verified Count=1 INFO[0000] Skipping S3 preconditions check due to -n/-noop flag Bucket=weagle Region=us-west-2 VersioningEnabled=false INFO[0000] Running `go generate` INFO[0000] Compiling binary Name=Sparta.lambda.amd64 INFO[0001] Creating code ZIP archive for upload TempName=./.sparta/MyHelloWorldStack_123412341234-code.zip INFO[0001] Lambda code archive size Size=\u0026quot;24 MB\u0026quot; INFO[0001] Skipping S3 upload due to -n/-noop flag Bucket=weagle File=MyHelloWorldStack_123412341234-code.zip Key=MyHelloWorldStack-123412341234/MyHelloWorldStack_123412341234-code-ec0d6f8bae7b6a7abaa77db394c96265e213d20d.zip Size=\u0026quot;24 MB\u0026quot; INFO[0001] Skipping Stack creation due to -n/-noop flag Bucket=weagle TemplateName=MyHelloWorldStack_123412341234-cftemplate.json INFO[0001] ════════════════════════════════════════════════ INFO[0001] MyHelloWorldStack-123412341234 Summary INFO[0001] ════════════════════════════════════════════════ INFO[0001] Verifying IAM roles Duration (s)=0 INFO[0001] Verifying AWS preconditions Duration (s)=0 INFO[0001] Creating code bundle Duration (s)=1 INFO[0001] Uploading code Duration (s)=0 
INFO[0001] Ensuring CloudFormation stack Duration (s)=0 INFO[0001] Total elapsed time Duration (s)=1 Then open graph.html in your browser (also linked here ) to see what will be provisioned.\nSince everything looks good, we\u0026rsquo;ll provision the stack via provision and verify the lambda function. Note that the $S3_BUCKET value must be an S3 bucket to which you have write access since Sparta uploads the lambda package and CloudFormation template to that bucket as part of provisioning.\nINFO[0000] ════════════════════════════════════════════════ INFO[0000] ╔═╗╔═╗╔═╗╦═╗╔╦╗╔═╗ Version : 1.13.0 INFO[0000] ╚═╗╠═╝╠═╣╠╦╝ ║ ╠═╣ SHA : 03cdb90 INFO[0000] ╚═╝╩ ╩ ╩╩╚═ ╩ ╩ ╩ Go : go1.13.3 INFO[0000] ════════════════════════════════════════════════ INFO[0000] Service: MyHelloWorldStack-123412341234 LinkFlags= Option=provision UTC=\u0026quot;2019-12-07T19:53:24Z\u0026quot; INFO[0000] ════════════════════════════════════════════════ INFO[0000] Using `git` SHA for StampedBuildID Command=\u0026quot;git rev-parse HEAD\u0026quot; SHA=b114e329ed37b532e1f7d2e727aa8211d9d5889c INFO[0000] Provisioning service BuildID=b114e329ed37b532e1f7d2e727aa8211d9d5889c CodePipelineTrigger= InPlaceUpdates=false NOOP=false Tags= INFO[0000] Verifying IAM Lambda execution roles INFO[0000] IAM roles verified Count=1 INFO[0000] Checking S3 versioning Bucket=weagle VersioningEnabled=true INFO[0000] Checking S3 region Bucket=weagle Region=us-west-2 INFO[0000] Running `go generate` INFO[0001] Compiling binary Name=Sparta.lambda.amd64 INFO[0002] Creating code ZIP archive for upload TempName=./.sparta/MyHelloWorldStack_123412341234-code.zip INFO[0002] Lambda code archive size Size=\u0026quot;24 MB\u0026quot; INFO[0002] Uploading local file to S3 Bucket=weagle Key=MyHelloWorldStack-123412341234/MyHelloWorldStack_123412341234-code.zip Path=./.sparta/MyHelloWorldStack_123412341234-code.zip Size=\u0026quot;24 MB\u0026quot; INFO[0011] Uploading local file to S3 Bucket=weagle 
Key=MyHelloWorldStack-123412341234/MyHelloWorldStack_123412341234-cftemplate.json Path=./.sparta/MyHelloWorldStack_123412341234-cftemplate.json Size=\u0026quot;2.2 kB\u0026quot; INFO[0011] Issued CreateChangeSet request StackName=MyHelloWorldStack-123412341234 INFO[0016] Issued ExecuteChangeSet request StackName=MyHelloWorldStack-123412341234 INFO[0033] CloudFormation Metrics ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ INFO[0033] Operation duration Duration=8.26s Resource=MyHelloWorldStack-123412341234 Type=\u0026quot;AWS::CloudFormation::Stack\u0026quot; INFO[0033] Operation duration Duration=1.35s Resource=HelloWorldLambda80576f7b21690b0cb485a6b69c927aac972cd693 Type=\u0026quot;AWS::Lambda::Function\u0026quot; INFO[0033] Stack provisioned CreationTime=\u0026quot;2019-11-28 00:05:04.508 +0000 UTC\u0026quot; StackId=\u0026quot;arn:aws:cloudformation:us-west-2:123412341234:stack/MyHelloWorldStack-123412341234/bab01fb0-1172-11ea-84a9-0ab88639bbc6\u0026quot; StackName=MyHelloWorldStack-123412341234 INFO[0033] ════════════════════════════════════════════════ INFO[0033] MyHelloWorldStack-123412341234 Summary INFO[0033] ════════════════════════════════════════════════ INFO[0033] Verifying IAM roles Duration (s)=0 INFO[0033] Verifying AWS preconditions Duration (s)=0 INFO[0033] Creating code bundle Duration (s)=1 INFO[0033] Uploading code Duration (s)=9 INFO[0033] Ensuring CloudFormation stack Duration (s)=22 INFO[0033] Total elapsed time Duration (s)=33 Once the stack has been provisioned (CREATE_COMPLETE), log in to the AWS console and navigate to the Lambda section.\nTesting Find your Lambda function in the list of AWS Lambda functions and click the hyperlink. The display name will be prefixed by the name of your stack (MyHelloWorldStack in our example):\nOn the Lambda details page, click the Test button:\nAccept and name the Hello World event template sample (our Lambda function doesn\u0026rsquo;t consume the event data) and click Save and test. 
The execution result pane should display something similar to:\nCleaning Up To prevent unauthorized usage and potential charges, make sure to delete your stack before moving on:\n$ go run main.go delete INFO[0000] ════════════════════════════════════════════════ INFO[0000] ╔═╗┌─┐┌─┐┬─┐┌┬┐┌─┐ Version : 1.0.2 INFO[0000] ╚═╗├─┘├─┤├┬┘ │ ├─┤ SHA : b37b93e INFO[0000] ╚═╝┴ ┴ ┴┴└─ ┴ ┴ ┴ Go : go1.9.2 INFO[0000] ════════════════════════════════════════════════ INFO[0000] Service: MyHelloWorldStack LinkFlags= Option=delete UTC=\u0026quot;2018-01-27T22:01:59Z\u0026quot; INFO[0000] ════════════════════════════════════════════════ INFO[0000] Stack existence check Exists=true Name=MyHelloWorldStack INFO[0000] Delete request submitted Response=\u0026quot;{\\n\\n}\u0026quot; Conclusion Congratulations! You\u0026rsquo;ve just deployed your first \u0026ldquo;serverless\u0026rdquo; service. The following sections will dive deeper into what\u0026rsquo;s going on under the hood as well as how to integrate your lambda function(s) into the broader AWS landscape.\nNext Steps Walk through what Sparta actually does to deploy your application in the next section.\n"
   218  },
   219  {
   220  	"uri": "/reference/operations/cicd/",
   221  	"title": "CI/CD",
   222  	"tags": [],
   223  	"description": "",
   224  	"content": " TODO: Document the SpartaCodePipeline example. Also see the Medium Post\n "
   225  },
   226  {
   227  	"uri": "/reference/eventsources/cloudformation/",
   228  	"title": "CloudFormation",
   229  	"tags": [],
   230  	"description": "",
   231  	"content": " TODO: CloudFormation source documentation\n "
   232  },
   233  {
   234  	"uri": "/reference/eventsources/cloudwatchevents/",
   235  	"title": "CloudWatch Events",
   236  	"tags": [],
   237  	"description": "",
   238  	"content": "In this section we\u0026rsquo;ll walk through how to trigger your lambda function in response to different types of CloudWatch Events. This overview is based on the SpartaApplication sample code, if you\u0026rsquo;d rather jump to the end result.\nGoal Assume that we\u0026rsquo;re supposed to write a simple \u0026ldquo;HelloWorld\u0026rdquo; CloudWatch event function that has two requirements:\n Run every 5 minutes to provide a heartbeat notification to our alerting system via a logfile entry Log EC2-related events for later processing  Getting Started The lambda function is relatively small:\nfunc echoCloudWatchEvent(ctx context.Context, event map[string]interface{}) (map[string]interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: event, }).Info(\u0026#34;Request received\u0026#34;) return event, nil } Our lambda function doesn\u0026rsquo;t need to do much with the event other than log and return it.\nSparta Integration With echoCloudWatchEvent() implemented, the next step is to integrate the go function with Sparta. 
This is done by the appendCloudWatchEventHandler in the SpartaApplication application.go source.\nOur lambda function only needs logfile write privileges, and since these are enabled by default, we can use an empty sparta.IAMRoleDefinition value:\nfunc appendCloudWatchEventHandler(api *sparta.API, lambdaFunctions []*sparta.LambdaAWSInfo) []*sparta.LambdaAWSInfo { lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoCloudWatchEvent), echoCloudWatchEvent, sparta.IAMRoleDefinition{}) The next step is to add a CloudWatchEventsPermission value that includes the two rule triggers.\ncloudWatchEventsPermission := sparta.CloudWatchEventsPermission{} cloudWatchEventsPermission.Rules = make(map[string]sparta.CloudWatchEventsRule, 0) Our two rules will be inserted into the Rules map in the next steps.\nCron Expression Our first requirement is that the lambda function write a heartbeat to the logfile every 5 mins. This can be configured by adding a scheduled event:\ncloudWatchEventsPermission.Rules[\u0026#34;Rate5Mins\u0026#34;] = sparta.CloudWatchEventsRule{ ScheduleExpression: \u0026#34;rate(5 minutes)\u0026#34;, } The ScheduleExpression value can either be a rate or a cron expression. The map keyname is used when adding the rule during stack provisioning.\nEvent Pattern The other requirement is that our lambda function be notified when matching EC2 events are created. To support this, we\u0026rsquo;ll add a second Rule:\ncloudWatchEventsPermission.Rules[\u0026#34;EC2Activity\u0026#34;] = sparta.CloudWatchEventsRule{ EventPattern: map[string]interface{}{ \u0026#34;source\u0026#34;: []string{\u0026#34;aws.ec2\u0026#34;}, \u0026#34;detail-type\u0026#34;: []string{\u0026#34;EC2 Instance State-change Notification\u0026#34;}, }, } The EC2 event pattern is the go JSON-compatible representation of the event pattern that CloudWatch Events will use to trigger our lambda function. 
This structured value will be marshaled to a String during CloudFormation Template marshaling.\nSparta does NOT attempt to validate either ScheduleExpression or EventPattern values prior to calling CloudFormation. Syntax errors in either value will be detected during provisioning when the Sparta CloudFormation CustomResource calls putRule to add the lambda target. This error will cause the CloudFormation operation to fail. Any API errors will be logged \u0026amp; are viewable in the CloudFormation Logs Console.\n Add Permission With the two rules configured, the final step is to add the sparta.CloudWatchEventsPermission to our sparta.LambdaAWSInfo value:\nlambdaFn.Permissions = append(lambdaFn.Permissions, cloudWatchEventsPermission) return append(lambdaFunctions, lambdaFn) Our entire function is therefore:\nfunc appendCloudWatchEventHandler(api *sparta.API, lambdaFunctions []*sparta.LambdaAWSInfo) []*sparta.LambdaAWSInfo { lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoCloudWatchEvent), echoCloudWatchEvent, sparta.IAMRoleDefinition{}) cloudWatchEventsPermission := sparta.CloudWatchEventsPermission{} cloudWatchEventsPermission.Rules = make(map[string]sparta.CloudWatchEventsRule, 0) cloudWatchEventsPermission.Rules[\u0026#34;Rate5Mins\u0026#34;] = sparta.CloudWatchEventsRule{ ScheduleExpression: \u0026#34;rate(5 minutes)\u0026#34;, } cloudWatchEventsPermission.Rules[\u0026#34;EC2Activity\u0026#34;] = sparta.CloudWatchEventsRule{ EventPattern: map[string]interface{}{ \u0026#34;source\u0026#34;: []string{\u0026#34;aws.ec2\u0026#34;}, \u0026#34;detail-type\u0026#34;: []string{\u0026#34;EC2 Instance State-change Notification\u0026#34;}, }, } lambdaFn.Permissions = append(lambdaFn.Permissions, cloudWatchEventsPermission) return append(lambdaFunctions, lambdaFn) } Wrapping Up With the lambdaFn fully defined, we can provide it to sparta.Main() and deploy our service. 
The workflow below is shared by all CloudWatch Events-triggered lambda functions:\n Define the lambda function (echoCloudWatchEvent). If needed, create the required IAMRoleDefinition with appropriate privileges. Provide the lambda function \u0026amp; IAMRoleDefinition to sparta.NewAWSLambda() Create a CloudWatchEventsPermission value. Add one or more CloudWatchEventsRules to the CloudWatchEventsPermission.Rules map that define your lambda function\u0026rsquo;s trigger condition:  Scheduled Events Event Patterns   Append the CloudWatchEventsPermission value to the lambda function\u0026rsquo;s Permissions slice. Include the reference in the call to sparta.Main().  Other Resources  Introduction to CloudWatch Events Tim Bray\u0026rsquo;s Cloud Eventing writeup Run an AWS Lambda Function on a Schedule Using the AWS CLI The EC2 event pattern is drawn from the AWS Events \u0026amp; Event Patterns documentation  "
   239  },
   240  {
   241  	"uri": "/reference/eventsources/cloudwatchlogs/",
   242  	"title": "CloudWatch Logs",
   243  	"tags": [],
   244  	"description": "",
   245  	"content": "In this section we\u0026rsquo;ll walk through how to trigger your lambda function in response to CloudWatch Logs. This overview is based on the SpartaApplication sample code, if you\u0026rsquo;d rather jump to the end result.\nGoal Assume that we\u0026rsquo;re supposed to write a simple \u0026ldquo;HelloWorld\u0026rdquo; CloudWatch Logs function that should be triggered in response to any log message issued to a specific Log Group.\nGetting Started Our lambda function is relatively short:\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; ) func echoCloudWatchLogsEvent(ctx context.Context, cwlEvent awsLambdaEvents.CloudwatchLogsEvent) (*awsLambdaEvents.CloudwatchLogsEvent, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: cwlEvent, }).Info(\u0026#34;Request received\u0026#34;) return \u0026amp;cwlEvent, nil } Our lambda function doesn\u0026rsquo;t need to do much with the log message other than log and return it.\nSparta Integration With echoCloudWatchLogsEvent() implemented, the next step is to integrate the go function with Sparta. 
This is done by the appendCloudWatchLogsHandler function in the SpartaApplication application.go source.\nOur lambda function only needs logfile write privileges, and since these are enabled by default, we can use an empty sparta.IAMRoleDefinition value:\nfunc appendCloudWatchLogsHandler(api *sparta.API, lambdaFunctions []*sparta.LambdaAWSInfo) []*sparta.LambdaAWSInfo { lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoCloudWatchLogsEvent), echoCloudWatchLogsEvent, sparta.IAMRoleDefinition{}) The next step is to add a CloudWatchLogsSubscriptionFilter value that represents the CloudWatch Lambda subscription filter information.\ncloudWatchLogsPermission := sparta.CloudWatchLogsPermission{} cloudWatchLogsPermission.Filters = make(map[string]sparta.CloudWatchLogsSubscriptionFilter, 1) cloudWatchLogsPermission.Filters[\u0026#34;MyFilter\u0026#34;] = sparta.CloudWatchLogsSubscriptionFilter{ LogGroupName: \u0026#34;/aws/lambda/versions\u0026#34;, } The sparta.CloudWatchLogsSubscriptionFilter struct provides fields for both the LogGroupName and an optional FilterPattern expression (not shown here) to use when calling putSubscriptionFilter.\nAdd Permission With the subscription information configured, the final step is to add the sparta.CloudWatchLogsPermission to our sparta.LambdaAWSInfo value:\nlambdaFn.Permissions = append(lambdaFn.Permissions, cloudWatchLogsPermission) Our entire function is therefore:\nfunc appendCloudWatchLogsHandler(api *sparta.API, lambdaFunctions []*sparta.LambdaAWSInfo) []*sparta.LambdaAWSInfo { lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoCloudWatchLogsEvent), echoCloudWatchLogsEvent, sparta.IAMRoleDefinition{}) cloudWatchLogsPermission := sparta.CloudWatchLogsPermission{} cloudWatchLogsPermission.Filters = make(map[string]sparta.CloudWatchLogsSubscriptionFilter, 1) cloudWatchLogsPermission.Filters[\u0026#34;MyFilter\u0026#34;] = sparta.CloudWatchLogsSubscriptionFilter{ FilterPattern: \u0026#34;\u0026#34;, // NOTE: This LogGroupName MUST already exist in 
your account, otherwise \t// the `provision` step will fail. You can create a LogGroupName in the \t// AWS Console \tLogGroupName: \u0026#34;/aws/lambda/versions\u0026#34;, } lambdaFn.Permissions = append(lambdaFn.Permissions, cloudWatchLogsPermission) return append(lambdaFunctions, lambdaFn) } Wrapping Up With the lambdaFn fully defined, we can provide it to sparta.Main() and deploy our service. The workflow below is shared by all CloudWatch Logs-triggered lambda functions:\n Define the lambda function (echoCloudWatchLogsEvent). If needed, create the required IAMRoleDefinition with appropriate privileges. Provide the lambda function \u0026amp; IAMRoleDefinition to sparta.NewAWSLambda() Create a CloudWatchLogsPermission value. Add one or more CloudWatchLogsSubscriptionFilter to the CloudWatchLogsPermission.Filters map that defines your lambda function\u0026rsquo;s logfile subscription information. Append the CloudWatchLogsPermission value to the lambda function\u0026rsquo;s Permissions slice. Include the reference in the call to sparta.Main().  Other Resources "
   246  },
   247  {
   248  	"uri": "/reference/eventsources/cognito/",
   249  	"title": "Cognito",
   250  	"tags": [],
   251  	"description": "",
   252  	"content": " TODO: Cognito source documentation\n "
   253  },
   254  {
   255  	"uri": "/reference/application/custom_commands/",
   256  	"title": "Custom Commands",
   257  	"tags": [],
   258  	"description": "",
   259  	"content": "In addition to custom flags, an application may register completely new commands. For example, to support alternative topologies or integrated automated acceptance tests as part of a CI/CD pipeline.\nTo register a custom command, define a new cobra.Command and add it to the sparta.CommandLineOptions.Root command value. Ensure you use the xxxxE Cobra functions so that errors can be properly propagated.\nhttpServerCommand := \u0026amp;cobra.Command{ Use: \u0026#34;httpServer\u0026#34;, Short: \u0026#34;Sample HelloWorld HTTP server\u0026#34;, Long: `Sample HelloWorld HTTP server that binds to port: ` + HTTPServerPort, RunE: func(cmd *cobra.Command, args []string) error { http.HandleFunc(\u0026#34;/\u0026#34;, helloWorldResource) return http.ListenAndServe(fmt.Sprintf(\u0026#34;:%d\u0026#34;, HTTPServerPort), nil) }, } sparta.CommandLineOptions.Root.AddCommand(httpServerCommand) Registering a user-defined command makes that command\u0026rsquo;s usage information seamlessly integrate with the standard commands:\n$ go run main.go --help Provision AWS Lambda and EC2 instance with same code Usage: main [command] Available Commands: delete Delete service describe Describe service execute Start the application and begin handling events explore Interactively explore a provisioned service help Help about any command httpServer Sample HelloWorld HTTP server profile Interactively examine service pprof output provision Provision service status Produce a report for a provisioned service version Display version information Flags: -f, --format string Log format [text, json] (default \u0026#34;text\u0026#34;) -h, --help help for main --ldflags string Go linker string definition flags (https://golang.org/cmd/link/) -l, --level string Log level [panic, fatal, error, warn, info, debug] (default \u0026#34;info\u0026#34;) --nocolor Boolean flag to suppress colorized TTY output -n, --noop Dry-run behavior only (do not perform mutations) -t, --tags string Optional 
build tags for conditional compilation -z, --timestamps Include UTC timestamp log line prefix And you can query for user-command specific usage as in:\n$ ./SpartaOmega httpServer --help Custom command Usage: SpartaOmega httpServer [flags] Global Flags: -l, --level string Log level [panic, fatal, error, warn, info, debug] (default \u0026#34;info\u0026#34;) -n, --noop Dry-run behavior only (do not perform mutations) "
   260  },
   261  {
   262  	"uri": "/reference/application/custom_flags/",
   263  	"title": "Custom Flags",
   264  	"tags": [],
   265  	"description": "",
   266  	"content": "Some commands (eg: provision) may require additional options. For instance, your application\u0026rsquo;s provision logic may require VPC subnets or EC2 SSH Key Names.\nThe default Sparta command line option flags may be extended and validated by building on the exposed Cobra command objects.\nAdding Flags To add a flag, use one of the pflag functions to register your custom flag with one of the standard CommandLineOption values.\nFor example:\n// SSHKeyName is the SSH KeyName to use when provisioning new EC2 instance var SSHKeyName string func main() { // And add the SSHKeyName option to the provision step  sparta.CommandLineOptions.Provision.Flags().StringVarP(\u0026amp;SSHKeyName, \u0026#34;key\u0026#34;, \u0026#34;k\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;SSH Key Name to use for EC2 instances\u0026#34;) } Validating Input Flags may be used to conditionalize which Sparta lambda functions are provided and/or their content. In this case, your application may first need to parse and validate the command line input before calling sparta.Main().\nTo validate user input, define a CommandLineOptionsHook function and provide it to sparta.ParseOptions. This function is called after the pflag bindings are invoked so that your application can validate user input.\nThe ParseOptions result is the optional error returned from your CommandLineOptionsHook function. If there is an error, your application can then exit with an application specific exit code. For instance:\n// Define a validation hook s.t. 
we can verify the SSHKey is valid validationHook := func(command *cobra.Command) error { if command.Name() == \u0026#34;provision\u0026#34; \u0026amp;\u0026amp; len(SSHKeyName) \u0026lt;= 0 { return fmt.Errorf(\u0026#34;SSHKeyName option is required\u0026#34;) } return nil } // Extract \u0026amp; validate the SSH Key parseErr := sparta.ParseOptions(validationHook) if nil != parseErr { os.Exit(3) } Sparta itself uses the govalidator package to simplify validating command line arguments. See sparta_main.go for an example.\n"
   267  },
   268  {
   269  	"uri": "/reference/eventsources/dynamodb/",
   270  	"title": "DynamoDB",
   271  	"tags": [],
   272  	"description": "",
   273  	"content": "In this section we\u0026rsquo;ll walk through how to trigger your lambda function in response to DynamoDB stream events. This overview is based on the SpartaApplication sample code, if you\u0026rsquo;d rather jump to the end result.\nGoal Assume that we\u0026rsquo;re given a DynamoDB stream. See below for details on how to create the stream. We\u0026rsquo;ve been asked to write a lambda function that logs when operations are performed on the table so that we can perform offline analysis.\nGetting Started We\u0026rsquo;ll start with an empty lambda function and build up the needed functionality.\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; ) func echoDynamoDBEvent(ctx context.Context, ddbEvent awsLambdaEvents.DynamoDBEvent) (*awsLambdaEvents.DynamoDBEvent, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: ddbEvent, }).Info(\u0026#34;Event received\u0026#34;) return \u0026amp;ddbEvent, nil } Since the echoDynamoDBEvent function is triggered by DynamoDB events, we can leverage the AWS Go Lambda SDK event types to access the record.\nSparta Integration With the core of the echoDynamoDBEvent complete, the next step is to integrate the go function with Sparta. This is performed by the appendDynamoDBLambda function. Since the echoDynamoDBEvent function doesn\u0026rsquo;t access any additional services (Sparta enables CloudWatch Logs privileges by default), the integration is pretty straightforward:\nlambdaFn, _ := sparta.NewAWSLambda( sparta.LambdaName(echoDynamoDBEvent), echoDynamoDBEvent, sparta.IAMRoleDefinition{}) Event Source Mappings If we were to deploy this Sparta application, the echoDynamoDBEvent function would have the ability to log DynamoDB stream events, but would not be invoked in response to events published by the stream. 
To register for notifications, we need to configure the lambda\u0026rsquo;s EventSourceMappings:\nlambdaFn.EventSourceMappings = append(lambdaFn.EventSourceMappings, \u0026amp;lambda.CreateEventSourceMappingInput{ EventSourceArn: aws.String(dynamoTestStream), StartingPosition: aws.String(\u0026#34;TRIM_HORIZON\u0026#34;), BatchSize: aws.Int64(10), Enabled: aws.Bool(true), }) lambdaFunctions = append(lambdaFunctions, lambdaFn) The dynamoTestStream param is the ARN of the Dynamo stream that your lambda function will poll (eg: arn:aws:dynamodb:us-west-2:000000000000:table/myDynamoDBTable/stream/2015-12-05T16:28:11.869).\nThe EventSourceMappings field is transformed into the appropriate CloudFormation Resource which enables automatic polling of the DynamoDB stream.\nWrapping Up With the lambdaFn fully defined, we can provide it to sparta.Main() and deploy our service. The workflow below is shared by all DynamoDB stream-based lambda functions:\n Define the lambda function (echoDynamoDBEvent). If needed, create the required IAMRoleDefinition with appropriate privileges when the lambda function accesses other AWS services. Provide the lambda function \u0026amp; IAMRoleDefinition to sparta.NewAWSLambda() Add the necessary EventSourceMappings to the LambdaAWSInfo struct so that the lambda function is properly configured.  Other Resources  Using Triggers for Cross Region DynamoDB Replication  Appendix Creating a DynamoDB Stream To create a DynamoDB stream for a given table, follow the steps below:\nSelect Table Enable Stream Copy ARN The Latest stream ARN value is the value that should be provided as the EventSourceArn in the Event Source Mappings.\n"
   274  },
   275  {
   276  	"uri": "/reference/apigateway/echo_event/",
   277  	"title": "Echo",
   278  	"tags": [],
   279  	"description": "",
   280  	"content": "To start, we\u0026rsquo;ll create an HTTPS-accessible lambda function that simply echoes back the contents of the incoming API Gateway Lambda event. The source for this is the SpartaHTML sample.\nFor reference, the helloWorld function is below.\nimport ( spartaAPIGateway \u0026#34;github.com/mweagle/Sparta/aws/apigateway\u0026#34; spartaAWSEvents \u0026#34;github.com/mweagle/Sparta/aws/events\u0026#34; ) func helloWorld(ctx context.Context, gatewayEvent spartaAWSEvents.APIGatewayRequest) (*spartaAPIGateway.Response, error) { logger, loggerOk := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) if loggerOk { logger.Info(\u0026#34;Hello world structured log message\u0026#34;) } // Return a message, together with the incoming input...  return spartaAPIGateway.NewResponse(http.StatusOK, \u0026amp;helloWorldResponse{ Message: fmt.Sprintf(\u0026#34;Hello world 🌏\u0026#34;), Request: gatewayEvent, }), nil } API Gateway The first requirement is to create a new API instance via sparta.NewAPIGateway().\nstage := sparta.NewStage(\u0026#34;prod\u0026#34;) apiGateway := sparta.NewAPIGateway(\u0026#34;MySpartaAPI\u0026#34;, stage) In the example above, we\u0026rsquo;re also including a Stage value. A non-nil Stage value will cause the registered API to be deployed. 
If the Stage value is nil, a REST API will be created, but it will not be deployed (and therefore not publicly accessible).\nResource The next step is to associate a URL path with the sparta.LambdaAWSInfo struct that represents the go function:\nfunc spartaHTMLLambdaFunctions(api *sparta.API) []*sparta.LambdaAWSInfo { var lambdaFunctions []*sparta.LambdaAWSInfo lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(helloWorld), helloWorld, sparta.IAMRoleDefinition{}) if nil != api { apiGatewayResource, _ := api.NewResource(\u0026#34;/hello\u0026#34;, lambdaFn) // We only return http.StatusOK  apiMethod, apiMethodErr := apiGatewayResource.NewMethod(\u0026#34;GET\u0026#34;, http.StatusOK, http.StatusInternalServerError) if nil != apiMethodErr { panic(\u0026#34;Failed to create /hello resource: \u0026#34; + apiMethodErr.Error()) } // The lambda resource only supports application/json Unmarshallable  // requests.  apiMethod.SupportedRequestContentTypes = []string{\u0026#34;application/json\u0026#34;} } return append(lambdaFunctions, lambdaFn) } Our helloWorld only supports GET. 
We\u0026rsquo;ll see how a single lambda function can support multiple HTTP methods shortly.\nProvision The final step is to provide the API instance to Sparta.Main()\n// Register the function with the API Gateway apiStage := sparta.NewStage(\u0026#34;v1\u0026#34;) apiGateway := sparta.NewAPIGateway(\u0026#34;SpartaHTML\u0026#34;, apiStage) Once the service is successfully provisioned, the Outputs key will include the API Gateway Deployed URL (sample):\nINFO[0096] ──────────────────────────────────────────────── INFO[0096] Stack Outputs INFO[0096] ──────────────────────────────────────────────── INFO[0096] S3SiteURL Description=\u0026#34;S3 Website URL\u0026#34; Value=\u0026#34;http://spartahtml-mweagle-s3site89c05c24a06599753eb3ae4e-1w6rehqu6x04c.s3-website-us-west-2.amazonaws.com\u0026#34; INFO[0096] APIGatewayURL Description=\u0026#34;API Gateway URL\u0026#34; Value=\u0026#34;https://w2tefhnt4b.execute-api.us-west-2.amazonaws.com/v1\u0026#34; INFO[0096] ──────────────────────────────────────────────── Combining the API Gateway URL OutputValue with our resource path (/hello), we get the absolute URL to our lambda function: https://w2tefhnt4b.execute-api.us-west-2.amazonaws.com/v1/hello\nVerify Let\u0026rsquo;s query the lambda function and see what the event data is at execution time. The snippet below is pretty-printed by piping the response through jq.\n$ curl -vs https://3e7ux226ga.execute-api.us-west-2.amazonaws.com/v1/hello | jq . * Trying 52.84.237.220... 
* TCP_NODELAY set * Connected to 3e7ux226ga.execute-api.us-west-2.amazonaws.com (52.84.237.220) port 443 (#0) * TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 * Server certificate: *.execute-api.us-west-2.amazonaws.com * Server certificate: Amazon * Server certificate: Amazon Root CA 1 * Server certificate: Starfield Services Root Certificate Authority - G2 \u0026gt; GET /v1/hello HTTP/1.1 \u0026gt; Host: 3e7ux226ga.execute-api.us-west-2.amazonaws.com \u0026gt; User-Agent: curl/7.54.0 \u0026gt; Accept: */* \u0026gt; \u0026lt; HTTP/1.1 200 OK \u0026lt; Content-Type: application/json \u0026lt; Content-Length: 1137 \u0026lt; Connection: keep-alive \u0026lt; Date: Mon, 29 Jan 2018 14:15:28 GMT \u0026lt; x-amzn-RequestId: db7f5734-04fe-11e8-b264-c70ecab3a032 \u0026lt; Access-Control-Allow-Origin: http://spartahtml-mweagle-s3site89c05c24a06599753eb3ae4e-419zo4dp8n2d.s3-website-us-west-2.amazonaws.com \u0026lt; Access-Control-Allow-Headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key \u0026lt; Access-Control-Allow-Methods: * \u0026lt; X-Amzn-Trace-Id: sampled=0;root=1-5a6f2c80-efb0f84554384252abca6d15 \u0026lt; X-Cache: Miss from cloudfront \u0026lt; Via: 1.1 570a1979c411cb4529fa1e711db52490.cloudfront.net (CloudFront) \u0026lt; X-Amz-Cf-Id: -UsCegiR1K3vJUFyAo9IMrWGdH8rKW6UBrtJLjxZqke19r0cxMl1NA== \u0026lt; { [1137 bytes data] * Connection #0 to host 3e7ux226ga.execute-api.us-west-2.amazonaws.com left intact { \u0026quot;Message\u0026quot;: \u0026quot;Hello world 🌏\u0026quot;, \u0026quot;Request\u0026quot;: { \u0026quot;method\u0026quot;: \u0026quot;GET\u0026quot;, \u0026quot;body\u0026quot;: {}, \u0026quot;headers\u0026quot;: { \u0026quot;Accept\u0026quot;: \u0026quot;*/*\u0026quot;, \u0026quot;CloudFront-Forwarded-Proto\u0026quot;: \u0026quot;https\u0026quot;, \u0026quot;CloudFront-Is-Desktop-Viewer\u0026quot;: \u0026quot;true\u0026quot;, \u0026quot;CloudFront-Is-Mobile-Viewer\u0026quot;: \u0026quot;false\u0026quot;, 
\u0026quot;CloudFront-Is-SmartTV-Viewer\u0026quot;: \u0026quot;false\u0026quot;, \u0026quot;CloudFront-Is-Tablet-Viewer\u0026quot;: \u0026quot;false\u0026quot;, \u0026quot;CloudFront-Viewer-Country\u0026quot;: \u0026quot;US\u0026quot;, \u0026quot;Host\u0026quot;: \u0026quot;3e7ux226ga.execute-api.us-west-2.amazonaws.com\u0026quot;, \u0026quot;User-Agent\u0026quot;: \u0026quot;curl/7.54.0\u0026quot;, \u0026quot;Via\u0026quot;: \u0026quot;1.1 570a1979c411cb4529fa1e711db52490.cloudfront.net (CloudFront)\u0026quot;, \u0026quot;X-Amz-Cf-Id\u0026quot;: \u0026quot;vAFNTV5uAMeTG9JN6IORnA7LYJhZyB3jHV7vh-7lXn2uZQUR6eHQUw==\u0026quot;, \u0026quot;X-Amzn-Trace-Id\u0026quot;: \u0026quot;Root=1-5a6f2c80-2b48a9c86a30b0162d8ab1f1\u0026quot;, \u0026quot;X-Forwarded-For\u0026quot;: \u0026quot;73.118.138.121, 205.251.214.60\u0026quot;, \u0026quot;X-Forwarded-Port\u0026quot;: \u0026quot;443\u0026quot;, \u0026quot;X-Forwarded-Proto\u0026quot;: \u0026quot;https\u0026quot; }, \u0026quot;queryParams\u0026quot;: {}, \u0026quot;pathParams\u0026quot;: {}, \u0026quot;context\u0026quot;: { \u0026quot;appId\u0026quot;: \u0026quot;\u0026quot;, \u0026quot;method\u0026quot;: \u0026quot;GET\u0026quot;, \u0026quot;requestId\u0026quot;: \u0026quot;db7f5734-04fe-11e8-b264-c70ecab3a032\u0026quot;, \u0026quot;resourceId\u0026quot;: \u0026quot;401s9n\u0026quot;, \u0026quot;resourcePath\u0026quot;: \u0026quot;/hello\u0026quot;, \u0026quot;stage\u0026quot;: \u0026quot;v1\u0026quot;, \u0026quot;identity\u0026quot;: { \u0026quot;accountId\u0026quot;: \u0026quot;\u0026quot;, \u0026quot;apiKey\u0026quot;: \u0026quot;\u0026quot;, \u0026quot;caller\u0026quot;: \u0026quot;\u0026quot;, \u0026quot;cognitoAuthenticationProvider\u0026quot;: \u0026quot;\u0026quot;, \u0026quot;cognitoAuthenticationType\u0026quot;: \u0026quot;\u0026quot;, \u0026quot;cognitoIdentityId\u0026quot;: \u0026quot;\u0026quot;, \u0026quot;cognitoIdentityPoolId\u0026quot;: \u0026quot;\u0026quot;, \u0026quot;sourceIp\u0026quot;: 
\u0026quot;73.118.138.121\u0026quot;, \u0026quot;user\u0026quot;: \u0026quot;\u0026quot;, \u0026quot;userAgent\u0026quot;: \u0026quot;curl/7.54.0\u0026quot;, \u0026quot;userArn\u0026quot;: \u0026quot;\u0026quot; } } } } While this demonstrates that our lambda function is publicly accessible, it\u0026rsquo;s not immediately obvious where the event data is being populated.\nMapping Templates The event data that\u0026rsquo;s actually supplied to the lambda function is the complete HTTP request body. This content is what the API Gateway sends to our lambda function, as defined by the integration mapping. This event data also includes the values of any whitelisted parameters. When the API Gateway Method is defined, it optionally includes any whitelisted query params and header values that should be forwarded to the integration target. For this example, we\u0026rsquo;re not whitelisting any params, so those fields (queryParams, pathParams) are empty. Then for each integration target (which can be AWS Lambda, a mock, or an HTTP Proxy), it\u0026rsquo;s possible to transform the API Gateway request data and whitelisted arguments into a format that\u0026rsquo;s more amenable to the target.\nSparta uses a pass-through template that passes all valid data, with minor Body differences based on the inbound Content-Type:\napplication/json #* Provide an automatic pass through template that transforms all inputs into the JSON payload sent to a golang function. The JSON behavior attempts to parse the incoming HTTP body as JSON and assign it to the `body` field. 
See https://forums.aws.amazon.com/thread.jspa?threadID=220274\u0026amp;tstart=0 http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html *# { \u0026quot;method\u0026quot;: \u0026quot;$context.httpMethod\u0026quot;, \u0026quot;body\u0026quot; : $input.json('$'), \u0026quot;headers\u0026quot;: { #foreach($param in $input.params().header.keySet()) \u0026quot;$param\u0026quot;: \u0026quot;$util.escapeJavaScript($input.params().header.get($param))\u0026quot; #if($foreach.hasNext),#end #end }, \u0026quot;queryParams\u0026quot;: { #foreach($param in $input.params().querystring.keySet()) \u0026quot;$param\u0026quot;: \u0026quot;$util.escapeJavaScript($input.params().querystring.get($param))\u0026quot; #if($foreach.hasNext),#end #end }, \u0026quot;pathParams\u0026quot;: { #foreach($param in $input.params().path.keySet()) \u0026quot;$param\u0026quot;: \u0026quot;$util.escapeJavaScript($input.params().path.get($param))\u0026quot; #if($foreach.hasNext),#end #end }, \u0026quot;context\u0026quot; : { \u0026quot;apiId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.apiId)\u0026quot;, \u0026quot;method\u0026quot; : \u0026quot;$util.escapeJavaScript($context.httpMethod)\u0026quot;, \u0026quot;requestId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.requestId)\u0026quot;, \u0026quot;resourceId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.resourceId)\u0026quot;, \u0026quot;resourcePath\u0026quot; : \u0026quot;$util.escapeJavaScript($context.resourcePath)\u0026quot;, \u0026quot;stage\u0026quot; : \u0026quot;$util.escapeJavaScript($context.stage)\u0026quot;, \u0026quot;identity\u0026quot; : { \u0026quot;accountId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.accountId)\u0026quot;, \u0026quot;apiKey\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.apiKey)\u0026quot;, \u0026quot;caller\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.caller)\u0026quot;, 
\u0026quot;cognitoAuthenticationProvider\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.cognitoAuthenticationProvider)\u0026quot;, \u0026quot;cognitoAuthenticationType\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.cognitoAuthenticationType)\u0026quot;, \u0026quot;cognitoIdentityId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.cognitoIdentityId)\u0026quot;, \u0026quot;cognitoIdentityPoolId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.cognitoIdentityPoolId)\u0026quot;, \u0026quot;sourceIp\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.sourceIp)\u0026quot;, \u0026quot;user\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.user)\u0026quot;, \u0026quot;userAgent\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.userAgent)\u0026quot;, \u0026quot;userArn\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.userArn)\u0026quot; } }, \u0026quot;authorizer\u0026quot;: { #foreach($param in $context.authorizer.keySet()) \u0026quot;$param\u0026quot;: \u0026quot;$util.escapeJavaScript($context.authorizer.get($param))\u0026quot; #if($foreach.hasNext),#end #end } }  * (Default Content-Type) #* Provide an automatic pass through template that transforms all inputs into the JSON payload sent to a golang function. The default behavior passes the 'body' key as raw string. 
See https://forums.aws.amazon.com/thread.jspa?threadID=220274\u0026amp;tstart=0 http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html *# { \u0026quot;method\u0026quot;: \u0026quot;$context.httpMethod\u0026quot;, \u0026quot;body\u0026quot; : \u0026quot;$input.path('$')\u0026quot;, \u0026quot;headers\u0026quot;: { #foreach($param in $input.params().header.keySet()) \u0026quot;$param\u0026quot;: \u0026quot;$util.escapeJavaScript($input.params().header.get($param))\u0026quot; #if($foreach.hasNext),#end #end }, \u0026quot;queryParams\u0026quot;: { #foreach($param in $input.params().querystring.keySet()) \u0026quot;$param\u0026quot;: \u0026quot;$util.escapeJavaScript($input.params().querystring.get($param))\u0026quot; #if($foreach.hasNext),#end #end }, \u0026quot;pathParams\u0026quot;: { #foreach($param in $input.params().path.keySet()) \u0026quot;$param\u0026quot;: \u0026quot;$util.escapeJavaScript($input.params().path.get($param))\u0026quot; #if($foreach.hasNext),#end #end }, \u0026quot;context\u0026quot; : { \u0026quot;apiId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.apiId)\u0026quot;, \u0026quot;method\u0026quot; : \u0026quot;$util.escapeJavaScript($context.httpMethod)\u0026quot;, \u0026quot;requestId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.requestId)\u0026quot;, \u0026quot;resourceId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.resourceId)\u0026quot;, \u0026quot;resourcePath\u0026quot; : \u0026quot;$util.escapeJavaScript($context.resourcePath)\u0026quot;, \u0026quot;stage\u0026quot; : \u0026quot;$util.escapeJavaScript($context.stage)\u0026quot;, \u0026quot;identity\u0026quot; : { \u0026quot;accountId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.accountId)\u0026quot;, \u0026quot;apiKey\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.apiKey)\u0026quot;, \u0026quot;caller\u0026quot; : 
\u0026quot;$util.escapeJavaScript($context.identity.caller)\u0026quot;, \u0026quot;cognitoAuthenticationProvider\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.cognitoAuthenticationProvider)\u0026quot;, \u0026quot;cognitoAuthenticationType\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.cognitoAuthenticationType)\u0026quot;, \u0026quot;cognitoIdentityId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.cognitoIdentityId)\u0026quot;, \u0026quot;cognitoIdentityPoolId\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.cognitoIdentityPoolId)\u0026quot;, \u0026quot;sourceIp\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.sourceIp)\u0026quot;, \u0026quot;user\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.user)\u0026quot;, \u0026quot;userAgent\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.userAgent)\u0026quot;, \u0026quot;userArn\u0026quot; : \u0026quot;$util.escapeJavaScript($context.identity.userArn)\u0026quot; } }, \u0026quot;authorizer\u0026quot;: { #foreach($param in $context.authorizer.keySet()) \u0026quot;$param\u0026quot;: \u0026quot;$util.escapeJavaScript($context.authorizer.get($param))\u0026quot; #if($foreach.hasNext),#end #end } }  The default mapping template forwards all whitelisted data \u0026amp; the body to the lambda function. You can see that switching on the method field would allow a single function to handle different HTTP methods.\nThe next example shows how to unmarshal this data and perform request-specific actions.\nProxying Envelope Because the integration request returned a successful response, the API Gateway response body contains only our lambda\u0026rsquo;s output ($input.json('$.body')).\nTo return an error that API Gateway can properly translate into an HTTP status code, use an apigateway.NewErrorResponse type. This custom error type includes fields that trigger integration mappings based on the inline HTTP StatusCode. 
The proper error code is extracted by lifting the code value from the Lambda\u0026rsquo;s response body and using a template override.\nIf you look at the Integration Response section of the /hello resource in the Console, you\u0026rsquo;ll see a list of Regular Expression matches:\nCleanup Before moving on, remember to decommission the service via:\ngo run application.go delete Wrapping Up Now that we know what data is actually being sent to our API Gateway-connected Lambda function, we\u0026rsquo;ll move on to performing a more complex operation, including returning a custom HTTP response body.\nNotes  Mapping Template Reference  "
},
{
	"uri": "/example_service/",
	"title": "Example Service",
	"tags": [],
	"description": "",
	"content": "This is a walkthrough of a simple \u0026ldquo;Hello World\u0026rdquo; style Sparta service.\nThe Overview section talks about the programming model.\nThe Details section goes into more detail about how Sparta manages your serverless application.\n Overview   Details   "
},
{
	"uri": "/reference/eventsources/kinesis/",
	"title": "Kinesis",
	"tags": [],
	"description": "",
	"content": "In this section we\u0026rsquo;ll walk through how to trigger your lambda function in response to Amazon Kinesis streams. This overview is based on the SpartaApplication sample code if you\u0026rsquo;d rather jump to the end result.\nGoal The goal of this example is to provision a Sparta lambda function that logs Amazon Kinesis events to CloudWatch logs.\nGetting Started We\u0026rsquo;ll start with an empty lambda function and build up the needed functionality.\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; ) func echoKinesisEvent(ctx context.Context, kinesisEvent awsLambdaEvents.KinesisEvent) (*awsLambdaEvents.KinesisEvent, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: kinesisEvent, }).Info(\u0026#34;Event received\u0026#34;) return \u0026amp;kinesisEvent, nil } For this sample all we\u0026rsquo;re going to do is transparently unmarshal the Kinesis event to an AWS Lambda event, log it, and return the value.\nWith the function defined, let\u0026rsquo;s register it with Sparta.\nSparta Integration First we wrap the go function in a LambdaAWSInfo struct:\nlambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoKinesisEvent), echoKinesisEvent, sparta.IAMRoleDefinition{}) Since our lambda function doesn\u0026rsquo;t access any other AWS Services, we can use an empty IAMRoleDefinition (sparta.IAMRoleDefinition{}).\nEvent Source Registration The last step is to configure our AWS Lambda function with Kinesis as the EventSource:\nlambdaFn.EventSourceMappings = append(lambdaFn.EventSourceMappings, \u0026amp;lambda.CreateEventSourceMappingInput{ EventSourceArn: aws.String(kinesisTestStream), StartingPosition: aws.String(\u0026#34;TRIM_HORIZON\u0026#34;), BatchSize: aws.Int64(100), Enabled: aws.Bool(true), }) The kinesisTestStream parameter is the Kinesis stream ARN (eg: 
arn:aws:kinesis:us-west-2:123412341234:stream/kinesisTestStream) whose events will trigger lambda execution.\nWrapping Up With the lambdaFn fully defined, we can provide it to sparta.Main() and deploy our service. The workflow below is shared by all Kinesis-triggered lambda functions:\n Define the lambda function (echoKinesisEvent). Create the required IAMRoleDefinition with appropriate privileges if the lambda function accesses other AWS services. Provide the lambda function \u0026amp; IAMRoleDefinition to sparta.NewAWSLambda(). Add the necessary EventSourceMappings to the LambdaAWSInfo struct so that the lambda function is properly configured.  Notes  The Kinesis stream and the AWS Lambda function must be provisioned in the same region. The AWS docs have an excellent Kinesis EventSource walkthrough.  "
},
{
	"uri": "/reference/application/environments/",
	"title": "Managing Environments",
	"tags": [],
	"description": "",
	"content": "It\u0026rsquo;s common for a single Sparta application to target multiple environments. For example:\n Development Staging Production  Each environment is largely similar, but the application may need slightly different configuration in each context.\nTo support this, Sparta uses Go\u0026rsquo;s conditional compilation support to ensure that configuration information is validated at build time. Conditional compilation is supported via the --tags/-t command line argument.\nThis example will work through the SpartaConfig sample. The requirement is that each environment declares its Name and also adds that value as a Stack Output.\nDefault Configuration To start with, create the default configuration. This is the configuration that Sparta uses when provisioning your Stack and defines the environment configuration contract.\n// +build !staging,!production // file: environments/default.go  package environments import ( \u0026#34;fmt\u0026#34; \u0026#34;github.com/Sirupsen/logrus\u0026#34; \u0026#34;github.com/aws/aws-sdk-go/aws/session\u0026#34; gocf \u0026#34;github.com/crewjam/go-cloudformation\u0026#34; sparta \u0026#34;github.com/mweagle/Sparta\u0026#34; ) // Name is the default configuration const Name = \u0026#34;\u0026#34; The important part is the set of excluded tags at the top of the file:\n// +build !staging,!production This ensures that the configuration is only eligible for compilation when neither environment-specific build tag is supplied.\nEnvironment Configuration The next steps are to define the environment-specific configuration files:\n// +build staging // file: environments/staging.go  package environments import ( \u0026#34;github.com/Sirupsen/logrus\u0026#34; \u0026#34;github.com/aws/aws-sdk-go/aws/session\u0026#34; gocf \u0026#34;github.com/crewjam/go-cloudformation\u0026#34; sparta \u0026#34;github.com/mweagle/Sparta\u0026#34; ) // Name is the staging configuration const Name = \u0026#34;staging\u0026#34; // +build production 
// file: environments/production.go  package environments import ( \u0026#34;github.com/Sirupsen/logrus\u0026#34; \u0026#34;github.com/aws/aws-sdk-go/aws/session\u0026#34; gocf \u0026#34;github.com/crewjam/go-cloudformation\u0026#34; sparta \u0026#34;github.com/mweagle/Sparta\u0026#34; ) // Name is the production configuration const Name = \u0026#34;production\u0026#34; These three files define the set of compile-time mutually-exclusive sources that represent environment targets.\nSegregating Services The serviceName argument supplied to sparta.Main defines the AWS CloudFormation stack that supports your application. While the previous files represent different environments, they will collide at provision time since they share the same service name.\nThe serviceName can be specialized by using the buildTags in the service name definition as in:\nfmt.Sprintf(\u0026#34;SpartaHelloWorld-%s\u0026#34;, sparta.OptionsGlobal.BuildTags), Each time you run provision with a unique --tags value, a new CloudFormation stack will be created.\nNOTE: This isn\u0026rsquo;t something suitable for production use as there could be multiple BuildTags values.\nEnforcing Environments The final requirement is to add the environment name as a Stack Output. 
To annotate the stack with the output value, we\u0026rsquo;ll register a ServiceDecorator and use the same conditional compilation support to compile the environment-specific version.\nThe main.go source file registers the workflow hook via:\nhooks := \u0026amp;sparta.WorkflowHooks{ Context: map[string]interface{}{}, ServiceDecorator: environments.ServiceDecoratorHook(sparta.OptionsGlobal.BuildTags), } Both environments/staging.go and environments/production.go define the same hook function:\nfunc ServiceDecoratorHook(buildTags string) sparta.ServiceDecoratorHook { return func(context map[string]interface{}, serviceName string, template *gocf.Template, S3Bucket string, S3Key string, buildID string, awsSession *session.Session, noop bool, logger *logrus.Logger) error { template.Outputs[\u0026#34;Environment\u0026#34;] = \u0026amp;gocf.Output{ Description: \u0026#34;Sparta Config target environment\u0026#34;, Value: Name, } return nil } } The environments/default.go definition is slightly different. The \u0026ldquo;default\u0026rdquo; environment isn\u0026rsquo;t one that our service should actually deploy to. It simply exists to make the initial Sparta build (the one that cross compiles the AWS Lambda binary) compile. 
Build tags are applied to the AWS Lambda compiled binary that Sparta generates.\nTo prevent users from accidentally deploying to the \u0026ldquo;default\u0026rdquo; environment, the BuildTags are validated in the hook definition:\nfunc ServiceDecoratorHook(buildTags string) sparta.ServiceDecoratorHook { return func(context map[string]interface{}, serviceName string, template *gocf.Template, S3Bucket string, S3Key string, buildID string, awsSession *session.Session, noop bool, logger *logrus.Logger) error { if len(buildTags) \u0026lt;= 0 { return fmt.Errorf(\u0026#34;Please provide a --tags value for environment target\u0026#34;) } return nil } } Provisioning Putting everything together, the SpartaConfig service can deploy to either environment:\nstaging\ngo run main.go provision --level info --s3Bucket $(S3_BUCKET) --noop --tags staging  production\ngo run main.go provision --level info --s3Bucket $(S3_BUCKET) --noop --tags production  Attempting to deploy to \u0026ldquo;default\u0026rdquo; generates an error:\nINFO[0000] Welcome to SpartaConfig- Go=go1.7.1 Option=provision SpartaVersion=0.9.2 UTC=2016-10-12T04:07:35Z INFO[0000] Provisioning service BuildID=550c9e360426f48201c885c0abeb078dfc000a0a NOOP=true Tags= INFO[0000] Verifying IAM Lambda execution roles INFO[0000] IAM roles verified Count=1 INFO[0000] Running `go generate` INFO[0000] Compiling binary Name=SpartaConfig_.lambda.amd64 INFO[0008] Executable binary size KB=15309 MB=14 INFO[0008] Creating ZIP archive for upload TempName=/Users/mweagle/Documents/gopath/src/github.com/mweagle/SpartaConfig/SpartaConfig_104207098 INFO[0009] Registering Sparta function FunctionName=main.helloWorld INFO[0009] Lambda function deployment package size KB=4262 MB=4 INFO[0009] Bypassing bucket expiration policy check due to -n/-noop command line argument BucketName=weagle INFO[0009] Bypassing S3 upload due to -n/-noop command line argument Bucket=weagle Key=SpartaConfig-/SpartaConfig_104207098 INFO[0009] Calling WorkflowHook 
WorkflowHook=github.com/mweagle/SpartaConfig/environments.ServiceDecoratorHook.func1 WorkflowHookContext=map[] INFO[0009] Invoking rollback functions RollbackCount=0 ERRO[0009] Please provide a --tags value for environment target Error: Please provide a --tags value for environment target Usage: main provision [flags] Flags: -i, --buildID string Optional BuildID to use -s, --s3Bucket string S3 Bucket to use for Lambda source -t, --tags string Optional build tags to use for compilation Global Flags: -l, --level string Log level [panic, fatal, error, warn, info, debug] (default \u0026#34;info\u0026#34;) -n, --noop Dry-run behavior only (do not perform mutations) ERRO[0009] Please provide a --tags value for environment target exit status 1 Notes  Call ParseOptions to initialize sparta.OptionsGlobal.BuildTags field for use in a service name definition. An alternative approach is to define a custom ArchiveHook and inject custom configuration into the ZIP archive. This data is available at Path.Join(env.LAMBDA_TASK_ROOT, ZIP_ARCHIVE_PATH) See discfg, etcd, Consul (among others) for alternative, more dynamic discovery services.  "
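The service-name specialization described above can be sketched as a small helper that also enforces the non-empty `--tags` guard from the `ServiceDecoratorHook`. Note that `stackName` is a hypothetical function written for illustration, not part of Sparta's API:

```go
package main

import (
	"fmt"
	"strings"
)

// stackName derives a per-environment CloudFormation stack name from the
// Sparta --tags value. Hypothetical helper — Sparta itself exposes the tags
// via sparta.OptionsGlobal.BuildTags after ParseOptions runs.
func stackName(base string, buildTags string) (string, error) {
	if strings.TrimSpace(buildTags) == "" {
		// Mirror the hook's guard: refuse the "default" environment.
		return "", fmt.Errorf("please provide a --tags value for environment target")
	}
	return fmt.Sprintf("%s-%s", base, buildTags), nil
}

func main() {
	for _, tags := range []string{"staging", "production"} {
		name, err := stackName("SpartaHelloWorld", tags)
		if err != nil {
			panic(err)
		}
		fmt.Println(name)
	}
}
```

Each distinct `--tags` value yields a distinct stack name, which is why separate CloudFormation stacks are created per environment.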
},
{
	"uri": "/reference/eventsources/s3/",
	"title": "S3",
	"tags": [],
	"description": "",
	"content": "In this section we\u0026rsquo;ll walk through how to trigger your lambda function in response to S3 events. This overview is based on the SpartaImager sample code if you\u0026rsquo;d rather jump to the end result.\nGoal Assume we have an S3 bucket that stores images. You\u0026rsquo;ve been asked to write a service that creates a duplicate image that includes a characteristic stamp overlay and stores it in the same S3 bucket.\nGetting Started We\u0026rsquo;ll start with an empty lambda function and build up the needed functionality.\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; awsLambdaContext \u0026#34;github.com/aws/aws-lambda-go/lambdacontext\u0026#34; ) type transformedResponse struct { Bucket string Key string } func transformImage(ctx context.Context, event awsLambdaEvents.S3Event) ([]transformedResponse, error) { logger, _ := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) lambdaContext, _ := awsLambdaContext.FromContext(ctx) logger.WithFields(logrus.Fields{ \u0026#34;RequestID\u0026#34;: lambdaContext.AwsRequestID, \u0026#34;RecordCount\u0026#34;: len(event.Records), }).Info(\u0026#34;Request received 👍\u0026#34;) Since transformImage is expected to be triggered by S3 event changes, we can transparently unmarshal the incoming request into an S3Event defined by the AWS Go Lambda SDK.\nS3 events are delivered in batches, via lists of EventRecords, so we\u0026rsquo;ll need to process each record.\nfor _, eachRecord := range event.Records { // What happened?  
switch eachRecord.EventName { case \u0026#34;ObjectCreated:Put\u0026#34;: { err = stampImage(eachRecord.S3.Bucket.Name, eachRecord.S3.Object.Key, logger) } case \u0026#34;ObjectRemoved:Delete\u0026#34;: { // Delete stamped image  } default: { logger.Info(\u0026#34;Unsupported event: \u0026#34;, eachRecord.EventName) } } //  if err != nil { logger.Error(\u0026#34;Failed to process event: \u0026#34;, err.Error()) return nil, err } } The stampImage function does most of the work, fetching the S3 image to memory, applying the stamp, and putting the transformed content back to S3 with a new name. It uses a simple xformed_ keyname prefix to identify items that have already been stamped \u0026amp; to prevent an \u0026ldquo;event-storm\u0026rdquo; from being triggered. This simple approach is acceptable for an example, but in production you should use a more durable approach.\nSparta Integration With the core of transformImage complete, the next step is to integrate the go function with Sparta. This is performed by the imagerFunctions source.\nOur lambda function needs to both Get and Put items back to an S3 bucket, so we need an IAM Role that grants those privileges under which the function will execute:\n// Provision an IAM::Role as part of this application var iamRole = sparta.IAMRoleDefinition{} // Setup the ARN that includes all child keys resourceArn := fmt.Sprintf(\u0026#34;%s/*\u0026#34;, s3EventBroadcasterBucket) iamRole.Privileges = append(iamRole.Privileges, sparta.IAMRolePrivilege{ Actions: []string{\u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:PutObject\u0026#34;, }, Resource: resourceArn, }) The s3EventBroadcasterBucket param is the ARN of the S3 bucket that will trigger your lambda function (eg: arn:aws:s3:::MyImagingS3Bucket).\nWith the IAM Role defined, we can create the Sparta lambda function for transformImage:\n// The default timeout is 3 seconds - increase that to 30 seconds s.t. 
the // transform lambda doesn\u0026#39;t fail early. transformOptions := \u0026amp;sparta.LambdaFunctionOptions{ Description: \u0026#34;Stamp assets in S3\u0026#34;, MemorySize: 128, Timeout: 30, } lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(transformImage), transformImage, iamRole) lambdaFn.Options = transformOptions It typically takes more than 3 seconds to apply the transform, so we increase the execution timeout and provision a new lambda function using the iamRole we defined earlier.\nEvent Source Registration If we were to deploy this Sparta application, the transformImage function would have the ability to Get and Put back to the s3EventBroadcasterBucket, but would not be invoked in response to events triggered by that bucket. To register for state change events, we need to configure the lambda\u0026rsquo;s Permissions:\n////////////////////////////////////////////////////////////////////////////// // S3 configuration // lambdaFn.Permissions = append(lambdaFn.Permissions, sparta.S3Permission{ BasePermission: sparta.BasePermission{ SourceArn: s3EventBroadcasterBucket, }, Events: []string{\u0026#34;s3:ObjectCreated:*\u0026#34;, \u0026#34;s3:ObjectRemoved:*\u0026#34;}, }) lambdaFunctions = append(lambdaFunctions, lambdaFn) When Sparta generates the CloudFormation template, it scans for Permission configurations. For push based sources like S3, Sparta uses that service\u0026rsquo;s APIs to register your lambda function as a publishing target for events. This remote registration is handled automatically by CustomResources added to the CloudFormation template.\nWrapping Up With the lambdaFn fully defined, we can provide it to sparta.Main() and deploy our service. The workflow below is shared by all S3-triggered lambda functions:\n Define the lambda function (transformImage). Implement the associated business logic (stampImage). If needed, create the required IAMRoleDefinition with appropriate privileges. 
Provide the lambda function \u0026amp; IAMRoleDefinition to sparta.NewAWSLambda() Add the necessary Permissions to the LambdaAWSInfo struct so that the lambda function is triggered.  The SpartaImager repo contains the full code, and includes API Gateway support that allows you to publicly fetch the stamped image via an expiring S3 URL.\nOther Resources  The AWS docs have an excellent S3 event source walkthrough.  "
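The `xformed_` keyname guard described above can be sketched as a pure function; `transformedKey` is a hypothetical helper written for illustration, not part of the SpartaImager source:

```go
package main

import (
	"fmt"
	"strings"
)

// stampedPrefix marks objects that have already been stamped, so that the
// Put of a stamped copy does not retrigger the lambda (the "event-storm").
const stampedPrefix = "xformed_"

// transformedKey returns the destination key for an incoming S3 object key,
// or ok=false when the key is already a stamped copy and must be skipped.
func transformedKey(key string) (transformed string, ok bool) {
	if strings.HasPrefix(key, stampedPrefix) {
		return "", false // already stamped; ignore this event
	}
	return stampedPrefix + key, true
}

func main() {
	for _, key := range []string{"photo.jpg", "xformed_photo.jpg"} {
		out, ok := transformedKey(key)
		fmt.Println(key, "->", out, ok)
	}
}
```

As the text notes, a prefix check is fine for a demo; a production service would track processed objects more durably (object metadata or a separate destination bucket, for example).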
},
{
	"uri": "/reference/eventsources/ses/",
	"title": "SES",
	"tags": [],
	"description": "",
	"content": "In this section we\u0026rsquo;ll walk through how to trigger your lambda function in response to inbound email. This overview is based on the SpartaApplication sample code if you\u0026rsquo;d rather jump to the end result.\nGoal Assume that we have already verified our email domain with AWS. This allows our domain\u0026rsquo;s email to be handled by SES.\nWe\u0026rsquo;ve been asked to write a lambda function that logs inbound messages, including the metadata associated with the message body itself.\nThere is also an additional requirement to support immutable infrastructure, so our service needs to manage the S3 bucket in which message bodies are stored. Our service cannot rely on a pre-existing S3 bucket. The infrastructure (and associated security policies) is coupled with the application logic.\nGetting Started We\u0026rsquo;ll start with an empty lambda function and build up the needed functionality.\nimport ( spartaSES \u0026#34;github.com/mweagle/Sparta/aws/ses\u0026#34; ) func echoSESEvent(ctx context.Context, sesEvent spartaSES.Event) (*spartaSES.Event, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) configuration, configErr := sparta.Discover() logger.WithFields(logrus.Fields{ \u0026#34;Error\u0026#34;: configErr, \u0026#34;Configuration\u0026#34;: configuration, }).Info(\u0026#34;Discovery results\u0026#34;) return \u0026amp;sesEvent, nil } Unmarshalling the SES Event At this point we would normally continue processing the SES event, using Sparta types until the official events are available.\nHowever, before moving on to the event processing, we need to take a detour into dynamic infrastructure because of the immutable infrastructure requirement.\nThis requirement implies that our service must be self-contained: we can\u0026rsquo;t assume that the S3 bucket already exists. 
How can our locally compiled code access AWS-created resources?\nDynamic Resources The immutable infrastructure requirement makes this lambda function a bit more complex. Our service needs to:\n Provision a new S3 bucket for email message body storage  SES will not provide the message body in the event data. It will only store the email body in an S3 bucket, from which your lambda function can later consume it.   Wait for the S3 bucket to be provisioned  As we need a new S3 bucket, we\u0026rsquo;re relying on AWS to generate a unique name. But this means that our lambda function doesn\u0026rsquo;t know the S3 bucket name during provisioning. In fact, we shouldn\u0026rsquo;t even create an AWS Lambda function if the S3 bucket can\u0026rsquo;t be created.   Include an IAMPrivilege so that our go function can access the dynamically created bucket Discover the S3 Bucket at lambda execution time  Provision Message Body Storage Resource Let\u0026rsquo;s first take a look at how the SES lambda handler provisions a new S3 bucket via the MessageBodyStorage type:\nfunc appendSESLambda(api *sparta.API, lambdaFunctions []*sparta.LambdaAWSInfo) []*sparta.LambdaAWSInfo { // Our lambda function will need to be able to read from the bucket, which \t// will be handled by the S3MessageBodyBucketDecorator below \tlambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoSESEvent), echoSESEvent, sparta.IAMRoleDefinition{}) // Setup options s.t. the lambda function has time to consume the message body \tlambdaFn.Options = \u0026amp;sparta.LambdaFunctionOptions{ Description: \u0026#34;\u0026#34;, MemorySize: 128, Timeout: 10, } // Add a Permission s.t. 
the Lambda function automatically manages SES registration  sesPermission := sparta.SESPermission{ BasePermission: sparta.BasePermission{ // SES only supports wildcard ARNs  SourceArn: \u0026#34;*\u0026#34;, }, InvocationType: \u0026#34;Event\u0026#34;, } // Store the message body  bodyStorage, _ := sesPermission.NewMessageBodyStorageResource(\u0026#34;Special\u0026#34;) sesPermission.MessageBodyStorage = bodyStorage The MessageBodyStorage type (and the related MessageBodyStorageOptions type) causes our SESPermission handler to add an S3 ReceiptRule at the head of the rules list. This rule instructs SES to store the message body in the supplied bucket before invoking our lambda function.\nThe single parameter \u0026quot;Special\u0026quot; is an application-unique literal value that is used to create a stable CloudFormation resource identifier so that new buckets are not created in response to stack update requests.\nOur SES handler then adds two ReceiptRules:\nsesPermission.ReceiptRules = make([]sparta.ReceiptRule, 0) sesPermission.ReceiptRules = append(sesPermission.ReceiptRules, sparta.ReceiptRule{ Name: \u0026#34;Special\u0026#34;, Recipients: []string{\u0026#34;somebody_special@gosparta.io\u0026#34;}, TLSPolicy: \u0026#34;Optional\u0026#34;, }) sesPermission.ReceiptRules = append(sesPermission.ReceiptRules, sparta.ReceiptRule{ Name: \u0026#34;Default\u0026#34;, Recipients: []string{}, TLSPolicy: \u0026#34;Optional\u0026#34;, }) Dynamic IAMPrivilege Arn Our lambda function is required to access the message body in the dynamically created MessageBodyStorage resource, but the S3 resource Arn is only defined after the service is provisioned. The solution to this is to reference the dynamically generated BucketArnAllKeys() value in the sparta.IAMRolePrivilege initializer:\n// Then add the privilege to the Lambda function s.t. 
we can actually get at the data lambdaFn.RoleDefinition.Privileges = append(lambdaFn.RoleDefinition.Privileges, sparta.IAMRolePrivilege{ Actions: []string{\u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:HeadObject\u0026#34;}, Resource: sesPermission.MessageBodyStorage.BucketArnAllKeys(), }) The last step is to register the SESPermission with the lambda info:\n// Finally add the SES permission to the lambda function lambdaFn.Permissions = append(lambdaFn.Permissions, sesPermission) At this point we\u0026rsquo;ve implicitly created an S3 bucket via the MessageBodyStorage value. Our lambda function now needs to dynamically determine the AWS-assigned bucket name.\nDynamic Message Body Storage Discovery Our echoSESEvent function needs to determine, at execution time, the MessageBodyStorage S3 bucket name. This is done via sparta.Discover():\nconfiguration, configErr := sparta.Discover() logger.WithFields(logrus.Fields{ \u0026#34;Error\u0026#34;: configErr, \u0026#34;Configuration\u0026#34;: configuration, }).Info(\u0026#34;Discovery results\u0026#34;) // The message bucket is an explicit `DependsOn` relationship, so it\u0026#39;ll be in the // resources map. We\u0026#39;ll find it by looking for the dependent resource with the \u0026#34;AWS::S3::Bucket\u0026#34; type bucketName := \u0026#34;\u0026#34; for _, eachResourceInfo := range configuration.Resources { if eachResourceInfo.ResourceType == \u0026#34;AWS::S3::Bucket\u0026#34; { bucketName = eachResourceInfo.Properties[\u0026#34;Ref\u0026#34;] } } if \u0026#34;\u0026#34; == bucketName { return nil, errors.Errorf(\u0026#34;Failed to discover SES bucket from sparta.Discovery: %#v\u0026#34;, configuration) } The sparta.Discover() function returns a DiscoveryInfo structure. 
This data is published into the Lambda\u0026rsquo;s environment variables to enable it to discover other resources published in the same Stack.\nThe structure includes the stack\u0026rsquo;s Pseudo Parameters as well as information about any immediate resource dependencies, e.g. those that were explicitly marked as DependsOn. See the discovery documentation for more details.\nAs we only have a single dependency, our discovery filter is:\n// The message bucket is an explicit `DependsOn` relationship, so it\u0026#39;ll be in the // resources map. We\u0026#39;ll find it by looking for the dependent resource with the \u0026#34;AWS::S3::Bucket\u0026#34; type bucketName := \u0026#34;\u0026#34; for _, eachResourceInfo := range configuration.Resources { if eachResourceInfo.ResourceType == \u0026#34;AWS::S3::Bucket\u0026#34; { bucketName = eachResourceInfo.Properties[\u0026#34;Ref\u0026#34;] } } if \u0026#34;\u0026#34; == bucketName { return nil, errors.Errorf(\u0026#34;Failed to discover SES bucket from sparta.Discovery: %#v\u0026#34;, configuration) } Sparta Integration The rest of echoSESEvent satisfies the other requirements, with a bit of help from the SES event types:\n// Get the metadata about the item... 
svc := s3.New(session.New()) for _, eachRecord := range sesEvent.Records { logger.WithFields(logrus.Fields{ \u0026#34;Source\u0026#34;: eachRecord.SES.Mail.Source, \u0026#34;MessageID\u0026#34;: eachRecord.SES.Mail.MessageID, \u0026#34;BucketName\u0026#34;: bucketName, }).Info(\u0026#34;SES Event\u0026#34;) if \u0026#34;\u0026#34; != bucketName { params := \u0026amp;s3.HeadObjectInput{ Bucket: aws.String(bucketName), Key: aws.String(eachRecord.SES.Mail.MessageID), } resp, err := svc.HeadObject(params) logger.WithFields(logrus.Fields{ \u0026#34;Error\u0026#34;: err, \u0026#34;Metadata\u0026#34;: resp, }).Info(\u0026#34;SES MessageBody\u0026#34;) } } return \u0026amp;sesEvent, nil Wrapping Up With the lambdaFn fully defined, we can provide it to sparta.Main() and deploy our service. The workflow below is shared by all SES-triggered lambda functions:\n Define the lambda function (echoSESEvent). Create the required IAMRoleDefinition with appropriate privileges if the lambda function accesses other AWS services. Provide the lambda function \u0026amp; IAMRoleDefinition to sparta.NewAWSLambda() Add the necessary Permissions to the LambdaAWSInfo struct so that the lambda function is triggered.  Additionally, if the SES handler needs to access the raw email message body:\n Create a new sesPermission.NewMessageBodyStorageResource(\u0026quot;Special\u0026quot;) value to store the message body Assign the value to the sesPermission.MessageBodyStorage field If your lambda function needs to consume the message body, add an entry to sesPermission.[]IAMPrivilege that includes the sesPermission.MessageBodyStorage.BucketArnAllKeys() Arn In your go lambda function definition, discover the S3 bucket name via sparta.Discover()  Notes  The SES message (including headers) is stored in the raw format More on Immutable Infrastructure:  Subbu - Automate Everything Chad Fowler - Immutable Deployments The Cloudcast - What is Immutable Infrastructure The New Stack    "
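The bucket-discovery filter shown above runs against the result of sparta.Discover(). The same filtering logic can be exercised locally with stand-in types; the `discoveryInfo`/`resourceInfo` structs below are simplified assumptions for illustration, not the real Sparta API:

```go
package main

import "fmt"

// Minimal stand-ins for the sparta.Discover() result types; the field
// names mirror the walkthrough but are assumptions, not the real API.
type resourceInfo struct {
	ResourceType string
	Properties   map[string]string
}

type discoveryInfo struct {
	Resources map[string]resourceInfo
}

// discoverBucket applies the same filter as echoSESEvent: scan the
// dependent resources for the single AWS::S3::Bucket entry and return
// its CloudFormation Ref (the AWS-assigned bucket name).
func discoverBucket(config discoveryInfo) (string, error) {
	for _, eachResourceInfo := range config.Resources {
		if eachResourceInfo.ResourceType == "AWS::S3::Bucket" {
			return eachResourceInfo.Properties["Ref"], nil
		}
	}
	return "", fmt.Errorf("failed to discover SES bucket: %#v", config)
}

func main() {
	config := discoveryInfo{
		Resources: map[string]resourceInfo{
			"SpecialServiceBucket": {
				ResourceType: "AWS::S3::Bucket",
				Properties:   map[string]string{"Ref": "myservice-specialservice-abc123"},
			},
		},
	}
	bucketName, err := discoverBucket(config)
	fmt.Println(bucketName, err)
}
```

Because the walkthrough's service has exactly one `DependsOn` S3 bucket, the first match is sufficient; a service with several bucket dependencies would need a more selective filter.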
},
{
	"uri": "/reference/eventsources/sns/",
	"title": "SNS",
	"tags": [],
	"description": "",
	"content": "In this section we\u0026rsquo;ll walk through how to trigger your lambda function in response to SNS events. This overview is based on the SpartaApplication sample code if you\u0026rsquo;d rather jump to the end result.\nGoal Assume that we have an SNS topic that broadcasts notifications. We\u0026rsquo;ve been asked to write a lambda function that logs the Subject and Message text to CloudWatch logs for later processing.\nGetting Started We\u0026rsquo;ll start with an empty lambda function and build up the needed functionality.\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; ) func echoSNSEvent(ctx context.Context, snsEvent awsLambdaEvents.SNSEvent) (*awsLambdaEvents.SNSEvent, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: snsEvent, }).Info(\u0026#34;Event received\u0026#34;) return \u0026amp;snsEvent, nil } Unmarshalling the SNS Event SNS events are delivered in batches, via lists of SNSEventRecords, so we\u0026rsquo;ll need to process each record.\nfor _, eachRecord := range snsEvent.Records { logger.WithFields(logrus.Fields{ \u0026#34;Subject\u0026#34;: eachRecord.Sns.Subject, \u0026#34;Message\u0026#34;: eachRecord.Sns.Message, }).Info(\u0026#34;SNS Event\u0026#34;) } That\u0026rsquo;s enough to get the data into CloudWatch Logs.\nSparta Integration With the core of the echoSNSEvent complete, the next step is to integrate the go function with Sparta. This is performed by the appendSNSLambda function. 
Since the echoSNSEvent function doesn\u0026rsquo;t access any additional services (Sparta enables CloudWatch Logs privileges by default), the integration is pretty straightforward:\nlambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoSNSEvent), echoSNSEvent, sparta.IAMRoleDefinition{}) Event Source Registration If we were to deploy this Sparta application, the echoSNSEvent function would have the ability to log SNS events, but would not be invoked in response to messages published to that topic. To register for notifications, we need to configure the lambda\u0026rsquo;s Permissions:\nlambdaFn.Permissions = append(lambdaFn.Permissions, sparta.SNSPermission{ BasePermission: sparta.BasePermission{ SourceArn: snsTopic, }, }) lambdaFunctions = append(lambdaFunctions, lambdaFn) The snsTopic param is the ARN of the SNS topic that will notify your lambda function (e.g. arn:aws:sns:us-west-2:000000000000:myTopicName).\nSee the S3 docs for more information on how the Permissions data is processed.\nWrapping Up With the lambdaFn fully defined, we can provide it to sparta.Main() and deploy our service. The workflow below is shared by all SNS-triggered lambda functions:\n Define the lambda function (echoSNSEvent). Create the required IAMRoleDefinition with appropriate privileges if the lambda function accesses other AWS services. Provide the lambda function \u0026amp; IAMRoleDefinition to sparta.NewAWSLambda() Add the necessary Permissions to the LambdaAWSInfo struct so that the lambda function is triggered.  Other Resources  TBD  "
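The per-record loop in echoSNSEvent can be sketched as a small, self-contained Go program; the `snsEvent`/`snsEventRecord` types below are local stand-ins for the `aws-lambda-go` events structs, and returning the formatted lines stands in for writing to CloudWatch Logs:

```go
package main

import "fmt"

// Local stand-ins for the aws-lambda-go SNS event types used in the
// walkthrough; the real structs carry many more fields.
type snsEntity struct {
	Subject string
	Message string
}

type snsEventRecord struct {
	Sns snsEntity
}

type snsEvent struct {
	Records []snsEventRecord
}

// logRecords mirrors the per-record loop in echoSNSEvent, returning
// the formatted lines instead of writing them to CloudWatch Logs.
func logRecords(event snsEvent) []string {
	lines := make([]string, 0, len(event.Records))
	for _, eachRecord := range event.Records {
		lines = append(lines, fmt.Sprintf("Subject=%s Message=%s",
			eachRecord.Sns.Subject, eachRecord.Sns.Message))
	}
	return lines
}

func main() {
	event := snsEvent{Records: []snsEventRecord{
		{Sns: snsEntity{Subject: "greeting", Message: "hello"}},
	}}
	for _, line := range logRecords(event) {
		fmt.Println(line)
	}
}
```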
},
{
	"uri": "/reference/eventsources/sqs/",
	"title": "SQS",
	"tags": [],
	"description": "",
	"content": "In this section we\u0026rsquo;ll walk through how to trigger your lambda function in response to AWS Simple Queue Service (SQS) events. This overview is based on the SpartaSQS sample code if you\u0026rsquo;d rather jump to the end result.\nGoal The goal here is to create a self-contained service that provisions an SQS queue and an AWS Lambda function that processes messages posted to the queue.\nGetting Started We\u0026rsquo;ll start with an empty lambda function and build up the needed functionality.\nimport ( \u0026#34;context\u0026#34; awsLambdaGo \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; sparta \u0026#34;github.com/mweagle/Sparta\u0026#34; spartaCF \u0026#34;github.com/mweagle/Sparta/aws/cloudformation\u0026#34; gocf \u0026#34;github.com/mweagle/go-cloudformation\u0026#34; \u0026#34;github.com/sirupsen/logrus\u0026#34; ) func sqsHandler(ctx context.Context, sqsRequest awsLambdaGo.SQSEvent) error { logger, _ := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) logger.WithField(\u0026#34;Event\u0026#34;, sqsRequest).Info(\u0026#34;SQS Event Received\u0026#34;) return nil } Since the sqsHandler function subscribes to SQS messages, it can use the AWS provided SQSEvent to automatically unmarshal the incoming event.\nTypically the lambda function would process each record in the event, but for this example we\u0026rsquo;ll just log the entire batch and then return.\nSparta Integration The next step is to integrate the lambda function with Sparta:\n// 1. Create the Sparta Lambda function lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(sqsHandler), sqsHandler, sparta.IAMRoleDefinition{}) Once the lambda function is integrated with Sparta, we can use a TemplateDecoratorHandler to include the SQS provisioning request as part of the overall service creation.\nSQS Queue Definition Decorators enable a Sparta service to provision other types of infrastructure together with the core lambda functions. 
In this example, our sqsHandler function should also provision an SQS queue from which it will receive events. This is done as in the following:\nsqsResourceName := \u0026#34;LambdaSQSFTW\u0026#34; sqsDecorator := func(serviceName string, lambdaResourceName string, lambdaResource gocf.LambdaFunction, resourceMetadata map[string]interface{}, S3Bucket string, S3Key string, buildID string, template *gocf.Template, context map[string]interface{}, logger *logrus.Logger) error { // Include the SQS resource in the application  sqsResource := \u0026amp;gocf.SQSQueue{} template.AddResource(sqsResourceName, sqsResource) return nil } lambdaFn.Decorators = []sparta.TemplateDecoratorHandler{sparta.TemplateDecoratorHookFunc(sqsDecorator)} This function-level decorator includes an AWS CloudFormation SQS::Queue definition that will be included with the stack definition.\nConnecting SQS to AWS Lambda The final step is to make the sqsHandler the Lambda\u0026rsquo;s EventSourceMapping target for the dynamically provisioned Queue\u0026rsquo;s ARN:\nlambdaFn.EventSourceMappings = append(lambdaFn.EventSourceMappings, \u0026amp;sparta.EventSourceMapping{ EventSourceArn: gocf.GetAtt(sqsResourceName, \u0026#34;Arn\u0026#34;), BatchSize: 2, }) Wrapping Up With the lambdaFn fully defined, we can provide it to sparta.Main() and deploy our service. It\u0026rsquo;s also possible to use a pre-existing SQS resource by providing a string literal as the EventSourceArn value.\nOther Resources  The AWS docs have an excellent SQS event source walkthrough.  "
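While sqsHandler above just logs the whole batch, a typical handler iterates the records. Here is a hedged, self-contained sketch of that pattern; the `sqsEvent`/`sqsMessage` types are local stand-ins for the `aws-lambda-go` SQSEvent structs, and the empty-body check is only an illustrative validation:

```go
package main

import "fmt"

// Stand-ins for awsLambdaGo.SQSEvent / SQSMessage from aws-lambda-go;
// only the fields this sketch touches are included.
type sqsMessage struct {
	MessageId string
	Body      string
}

type sqsEvent struct {
	Records []sqsMessage
}

// handleBatch sketches what sqsHandler would do if it processed each
// record instead of logging the whole batch: returning an error fails
// the batch so SQS redelivers it after the visibility timeout.
func handleBatch(event sqsEvent) ([]string, error) {
	processed := make([]string, 0, len(event.Records))
	for _, record := range event.Records {
		if record.Body == "" {
			return nil, fmt.Errorf("empty body for message %s", record.MessageId)
		}
		processed = append(processed, record.MessageId)
	}
	return processed, nil
}

func main() {
	event := sqsEvent{Records: []sqsMessage{
		{MessageId: "msg-1", Body: "first"},
		{MessageId: "msg-2", Body: "second"},
	}}
	ids, err := handleBatch(event)
	fmt.Println(ids, err)
}
```

The BatchSize of 2 in the EventSourceMapping above bounds how many records arrive per invocation, so the loop here would see at most two entries.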
},
{
	"uri": "/reference/apigateway/request_params/",
	"title": "Request Parameters",
	"tags": [],
	"description": "",
	"content": "Request Parameters This example demonstrates how to accept client request params supplied as HTTP query params and return an expiring S3 URL to access content. The source for this is the s3ItemInfo function defined as part of the SpartaApplication.\nLambda Definition Our function will accept two params:\n bucketName : The S3 bucket name storing the asset keyName : The S3 item key  Those params will be passed as part of the URL query string. The function will fetch the item metadata, generate an expiring URL for public S3 access, and return a JSON response body with the item data.\nBecause s3ItemInfo is expected to be invoked by the API Gateway, we\u0026rsquo;ll use the Sparta API Gateway request type in the function signature:\nimport ( spartaAPIGateway \u0026#34;github.com/mweagle/Sparta/aws/apigateway\u0026#34; spartaEvents \u0026#34;github.com/mweagle/Sparta/aws/events\u0026#34; ) func s3ItemInfo(ctx context.Context, apigRequest spartaEvents.APIGatewayRequest) (*spartaAPIGateway.Response, error) { logger, _ := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) lambdaContext, _ := awsLambdaContext.FromContext(ctx) logger.WithFields(logrus.Fields{ \u0026#34;RequestID\u0026#34;: lambdaContext.AwsRequestID, }).Info(\u0026#34;Request received\u0026#34;) getObjectInput := \u0026amp;s3.GetObjectInput{ Bucket: aws.String(apigRequest.QueryParams[\u0026#34;bucketName\u0026#34;]), Key: aws.String(apigRequest.QueryParams[\u0026#34;keyName\u0026#34;]), } awsSession := spartaAWS.NewSession(logger) svc := s3.New(awsSession) result, err := svc.GetObject(getObjectInput) if nil != err { return nil, err } presignedReq, _ := svc.GetObjectRequest(getObjectInput) url, err := presignedReq.Presign(5 * time.Minute) if nil != err { return nil, err } return spartaAPIGateway.NewResponse(http.StatusOK, \u0026amp;itemInfoResponse{ S3: result, URL: url, }), nil } The sparta.APIGatewayRequest fields correspond to the Integration Response Mapping template discussed in the previous 
example (see the full mapping template here).\nOnce the event is unmarshaled, we can use it to fetch the S3 item info:\ngetObjectInput := \u0026amp;s3.GetObjectInput{ Bucket: aws.String(lambdaEvent.QueryParams[\u0026#34;bucketName\u0026#34;]), Key: aws.String(lambdaEvent.QueryParams[\u0026#34;keyName\u0026#34;]), } Assuming there are no errors (including the case where the item does not exist), the remainder of the function fetches the data, generates a presigned URL, and returns a JSON response whose shape matches the Sparta default mapping templates:\nawsSession := spartaAWS.NewSession(logger) svc := s3.New(awsSession) result, err := svc.GetObject(getObjectInput) if nil != err { return nil, err } presignedReq, _ := svc.GetObjectRequest(getObjectInput) url, err := presignedReq.Presign(5 * time.Minute) if nil != err { return nil, err } return spartaAPIGateway.NewResponse(http.StatusOK, \u0026amp;itemInfoResponse{ S3: result, URL: url, }), nil API Gateway The next step is to create a new API instance via sparta.NewAPIGateway()\napiStage := sparta.NewStage(\u0026#34;v1\u0026#34;) apiGateway := sparta.NewAPIGateway(\u0026#34;SpartaImagerAPI\u0026#34;, apiStage) Lambda Binding Next we create a sparta.LambdaAWSInfo struct that references the s3ItemInfo function:\nvar iamDynamicRole = sparta.IAMRoleDefinition{} iamDynamicRole.Privileges = append(iamDynamicRole.Privileges, sparta.IAMRolePrivilege{ Actions: []string{\u0026#34;s3:GetObject\u0026#34;}, Resource: resourceArn, }) s3ItemInfoLambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(s3ItemInfo), s3ItemInfo, iamDynamicRole) s3ItemInfoLambdaFn.Options = \u0026amp;sparta.LambdaFunctionOptions{ Description: \u0026#34;Get information about an item in S3 via querystring params\u0026#34;, MemorySize: 128, Timeout: 10, } A few items to note here:\n We\u0026rsquo;re providing a custom LambdaFunctionOptions in case the request to S3 to get item metadata exceeds the default 3 second timeout. 
We also add a custom iamDynamicRole.Privileges entry to the Privileges slice that authorizes the lambda function to only access objects in a single bucket (resourceArn).  This bucket ARN is created externally and provided to this code. While the API will accept any bucketName value, it is only authorized to access a single bucket.    Resources The next step is to associate a URL path with the sparta.LambdaAWSInfo struct that represents the s3ItemInfo function. This will be the relative path component used to reference our lambda function via the API Gateway.\napiGatewayResource, _ := api.NewResource(\u0026#34;/info\u0026#34;, s3ItemInfoLambdaFn) method, err := apiGatewayResource.NewMethod(\u0026#34;GET\u0026#34;, http.StatusOK) if err != nil { return nil, err } Whitelist Input The final step is to add the whitelisted parameters to the Method definition.\n// Whitelist query string params method.Parameters[\u0026#34;method.request.querystring.keyName\u0026#34;] = true method.Parameters[\u0026#34;method.request.querystring.bucketName\u0026#34;] = true Note that the key names in the method.Parameters map must be of the form: method.request.{location}.{name} where location is one of:\n querystring path header  See the REST documentation for more information.\nProvision With everything configured, let\u0026rsquo;s provision the stack:\ngo run application.go --level debug provision --s3Bucket $S3_BUCKET and check the results.\nVerify As this Sparta application includes an API Gateway definition, the stack Outputs includes the API Gateway URL:\nINFO[0243] Stack Outputs ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ INFO[0243] APIGatewayURL Description=\u0026#34;API Gateway URL\u0026#34; Value=\u0026#34;https://xccmsl98p1.execute-api.us-west-2.amazonaws.com/v1\u0026#34; INFO[0243] Stack provisioned CreationTime=\u0026#34;2018-12-11 14:56:41.051 +0000 UTC\u0026#34; 
StackId=\u0026#34;arn:aws:cloudformation:us-west-2:123412341234:stack/SpartaImager-mweagle/f7b7d3e0-fd54-11e8-9064-0aa3372404a6\u0026#34; StackName=SpartaImager-mweagle INFO[0243] ════════════════════════════════════════════════ Let\u0026rsquo;s fetch an item we know exists:\n$ curl -vs \u0026quot;https://xccmsl98p1.execute-api.us-west-2.amazonaws.com/v1/info?keyName=twitterAvatar.jpg\u0026amp;bucketName=weagle-public\u0026quot; * Trying 13.32.254.241... * TCP_NODELAY set * Connected to xccmsl98p1.execute-api.us-west-2.amazonaws.com (13.32.254.241) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH * successfully set certificate verify locations: * CAfile: /etc/ssl/cert.pem CApath: none * TLSv1.2 (OUT), TLS handshake, Client hello (1): * TLSv1.2 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS change cipher, Client hello (1): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS change cipher, Client hello (1): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 * ALPN, server accepted to use h2 * Server certificate: * subject: CN=*.execute-api.us-west-2.amazonaws.com * start date: Oct 9 00:00:00 2018 GMT * expire date: Oct 9 12:00:00 2019 GMT * subjectAltName: host \u0026quot;xccmsl98p1.execute-api.us-west-2.amazonaws.com\u0026quot; matched cert's \u0026quot;*.execute-api.us-west-2.amazonaws.com\u0026quot; * issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon * SSL certificate verify ok. 
* Using HTTP2, server supports multi-use * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Using Stream ID: 1 (easy handle 0x7ff68b802c00) \u0026gt; GET /v1/info?keyName=twitterAvatar.jpg\u0026amp;bucketName=weagle-public HTTP/2 \u0026gt; Host: xccmsl98p1.execute-api.us-west-2.amazonaws.com \u0026gt; User-Agent: curl/7.54.0 \u0026gt; Accept: */* \u0026gt; * Connection state changed (MAX_CONCURRENT_STREAMS updated)! \u0026lt; HTTP/2 200 \u0026lt; content-type: application/json \u0026lt; content-length: 1539 \u0026lt; date: Tue, 11 Dec 2018 15:08:56 GMT \u0026lt; x-amzn-requestid: aded8786-fd56-11e8-836c-dff86eb3938d \u0026lt; access-control-allow-origin: * \u0026lt; access-control-allow-headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key \u0026lt; x-amz-apigw-id: Rv3pRH8jPHcFTfA= \u0026lt; access-control-allow-methods: * \u0026lt; x-amzn-trace-id: Root=1-5c0fd308-f576dae00848eb44535a5c70;Sampled=0 \u0026lt; x-cache: Miss from cloudfront \u0026lt; via: 1.1 8ddadd1ab84a7f1bef108d6a72eccf06.cloudfront.net (CloudFront) \u0026lt; x-amz-cf-id: OO01Dua9x5dHyXr-arKJ3LKu2ahbPYv5ESqUg2lAhlzLJDQTLVyW_A== \u0026lt; 
{\u0026quot;S3\u0026quot;:{\u0026quot;AcceptRanges\u0026quot;:\u0026quot;bytes\u0026quot;,\u0026quot;Body\u0026quot;:{},\u0026quot;CacheControl\u0026quot;:null,\u0026quot;ContentDisposition\u0026quot;:null,\u0026quot;ContentEncoding\u0026quot;:null,\u0026quot;ContentLanguage\u0026quot;:null,\u0026quot;ContentLength\u0026quot;:613560,\u0026quot;ContentRange\u0026quot;:null,\u0026quot;ContentType\u0026quot;:\u0026quot;image/jpeg\u0026quot;,\u0026quot;DeleteMarker\u0026quot;:null,\u0026quot;ETag\u0026quot;:\u0026quot;\\\u0026quot;7250a1802a5e2f94532b9ee38429a3fd\\\u0026quot;\u0026quot;,\u0026quot;Expiration\u0026quot;:null,\u0026quot;Expires\u0026quot;:null,\u0026quot;LastModified\u0026quot;:\u0026quot;2018-03-14T14:55:19Z\u0026quot;,\u0026quot;Metadata\u0026quot;:{},\u0026quot;MissingMeta\u0026quot;:null,\u0026quot;ObjectLockLegalHoldStatus\u0026quot;:null,\u0026quot;ObjectLockMode\u0026quot;:null,\u0026quot;ObjectLockRetainUntilDate\u0026quot;:null,\u0026quot;PartsCount\u0026quot;:null,\u0026quot;ReplicationStatus\u0026quot;:null,\u0026quot;RequestCharged\u0026quot;:null,\u0026quot;Restore\u0026quot;:null,\u0026quot;SSECustomerAlgorithm\u0026quot;:null,\u0026quot;SSECustomerKeyMD5\u0026quot;:null,\u0026quot;SSEKMSKeyId\u0026quot;:null,\u0026quot;ServerSideEncryption\u0026quot;:null,\u0026quot;StorageClass\u0026quot;:null,\u0026quot;TagCount\u0026quot;:null,\u0026quot;VersionId\u0026quot;:null,\u0026quot;WebsiteRedirectLocation\u0026quot;:null},\u0026quot;URL\u0026quot;:\u0026quot;https://weagle-public.s3.us-west-2.amazonaws.com/twitterAvatar.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256\u0026amp;X-Amz-Credential=ASIAQMUWTUUFF65WLRLE%2F20181211%2Fus-west-2%2Fs3%2Faws4_request\u0026amp;X-Amz-Date=20181211T150856Z\u0026amp;X-Amz-Expires=300\u0026amp;X-Amz-Security-Token=FQoGZXIvYXdzEIH%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDMMVITmbkwrrxznAHCL9AaUQwfC%2F%2F6go%2FKBZigDuI4BLLwJzqiwhquTZ9TR1oxVKOAA0h6WzWUEfjjOjZK56SFk3cIJ%2FjKIBmImKpTIGyN7fn48s6N51RFFxra2Mamrp1pDqEcP4VswnJH8C5Q7ZfmltJDiF
qLbd4FCQdgoGT228Ls49Uo24EyT%2B%2BTL%2Fl0sKTVYtI1MbGSK%2B%2BKZ6rpPEsyR%2FTuIdeDvA1P%2BRlMEyvr0NhO7Wpf7ZZMs3taNcUMQDRmARyIgAp87ziwIavUTaPqbgpGNqJ6XAO%2Byf3y0g9JurYj44HrwpLWmuF5g%2B%2FtLv8VikzqD8GuWARJuo%2BPlH54KmcMrbXBpLq9sZl2Io3KO%2F4AU%3D\u0026amp;X-Amz-SignedHeaders=host\u0026amp;X-Amz-Signature=88976d33d4cdefff02265e1f40e4d18005231672f1a6e41ad12733f0ce97e91b\u0026quot;} Pretty printing the response body:\n{ \u0026#34;S3\u0026#34;: { \u0026#34;AcceptRanges\u0026#34;: \u0026#34;bytes\u0026#34;, \u0026#34;Body\u0026#34;: {}, \u0026#34;CacheControl\u0026#34;: null, \u0026#34;ContentDisposition\u0026#34;: null, \u0026#34;ContentEncoding\u0026#34;: null, \u0026#34;ContentLanguage\u0026#34;: null, \u0026#34;ContentLength\u0026#34;: 613560, \u0026#34;ContentRange\u0026#34;: null, \u0026#34;ContentType\u0026#34;: \u0026#34;image/jpeg\u0026#34;, \u0026#34;DeleteMarker\u0026#34;: null, \u0026#34;ETag\u0026#34;: \u0026#34;\\\u0026#34;7250a1802a5e2f94532b9ee38429a3fd\\\u0026#34;\u0026#34;, \u0026#34;Expiration\u0026#34;: null, \u0026#34;Expires\u0026#34;: null, \u0026#34;LastModified\u0026#34;: \u0026#34;2018-03-14T14:55:19Z\u0026#34;, \u0026#34;Metadata\u0026#34;: {}, \u0026#34;MissingMeta\u0026#34;: null, \u0026#34;ObjectLockLegalHoldStatus\u0026#34;: null, \u0026#34;ObjectLockMode\u0026#34;: null, \u0026#34;ObjectLockRetainUntilDate\u0026#34;: null, \u0026#34;PartsCount\u0026#34;: null, \u0026#34;ReplicationStatus\u0026#34;: null, \u0026#34;RequestCharged\u0026#34;: null, \u0026#34;Restore\u0026#34;: null, \u0026#34;SSECustomerAlgorithm\u0026#34;: null, \u0026#34;SSECustomerKeyMD5\u0026#34;: null, \u0026#34;SSEKMSKeyId\u0026#34;: null, \u0026#34;ServerSideEncryption\u0026#34;: null, \u0026#34;StorageClass\u0026#34;: null, \u0026#34;TagCount\u0026#34;: null, \u0026#34;VersionId\u0026#34;: null, \u0026#34;WebsiteRedirectLocation\u0026#34;: null }, \u0026#34;URL\u0026#34;: 
\u0026#34;https://weagle-public.s3.us-west-2.amazonaws.com/twitterAvatar.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256\u0026amp;X-Amz-Credential=ASIAQMUWTUUFF65WLRLE%2F20181211%2Fus-west-2%2Fs3%2Faws4_request\u0026amp;X-Amz-Date=20181211T150856Z\u0026amp;X-Amz-Expires=300\u0026amp;X-Amz-Security-Token=FQoGZXIvYXdzEIH%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDMMVITmbkwrrxznAHCL9AaUQwfC%2F%2F6go%2FKBZigDuI4BLLwJzqiwhquTZ9TR1oxVKOAA0h6WzWUEfjjOjZK56SFk3cIJ%2FjKIBmImKpTIGyN7fn48s6N51RFFxra2Mamrp1pDqEcP4VswnJH8C5Q7ZfmltJDiFqLbd4FCQdgoGT228Ls49Uo24EyT%2B%2BTL%2Fl0sKTVYtI1MbGSK%2B%2BKZ6rpPEsyR%2FTuIdeDvA1P%2BRlMEyvr0NhO7Wpf7ZZMs3taNcUMQDRmARyIgAp87ziwIavUTaPqbgpGNqJ6XAO%2Byf3y0g9JurYj44HrwpLWmuF5g%2B%2FtLv8VikzqD8GuWARJuo%2BPlH54KmcMrbXBpLq9sZl2Io3KO%2F4AU%3D\u0026amp;X-Amz-SignedHeaders=host\u0026amp;X-Amz-Signature=88976d33d4cdefff02265e1f40e4d18005231672f1a6e41ad12733f0ce97e91b\u0026#34; } What about an item that we know doesn\u0026rsquo;t exist, but is in the bucket our lambda function has privileges to access:\n$ curl -vs \u0026#34;https://xccmsl98p1.execute-api.us-west-2.amazonaws.com/v1/info?keyName=NOT_HERE.jpg\u0026amp;bucketName=weagle-public\u0026#34; * Trying 13.32.254.241... 
* TCP_NODELAY set * Connected to xccmsl98p1.execute-api.us-west-2.amazonaws.com (13.32.254.241) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH * successfully set certificate verify locations: * CAfile: /etc/ssl/cert.pem CApath: none * TLSv1.2 (OUT), TLS handshake, Client hello (1): * TLSv1.2 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS change cipher, Client hello (1): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS change cipher, Client hello (1): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 * ALPN, server accepted to use h2 * Server certificate: * subject: CN=*.execute-api.us-west-2.amazonaws.com * start date: Oct 9 00:00:00 2018 GMT * expire date: Oct 9 12:00:00 2019 GMT * subjectAltName: host \u0026#34;xccmsl98p1.execute-api.us-west-2.amazonaws.com\u0026#34; matched cert\u0026#39;s \u0026#34;*.execute-api.us-west-2.amazonaws.com\u0026#34; * issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon * SSL certificate verify ok. * Using HTTP2, server supports multi-use * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Using Stream ID: 1 (easy handle 0x7f9e4f00b600) \u0026gt; GET /v1/info?keyName=twitterAvatarArgh.jpg\u0026amp;bucketName=weagle HTTP/2 \u0026gt; Host: xccmsl98p1.execute-api.us-west-2.amazonaws.com \u0026gt; User-Agent: curl/7.54.0 \u0026gt; Accept: */* \u0026gt; * Connection state changed (MAX_CONCURRENT_STREAMS updated)! 
\u0026lt; HTTP/2 404 \u0026lt; content-type: application/json \u0026lt; content-length: 177 \u0026lt; date: Tue, 11 Dec 2018 15:21:18 GMT \u0026lt; x-amzn-requestid: 675edef9-fd58-11e8-ae45-3fac75041f4d \u0026lt; access-control-allow-origin: * \u0026lt; access-control-allow-headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key \u0026lt; x-amz-apigw-id: Rv5dAETkvHcFvYg= \u0026lt; access-control-allow-methods: * \u0026lt; x-amzn-trace-id: Root=1-5c0fd5ec-1d8bba64519f71126c12b4d6;Sampled=0 \u0026lt; x-cache: Error from cloudfront \u0026lt; via: 1.1 4c4ed81695980f3c6829b9fd229bd0f8.cloudfront.net (CloudFront) \u0026lt; x-amz-cf-id: ZT5R4BUSAkZpT46s_wCjBImHsM3w6mHFlYG0lnfwONSkPCgxzOQ_lQ== \u0026lt; {\u0026#34;error\u0026#34;:\u0026#34;AccessDenied: Access Denied\\n\\tstatus code: 403, request id: A10C69E17E4C9D00, host id: pAnhP+tg9rDh0yP5FJyC8bSnj1GJJjJvAFXwiluW4yHnVvt5EvkvkpKA4UzjJmCoFyI8hGST6YE=\u0026#34;} * Connection #0 to host xccmsl98p1.execute-api.us-west-2.amazonaws.com left intact And finally, what if we try to access a bucket that our lambda function isn\u0026rsquo;t authorized to access:\n$ curl -vs \u0026#34;https://xccmsl98p1.execute-api.us-west-2.amazonaws.com/v1/info?keyName=NOT_HERE.jpg\u0026amp;bucketName=VERY_PRIVATE_BUCKET\u0026#34; * Trying 13.32.254.241... 
* TCP_NODELAY set * Connected to xccmsl98p1.execute-api.us-west-2.amazonaws.com (13.32.254.241) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH * successfully set certificate verify locations: * CAfile: /etc/ssl/cert.pem CApath: none * TLSv1.2 (OUT), TLS handshake, Client hello (1): * TLSv1.2 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS change cipher, Client hello (1): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS change cipher, Client hello (1): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 * ALPN, server accepted to use h2 * Server certificate: * subject: CN=*.execute-api.us-west-2.amazonaws.com * start date: Oct 9 00:00:00 2018 GMT * expire date: Oct 9 12:00:00 2019 GMT * subjectAltName: host \u0026#34;xccmsl98p1.execute-api.us-west-2.amazonaws.com\u0026#34; matched cert\u0026#39;s \u0026#34;*.execute-api.us-west-2.amazonaws.com\u0026#34; * issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon * SSL certificate verify ok. * Using HTTP2, server supports multi-use * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Using Stream ID: 1 (easy handle 0x7f9e4f00b600) \u0026gt; GET /v1/info?keyName=twitterAvatarArgh.jpg\u0026amp;bucketName=weagle HTTP/2 \u0026gt; Host: xccmsl98p1.execute-api.us-west-2.amazonaws.com \u0026gt; User-Agent: curl/7.54.0 \u0026gt; Accept: */* \u0026gt; * Connection state changed (MAX_CONCURRENT_STREAMS updated)! 
\u0026lt; HTTP/2 404 \u0026lt; content-type: application/json \u0026lt; content-length: 177 \u0026lt; date: Tue, 11 Dec 2018 15:21:18 GMT \u0026lt; x-amzn-requestid: 675edef9-fd58-11e8-ae45-3fac75041f4d \u0026lt; access-control-allow-origin: * \u0026lt; access-control-allow-headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key \u0026lt; x-amz-apigw-id: Rv5dAETkvHcFvYg= \u0026lt; access-control-allow-methods: * \u0026lt; x-amzn-trace-id: Root=1-5c0fd5ec-1d8bba64519f71126c12b4d6;Sampled=0 \u0026lt; x-cache: Error from cloudfront \u0026lt; via: 1.1 4c4ed81695980f3c6829b9fd229bd0f8.cloudfront.net (CloudFront) \u0026lt; x-amz-cf-id: ZT5R4BUSAkZpT46s_wCjBImHsM3w6mHFlYG0lnfwONSkPCgxzOQ_lQ== \u0026lt; {\u0026#34;error\u0026#34;:\u0026#34;AccessDenied: Access Denied\\n\\tstatus code: 403, request id: A10C69E17E4C9D00, host id: pAnhP+tg9rDh0yP5FJyC8bSnj1GJJjJvAFXwiluW4yHnVvt5EvkvkpKA4UzjJmCoFyI8hGST6YE=\u0026#34;} * Connection #0 to host xccmsl98p1.execute-api.us-west-2.amazonaws.com left intact Cleanup Before moving on, remember to decommission the service via:\ngo run application.go delete Conclusion We\u0026rsquo;ve now walked through a simple service that whitelists user input, uses IAM Roles to limit what S3 buckets a lambda function may access, and returns an application/json response to the caller.\n"
   337  },
   338  {
   339  	"uri": "/reference/apigateway/context/",
   340  	"title": "Request Context",
   341  	"tags": [],
   342  	"description": "",
   343  	"content": "This example demonstrates how to use the Context struct provided as part of the APIGatewayRequest. The SpartaGeoIP service will return Geo information based on the inbound request\u0026rsquo;s IP address.\nLambda Definition Our function will examine the inbound request, look up the user\u0026rsquo;s IP address in the GeoLite2 Database and return any information to the client.\nAs this function is only expected to be invoked from the API Gateway, we\u0026rsquo;ll unmarshal the inbound event:\nimport ( spartaAWSEvents \u0026#34;github.com/mweagle/Sparta/aws/events\u0026#34; spartaAPIGateway \u0026#34;github.com/mweagle/Sparta/aws/apigateway\u0026#34; ) func ipGeoLambda(ctx context.Context, apiRequest spartaAWSEvents.APIGatewayRequest) (*spartaAPIGateway.Response, error) { parsedIP := net.ParseIP(apiRequest.Context.Identity.SourceIP) record, err := dbHandle.City(parsedIP) if err != nil { return nil, err } We\u0026rsquo;ll then parse the inbound IP address from the Context and perform a lookup against the database handle opened in the init block:\nparsedIP := net.ParseIP(apiRequest.Context.Identity.SourceIP) record, err := dbHandle.City(parsedIP) if err != nil { return nil, err } Finally, marshal the data or error result and we\u0026rsquo;re done:\nrequestResponse := map[string]interface{}{ \u0026#34;ip\u0026#34;: parsedIP, \u0026#34;record\u0026#34;: record, } return spartaAPIGateway.NewResponse(http.StatusOK, requestResponse), nil Sparta Integration The next steps are to:\n Create the LambdaAWSInfo value Create an associated API Gateway Create an API Gateway resource that invokes our lambda function Add a Method name to the resource.  
These four steps are managed in the service\u0026rsquo;s main() function:\n//////////////////////////////////////////////////////////////////////////////// // Main func main() { stage := sparta.NewStage(\u0026#34;ipgeo\u0026#34;) apiGateway := sparta.NewAPIGateway(\u0026#34;SpartaGeoIPService\u0026#34;, stage) stackName := \u0026#34;SpartaGeoIP\u0026#34; var lambdaFunctions []*sparta.LambdaAWSInfo lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(ipGeoLambda), ipGeoLambda, sparta.IAMRoleDefinition{}) apiGatewayResource, _ := apiGateway.NewResource(\u0026#34;/info\u0026#34;, lambdaFn) apiMethod, _ := apiGatewayResource.NewMethod(\u0026#34;GET\u0026#34;, http.StatusOK, http.StatusOK) apiMethod.SupportedRequestContentTypes = []string{\u0026#34;application/json\u0026#34;} lambdaFunctions = append(lambdaFunctions, lambdaFn) sparta.Main(stackName, \u0026#34;Sparta app supporting ip-\u0026gt;geo mapping\u0026#34;, lambdaFunctions, apiGateway, nil) } Provision The next step is to provision the stack:\nS3_BUCKET=\u0026lt;MY-S3-BUCKETNAME\u0026gt; mage provision Assuming all goes well, the log output will include the API Gateway URL as in:\nINFO[0077] Stack Outputs ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ INFO[0077] APIGatewayURL Description=\u0026#34;API Gateway URL\u0026#34; Value=\u0026#34;https://52a5qqqwo4.execute-api.us-west-2.amazonaws.com/ipgeo\u0026#34; INFO[0077] Stack provisioned CreationTime=\u0026#34;2018-12-11 14:30:01.822 +0000 UTC\u0026#34; StackId=\u0026#34;arn:aws:cloudformation:us-west-2:123412341234:stack/SpartaGeoIP-mweagle/3e803cd0-fd51-11e8-9c7e-06972e890616\u0026#34; Stack Verify With the API Gateway provisioned, let\u0026rsquo;s check the response:\n$ curl -vs https://52a5qqqwo4.execute-api.us-west-2.amazonaws.com/ipgeo/info * Trying 13.32.254.81... 
* TCP_NODELAY set * Connected to 52a5qqqwo4.execute-api.us-west-2.amazonaws.com (13.32.254.81) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH * successfully set certificate verify locations: * CAfile: /etc/ssl/cert.pem CApath: none * TLSv1.2 (OUT), TLS handshake, Client hello (1): * TLSv1.2 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (IN), TLS handshake, Server key exchange (12): * TLSv1.2 (IN), TLS handshake, Server finished (14): * TLSv1.2 (OUT), TLS handshake, Client key exchange (16): * TLSv1.2 (OUT), TLS change cipher, Client hello (1): * TLSv1.2 (OUT), TLS handshake, Finished (20): * TLSv1.2 (IN), TLS change cipher, Client hello (1): * TLSv1.2 (IN), TLS handshake, Finished (20): * SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 * ALPN, server accepted to use h2 * Server certificate: * subject: CN=*.execute-api.us-west-2.amazonaws.com * start date: Oct 9 00:00:00 2018 GMT * expire date: Oct 9 12:00:00 2019 GMT * subjectAltName: host \u0026#34;52a5qqqwo4.execute-api.us-west-2.amazonaws.com\u0026#34; matched cert\u0026#39;s \u0026#34;*.execute-api.us-west-2.amazonaws.com\u0026#34; * issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon * SSL certificate verify ok. * Using HTTP2, server supports multi-use * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Using Stream ID: 1 (easy handle 0x7f8522804200) \u0026gt; GET /ipgeo/info HTTP/2 \u0026gt; Host: 52a5qqqwo4.execute-api.us-west-2.amazonaws.com \u0026gt; User-Agent: curl/7.54.0 \u0026gt; Accept: */* \u0026gt; * Connection state changed (MAX_CONCURRENT_STREAMS updated)! 
\u0026lt; HTTP/2 200 \u0026lt; content-type: application/json \u0026lt; content-length: 1103 \u0026lt; date: Tue, 11 Dec 2018 14:32:00 GMT \u0026lt; x-amzn-requestid: 851627ca-fd51-11e8-ba5d-9f30493b4ce1 \u0026lt; x-amz-apigw-id: RvyPBHuuPHcFx4w= \u0026lt; x-amzn-trace-id: Root=1-5c0fca60-2eecbee8bad756981052608c;Sampled=0 \u0026lt; x-cache: Miss from cloudfront \u0026lt; via: 1.1 400e19a7f70282e0817451f6606ca8f9.cloudfront.net (CloudFront) \u0026lt; x-amz-cf-id: l4gOpUjDylhS0yHwBWpMneD4BqLBv3zkWcjv6I0j2vBoQu6qD4gKyw== \u0026lt; {\u0026#34;ip\u0026#34;:\u0026#34;127.0.0.1\u0026#34;,\u0026#34;record\u0026#34;:{\u0026#34;City\u0026#34;:{\u0026#34;GeoNameID\u0026#34;:0,\u0026#34;Names\u0026#34;:null},\u0026#34;Continent\u0026#34;:{\u0026#34;Code\u0026#34;:\u0026#34;NA\u0026#34;,\u0026#34;GeoNameID\u0026#34;:6255149,\u0026#34;Names\u0026#34;:{\u0026#34;de\u0026#34;:\u0026#34;Nordamerika\u0026#34;,\u0026#34;en\u0026#34;:\u0026#34;North America\u0026#34;,\u0026#34;es\u0026#34;:\u0026#34;Norteamérica\u0026#34;,\u0026#34;fr\u0026#34;:\u0026#34;Amérique du Nord\u0026#34;,\u0026#34;ja\u0026#34;:\u0026#34;北アメリカ\u0026#34;,\u0026#34;pt-BR\u0026#34;:\u0026#34;América do Norte\u0026#34;,\u0026#34;ru\u0026#34;:\u0026#34;Северная Америка\u0026#34;,\u0026#34;zh-CN\u0026#34;:\u0026#34;北美洲\u0026#34;}},\u0026#34;Country\u0026#34;:{\u0026#34;GeoNameID\u0026#34;:6252001,\u0026#34;IsInEuropeanUnion\u0026#34;:false,\u0026#34;IsoCode\u0026#34;:\u0026#34;US\u0026#34;,\u0026#34;Names\u0026#34;:{\u0026#34;de\u0026#34;:\u0026#34;USA\u0026#34;,\u0026#34;en\u0026#34;:\u0026#34;United States\u0026#34;,\u0026#34;es\u0026#34;:\u0026#34;Estados Unidos\u0026#34;,\u0026#34;fr\u0026#34;:\u0026#34;États-Unis\u0026#34;,\u0026#34;ja\u0026#34;:\u0026#34;アメリカ合衆国\u0026#34;,\u0026#34;pt-BR\u0026#34;:\u0026#34;Estados 
Unidos\u0026#34;,\u0026#34;ru\u0026#34;:\u0026#34;США\u0026#34;,\u0026#34;zh-CN\u0026#34;:\u0026#34;美国\u0026#34;}},\u0026#34;Location\u0026#34;:{\u0026#34;AccuracyRadius\u0026#34;:0,\u0026#34;Latitude\u0026#34;:0,\u0026#34;Longitude\u0026#34;:0,\u0026#34;MetroCode\u0026#34;:0,\u0026#34;TimeZone\u0026#34;:\u0026#34;\u0026#34;},\u0026#34;Postal\u0026#34;:{\u0026#34;Code\u0026#34;:\u0026#34;\u0026#34;},\u0026#34;RegisteredCountry\u0026#34;:{\u0026#34;GeoNameID\u0026#34;:6252001,\u0026#34;IsInEuropeanUnion\u0026#34;:false,\u0026#34;IsoCode\u0026#34;:\u0026#34;US\u0026#34;,\u0026#34;Names\u0026#34;:{\u0026#34;de\u0026#34;:\u0026#34;USA\u0026#34;,\u0026#34;en\u0026#34;:\u0026#34;United States\u0026#34;,\u0026#34;es\u0026#34;:\u0026#34;Estados Unidos\u0026#34;,\u0026#34;fr\u0026#34;:\u0026#34;États-Unis\u0026#34;,\u0026#34;ja\u0026#34;:\u0026#34;アメリカ合衆国\u0026#34;,\u0026#34;pt-BR\u0026#34;:\u0026#34;Estados Unidos\u0026#34;,\u0026#34;ru\u0026#34;:\u0026#34;США\u0026#34;,\u0026#34;zh-CN\u0026#34;:\u0026#34;美国\u0026#34;}},\u0026#34;RepresentedCountry\u0026#34;:{\u0026#34;GeoNameID\u0026#34;:0,\u0026#34;IsInEuropeanUnion\u0026#34;:false,\u0026#34;IsoCode\u0026#34;:\u0026#34;\u0026#34;,\u0026#34;Names\u0026#34;:null,\u0026#34;Type\u0026#34;:\u0026#34;\u0026#34;},\u0026#34;Subdivisions\u0026#34;:null,\u0026#34;Traits\u0026#34;:{\u0026#34;IsAnonymousProxy\u0026#34;:false,\u0026#34;IsSatelliteProvider\u0026#34;:false}}} Pretty-printing the response body:\n{ \u0026#34;ip\u0026#34;: \u0026#34;127.0.0.1\u0026#34;, \u0026#34;record\u0026#34;: { \u0026#34;City\u0026#34;: { \u0026#34;GeoNameID\u0026#34;: 0, \u0026#34;Names\u0026#34;: null }, \u0026#34;Continent\u0026#34;: { \u0026#34;Code\u0026#34;: \u0026#34;NA\u0026#34;, \u0026#34;GeoNameID\u0026#34;: 6255149, \u0026#34;Names\u0026#34;: { \u0026#34;de\u0026#34;: \u0026#34;Nordamerika\u0026#34;, \u0026#34;en\u0026#34;: \u0026#34;North America\u0026#34;, \u0026#34;es\u0026#34;: \u0026#34;Norteamérica\u0026#34;, \u0026#34;fr\u0026#34;: 
\u0026#34;Amérique du Nord\u0026#34;, \u0026#34;ja\u0026#34;: \u0026#34;北アメリカ\u0026#34;, \u0026#34;pt-BR\u0026#34;: \u0026#34;América do Norte\u0026#34;, \u0026#34;ru\u0026#34;: \u0026#34;Северная Америка\u0026#34;, \u0026#34;zh-CN\u0026#34;: \u0026#34;北美洲\u0026#34; } }, \u0026#34;Country\u0026#34;: { \u0026#34;GeoNameID\u0026#34;: 6252001, \u0026#34;IsInEuropeanUnion\u0026#34;: false, \u0026#34;IsoCode\u0026#34;: \u0026#34;US\u0026#34;, \u0026#34;Names\u0026#34;: { \u0026#34;de\u0026#34;: \u0026#34;USA\u0026#34;, \u0026#34;en\u0026#34;: \u0026#34;United States\u0026#34;, \u0026#34;es\u0026#34;: \u0026#34;Estados Unidos\u0026#34;, \u0026#34;fr\u0026#34;: \u0026#34;États-Unis\u0026#34;, \u0026#34;ja\u0026#34;: \u0026#34;アメリカ合衆国\u0026#34;, \u0026#34;pt-BR\u0026#34;: \u0026#34;Estados Unidos\u0026#34;, \u0026#34;ru\u0026#34;: \u0026#34;США\u0026#34;, \u0026#34;zh-CN\u0026#34;: \u0026#34;美国\u0026#34; } }, \u0026#34;Location\u0026#34;: { \u0026#34;AccuracyRadius\u0026#34;: 0, \u0026#34;Latitude\u0026#34;: 0, \u0026#34;Longitude\u0026#34;: 0, \u0026#34;MetroCode\u0026#34;: 0, \u0026#34;TimeZone\u0026#34;: \u0026#34;\u0026#34; }, \u0026#34;Postal\u0026#34;: { \u0026#34;Code\u0026#34;: \u0026#34;\u0026#34; }, \u0026#34;RegisteredCountry\u0026#34;: { \u0026#34;GeoNameID\u0026#34;: 6252001, \u0026#34;IsInEuropeanUnion\u0026#34;: false, \u0026#34;IsoCode\u0026#34;: \u0026#34;US\u0026#34;, \u0026#34;Names\u0026#34;: { \u0026#34;de\u0026#34;: \u0026#34;USA\u0026#34;, \u0026#34;en\u0026#34;: \u0026#34;United States\u0026#34;, \u0026#34;es\u0026#34;: \u0026#34;Estados Unidos\u0026#34;, \u0026#34;fr\u0026#34;: \u0026#34;États-Unis\u0026#34;, \u0026#34;ja\u0026#34;: \u0026#34;アメリカ合衆国\u0026#34;, \u0026#34;pt-BR\u0026#34;: \u0026#34;Estados Unidos\u0026#34;, \u0026#34;ru\u0026#34;: \u0026#34;США\u0026#34;, \u0026#34;zh-CN\u0026#34;: \u0026#34;美国\u0026#34; } }, \u0026#34;RepresentedCountry\u0026#34;: { \u0026#34;GeoNameID\u0026#34;: 0, \u0026#34;IsInEuropeanUnion\u0026#34;: false, 
\u0026#34;IsoCode\u0026#34;: \u0026#34;\u0026#34;, \u0026#34;Names\u0026#34;: null, \u0026#34;Type\u0026#34;: \u0026#34;\u0026#34; }, \u0026#34;Subdivisions\u0026#34;: null, \u0026#34;Traits\u0026#34;: { \u0026#34;IsAnonymousProxy\u0026#34;: false, \u0026#34;IsSatelliteProvider\u0026#34;: false } } } Clean Up Before moving on, remember to decommission the service via go run main.go delete or mage delete.\nNotes  The GeoLite2-Country.mmdb content is embedded in the go binary via esc as part of the go generate phase.  This is a port of Tom Maiaroto\u0026rsquo;s go-lambda-geoip implementation.    "
   344  },
   345  {
   346  	"uri": "/reference/step/fargate/",
   347  	"title": "Fargate",
   348  	"tags": [],
   349  	"description": "",
   350  	"content": "Teaching people to do serverless is hard. It\u0026#39;s far less about teaching someone about FaaS and far more about getting people into the right mindset. It is not about technology, or at least, it\u0026#39;s not about doing technology\n\u0026mdash; Paul Johnston - knows some stuff about stuff (@PaulDJohnston) December 13, 2018  While Serverless and FaaS are often used interchangeably, there are types of workloads that are more challenging to move to FaaS. Perhaps due to third party libraries, latency, or storage requirements, the FaaS model isn\u0026rsquo;t an ideal fit. A commonly cited example is the need to run ffmpeg.\nTo benefit from the serverless model in these cases, Sparta provides the ability to leverage the Fargate service to run Containers without needing to manage servers.\nThere are several steps to Fargate-ifying your application, and Sparta exposes functions and hooks that scope those steps to a single provision operation.\nThose steps include:\n Make the application Task-aware Package your application in a Docker image Push the Docker image to Amazon ECR Reference the ECR URL in a Fargate Task Provision an ECS cluster that hosts the Task  This overview is based on the SpartaStepServicefull project. The implementation uses a combination of ServiceDecoratorHookHandlers to achieve the end result.\nPlease see servicefull_build.go for the most up-to-date version of code samples.\nTask Awareness The first step is to provide an opportunity for our application to behave differently when run as a Fargate task. 
To do this we add a new application subcommand option that augments the standard Main behavior:\n// Add a hook to do something fargateTask := \u0026amp;cobra.Command{ Use: \u0026#34;fargateTask\u0026#34;, Short: \u0026#34;Sample Fargate task\u0026#34;, Long: `Sample Fargate task that simply logs a message`, RunE: func(cmd *cobra.Command, args []string) error { fmt.Printf(\u0026#34;Insert your Fargate code here! 🎉\u0026#34;) return nil }, } // Register the command with the Sparta root dispatcher. This // command `fargateTask` matches the command line option in the // Dockerfile that is used to build the image. sparta.CommandLineOptions.Root.AddCommand(fargateTask) This subcommand is defined in the servicefull_task file. Note that the file uses go build tags so that the new fargateTask subcommand is only available when the build includes the lambdabinary build tag:\n// +build lambdabinary  package bootstrap We can now package our Task-aware executable and deploy it to the cloud.\nPackage The next step is to create a version of your application that can support a Fargate task. This is done in the ecrImageBuilderDecorator function, which delegates the compiling and image creation to Sparta:\n// Always build the image buildErr := spartaDocker.BuildDockerImage(serviceName, \u0026#34;\u0026#34;, dockerTags, logger) The second empty argument above is an optional Dockerfile path. The sample project uses the default Dockerfile filename and defines that at the root of the repository. 
The full Dockerfile is:\nFROM alpine:3.8 RUN apk update \u0026amp;\u0026amp; apk add ca-certificates \u0026amp;\u0026amp; rm -rf /var/cache/apk/* # Sparta provides the SPARTA_DOCKER_BINARY argument to the builder # in order to embed the binary. # Ref: https://docs.docker.com/engine/reference/builder/ ARG SPARTA_DOCKER_BINARY ADD $SPARTA_DOCKER_BINARY /SpartaServicefull CMD [\u0026#34;/SpartaServicefull\u0026#34;, \u0026#34;fargateTask\u0026#34;] The BuildDockerImage function supplies the transient binary executable path to docker via the SPARTA_DOCKER_BINARY ARG value.\nThe CMD instruction includes our previously registered fargateTask subcommand name to invoke the Task-appropriate codepath at runtime.\nThe log output includes the docker build info:\nINFO[0002] Calling WorkflowHook ServiceDecoratorHook=github.com/mweagle/SpartaStepServicefull/bootstrap.ecrImageBuilderDecorator.func1 WorkflowHookContext=\u0026quot;map[]\u0026quot; INFO[0002] Docker version 18.09.0, build 4d60db4 INFO[0002] Running `go generate` INFO[0002] Compiling binary Name=ServicefulStepFunction-1544976454011339000-docker.lambda.amd64 INFO[0003] Creating Docker image Tags=\u0026quot;map[servicefulstepfunction:adc67a77aef22b6dab9c6156d13853e2cfe06488.1544976453]\u0026quot; INFO[0004] Sending build context to Docker daemon 35.43MB INFO[0004] Step 1/5 : FROM alpine:3.8 INFO[0004] ---\u0026gt; 196d12cf6ab1 INFO[0004] Step 2/5 : RUN apk update \u0026amp;\u0026amp; apk add ca-certificates \u0026amp;\u0026amp; rm -rf /var/cache/apk/* INFO[0004] ---\u0026gt; Using cache INFO[0004] ---\u0026gt; 99402375b7f2 INFO[0004] Step 3/5 : ARG SPARTA_DOCKER_BINARY INFO[0004] ---\u0026gt; Using cache INFO[0004] ---\u0026gt; a44d27522c40 INFO[0004] Step 4/5 : ADD $SPARTA_DOCKER_BINARY /SpartaServicefull INFO[0005] ---\u0026gt; 87ffd10e9901 INFO[0005] Step 5/5 : CMD [\u0026quot;/SpartaServicefull\u0026quot;, \u0026quot;fargateTask\u0026quot;] INFO[0005] ---\u0026gt; Running in 0a3b503201c7 INFO[0005] Removing intermediate 
container 0a3b503201c7 INFO[0005] ---\u0026gt; 7cb1b2261a92 INFO[0005] Successfully built 7cb1b2261a92 INFO[0005] Successfully tagged servicefulstepfunction:adc67a77aef22b6dab9c6156d13853e2cfe06488.1544976453 Push to ECR The next step is to push the locally built image to the Elastic Container Registry. The push returns either the ECR URL, which will be used as the Fargate Task image property, or an error:\n// Push the image to ECR \u0026amp; store the URL s.t. we can properly annotate // the CloudFormation template ecrURLPush, pushImageErr := spartaDocker.PushDockerImageToECR(buildTag, ecrRepositoryName, awsSession, logger) The ECR push URL is stored in the context variable so that a downstream Fargate cluster builder knows the image to use:\ncontext[contextKeyImageURL] = ecrURLPush State Machine The Step Function definition indirectly references the Fargate Task via task-specific parameters:\nfargateParams := spartaStep.FargateTaskParameters{ LaunchType: \u0026#34;FARGATE\u0026#34;, Cluster: gocf.Ref(resourceNames.ECSCluster).String(), TaskDefinition: gocf.Ref(resourceNames.ECSTaskDefinition).String(), NetworkConfiguration: \u0026amp;spartaStep.FargateNetworkConfiguration{ AWSVPCConfiguration: \u0026amp;gocf.ECSServiceAwsVPCConfiguration{ Subnets: gocf.StringList( gocf.Ref(resourceNames.PublicSubnetAzs[0]).String(), gocf.Ref(resourceNames.PublicSubnetAzs[1]).String(), ), AssignPublicIP: gocf.String(\u0026#34;ENABLED\u0026#34;), }, }, } fargateState := spartaStep.NewFargateTaskState(\u0026#34;Run Fargate Task\u0026#34;, fargateParams) The ECSCluster and ECSTaskDefinition are resources that are provisioned by the fargateClusterDecorator decorator function.\nFargate Cluster The final step is to provision the ECS cluster that supports the Fargate task. This is encapsulated in the fargateClusterDecorator which creates the required set of CloudFormation resources. 
The set of CloudFormation resource names is represented in the stackResourceNames struct:\ntype stackResourceNames struct { StepFunction string SNSTopic string ECSCluster string ECSRunTaskRole string ECSTaskDefinition string ECSTaskDefinitionLogGroup string ECSTaskDefinitionRole string VPC string InternetGateway string AttachGateway string RouteViaIgw string PublicRouteViaIgw string ECSSecurityGroup string PublicSubnetAzs []string } The ECS Task Definition is of particular interest and is where the inline created ECR_URL is used to define a FARGATE task.\nECS Task Definition imageURL, _ := context[contextKeyImageURL].(string) if imageURL == \u0026#34;\u0026#34; { return errors.Errorf(\u0026#34;Failed to get image URL from context with key %s\u0026#34;, contextKeyImageURL) } ... // Create the ECS task definition ecsTaskDefinition := \u0026amp;gocf.ECSTaskDefinition{ ExecutionRoleArn: gocf.GetAtt(resourceNames.ECSTaskDefinitionRole, \u0026#34;Arn\u0026#34;), RequiresCompatibilities: gocf.StringList(gocf.String(\u0026#34;FARGATE\u0026#34;)), CPU: gocf.String(\u0026#34;256\u0026#34;), Memory: gocf.String(\u0026#34;512\u0026#34;), NetworkMode: gocf.String(\u0026#34;awsvpc\u0026#34;), ContainerDefinitions: \u0026amp;gocf.ECSTaskDefinitionContainerDefinitionList{ gocf.ECSTaskDefinitionContainerDefinition{ Image: gocf.String(imageURL), Name: gocf.String(\u0026#34;sparta-servicefull\u0026#34;), Essential: gocf.Bool(true), LogConfiguration: \u0026amp;gocf.ECSTaskDefinitionLogConfiguration{ LogDriver: gocf.String(\u0026#34;awslogs\u0026#34;), // Options Ref: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html  Options: map[string]interface{}{ \u0026#34;awslogs-region\u0026#34;: gocf.Ref(\u0026#34;AWS::Region\u0026#34;), \u0026#34;awslogs-group\u0026#34;: strings.Join([]string{\u0026#34;\u0026#34;, sparta.ProperName, serviceName}, \u0026#34;/\u0026#34;), \u0026#34;awslogs-stream-prefix\u0026#34;: serviceName, \u0026#34;awslogs-create-group\u0026#34;: 
\u0026#34;true\u0026#34;, }, }, }, } Configuration The last piece is to provide the three decorators to the WorkflowHooks structure:\nworkflowHooks := \u0026amp;sparta.WorkflowHooks{ ServiceDecorators: []sparta.ServiceDecoratorHookHandler{ ecrImageBuilderDecorator(\u0026#34;spartadocker\u0026#34;), // Then build the state machine  stateMachine.StateMachineDecorator(), // Then the ECS cluster that supports the Fargate task  fargateClusterDecorator(resourceNames), }, } Provisioning The provisioning workflow for this service is the same as a Lambda-based one:\n$ go run main.go provision --s3Bucket $MY_S3_BUCKET Output:\nINFO[0000] ════════════════════════════════════════════════ INFO[0000] ╔═╗╔═╗╔═╗╦═╗╔╦╗╔═╗ Version : 1.8.0 INFO[0000] ╚═╗╠═╝╠═╣╠╦╝ ║ ╠═╣ SHA : 597d3ba INFO[0000] ╚═╝╩ ╩ ╩╩╚═ ╩ ╩ ╩ Go : go1.11.1 INFO[0000] ════════════════════════════════════════════════ INFO[0000] Service: ServicefulStepFunction LinkFlags= Option=provision UTC=\u0026quot;2018-12-16T16:07:31Z\u0026quot; INFO[0000] ════════════════════════════════════════════════ INFO[0000] Using `git` SHA for StampedBuildID Command=\u0026quot;git rev-parse HEAD\u0026quot; SHA=adc67a77aef22b6dab9c6156d13853e2cfe06488 INFO[0000] Provisioning service BuildID=adc67a77aef22b6dab9c6156d13853e2cfe06488 CodePipelineTrigger= InPlaceUpdates=false NOOP=false Tags= WARN[0000] No lambda functions provided to Sparta.Provision() INFO[0000] Verifying IAM Lambda execution roles INFO[0000] IAM roles verified Count=0 Result The end result is a Step function that uses our go binary, Step functions, and SNS rather than Lambda functions:\n"
   351  },
   352  {
   353  	"uri": "/concepts/",
   354  	"title": "Concepts",
   355  	"tags": [],
   356  	"description": "Core Sparta Concepts",
   357  	"content": "This is a brief overview of Sparta\u0026rsquo;s core concepts. Additional information regarding specific features is available from the menu.\nTerms and Concepts At a high level, Sparta transforms a go binary\u0026rsquo;s registered lambda functions into a set of independently addressable AWS Lambda functions. Additionally, Sparta provides microservice authors an opportunity to satisfy other requirements such as defining the IAM Roles under which their function will execute in AWS, additional infrastructure requirements, and telemetry and alerting information (via CloudWatch).\nService Name Sparta applications are deployed as a single unit, using the ServiceName as a stable logical identifier. The ServiceName is used as your application\u0026rsquo;s CloudFormation StackName:\nstackName := \u0026#34;MyUniqueServiceName\u0026#34; sparta.Main(stackName, \u0026#34;Simple Sparta application\u0026#34;, myLambdaFunctions, nil, nil) Lambda Function A Sparta-compatible lambda is a standard AWS Lambda Go function. The following function signatures are supported:\n func () func () error func (TIn) error func () (TOut, error) func (context.Context) error func (context.Context, TIn) error func (context.Context) (TOut, error) func (context.Context, TIn) (TOut, error)  where the TIn and TOut parameters represent encoding/json un/marshallable types. Supplying an invalid signature will produce a runtime error as in:\nERRO[0000] Lambda function (Hello World) has invalid returns: handler returns a single value, but it does not implement error exit status 1 Privileges To support accessing other AWS resources in your go function, Sparta allows you to define and link IAM Roles with narrowly defined sparta.IAMRolePrivilege values. This allows you to define the minimal set of privileges under which your go function will execute. 
The Privilege.Resource field value may also be a StringExpression referencing a dynamically provisioned CloudFormation resource.\nlambdaFn.RoleDefinition.Privileges = append(lambdaFn.RoleDefinition.Privileges, sparta.IAMRolePrivilege{ Actions: []string{\u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:HeadObject\u0026#34;}, Resource: \u0026#34;arn:aws:s3:::MyS3Bucket\u0026#34;, }) Permissions To configure AWS Lambda Event Sources, Sparta provides both sparta.LambdaPermission and service-specific Permission types like sparta.CloudWatchEventsPermission. The service-specific Permission types automatically register your lambda function with the remote AWS service, using each service\u0026rsquo;s specific API.\ncloudWatchEventsPermission := sparta.CloudWatchEventsPermission{} cloudWatchEventsPermission.Rules = make(map[string]sparta.CloudWatchEventsRule, 0) cloudWatchEventsPermission.Rules[\u0026#34;Rate5Mins\u0026#34;] = sparta.CloudWatchEventsRule{ ScheduleExpression: \u0026#34;rate(5 minutes)\u0026#34;, } lambdaFn.Permissions = append(lambdaFn.Permissions, cloudWatchEventsPermission) Decorators Decorators are associated with either Lambda functions or the larger service workflow via WorkflowHooks. They are user-defined functions that provide an opportunity for your service to perform secondary actions such as automatically generating a CloudFormation Dashboard or automatically publishing an S3 Artifact from your service.\nDecorators are applied at provision time.\nInterceptors Interceptors are the runtime analog to Decorators. They are user-defined functions that are executed in the context of handling an event. 
They provide an opportunity for you to support cross-cutting concerns such as automatically registering XRayTraces that can capture service performance and log messages in the event of an error.\nmermaid.initialize({startOnLoad:true}); graph TD classDef stdOp fill:#FFF,stroke:#A00,stroke-width:2px; classDef userHook fill:#B5B2A1,stroke:#A00,stroke-width:2px,stroke-dasharray: 5, 5; execute[Execute] class execute stdOp; lookup[Lookup Function] class lookup stdOp; call[Call Function] class call stdOp; interceptorBegin[Interceptor Begin] class interceptorBegin userHook; populateLogger[Logger into context] class populateLogger stdOp; interceptorBeforeSetup[Interceptor BeforeSetup] class interceptorBeforeSetup userHook; populateContext[Populate context] class populateContext stdOp; interceptorAfterSetup[Interceptor AfterSetup] class interceptorAfterSetup userHook; unmarshalArgs[Introspect Arguments] class unmarshalArgs stdOp; interceptorBeforeDispatch[Interceptor BeforeDispatch] class interceptorBeforeDispatch userHook; callFunction[Call Function] class callFunction stdOp; interceptorAfterDispatch[Interceptor AfterDispatch] class interceptorAfterDispatch userHook; extractReturn[Extract Function Return] class extractReturn stdOp; interceptorComplete[Interceptor Complete] class interceptorComplete userHook; done[Done] class done stdOp; execute--\u0026gt;lookup lookup--\u0026gt;call call--\u0026gt;interceptorBegin interceptorBegin--\u0026gt;populateLogger populateLogger--\u0026gt;interceptorBeforeSetup interceptorBeforeSetup--\u0026gt;populateContext populateContext--\u0026gt;interceptorAfterSetup interceptorAfterSetup--\u0026gt;unmarshalArgs unmarshalArgs--\u0026gt;interceptorBeforeDispatch interceptorBeforeDispatch--\u0026gt;callFunction callFunction--\u0026gt;interceptorAfterDispatch interceptorAfterDispatch--\u0026gt;extractReturn extractReturn--\u0026gt;interceptorComplete interceptorComplete--\u0026gt;done  This diagram is rendered with Mermaid. Please open an issue if it doesn't render properly. 
Dynamic Resources Sparta applications can specify other AWS Resources (eg, SNS Topics) as part of their service. The dynamic resource outputs can be referenced by Sparta lambda functions via gocf.Ref and gocf.GetAtt functions.\nsnsTopicName := sparta.CloudFormationResourceName(\u0026#34;SNSDynamicTopic\u0026#34;) snsTopic := \u0026amp;gocf.SNSTopic{ DisplayName: gocf.String(\u0026#34;Sparta Application SNS topic\u0026#34;), } lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoDynamicSNSEvent), echoDynamicSNSEvent, sparta.IAMRoleDefinition{}) lambdaFn.Permissions = append(lambdaFn.Permissions, sparta.SNSPermission{ BasePermission: sparta.BasePermission{ SourceArn: gocf.Ref(snsTopicName), }, }) Discovery To support Sparta lambda functions discovering dynamically assigned AWS resource values, Sparta provides sparta.Discover. This function returns information about the resources with which a given entity has a DependsOn relationship.\nfunc echoS3DynamicBucketEvent(ctx context.Context, s3Event awsLambdaEvents.S3Event) (*awsLambdaEvents.S3Event, error) { discoveryInfo, discoveryInfoErr := sparta.Discover() logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: s3Event, \u0026#34;Discovery\u0026#34;: discoveryInfo, \u0026#34;DiscoveryErr\u0026#34;: discoveryInfoErr, }).Info(\u0026#34;Event received\u0026#34;) // Use discoveryInfo to determine the bucket name to which RawMessage should be stored  ... } Summary Given a set of registered Sparta lambda functions, a typical provision build to create a new service follows this workflow. 
Items with dashed borders are opt-in user behaviors.\nmermaid.initialize({startOnLoad:true}); graph TD classDef stdOp fill:#FFF,stroke:#A00,stroke-width:2px; classDef userHook fill:#B5B2A1,stroke:#A00,stroke-width:2px,stroke-dasharray: 5, 5; iam[Verify Static IAM Roles] class iam stdOp; preBuild[WorkflowHook - PreBuild] class preBuild userHook; compile[Compile for AWS Lambda Container] postBuild[WorkflowHook - PostBuild] class postBuild userHook; package[ZIP archive] class package stdOp; userArchive[WorkflowHook - Archive] class userArchive userHook; upload[Upload Archive to S3] packageAssets[Conditionally ZIP S3 Site Assets] uploadAssets[Upload S3 Assets] class upload,packageAssets,uploadAssets stdOp; preMarshall[WorkflowHook - PreMarshall] class preMarshall userHook; generate[Marshal to CloudFormation] class generate stdOp; decorate[Call Lambda Decorators - Dynamic AWS Resources] class decorate stdOp; serviceDecorator[Service Decorator] class serviceDecorator userHook; postMarshall[WorkflowHook - PostMarshall] class postMarshall stdOp; uploadTemplate[Upload Template to S3] updateStack[Create/Update Stack] inplaceUpdates[In-place λ code updates] wait[Wait for Complete/Failure Result] class uploadTemplate,updateStack,inplaceUpdates,wait stdOp; iam--preBuild preBuild--|go|compile compile--postBuild postBuild--package package--packageAssets package--userArchive userArchive--upload packageAssets--uploadAssets uploadAssets--generate upload--generate generate--preMarshall preMarshall--decorate decorate--serviceDecorator serviceDecorator--postMarshall postMarshall--uploadTemplate uploadTemplate--|standard|updateStack uploadTemplate--|inplace|inplaceUpdates updateStack--wait  This diagram is rendered with Mermaid. Please open an issue if it doesn't render properly. 
During provisioning, Sparta uses AWS Lambda-backed Custom Resources to support operations that CloudFormation doesn\u0026rsquo;t yet support (e.g., API Gateway creation).\nNext Steps Walk through a starting Sparta Application.\n"
   358  },
   359  {
   360  	"uri": "/reference/apigateway/cors/",
   361  	"title": "CORS",
   362  	"tags": [],
   363  	"description": "",
   364  	"content": "Cross Origin Resource Sharing defines a protocol by which resources on different domains may establish whether cross-site operations are permissible.\nSparta reduces CORS support to a single CORSEnabled field of the API struct:\n// Register the function with the API Gateway apiStage := sparta.NewStage(\u0026#34;v1\u0026#34;) apiGateway := sparta.NewAPIGateway(\u0026#34;SpartaHTML\u0026#34;, apiStage) // Enable CORS s.t. the S3 site can access the resources apiGateway.CORSEnabled = true Setting the boolean to true will add the necessary OPTIONS and mock responses to all resources exposed by your API. See the SpartaHTML sample for a complete example.\nCustomization Sparta provides two ways to customize the CORS headers available:\n Via the apigateway.CORSOptions field. Customization may use the S3Site.CloudformationS3ResourceName to get the WebsiteURL value so that the CORS origin options can be minimally scoped.  References  API Gateway Docs  "
   365  },
   366  {
   367  	"uri": "/reference/operations/deployment_strategies/",
   368  	"title": "Deployment Strategies",
   369  	"tags": [],
   370  	"description": "",
   371  	"content": " Document the SpartaSafeDeploy example.\n "
   372  },
   373  {
   374  	"uri": "/example_service/step2/",
   375  	"title": "Details",
   376  	"tags": [],
   377  	"description": "",
   378  	"content": "The Overview walked through a simple \u0026ldquo;Hello World\u0026rdquo; example. In this section we\u0026rsquo;ll cover how Sparta works in preparation for moving on to more advanced use. Most development will use the provision command line argument, so this section will outline exactly what that entails.\nProvisioning Flow The provisioning workflow is defined in provision.go, with a goal of marshalling all AWS operations into a CloudFormation template. Where CloudFormation does not support a given service, Sparta falls back to using Lambda-backed Custom Resources in the template definition.\nAt a high level, provisioning uses the flow below. We\u0026rsquo;ll dive a bit deeper into each stage in the following sections.\nmermaid.initialize({startOnLoad:true}); graph TD classDef stdOp fill:#FFF,stroke:#A00,stroke-width:2px; classDef userHook fill:#B5B2A1,stroke:#A00,stroke-width:2px,stroke-dasharray: 5, 5; iam[Verify Static IAM Roles] class iam stdOp; preBuild[WorkflowHook - PreBuild] class preBuild userHook; compile[Compile for AWS Lambda Container] postBuild[WorkflowHook - PostBuild] class postBuild userHook; package[ZIP archive] class package stdOp; userArchive[WorkflowHook - Archive] class userArchive userHook; upload[Upload Archive to S3] packageAssets[Conditionally ZIP S3 Site Assets] uploadAssets[Upload S3 Assets] class upload,packageAssets,uploadAssets stdOp; preMarshall[WorkflowHook - PreMarshall] class preMarshall userHook; generate[Marshal to CloudFormation] class generate stdOp; decorate[Call Lambda Decorators - Dynamic AWS Resources] class decorate stdOp; serviceDecorator[Service Decorator] class serviceDecorator userHook; postMarshall[WorkflowHook - PostMarshall] class postMarshall stdOp; uploadTemplate[Upload Template to S3] updateStack[Create/Update Stack] inplaceUpdates[In-place λ code updates] wait[Wait for Complete/Failure Result] class uploadTemplate,updateStack,inplaceUpdates,wait stdOp; iam--preBuild preBuild--|go|compile 
compile--postBuild postBuild--package package--packageAssets package--userArchive userArchive--upload packageAssets--uploadAssets uploadAssets--generate upload--generate generate--preMarshall preMarshall--decorate decorate--serviceDecorator serviceDecorator--postMarshall postMarshall--uploadTemplate uploadTemplate--|standard|updateStack uploadTemplate--|inplace|inplaceUpdates updateStack--wait  This diagram is rendered with Mermaid. Please open an issue if it doesn't render properly. Verify Static IAM Roles The NewAWSLambda function accepts either a string or a sparta.IAMRoleDefinition value type. In the event that a string is passed, this function verifies that the IAM role exists and builds up a cache of IAM role information that can be shared and referenced during template generation. Specifically, a pre-existing IAM Role ARN is cached to minimize AWS calls during template generation.\nCompile The next step is to cross-compile the application to a binary that can be executed on an AWS Lambda instance. The default compile flags are:\n TAGS: -tags lambdabinary ENVIRONMENT: GOOS=linux GOARCH=amd64  The build output is created in the ./.sparta/ working directory. The full set of build flags is available by running the provision workflow with the --level debug option.\nPackage The end result of the package phase is a ZIP archive of your application. You can inspect this archive, as well as any other Sparta artifacts in the ./.sparta directory, by supplying the --noop argument during a provision operation.\nUpload Archive To S3 Uploads the archive to S3. There\u0026rsquo;s not much else to see here.\nGenerate CloudFormation Template Uploading the archive produces a valid CodeURI value that can be used for Lambda function creation. The CloudFormation template is generated by marshaling the sparta.LambdaAWSInfo objects into CloudFormation JSON representations.\nThe AWS Lambda marshaling is automatically handled. 
This is also the point at which the optional TemplateDecorator functions are called to annotate the automatically generated template with additional resources.\nUpload CloudFormation Template to S3 Uploads the template to S3; referencing the template by its S3 URL allows the maximum CloudFormation template size. There\u0026rsquo;s not much else to see here.\nCreate/Update Stack Finally, the provisioning workflow determines whether a stack with the Sparta serviceName already exists and either creates or updates it as appropriate using the CloudFormation APIs.\nNext Steps Now that we\u0026rsquo;ve covered how Sparta handles provisioning your stack, we\u0026rsquo;re ready to expand functionality to leverage more of the AWS ecosystem in the next section.\n"
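The Compile step's flags can be sketched as an os/exec invocation. crossCompileCmd is a hypothetical helper, and Sparta's actual build assembles additional flags (visible via --level debug):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// crossCompileCmd constructs (but does not run) the kind of `go build`
// invocation the Compile step performs: the lambdabinary build tag plus
// a linux/amd64 target environment, per the documented defaults.
func crossCompileCmd(outputPath string) *exec.Cmd {
	cmd := exec.Command("go", "build",
		"-tags", "lambdabinary",
		"-o", outputPath, ".")
	// Inherit the current environment, overriding the target platform
	cmd.Env = append(os.Environ(), "GOOS=linux", "GOARCH=amd64")
	return cmd
}

func main() {
	cmd := crossCompileCmd(".sparta/Sparta.lambda.amd64")
	fmt.Println(cmd.Args)
}
```

Because later GOOS/GOARCH entries in Env override earlier ones, appending to os.Environ() is enough to retarget the build without disturbing the caller's shell.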
   379  },
   380  {
   381  	"uri": "/reference/apigateway/slack/",
   382  	"title": "Slack SlashCommand",
   383  	"tags": [],
   384  	"description": "",
   385  	"content": "In this example, we\u0026rsquo;ll walk through creating a Slack Slash Command service. The source for this is the SpartaSlackbot repo.\nOur initial command handler won\u0026rsquo;t be very sophisticated, but will show the steps necessary to provision and configure a Sparta AWS Gateway-enabled Lambda function.\nDefine the Lambda Function This lambda handler is a bit more complicated than the other examples, primarily because of the Slack Integration requirements. The full source is:\nimport ( spartaAWSEvents \u0026#34;github.com/mweagle/Sparta/aws/events\u0026#34; ) //////////////////////////////////////////////////////////////////////////////// // Hello world event handler // func helloSlackbot(ctx context.Context, apiRequest spartaAWSEvents.APIGatewayRequest) (map[string]interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) bodyParams, bodyParamsOk := apiRequest.Body.(map[string]interface{}) if !bodyParamsOk { return nil, fmt.Errorf(\u0026#34;Failed to type convert body. Type: %T\u0026#34;, apiRequest.Body) } logger.WithFields(logrus.Fields{ \u0026#34;BodyType\u0026#34;: fmt.Sprintf(\u0026#34;%T\u0026#34;, bodyParams), \u0026#34;BodyValue\u0026#34;: fmt.Sprintf(\u0026#34;%+v\u0026#34;, bodyParams), }).Info(\u0026#34;Slack slashcommand values\u0026#34;) // 2. Create the response  // Slack formatting:  // https://api.slack.com/docs/formatting  responseText := \u0026#34;Here\u0026#39;s what I understood\u0026#34; for eachKey, eachParam := range bodyParams { responseText += fmt.Sprintf(\u0026#34;\\n*%s*: %+v\u0026#34;, eachKey, eachParam) } // 4. 
Setup the response object:  // https://api.slack.com/slash-commands, \u0026#34;Responding to a command\u0026#34;  responseData := map[string]interface{}{ \u0026#34;response_type\u0026#34;: \u0026#34;in_channel\u0026#34;, \u0026#34;text\u0026#34;: responseText, \u0026#34;mrkdwn\u0026#34;: true, } return responseData, nil } There are a few things to note in this code:\n Custom Event Type   The inbound Slack POST request is application/x-www-form-urlencoded data. This data is unmarshalled into the same spartaAWSEvents.APIGatewayRequest using a customized mapping template.    Response Formatting The lambda function extracts all Slack parameters and, if defined, sends the text back with a bit of Slack Message Formatting:\n ```go responseText := \u0026quot;Here's what I understood\u0026quot; for eachKey, eachParam := range bodyParams { responseText += fmt.Sprintf(\u0026quot;\\n*%s*: %+v\u0026quot;, eachKey, eachParam) } ```    Custom Response\n    The Slack API expects a JSON formatted response, which is created in step 4:\n```go responseData := sparta.ArbitraryJSONObject{ \u0026quot;response_type\u0026quot;: \u0026quot;in_channel\u0026quot;, \u0026quot;text\u0026quot;: responseText, } ```    Create the API Gateway With our lambda function defined, we need to set up an API Gateway so that it\u0026rsquo;s publicly available:\napiStage := sparta.NewStage(\u0026#34;v1\u0026#34;) apiGateway := sparta.NewAPIGateway(\u0026#34;SpartaSlackbot\u0026#34;, apiStage) The apiStage value implies that we want to deploy this API Gateway Rest API as part of Sparta\u0026rsquo;s provision step.\nCreate Lambda Binding \u0026amp; Resource Next we create a sparta.LambdaAWSInfo struct that references the helloSlackbot function:\nfunc spartaLambdaFunctions(api *sparta.API) []*sparta.LambdaAWSInfo { var lambdaFunctions []*sparta.LambdaAWSInfo lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(helloSlackbot), helloSlackbot, iamDynamicRole) if nil != api { apiGatewayResource, _ := 
api.NewResource(\u0026#34;/slack\u0026#34;, lambdaFn) _, err := apiGatewayResource.NewMethod(\u0026#34;POST\u0026#34;, http.StatusCreated) if nil != err { panic(\u0026#34;Failed to create /hello resource\u0026#34;) } } return append(lambdaFunctions, lambdaFn) } A few items to note here:\n We\u0026rsquo;re using an empty sparta.IAMRoleDefinition{} definition because our go lambda function doesn\u0026rsquo;t access any additional AWS services. Our lambda function will be accessible at the /slack child path of the deployed API Gateway instance Slack supports both GET and POST integration types, but we\u0026rsquo;re limiting our lambda function to POST only  Provision With everything configured, we then configure our main() function to forward to Sparta:\nfunc main() { // Register the function with the API Gateway  apiStage := sparta.NewStage(\u0026#34;v1\u0026#34;) apiGateway := sparta.NewAPIGateway(\u0026#34;SpartaSlackbot\u0026#34;, apiStage) // Deploy it  sparta.Main(\u0026#34;SpartaSlackbot\u0026#34;, fmt.Sprintf(\u0026#34;Sparta app that responds to Slack commands\u0026#34;), spartaLambdaFunctions(apiGateway), apiGateway, nil) } and provision the service:\nS3_BUCKET=\u0026lt;MY_S3_BUCKETNAME\u0026gt; go run slack.go --level info provision Look for the Stack output section of the log, you\u0026rsquo;ll need the APIGatewayURL value to configure Slack in the next step.\nINFO[0083] Stack output Description=API Gateway URL Key=APIGatewayURL Value=https://75mtsly44i.execute-api.us-west-2.amazonaws.com/v1 INFO[0083] Stack output Description=Sparta Home Key=SpartaHome Value=https://github.com/mweagle/Sparta INFO[0083] Stack output Description=Sparta Version Key=SpartaVersion Value=0.1.3 Configure Slack At this point our lambda function is deployed and is available through the API Gateway (_https://75mtsly44i.execute-api.us-west-2.amazonaws.com/v1/slack_ in the current example).\nThe next step is to configure Slack with this custom integration:\n Visit 
https://slack.com/apps/build and choose the \u0026ldquo;Custom Integration\u0026rdquo; option:  ![Custom integration](/images/apigateway/slack/customIntegration.jpg)   On the next page, choose \u0026ldquo;Slash Commands\u0026rdquo;:  ![Slash Commands](/images/apigateway/slack/slashCommandMenu.jpg)   The next screen is where you input the command that will trigger your lambda function. Enter /sparta  ![Slash Chose Command](/images/apigateway/slack/chooseCommand.jpg) - and click the \u0026quot;Add Slash Command Integration\u0026quot; button.   Finally, scroll down the next page to the Integration Settings section and provide the API Gateway URL of your lambda function.  ![Slash URL](/images/apigateway/slack/integrationSettings.jpg) * Leave the _Method_ field unchanged (it should be `POST`), to match how we configured the API Gateway entry above.   Save it  ![Save it](/images/apigateway/slack/saveIntegration.jpg)  There are additional Slash Command Integration options, but for this example the URL option is sufficient to trigger our command.\nTest With everything configured, visit your team\u0026rsquo;s Slack room and verify the integration via /sparta slash command:\nCleaning Up Before moving on, remember to decommission the service via:\ngo run slack.go delete Wrapping Up This example provides a good overview of Sparta \u0026amp; Slack integration, including how to handle external requests that are not application/json formatted.\n"
   386  },
   387  {
   388  	"uri": "/reference/operations/metrics_publisher/",
   389  	"title": "Metrics Publisher",
   390  	"tags": [],
   391  	"description": "",
   392  	"content": "AWS Lambda is tightly integrated with other AWS services and provides excellent opportunities for improving your service\u0026rsquo;s observability posture. Sparta includes a CloudWatch Metrics publisher that periodically publishes metrics to CloudWatch.\nThis periodic task publishes environment-level metrics that have been detected by the gopsutil package. Metrics include:\n CPU  Percent used   Disk  Percent used   Host  Uptime (milliseconds)   Load  Load1 (no units) Load5 (no units) Load15 (no units)   Network  NetBytesSent (bytes) NetBytesRecv (bytes) NetErrin (count) NetErrout (count)    You can provide an optional map[string]string set of dimensions to which the metrics should be published. This enables targeted alert conditions that can be used to improve system resiliency.\nTo register the metric publisher, call RegisterLambdaUtilizationMetricPublisher at some point in your main() call graph. For example:\nimport spartaCloudWatch \u0026#34;github.com/mweagle/Sparta/aws/cloudwatch\u0026#34; func main() { ... spartaCloudWatch.RegisterLambdaUtilizationMetricPublisher(map[string]string{ \u0026#34;BuildId\u0026#34;: sparta.StampedBuildID, }) ... } The optional map[string]string parameter provides the custom Name-Value pairs to use as CloudWatch Dimensions.\n"
   393  },
   394  {
   395  	"uri": "/reference/operations/profiling/",
   396  	"title": "Profiling",
   397  	"tags": [],
   398  	"description": "",
   399  	"content": "One of Lambda\u0026rsquo;s biggest strengths, its ability to automatically scale across ephemeral containers in response to increased load, also creates one of its biggest problems: observability. The traditional set of tools used to identify performance bottlenecks are no longer valid, as there is no host into which one can SSH and interactively interrogate. Identifying performance bottlenecks is even more significant due to the Lambda pricing model, where idle time often directly translates into increased costs.\nHowever, Go offers the excellent pprof tool to visualize and cost allocate program execution. Beginning with Sparta 0.20.4, it\u0026rsquo;s possible to enable per-lambda instance snapshotting which can be locally visualized. This documentation provides an overview of how to enable profiling. The full source is available at the SpartaPProf repo.\nTo learn more about pprof itself, please visit:\n @rakyll\u0026rsquo;s blog Profiling Go programs with pprof Profiling Golang Profiling and optimizing Go web programs  Enabling Profiling To enable profiling add a reference to ScheduleProfileLoop in your main() function as in:\nsparta.ScheduleProfileLoop(nil, 5*time.Second, 30*time.Second, \u0026#34;goroutine\u0026#34;, \u0026#34;heap\u0026#34;, \u0026#34;threadcreate\u0026#34;, \u0026#34;block\u0026#34;, \u0026#34;mutex\u0026#34;) This function accepts the following arguments:\n s3Bucket: The S3 bucket to which profile snapshots should be written. If nil, the bucket used to host the original ZIP code archive is used. snapshotInterval - The interval between each snapshot. cpuProfileDuration - The duration for the CPUProfile sample. profileNames... - The profile types to snapshot. In addition to the standard profiles, Sparta includes a \u0026ldquo;cpu\u0026rdquo; profile iff the cpuProfileDuration is non-zero.  
Profiling Implementation During the provision step, the ScheduleProfileLoop adds an IAMRolePrivilege Allow entry (if possible) to each Lambda function\u0026rsquo;s IAM policy. This policy extension is a minimal privilege and only enables s3:PutObject against the Sparta managed key prefix (see below).\nThe provision implementation also annotates the Lambda\u0026rsquo;s Environment map so that the publishing loop knows where to publish snapshots.\nDuring the execute step when the Sparta binary is executing in AWS Lambda, the ScheduleProfileLoop installs the requested sampling and publishing steps so that profile snapshots, serialized as proto files, are properly saved to S3. Profiles are published to a reserved location in S3 with the form:\ns3://{BUCKET_NAME}/sparta/pprof/{STACK_NAME}/profiles/{PROFILE_TYPE}/{SNAPSHOT_INDEX}-{PROFILE_TYPE}.λ-{INSTANCE_ID}.profile\nTo manage profile sprawl, each lambda instance uses a rolling SNAPSHOT_INDEX to maintain a fixed-size window. The new profile command is responsible for aggregating them into a single local, consolidated profile that can be visualized using the existing tools.\nDeploying With profiling enabled, the next step is to deploy the SpartaPProf service using the provision command:\n$ go run main.go provision --s3Bucket MY-S3-BUCKET INFO[0000] ════════════════════════════════════════════════ INFO[0000] ╔═╗╔═╗╔═╗╦═╗╔╦╗╔═╗ Version : 1.4.0 INFO[0000] ╚═╗╠═╝╠═╣╠╦╝ ║ ╠═╣ SHA : 8f199e1 INFO[0000] ╚═╝╩ ╩ ╩╩╚═ ╩ ╩ ╩ Go : go1.11.1 INFO[0000] ════════════════════════════════════════════════ INFO[0000] Service: SpartaPProf-mweagle LinkFlags= Option=provision UTC=\u0026#34;2018-10-11T14:59:48Z\u0026#34; INFO[0000] ════════════════════════════════════════════════ INFO[0000] Using `git` SHA for StampedBuildID Command=\u0026#34;git rev-parse HEAD\u0026#34; SHA=c3fbe8c289c3184efec842dca56b9bf541f39d21 INFO[0000] Provisioning service BuildID=c3fbe8c289c3184efec842dca56b9bf541f39d21 CodePipelineTrigger= InPlaceUpdates=false 
NOOP=false Tags= INFO[0000] Verifying IAM Lambda execution roles INFO[0000] IAM roles verified Count=2 INFO[0000] Checking S3 versioning Bucket=MY-S3-BUCKET VersioningEnabled=true INFO[0000] Checking S3 region Bucket=MY-S3-BUCKET Region=us-west-2 INFO[0000] Running `go generate` INFO[0000] Compiling binary Name=Sparta.lambda.amd64 INFO[0002] Creating code ZIP archive for upload TempName=./.sparta/SpartaPProf_mweagle-code.zip INFO[0002] Lambda code archive size Size=\u0026#34;17 MB\u0026#34; INFO[0002] Uploading local file to S3 Bucket=MY-S3-BUCKET Key=SpartaPProf-mweagle/SpartaPProf_mweagle-code.zip Path=./.sparta/SpartaPProf_mweagle-code.zip Size=\u0026#34;17 MB\u0026#34; INFO[0009] Calling WorkflowHook ServiceDecoratorHook= WorkflowHookContext=\u0026#34;map[]\u0026#34; INFO[0009] Uploading local file to S3 Bucket=MY-S3-BUCKET Key=SpartaPProf-mweagle/SpartaPProf_mweagle-cftemplate.json Path=./.sparta/SpartaPProf_mweagle-cftemplate.json Size=\u0026#34;7.1 kB\u0026#34; INFO[0010] Issued CreateChangeSet request StackName=SpartaPProf-mweagle INFO[0013] Issued ExecuteChangeSet request StackName=SpartaPProf-mweagle INFO[0026] CloudFormation Metrics ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ INFO[0026] Operation duration Duration=11.71s Resource=SpartaPProf-mweagle Type=\u0026#34;AWS::CloudFormation::Stack\u0026#34; INFO[0026] Operation duration Duration=1.60s Resource=HelloWorldLambda7d01d27fe422d278bcc652b4a989528718eb88af Type=\u0026#34;AWS::Lambda::Function\u0026#34; INFO[0026] Operation duration Duration=1.33s Resource=KinesisLogConsumerLambda275ace0435c45228161570811178ce06fbcb359c Type=\u0026#34;AWS::Lambda::Function\u0026#34; INFO[0026] Stack Outputs ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ INFO[0026] HelloWorldFunctionARN Description=\u0026#34;Hello World Lambda ARN\u0026#34; Value=\u0026#34;arn:aws:lambda:us-west-2:123412341234:function:SpartaPProf-mweagle_Hello_World\u0026#34; INFO[0026] KinesisLogConsumerFunctionARN Description=\u0026#34;KinesisLogConsumer Lambda ARN\u0026#34; 
Value=\u0026#34;arn:aws:lambda:us-west-2:123412341234:function:SpartaPProf-mweagle_KinesisLogConsumer\u0026#34; INFO[0026] Stack provisioned CreationTime=\u0026#34;2018-10-03 23:34:21.142 +0000 UTC\u0026#34; StackId=\u0026#34;arn:aws:cloudformation:us-west-2:123412341234:stack/SpartaPProf-mweagle/da781540-c764-11e8-9bf1-0aceeffcea3c\u0026#34; StackName=SpartaPProf-mweagle INFO[0026] ════════════════════════════════════════════════ INFO[0026] SpartaPProf-mweagle Summary INFO[0026] ════════════════════════════════════════════════ INFO[0026] Verifying IAM roles Duration (s)=0 INFO[0026] Verifying AWS preconditions Duration (s)=1 INFO[0026] Creating code bundle Duration (s)=2 INFO[0026] Uploading code Duration (s)=8 INFO[0026] Ensuring CloudFormation stack Duration (s)=17 INFO[0026] Total elapsed time Duration (s)=27 Generating Load While the SpartaPProf binary does include functions that are likely to generate profiling data, we still need to issue a sufficient series of events to produce a non-empty profile data set. SpartaPProf includes a simple tool (cmd/load.go) that directly calls the provisioned lambda function on a regular interval. 
It accepts a single command line argument, the ARN of the lambda function listed as a Stack output in the log output:\nINFO[0058] FunctionARN Description=\u0026#34;Lambda function ARN\u0026#34; Value=\u0026#34;arn:aws:lambda:us-west-2:123412341234:function:SpartaPProf_mweagle_Hello_World\u0026#34; Run the simple load generation script with the ARN value as in:\n$ cd cmd $ go run load.go arn:aws:lambda:us-west-2:012345678910:function:SpartaPProf-mweagle-Hello_World Lambda response (0 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (1 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (2 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (3 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (4 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (5 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (6 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (7 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (8 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (9 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (10 of 60): \u0026#34;Hi there 🌍\u0026#34; Lambda response (11 of 60): \u0026#34;Hi there 🌍\u0026#34; ... After all the requests have completed for this sample against a stack provisioned in us-west-2, a set of named profiles was published. 
Since each container\u0026rsquo;s instance id is randomly assigned, the profile names you see will differ slightly.\n--------------------------------------------------------------- S3 bucket: s3://weagle/sparta/pprof/SpartaPProf-mweagle/profiles --------------------------------------------------------------- 2017-11-26 11:32:28 41 Bytes sparta/pprof/SpartaPProf-mweagle/profiles/block/0-block.λ-3838737145763622974.profile 2017-11-26 11:32:27 1.8 KiB sparta/pprof/SpartaPProf-mweagle/profiles/cpu/0-cpu.λ-3838737145763622974.profile 2017-11-26 11:32:28 1.8 KiB sparta/pprof/SpartaPProf-mweagle/profiles/goroutine/0-goroutine.λ-3838737145763622974.profile 2017-11-26 11:32:28 2.2 KiB sparta/pprof/SpartaPProf-mweagle/profiles/heap/0-heap.λ-3838737145763622974.profile 2017-11-26 11:32:28 54 Bytes sparta/pprof/SpartaPProf-mweagle/profiles/mutex/0-mutex.λ-3838737145763622974.profile 2017-11-26 11:32:30 200 Bytes sparta/pprof/SpartaPProf-mweagle/profiles/threadcreate/0-threadcreate.λ-3838737145763622974.profile Visualizing Profiles Sparta delegates to the pprof webui to visualize profile snapshots. Ensure you have the latest version by running go get -u -v github.com/google/pprof first.\nThe final step is to provide the profile snapshots to pprof. Sparta exposes a profile command that accomplishes this by fetching and consolidating all published profiles for a single type.\n$ go run main.go profile --s3Bucket weagle INFO[0000] ════════════════════════════════════════════════ INFO[0000] ╔═╗┌─┐┌─┐┬─┐┌┬┐┌─┐ Version : 1.0.2 INFO[0000] ╚═╗├─┘├─┤├┬┘ │ ├─┤ SHA : b37b93e INFO[0000] ╚═╝┴ ┴ ┴┴└─ ┴ ┴ ┴ Go : go1.9.2 INFO[0000] ════════════════════════════════════════════════ INFO[0000] Service: SpartaPProf-mweagle LinkFlags= Option=profile UTC=\u0026#34;2018-01-29T15:23:18Z\u0026#34; INFO[0000] ════════════════════════════════════════════════ ? Which stack would you like to profile: SpartaPProf-mweagle ? What type of profile would you like to view? heap ? 
What profile snapshot(s) would you like to view? Download new snapshots from S3 ? Please select a heap profile type: alloc_space INFO[0028] Refreshing cached profiles CacheRoot=.sparta/profiles/SpartaPProf-mweagle/heap ProfileRootKey=sparta/pprof/SpartaPProf-mweagle/profiles/heap S3Bucket=MY-S3-BUCKET StackName=SpartaPProf-mweagle Type=heap INFO[0028] Aggregating profile Input=\u0026#34;.sparta/profiles/SpartaPProf-mweagle/heap/0-heap.λ-8850662459689822644.profile\u0026#34; INFO[0028] Consolidating profiles ProfileCount=1 INFO[0028] Creating consolidated profile ConsolidatedProfile=.sparta/heap.consolidated.profile INFO[0028] Starting pprof webserver on http://localhost:8080. Enter Ctrl+C to exit. The profile command downloads the published profiles and consolidates them into a single cached version in the ./.sparta directory with a name of the form:\n./.sparta/{PROFILE_TYPE}.consolidated.profile\nYou can choose to use the cached file if it exists.\nFor this sample run, the heap profile output is made available to the pprof webserver, which produces the following layout:\nThe latest pprof also includes flamegraph support to help identify issues:\nTo view another profile type, enter Ctrl+C to exit the blocking web ui loop and launch another profile session.\nConclusion Go includes a very powerful set of tools that can help diagnose performance bottlenecks. With the Sparta profile command, it\u0026rsquo;s possible to bring that same visibility to bear on AWS Lambda, despite running on ephemeral, (typically) unaddressable hosts. Get started optimizing today! Also, don\u0026rsquo;t forget to disable the profiling loop before pushing to production.\nNotes  CPU Flame Graphs provides a great overview. It\u0026rsquo;s not currently possible to use custom profiles. Lambda instances are limited to a window size of 3 rolling snapshots. The explore command also exposes the pprof web handlers for local exploration.  "
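The reserved S3 key layout described above can be written as a small formatting helper. profileKey is a hypothetical name; the layout follows the documented form:

```go
package main

import "fmt"

// profileKey reproduces the reserved S3 key layout Sparta uses when
// publishing profile snapshots:
// sparta/pprof/{STACK_NAME}/profiles/{PROFILE_TYPE}/{SNAPSHOT_INDEX}-{PROFILE_TYPE}.λ-{INSTANCE_ID}.profile
func profileKey(stackName, profileType string, snapshotIndex int, instanceID string) string {
	return fmt.Sprintf("sparta/pprof/%s/profiles/%s/%d-%s.λ-%s.profile",
		stackName, profileType, snapshotIndex, profileType, instanceID)
}

func main() {
	// Matches one of the heap entries from the sample S3 listing
	fmt.Println(profileKey("SpartaPProf-mweagle", "heap", 0, "3838737145763622974"))
}
```

Because the profile type appears in both the prefix and the object name, the profile command can list one key prefix to fetch every lambda instance's snapshots for a single type, then roll the SNAPSHOT_INDEX to keep the window bounded.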
   400  },
   401  {
   402  	"uri": "/reference/",
   403  	"title": "Reference",
   404  	"tags": [],
   405  	"description": "",
   406  	"content": "Sparta Please see individual sections for reference information.\n"
   407  },
   408  {
   409  	"uri": "/reference/apigateway/",
   410  	"title": "",
   411  	"tags": [],
   412  	"description": "",
   413  	"content": "API Gateway One of the most powerful ways to use AWS Lambda is to make functions publicly available over HTTPS. This is accomplished by connecting the AWS Lambda function with the API Gateway. In this section we\u0026rsquo;ll start with a simple \u0026ldquo;echo\u0026rdquo; example and move on to a lambda function that accepts user parameters and returns an expiring S3 URL.\n Echo  To start, we\u0026rsquo;ll create an HTTPS-accessible lambda function that simply echoes back the contents of the incoming API Gateway Lambda event. The source for this is the SpartaHTML example. For reference, the helloWorld function is below. import ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaAPIGateway \u0026#34;github.com/mweagle/Sparta/aws/apigateway\u0026#34; ) func helloWorld(ctx context.Context, gatewayEvent spartaAWSEvents.APIGatewayRequest) (*spartaAPIGateway.Response, error) { logger, loggerOk := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) if loggerOk { logger.Info(\u0026#34;Hello world structured log message\u0026#34;) } // Return a message, together with the incoming input.\n Request Parameters  This example demonstrates how to accept client request params supplied as HTTP query params and return an expiring S3 URL to access content. The source for this is the s3ItemInfo function defined as part of the SpartaApplication. Lambda Definition Our function will accept two params: bucketName : The S3 bucket name storing the asset keyName : The S3 item key Those params will be passed as part of the URL query string.\n Request Context  This example demonstrates how to use the Context struct provided as part of the APIGatewayRequest. The SpartaGeoIP service will return Geo information based on the inbound request\u0026rsquo;s IP address. Lambda Definition Our function will examine the inbound request, look up the user\u0026rsquo;s IP address in the GeoLite2 Database and return any information to the client. 
As this function is only expected to be invoked from the API Gateway, we\u0026rsquo;ll unmarshal the inbound event:\n CORS  Cross Origin Resource Sharing defines a protocol by which resources on different domains may establish whether cross-site operations are permissible. Sparta reduces CORS support to a single CORSEnabled field of the API struct: // Register the function with the API Gateway apiStage := sparta.NewStage(\u0026#34;v1\u0026#34;) apiGateway := sparta.NewAPIGateway(\u0026#34;SpartaHTML\u0026#34;, apiStage) // Enable CORS s.t. the S3 site can access the resources apiGateway.CORSEnabled = true Setting the boolean to true will add the necessary OPTIONS and mock responses to all resources exposed by your API.\n Slack SlashCommand  In this example, we\u0026rsquo;ll walk through creating a Slack Slash Command service. The source for this is the SpartaSlackbot repo. Our initial command handler won\u0026rsquo;t be very sophisticated, but will show the steps necessary to provision and configure a Sparta AWS Gateway-enabled Lambda function. Define the Lambda Function This lambda handler is a bit more complicated than the other examples, primarily because of the Slack Integration requirements. The full source is:\n S3 Sites with CORS  Sparta supports provisioning an S3-backed static website as part of the provision step. We\u0026rsquo;ll walk through provisioning a minimal Bootstrap website that accesses API Gateway lambda functions provisioned by a single service in this example. The source for this is the SpartaHTML example application. 
Lambda Definition We\u0026rsquo;ll start by creating a very simple lambda function: import ( spartaAPIGateway \u0026#34;github.com/mweagle/Sparta/aws/apigateway\u0026#34; spartaAWSEvents \u0026#34;github.com/mweagle/Sparta/aws/events\u0026#34; ) type helloWorldResponse struct { Message string Request spartaAWSEvents.APIGatewayRequest } //////////////////////////////////////////////////////////////////////////////// // Hello world event handler func helloWorld(ctx context.\n Concepts Before moving on to the examples, it\u0026rsquo;s suggested you familiarize yourself with the API Gateway concepts.\n Getting Started with Amazon API Gateway  The API Gateway presents a powerful and complex domain model. In brief, to integrate with the API Gateway, a service must:\n Define one or more AWS Lambda functions Create an API Gateway REST API instance Create one or more resources associated with the REST API Create one or more methods for each resource For each method:  Define the method request params Define the integration request mapping Define the integration response mapping Define the method response mapping   Create a stage for a REST API Deploy the given stage  See the echo example for a complete version.\nRequest Types AWS Lambda supports multiple function signatures. Some supported signatures include structured types, which are JSON un/marshalable structs that are automatically managed.\nTo simplify handling API Gateway requests, Sparta exposes the APIGatewayEnvelope type. 
This type provides an embeddable struct type whose fields and JSON serialization match up with the Velocity Template that\u0026rsquo;s applied to the incoming API Gateway request.\nEmbed the APIGatewayEnvelope type in your own lambda\u0026rsquo;s request type as in:\ntype FeedbackBody struct { Language string `json:\u0026#34;lang\u0026#34;` Comment string `json:\u0026#34;comment\u0026#34;` } type FeedbackRequest struct { spartaEvents.APIGatewayEnvelope Body FeedbackBody `json:\u0026#34;body\u0026#34;` } Then accept your custom type in your lambda function as in:\nfunc myLambdaFunction(ctx context.Context, apiGatewayRequest FeedbackRequest) (map[string]string, error) { language := apiGatewayRequest.Body.Language ... } Response Types The API Gateway response mappings must make assumptions about the shape of the Lambda response. The default application/json mapping template is:\n$input.json('$.body') ## Ok, parse the incoming map of headers ## and for each one, set the override header in the context. 
## Ref: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html#context-variable-reference ## Ref: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-override-request-response-parameters.html #set($headers = $input.path(\u0026quot;$.headers\u0026quot;))## #foreach($key in $headers.keySet())## #set($context.responseOverride.header[$key] = $headers[$key])## #end## ## And use the code rather than integration templates so that ## the creation time is reduced #if($input.path(\u0026quot;$.code\u0026quot;) != \u0026quot;\u0026quot;)## #set($context.responseOverride.status = $input.path(\u0026quot;$.code\u0026quot;))## #end##  This template assumes that your response type has the following JSON shape:\n{ \u0026#34;code\u0026#34; : int, \u0026#34;body\u0026#34; : {...}, \u0026#34;headers\u0026#34;: {...} } The apigateway.NewResponse constructor is a utility function to produce a canonical version of this response shape. Note that header keys must be lower-cased.\nTo return a different structure change the content-specific mapping templates defined by the IntegrationResponse. See the mapping template reference for more information.\nCustom HTTP Headers API Gateway supports returning custom HTTP headers whose values are extracted from your response. To return custom HTTP headers using the default VTL mappings, provide them as the optional third map[string]string argument to NewResponse as in:\nfunc helloWorld(ctx context.Context, gatewayEvent spartaAWSEvents.APIGatewayRequest) (*spartaAPIGateway.Response, error) { logger, loggerOk := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) if loggerOk { logger.Info(\u0026#34;Hello world structured log message\u0026#34;) } // Return a message, together with the incoming input...  
return spartaAPIGateway.NewResponse(http.StatusOK, \u0026amp;helloWorldResponse{ Message: fmt.Sprintf(\u0026#34;Hello world 🌏\u0026#34;), Request: gatewayEvent, }, map[string]string{ \u0026#34;X-Response\u0026#34;: \u0026#34;Some-value\u0026#34;, }), nil } Other Resources  Walkthrough: API Gateway and Lambda Functions Use a Mapping Template to Override an API\u0026rsquo;s Request and Response Parameters and Status Codes  "
   414  },
   415  {
   416  	"uri": "/reference/apiv2gateway/",
   417  	"title": "",
   418  	"tags": [],
   419  	"description": "",
   420  	"content": "API V2 Gateway The API V2 Gateway service provides a way to expose a WebSocket API that is supported by a set of Lambda functions. The AWS blog post supplies an excellent overview of the pros and cons of this approach that enables a near real-time, push-based application. This section will provide an overview of how to configure a WebSocket API using Sparta. It is based on the SpartaWebSocket sample project.\nPayload Similar to the AWS blog post, our WebSocket API will transmit messages of the form\n{ \u0026#34;message\u0026#34;:\u0026#34;sendmessage\u0026#34;, \u0026#34;data\u0026#34;:\u0026#34;hello world !\u0026#34; } We\u0026rsquo;ll use ws to test the API from the command line.\nRoutes The Sparta service consists of three lambda functions:\n connectWorld(context.Context, awsEvents.APIGatewayWebsocketProxyRequest) (*wsResponse, error) disconnectWorld(context.Context, awsEvents.APIGatewayWebsocketProxyRequest) (*wsResponse, error) sendMessage(context.Context, awsEvents.APIGatewayWebsocketProxyRequest) (*wsResponse, error)  Our functions will use the PROXY style integration and therefore accept an instance of the APIGatewayWebsocketProxyRequest type.\nEach function returns a *wsResponse instance that satisfies the PROXY response:\ntype wsResponse struct { StatusCode int `json:\u0026#34;statusCode\u0026#34;` Body string `json:\u0026#34;body\u0026#34;` } connectWorld The connectWorld AWS Lambda function is responsible for saving the incoming connectionID into a dynamically provisioned DynamoDB table so that subsequent sendMessage requests can broadcast to all subscribed parties.\nThe table name is advertised in the Lambda function via a user-defined environment variable. The specifics of how that table is provisioned will be addressed in a section below.\n... 
// Operation putItemInput := \u0026amp;dynamodb.PutItemInput{ TableName: aws.String(os.Getenv(envKeyTableName)), Item: map[string]*dynamodb.AttributeValue{ ddbAttributeConnectionID: \u0026amp;dynamodb.AttributeValue{ S: aws.String(request.RequestContext.ConnectionID), }, }, } _, putItemErr := dynamoClient.PutItem(putItemInput) ... disconnectWorld The complement to connectWorld is disconnectWorld, which is responsible for removing the connectionID from the list of registered connections:\ndelItemInput := \u0026amp;dynamodb.DeleteItemInput{ TableName: aws.String(os.Getenv(envKeyTableName)), Key: map[string]*dynamodb.AttributeValue{ ddbAttributeConnectionID: \u0026amp;dynamodb.AttributeValue{ S: aws.String(connectionID), }, }, } _, delItemErr := ddbService.DeleteItem(delItemInput) sendMessage With the connectWorld and disconnectWorld connection management functions created, the core of the WebSocket API is sendMessage. This function is responsible for scanning over the set of registered connectionIDs and forwarding a request to PostToConnectionWithContext, which sends the message to the registered connections.\nThe sendMessage function can be broken down into a few sections.\nSetup API Gateway Management Instance The first requirement is to set up the API Gateway Management service instance using the proper endpoint. The endpoint can be constructed from the incoming APIGatewayWebsocketProxyRequestContext member of the request.\nendpointURL := fmt.Sprintf(\u0026#34;%s/%s\u0026#34;, request.RequestContext.DomainName, request.RequestContext.Stage) logger.WithField(\u0026#34;Endpoint\u0026#34;, endpointURL).Info(\u0026#34;API Gateway Endpoint\u0026#34;) dynamoClient := dynamodb.New(sess) apigwMgmtClient := apigwManagement.New(sess, aws.NewConfig().WithEndpoint(endpointURL)) Validate Input The next step is to unmarshal and validate the incoming JSON request body:\n// Get the input request...  
var objMap map[string]*json.RawMessage unmarshalErr := json.Unmarshal([]byte(request.Body), \u0026amp;objMap) if unmarshalErr != nil || objMap[\u0026#34;data\u0026#34;] == nil { return \u0026amp;wsResponse{ StatusCode: 500, Body: \u0026#34;Failed to unmarshal request: \u0026#34; + unmarshalErr.Error(), }, nil } Once we have verified that the input is valid, the final step is to notify all the subscribers.\nScan and Publish Once the incoming data property is validated, the next step is to scan the DynamoDB table for the registered connections and post a message to each one. Note that the scan callback also attempts to cleanup connections that are no longer valid, but which haven\u0026rsquo;t been cleanly removed.\nscanCallback := func(output *dynamodb.ScanOutput, lastPage bool) bool { // Send the message to all the clients  for _, eachItem := range output.Items { // Get the connectionID  receiverConnection := \u0026#34;\u0026#34; if eachItem[ddbAttributeConnectionID].S != nil { receiverConnection = *eachItem[ddbAttributeConnectionID].S } // Post to this connectionID  postConnectionInput := \u0026amp;apigwManagement.PostToConnectionInput{ ConnectionId: aws.String(receiverConnection), Data: *objMap[\u0026#34;data\u0026#34;], } _, respErr := apigwMgmtClient.PostToConnectionWithContext(ctx, postConnectionInput) if respErr != nil { if receiverConnection != \u0026#34;\u0026#34; \u0026amp;\u0026amp; strings.Contains(respErr.Error(), apigwManagement.ErrCodeGoneException) { // Cleanup in case the connection is stale  go deleteConnection(receiverConnection, dynamoClient) } else { logger.WithField(\u0026#34;Error\u0026#34;, respErr).Warn(\u0026#34;Failed to post to connection\u0026#34;) } } return true } return true } // Scan the connections table  scanInput := \u0026amp;dynamodb.ScanInput{ TableName: aws.String(os.Getenv(envKeyTableName)), } scanItemErr := dynamoClient.ScanPagesWithContext(ctx, scanInput, scanCallback) ... 
These three functions are the core of the WebSocket service.\nAPI V2 Gateway Decorator The next step is to create the API V2 object, which is composed of:\n Stage API Routes  There is one Stage and one API per service, but a given service (including this one) may include multiple Routes.\n// APIv2 Websockets stage, _ := sparta.NewAPIV2Stage(\u0026#34;v1\u0026#34;) stage.Description = \u0026#34;New deploy!\u0026#34; apiGateway, _ := sparta.NewAPIV2(sparta.Websocket, \u0026#34;sample\u0026#34;, \u0026#34;$request.body.message\u0026#34;, stage) The NewAPIV2 creation function requires:\n The protocol to use (sparta.Websocket) The name of the API (sample) The route selection expression, a JSONPath expression that maps input data to the corresponding lambda function. The stage  Once the API is defined, each route is associated with the API as in:\napiv2ConnectRoute, _ := apiGateway.NewAPIV2Route(\u0026#34;$connect\u0026#34;, lambdaConnect) apiv2ConnectRoute.OperationName = \u0026#34;ConnectRoute\u0026#34; ... apiv2SendRoute, _ := apiGateway.NewAPIV2Route(\u0026#34;sendmessage\u0026#34;, lambdaSend) apiv2SendRoute.OperationName = \u0026#34;SendRoute\u0026#34; ... The $connect routeKey is a special value that is sent when a client first connects to the WebSocket API. 
See the official documentation for more information.\nIn comparison, a routeKey value of sendmessage means that a payload of the form:\n{ \u0026#34;message\u0026#34;:\u0026#34;sendmessage\u0026#34;, \u0026#34;data\u0026#34;:\u0026#34;hello world !\u0026#34; } will trigger the lambdaSend function given the parent API\u0026rsquo;s route selection expression of $request.body.message.\nAdditional Privileges Because the lambdaSend function also needs to invoke the API Gateway Management APIs to broadcast, an additional IAM Privilege must be enabled:\nvar apigwPermissions = []sparta.IAMRolePrivilege{ { Actions: []string{\u0026#34;execute-api:ManageConnections\u0026#34;}, Resource: gocf.Join(\u0026#34;\u0026#34;, gocf.String(\u0026#34;arn:aws:execute-api:\u0026#34;), gocf.Ref(\u0026#34;AWS::Region\u0026#34;), gocf.String(\u0026#34;:\u0026#34;), gocf.Ref(\u0026#34;AWS::AccountId\u0026#34;), gocf.String(\u0026#34;:\u0026#34;), gocf.Ref(apiGateway.LogicalResourceName()), gocf.String(\u0026#34;/*\u0026#34;)), }, } lambdaSend.RoleDefinition.Privileges = append(lambdaSend.RoleDefinition.Privileges, apigwPermissions...) Annotating Lambda Functions The final configuration step is to use the API gateway to create an instance of the APIV2GatewayDecorator. This decorator is responsible for:\n Provisioning the DynamoDB table. Ensuring DynamoDB CRUD permissions for all the AWS Lambda functions. Publishing the table name into the Lambda function\u0026rsquo;s Environment block. Adding the WebSocket wss://... URL to the Stack\u0026rsquo;s Outputs.  
The decorator is created by a call to NewConnectionTableDecorator, which accepts:\n The environment variable to populate with the dynamically assigned DynamoDB table name The DynamoDB attribute name to use to store the connectionID The read capacity units The write capacity units  For instance:\ndecorator, _ := apiGateway.NewConnectionTableDecorator(envKeyTableName, ddbAttributeConnectionID, 5, 5) var lambdaFunctions []*sparta.LambdaAWSInfo lambdaFunctions = append(lambdaFunctions, lambdaConnect, lambdaDisconnect, lambdaSend) decorator.AnnotateLambdas(lambdaFunctions) Provision With everything defined, provide the API V2 Decorator as a Workflow hook as in:\n// Set everything up and run it...  workflowHooks := \u0026amp;sparta.WorkflowHooks{ ServiceDecorators: []sparta.ServiceDecoratorHookHandler{decorator}, } err := sparta.MainEx(awsName, \u0026#34;Sparta application that demonstrates API v2 Websocket support\u0026#34;, lambdaFunctions, apiGateway, nil, workflowHooks, false) and then provision the application:\ngo run main.go provision --s3Bucket $S3_BUCKET --noop INFO[0000] ════════════════════════════════════════════════ INFO[0000] ╔═╗╔═╗╔═╗╦═╗╔╦╗╔═╗ Version : 1.9.4 INFO[0000] ╚═╗╠═╝╠═╣╠╦╝ ║ ╠═╣ SHA : cfd44e2 INFO[0000] ╚═╝╩ ╩ ╩╩╚═ ╩ ╩ ╩ Go : go1.12.6 INFO[0000] ════════════════════════════════════════════════ INFO[0000] Service: SpartaWebSocket-123412341234 LinkFlags= Option=provision UTC=\u0026#34;2019-07-25T05:26:57Z\u0026#34; INFO[0000] ════════════════════════════════════════════════ INFO[0000] Using `git` SHA for StampedBuildID Command=\u0026#34;git rev-parse HEAD\u0026#34; SHA=6b26f8e645e9d58c1b678e46576e19bbc29886c0 INFO[0000] Provisioning service BuildID=6b26f8e645e9d58c1b678e46576e19bbc29886c0 CodePipelineTrigger= InPlaceUpdates=false NOOP=false Tags= INFO[0000] Verifying IAM Lambda execution roles INFO[0000] IAM roles verified Count=3 INFO[0000] Checking S3 versioning Bucket=weagle VersioningEnabled=true INFO[0000] Checking S3 region Bucket=weagle 
Region=us-west-2 INFO[0000] Running `go generate` INFO[0000] Compiling binary Name=Sparta.lambda.amd64 INFO[0002] Creating code ZIP archive for upload TempName=./.sparta/SpartaWebSocket_123412341234-code.zip INFO[0002] Lambda code archive size Size=\u0026#34;23 MB\u0026#34; INFO[0002] Uploading local file to S3 Bucket=weagle Key=SpartaWebSocket-123412341234/SpartaWebSocket_123412341234-code.zip Path=./.sparta/SpartaWebSocket_123412341234-code.zip Size=\u0026#34;23 MB\u0026#34; INFO[0011] Calling WorkflowHook ServiceDecoratorHook= WorkflowHookContext=\u0026#34;map[]\u0026#34; INFO[0011] Uploading local file to S3 Bucket=weagle Key=SpartaWebSocket-123412341234/SpartaWebSocket_123412341234-cftemplate.json Path=./.sparta/SpartaWebSocket_123412341234-cftemplate.json Size=\u0026#34;14 kB\u0026#34; INFO[0011] Creating stack StackID=\u0026#34;arn:aws:cloudformation:us-west-2:123412341234:stack/SpartaWebSocket-123412341234/d8a405b0-ae9c-11e9-a05a-0a1528792fce\u0026#34; INFO[0122] CloudFormation Metrics ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ ... 
INFO[0122] Stack Outputs ▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬ INFO[0122] APIGatewayURL Description=\u0026#34;API Gateway Websocket URL\u0026#34; Value=\u0026#34;wss://gu4vmnia27.execute-api.us-west-2.amazonaws.com/v1\u0026#34; INFO[0122] Stack provisioned CreationTime=\u0026#34;2019-07-25 05:27:08.687 +0000 UTC\u0026#34; StackId=\u0026#34;arn:aws:cloudformation:us-west-2:123412341234:stack/SpartaWebSocket-123412341234/d8a405b0-ae9c-11e9-a05a-0a1528792fce\u0026#34; StackName=SpartaWebSocket-123412341234 INFO[0122] ════════════════════════════════════════════════ INFO[0122] SpartaWebSocket-123412341234 Summary INFO[0122] ════════════════════════════════════════════════ INFO[0122] Verifying IAM roles Duration (s)=0 INFO[0122] Verifying AWS preconditions Duration (s)=0 INFO[0122] Creating code bundle Duration (s)=1 INFO[0122] Uploading code Duration (s)=9 INFO[0122] Ensuring CloudFormation stack Duration (s)=112 INFO[0122] Total elapsed time Duration (s)=122 Test With the API Gateway deployed, the last step is to test it. 
Download and install the ws tool:\ngo get -u github.com/hashrocket/ws then connect to your new API and send a message as in:\n22:31 $ ws wss://gu4vmnia27.execute-api.us-west-2.amazonaws.com/v1 \u0026gt; {\u0026#34;message\u0026#34;:\u0026#34;sendmessage\u0026#34;, \u0026#34;data\u0026#34;:\u0026#34;hello world !\u0026#34;} \u0026lt; \u0026#34;hello world !\u0026#34; You can also use Firecamp, a Chrome extension, to send messages between your ws session and the web (or vice versa).\nConclusion While a production-ready application would likely need to include authentication and authorization, this is the beginning of a full-featured WebSocket service in fewer than 200 lines of application code:\n------------------------------------------------------------------------------- Language files blank comment code ------------------------------------------------------------------------------- Go 1 21 52 183 Markdown 1 0 2 0 ------------------------------------------------------------------------------- TOTAL 2 21 54 183 ------------------------------------------------------------------------------- Remember to terminate the stack when you\u0026rsquo;re done to avoid any unintentional costs!\nReferences  The SpartaWebSocket application is modeled after the https://github.com/aws-samples/simple-websockets-chat-app sample.  "
   421  },
   422  {
   423  	"uri": "/reference/eventsources/",
   424  	"title": "",
   425  	"tags": [],
   426  	"description": "",
   427  	"content": "Event Sources The true power of the AWS Lambda architecture is the ability to integrate Lambda execution with other AWS service state transitions. Depending on the service type, state change events are either pushed or transparently polled and used as the input to a Lambda execution.\nThere are several event sources available. They are grouped into Pull and Push types. Pull based models use sparta.EventSourceMapping values, as the trigger configuration is stored in the AWS Lambda service. Push based types use service specific sparta.*Permission types to denote the fact that the trigger logic is configured in the remote service.\nPull Based  DynamoDB Kinesis SQS  Push Based  CloudFormation NOT YET IMPLEMENTED CloudWatch Events CloudWatch Logs Cognito NOT YET IMPLEMENTED S3 SES SNS  "
   428  },
   429  {
   430  	"uri": "/reference/archetypes/",
   431  	"title": "",
   432  	"tags": [],
   433  	"description": "",
   434  	"content": "Archetype Constructors Sparta\u0026rsquo;s archetype package provides convenience functions to simplify creating AWS Lambda functions for specific types of event sources. See each section for more details:\n Event Bridge  The EventBridge lambda event source allows you to trigger lambda functions in response to either cron schedules or account events. There are two different archetype functions available. Scheduled Scheduled Lambdas execute either at fixed times or periodically depending on the schedule expression. To create a scheduled function use a constructor as in: import ( spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // EventBridge reactor function func echoEventBridgeEvent(ctx context.Context, msg json.RawMessage) (interface{}, error) { logger, _ := ctx.\n CodeCommit  The CodeCommit Lambda event source allows you to trigger lambda functions in response to CodeCommit repository events. Events Lambda functions triggered in response to CodeCommit events use a combination of events and branches to manage which state changes trigger your lambda function. To create an event subscriber use a constructor as in: // CodeCommit reactor function func reactorFunc(ctx context.Context, event awsLambdaEvents.CodeCommitEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: event, }).\n CloudWatch  The CloudWatch Logs Lambda event source allows you to trigger lambda functions in response to either cron schedules or account events. There are three different archetype functions available. Scheduled Scheduled Lambdas execute either at fixed times or periodically depending on the schedule expression. 
To create a scheduled function use a constructor as in: import ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // CloudWatch reactor function func reactorFunc(ctx context.Context, cwLogs awsLambdaEvents.CloudwatchLogsEvent) (interface{}, error) { logger, _ := ctx.\n DynamoDB  To create a DynamoDB reactor that subscribes via an EventSourceMapping, use the NewDynamoDBReactor constructor as in: import ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // DynamoDB reactor function func reactorFunc(ctx context.Context, dynamoEvent awsLambdaEvents.DynamoDBEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: dynamoEvent, }).Info(\u0026#34;DynamoDB Event\u0026#34;) return \u0026#34;Hello World 👋. Welcome to AWS Lambda! 🙌🎉🍾\u0026#34;, nil } func main() { // ... handler := spartaArchetype.DynamoDBReactorFunc(reactorFunc) lambdaFn, lambdaFnErr := spartaArchetype.NewDynamoDBReactor(handler, \u0026#34;DYNAMO_DB_ARN_OR_CLOUDFORMATION_REF_VALUE\u0026#34;, \u0026#34;TRIM_HORIZON\u0026#34;, 10, nil) }  Kinesis  To create a Kinesis Stream reactor that subscribes via an EventSourceMapping, use the NewKinesisReactor constructor as in: import ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // KinesisStream reactor function func reactorFunc(ctx context.Context, kinesisEvent awsLambdaEvents.KinesisEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: kinesisEvent, }).Info(\u0026#34;Kinesis Event\u0026#34;) return \u0026#34;Hello World 👋. Welcome to AWS Lambda! 
🙌🎉🍾\u0026#34;, nil } func main() { // ... handler := spartaArchetype.KinesisReactorFunc(reactorFunc) lambdaFn, lambdaFnErr := spartaArchetype.NewKinesisReactor(handler, \u0026#34;KINESIS_STREAM_ARN_OR_CLOUDFORMATION_REF_VALUE\u0026#34;, \u0026#34;TRIM_HORIZON\u0026#34;, 10, nil) }  Kinesis Firehose  There are two ways to create a Firehose Transform reactor that transforms a KinesisFirehoseEventRecord with a Lambda function: NewKinesisFirehoseLambdaTransformer Transform using a Lambda function NewKinesisFirehoseTransformer Transform using a go text/template declaration NewKinesisFirehoseLambdaTransformer import ( awsEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // KinesisStream reactor function func reactorFunc(ctx context.Context, record *awsEvents.KinesisFirehoseEventRecord) (*awsEvents.KinesisFirehoseResponseRecord, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Record\u0026#34;: record, }).Info(\u0026#34;Kinesis Firehose Event\u0026#34;) responseRecord = \u0026amp;awsEvents.\n REST Service  The rest package provides convenience functions to define a serverless REST-style service. The package uses three concepts: Routes: URL paths that resolve to a single go struct Resources: go structs that optionally define HTTP methods (GET, POST, etc.). ResourceDefinition: an interface that go structs must implement in order to support resource-based registration. Routes Routes are similar to those in many HTTP-routing libraries. They support path parameters. Resources Resources are the targets of Routes.\n S3  There are two different S3-based constructors depending on whether your lambda function should use an Object Key Name filter. The S3 subscriber is preconfigured to be notified of both s3:ObjectCreated:* and s3:ObjectRemoved:* events. 
Object Key Name Filtering Object key name filtering only invokes a lambda function when objects with the given prefix are created. To subscribe to object events created by objects with a given prefix, use the NewS3ScopedReactor constructor as in:\n SNS  To create an SNS reactor that subscribes via a subscription configuration, use the NewSNSReactor constructor as in: import ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; spartaArchetype \u0026#34;github.com/mweagle/Sparta/archetype\u0026#34; ) // SNS reactor function func reactorFunc(ctx context.Context, snsEvent awsLambdaEvents.SNSEvent) (interface{}, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: snsEvent, }).Info(\u0026#34;SNS Event\u0026#34;) return \u0026#34;Hello World 👋. Welcome to AWS Lambda! 🙌🎉🍾\u0026#34;, nil } func main() { // ... handler := spartaArchetype.SNSReactorFunc(reactorFunc) lambdaFn, lambdaFnErr := spartaArchetype.NewSNSReactor(handler, \u0026#34;SNS_ARN_OR_CLOUDFORMATION_REF_VALUE\u0026#34;, nil) }  "
   435  },
   436  {
   437  	"uri": "/reference/decorators/",
   438  	"title": "",
   439  	"tags": [],
   440  	"description": "",
   441  	"content": "Build-Time Decorators Sparta uses build-time decorators to annotate the CloudFormation template with additional functionality.\nWhile Sparta tries to provide workflows common across service lifecycles, it may be the case that an application requires additional functionality or runtime resources.\nTo support this, Sparta allows you to customize the build pipeline via the WorkflowHooks structure. These hooks are called at specific points in the provision lifecycle and support augmenting the standard pipeline:\nmermaid.initialize({startOnLoad:true}); graph TD classDef stdOp fill:#FFF,stroke:#A00,stroke-width:2px; classDef userHook fill:#B5B2A1,stroke:#A00,stroke-width:2px,stroke-dasharray: 5, 5; iam[Verify Static IAM Roles] class iam stdOp; preBuild[WorkflowHook - PreBuild] class preBuild userHook; compile[Compile for AWS Lambda Container] postBuild[WorkflowHook - PostBuild] class postBuild userHook; package[ZIP archive] class package stdOp; userArchive[WorkflowHook - Archive] class userArchive userHook; upload[Upload Archive to S3] packageAssets[Conditionally ZIP S3 Site Assets] uploadAssets[Upload S3 Assets] class upload,packageAssets,uploadAssets stdOp; preMarshall[WorkflowHook - PreMarshall] class preMarshall userHook; generate[Marshal to CloudFormation] class generate stdOp; decorate[Call Lambda Decorators - Dynamic AWS Resources] class decorate stdOp; serviceDecorator[Service Decorator] class serviceDecorator userHook; postMarshall[WorkflowHook - PostMarshall] class postMarshall stdOp; uploadTemplate[Upload Template to S3] updateStack[Create/Update Stack] inplaceUpdates[In-place λ code updates] wait[Wait for Complete/Failure Result] class uploadTemplate,updateStack,inplaceUpdates,wait stdOp; iam--\u0026gt;preBuild preBuild--\u0026gt;|go|compile compile--\u0026gt;postBuild postBuild--\u0026gt;package package--\u0026gt;packageAssets package--\u0026gt;userArchive userArchive--\u0026gt;upload packageAssets--\u0026gt;uploadAssets uploadAssets--\u0026gt;generate upload--\u0026gt;generate generate--\u0026gt;preMarshall preMarshall--\u0026gt;decorate 
decorate--\u0026gt;serviceDecorator serviceDecorator--\u0026gt;postMarshall postMarshall--\u0026gt;uploadTemplate uploadTemplate--\u0026gt;|standard|updateStack uploadTemplate--\u0026gt;|inplace|inplaceUpdates updateStack--\u0026gt;wait  This diagram is rendered with Mermaid. Please open an issue if it doesn't render properly. The following sections describe the types of WorkflowHooks available. All hooks accept a context map[string]interface{} as their first parameter. Sparta treats this as an opaque property bag that enables hooks to communicate state.\nWorkflowHook Types Builder Hooks BuilderHooks share the WorkflowHook signature:\ntype WorkflowHook func(context map[string]interface{}, serviceName string, S3Bucket string, buildID string, awsSession *session.Session, noop bool, logger *logrus.Logger) error These functions include:\n PreBuild PostBuild PreMarshall PostMarshall  Archive Hook The ArchiveHook allows a service to add custom resources to the ZIP archive and has the signature:\ntype ArchiveHook func(context map[string]interface{}, serviceName string, zipWriter *zip.Writer, awsSession *session.Session, noop bool, logger *logrus.Logger) error This function is called after Sparta has written the standard resources to the *zip.Writer stream.\nRollback Hook The RollbackHook is called iff the provision operation fails and has the signature:\ntype RollbackHook func(context map[string]interface{}, serviceName string, awsSession *session.Session, noop bool, logger *logrus.Logger) Using WorkflowHooks To use the Workflow Hooks feature, initialize a WorkflowHooks structure with one or more hook functions and call sparta.MainEx.\nAvailable Decorators  Application Load Balancer  The ApplicationLoadBalancerDecorator allows you to expose lambda functions as Application Load Balancer targets. This can be useful to provide HTTP(S) access to one or more Lambda functions without requiring an API Gateway service. 
Lambda Function Application Load Balancer (ALB) lambda targets must satisfy a prescribed Lambda signature: import ( awsEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; ) func(context.Context, awsEvents.ALBTargetGroupRequest) awsEvents.ALBTargetGroupResponse See the ALBTargetGroupRequest and ALBTargetGroupResponse godoc entries for more information. An example ALB-eligible target function might look like:\n CloudMap Service Discovery  The CloudMapServiceDecorator allows your service to register a service instance for your application. For example, an application that provisions a SQS queue and an AWS Lambda function that consumes messages from that queue may need a way for the Lambda function to discover the dynamically provisioned queue. Sparta supports an environment-based discovery service but that discovery is limited to a single Service. The CloudMapServiceDecorator leverages the CloudMap service to support intra- and inter-service resource discovery.\n CloudFront Distribution  The CloudFrontDistributionDecorator associates a CloudFront Distribution with your S3-backed website. It is implemented as a ServiceDecoratorHookHandler as a single service can only provision one CloudFront distribution. Sample usage: //////////////////////////////////////////////////////////////////////////////// // CloudFront settings const subdomain = \u0026#34;mySiteSubdomain\u0026#34; // The domain managed by Route53. const domainName = \u0026#34;myRoute53ManagedDomain.net\u0026#34; // The site will be available at // https://mySiteSubdomain.myRoute53ManagedDomain.net // The S3 bucketname must match the subdomain.domain // name pattern to serve as a CloudFront Distribution target var bucketName = fmt.\n Lambda Versioning Decorator   TODO: LambdaVersioningDecorator  Publishing Outputs  CloudFormation stack outputs can be used to advertise information about a service. 
Sparta provides different publishing output decorators depending on the type of CloudFormation resource output: Ref: PublishRefOutputDecorator Fn::Att: PublishAttOutputDecorator Publishing Resource Ref Values For example, to publish the dynamically assigned Lambda resource name for a given AWS Lambda function, use PublishRefOutputDecorator such as: lambdaFunctionName := \u0026#34;Hello World\u0026#34; lambdaFn, _ := sparta.NewAWSLambda(lambdaFunctionName, helloWorld, sparta.IAMRoleDefinition{}) lambdaFn.Decorators = append(lambdaFn.Decorators, spartaDecorators.PublishRefOutputDecorator(fmt.Sprintf(\u0026#34;%s FunctionName\u0026#34;, lambdaFunctionName), fmt.\n S3 Artifact Publisher  The S3ArtifactPublisherDecorator enables a service to publish objects to S3 locations as part of the service lifecycle. This decorator is implemented as a ServiceDecoratorHookHandler which is supplied to MainEx. For example: hooks := \u0026amp;sparta.WorkflowHooks{} payloadData := map[string]interface{}{ \u0026#34;SomeValue\u0026#34;: gocf.Ref(\u0026#34;AWS::StackName\u0026#34;), } serviceHook := spartaDecorators.S3ArtifactPublisherDecorator(gocf.String(\u0026#34;MY-S3-BUCKETNAME\u0026#34;), gocf.Join(\u0026#34;\u0026#34;, gocf.String(\u0026#34;metadata/\u0026#34;), gocf.Ref(\u0026#34;AWS::StackName\u0026#34;), gocf.String(\u0026#34;.json\u0026#34;)), payloadData) hooks.ServiceDecorators = []sparta.ServiceDecoratorHookHandler{serviceHook}  Dynamic Infrastructure  In addition to provisioning AWS Lambda functions, Sparta supports the creation of other CloudFormation Resources. This enables a service to move towards immutable infrastructure, where the service and its infrastructure requirements are treated as a logical unit. For instance, consider the case where two developers are working in the same AWS account. Developer 1 is working on analyzing text documents. 
Their lambda code is triggered in response to uploading sample text documents to S3.\n Notes  Workflow hooks can be used to support Dockerizing your application You may need to add custom CLI commands to fully support Docker Enable --level debug for detailed workflow hook debugging information  "
   442  },
   443  {
   444  	"uri": "/reference/interceptors/",
   445  	"title": "",
   446  	"tags": [],
   447  	"description": "",
   448  	"content": "Runtime Interceptors Sparta uses runtime interceptors to hook into the event handling workflow. Interceptors provide an opportunity to handle cross-cutting concerns (logging, metrics, etc.) independently of the core event handling workflow.\nmermaid.initialize({startOnLoad:true}); graph TD classDef stdOp fill:#FFF,stroke:#A00,stroke-width:2px; classDef userHook fill:#B5B2A1,stroke:#A00,stroke-width:2px,stroke-dasharray: 5, 5; execute[Execute] class execute stdOp; lookup[Lookup Function] class lookup stdOp; call[Call Function] class call stdOp; interceptorBegin[Interceptor Begin] class interceptorBegin userHook; populateLogger[Logger into context] class populateLogger stdOp; interceptorBeforeSetup[Interceptor BeforeSetup] class interceptorBeforeSetup userHook; populateContext[Populate context] class populateContext stdOp; interceptorAfterSetup[Interceptor AfterSetup] class interceptorAfterSetup userHook; unmarshalArgs[Introspect Arguments] class unmarshalArgs stdOp; interceptorBeforeDispatch[Interceptor BeforeDispatch] class interceptorBeforeDispatch userHook; callFunction[Call Function] class callFunction stdOp; interceptorAfterDispatch[Interceptor AfterDispatch] class interceptorAfterDispatch userHook; extractReturn[Extract Function Return] class extractReturn stdOp; interceptorComplete[Interceptor Complete] class interceptorComplete userHook; done[Done] class done stdOp; execute--lookup lookup--call call--interceptorBegin interceptorBegin--populateLogger populateLogger--interceptorBeforeSetup interceptorBeforeSetup--populateContext populateContext--interceptorAfterSetup interceptorAfterSetup--unmarshalArgs unmarshalArgs--interceptorBeforeDispatch interceptorBeforeDispatch--callFunction callFunction--interceptorAfterDispatch interceptorAfterDispatch--extractReturn extractReturn--interceptorComplete interceptorComplete--done  This diagram is rendered with Mermaid. Please open an issue if it doesn't render properly. 
Available Interceptors  XRayInterceptor  TODO: Document the XRayInterceptor 🎉 Sparta v1.7.0: The Time Machine Edition 🕰 🎉 For those times when you wish you could go back in time and enable debug logging for a single request.https://t.co/BP60qQpKva#serverless #go \u0026mdash; Matt Weagle (@mweagle) November 12, 2018 Sparta v1.7.0 adds `Interceptors`: user defined hooks called during the lambda event handling flow to support cross-cutting concerns. The first interceptor is an XRay annotation and metadata interceptor.\n "
   449  },
   450  {
   451  	"uri": "/reference/cli_options/",
   452  	"title": "",
   453  	"tags": [],
   454  	"description": "",
   455  	"content": "Sparta applications delegate func main() responsibilities to one of Sparta\u0026rsquo;s Main entrypoints (Main, MainEx). This provides each application with some standard command line options as shown below:\n$ go run main.go --help Simple Sparta application that demonstrates core functionality Usage: main [command] Available Commands: delete Delete service describe Describe service execute Start the application and begin handling events explore Interactively explore a provisioned service help Help about any command profile Interactively examine service pprof output provision Provision service status Produce a report for a provisioned service version Display version information Flags: -f, --format string Log format [text, json] (default \u0026#34;text\u0026#34;) -h, --help help for main --ldflags string Go linker string definition flags (https://golang.org/cmd/link/) -l, --level string Log level [panic, fatal, error, warn, info, debug] (default \u0026#34;info\u0026#34;) --nocolor Boolean flag to suppress colorized TTY output -n, --noop Dry-run behavior only (do not perform mutations) -t, --tags string Optional build tags for conditional compilation -z, --timestamps Include UTC timestamp log line prefix Use \u0026#34;main [command] --help\u0026#34; for more information about a command. It\u0026rsquo;s also possible to add custom flags and/or custom commands to extend your application\u0026rsquo;s behavior.\nThese command line options are briefly described in the following sections. For the most up to date information, use the --help subcommand option.\nStandard Commands Delete This simply deletes the stack (if present). 
Attempting to delete a non-existent stack is not treated as an error.\nDescribe The describe command line option produces an HTML summary (see graph.html for an example) of your Sparta service.\nThe report also includes the automatically generated CloudFormation template which can be helpful when diagnosing provisioning errors.\nExecute This command is used when the cross-compiled binary is provisioned in AWS Lambda. It is not (typically) applicable to the local development workflow.\nExplore The explore option creates a terminal GUI that supports interactive exploration of lambda functions deployed to AWS. This UI recursively searches for all *.json files in the source tree to populate the set of eligible events that can be submitted.\nProfile The profile command line option enters an interactive session where a previously profiled application can be locally visualized using snapshots posted to S3 and provided to a local pprof UI.\nProvision The provision option is the subcommand most likely to be used during development. It provisions the Sparta application to AWS Lambda.\nStatus The status option queries AWS for the current stack status and produces an optionally account-id redacted report. 
Stack outputs, tags, and other metadata are included in the status report:\n$ go run main.go status --redact INFO[0000] ════════════════════════════════════════════════ INFO[0000] ╔═╗╔═╗╔═╗╦═╗╔╦╗╔═╗ Version : 1.5.0 INFO[0000] ╚═╗╠═╝╠═╣╠╦╝ ║ ╠═╣ SHA : 8f199e1 INFO[0000] ╚═╝╩ ╩ ╩╩╚═ ╩ ╩ ╩ Go : go1.11.1 INFO[0000] ════════════════════════════════════════════════ INFO[0000] Service: MyHelloWorldStack-mweagle LinkFlags= Option=status UTC=\u0026#34;2018-10-14T12:28:18Z\u0026#34; INFO[0000] ════════════════════════════════════════════════ INFO[0001] StackId Id=\u0026#34;arn:aws:cloudformation:us-west-2:************:stack/MyHelloWorldStack-mweagle/5817dff0-c5f1-11e8-b43a-503ac9841a99\u0026#34; INFO[0001] Stack status State=UPDATE_COMPLETE INFO[0001] Created Time=\u0026#34;2018-10-02 03:14:59.127 +0000 UTC\u0026#34; INFO[0001] Last Update Time=\u0026#34;2018-10-06 14:20:40.267 +0000 UTC\u0026#34; INFO[0001] Tag io:gosparta:buildId=7ee3e1bc52f15c4a636e05061eaec7b748db22a9 Version The version option is a diagnostic command that prints the version of the Sparta framework embedded in the application.\n$ go run main.go version INFO[0000] ════════════════════════════════════════════════ INFO[0000] ╔═╗╔═╗╔═╗╦═╗╔╦╗╔═╗ Version : 1.5.0 INFO[0000] ╚═╗╠═╝╠═╣╠╦╝ ║ ╠═╣ SHA : 8f199e1 INFO[0000] ╚═╝╩ ╩ ╩╩╚═ ╩ ╩ ╩ Go : go1.11.1 INFO[0000] ════════════════════════════════════════════════ INFO[0000] Service: MyHelloWorldStack-mweagle LinkFlags= Option=version UTC=\u0026#34;2018-10-14T12:27:36Z\u0026#34; INFO[0000] ════════════════════════════════════════════════ "
   456  },
   457  {
   458  	"uri": "/reference/application/",
   459  	"title": "",
   460  	"tags": [],
   461  	"description": "",
   462  	"content": "Application Customization Sparta-based applications use the Cobra package to expose a rich set of command line options. This section describes:\n Custom Commands  In addition to custom flags, an application may register completely new commands. For example, to support alternative topologies or integrated automated acceptance tests as part of a CI/CD pipeline. To register a custom command, define a new cobra.Command and add it to the sparta.CommandLineOptions.Root command value. Ensure you use the xxxxE Cobra functions so that errors can be properly propagated. httpServerCommand := \u0026amp;cobra.Command{ Use: \u0026#34;httpServer\u0026#34;, Short: \u0026#34;Sample HelloWorld HTTP server\u0026#34;, Long: `Sample HelloWorld HTTP server that binds to port: ` + HTTPServerPort, RunE: func(cmd *cobra.\n Custom Flags  Some commands (eg: provision) may require additional options. For instance, your application\u0026rsquo;s provision logic may require VPC subnets or EC2 SSH Key Names. The default Sparta command line option flags may be extended and validated by building on the exposed Cobra command objects. Adding Flags To add a flag, use one of the pflag functions to register your custom flag with one of the standard CommandLineOption values. For example: // SSHKeyName is the SSH KeyName to use when provisioning new EC2 instance var SSHKeyName string func main() { // And add the SSHKeyName option to the provision step sparta.\n Managing Environments  It\u0026rsquo;s common for a single Sparta application to target multiple environments. For example: Development Staging Production Each environment is largely similar, but the application may need slightly different configuration in each context. To support this, Sparta uses Go\u0026rsquo;s conditional compilation support to ensure that configuration information is validated at build time. Conditional compilation is supported via the --tags/-t command line argument. 
This example will work through the SpartaConfig sample.\n CloudFormation Resources  In addition to per-lambda custom resources, a service may benefit from the ability to include a service-scoped Lambda backed CustomResource. Including a custom service scoped resource is a multi-step process. The code excerpts below are from the SpartaCustomResource sample application. 1. Resource Type The first step is to define a custom CloudFormation Resource Type //////////////////////////////////////////////////////////////////////////////// // 1 - Define the custom type const spartaHelloWorldResourceType = \u0026#34;Custom::sparta::HelloWorldResource\u0026#34; 2. Request Parameters The next step is to define the parameters that are supplied to the custom resource invocation.\n Custom Resources  In some circumstances your service may need to provision or access resources that fall outside the standard workflow. In this case you can use CloudFormation Lambda-backed CustomResources to create or access resources during your CloudFormation stack\u0026rsquo;s lifecycle. Sparta provides unchecked access to the CloudFormation resource lifecycle via the RequireCustomResource function. This function registers an AWS Lambda Function as a CloudFormation custom resource lifecycle handler. In this section we\u0026rsquo;ll walk through a sample user-defined custom resource and discuss how a custom resource\u0026rsquo;s outputs can be propagated to an application-level Sparta lambda function.\n Adding custom flags or commands is typically a prerequisite to supporting alternative topologies.\n"
   463  },
   464  {
   465  	"uri": "/reference/step/",
   466  	"title": "",
   467  	"tags": [],
   468  	"description": "",
   469  	"content": "Step Functions Sparta is designed to facilitate all serverless development strategies. While it provides an AWS Lambda optimized framework, it is also possible to deploy Lambda-free workflows comprised of AWS Step Functions.\nFor examples of AWS Step Function workflows, see the overviews at:\n  AWS Lambda-based Step Functions\n  AWS Fargate Step Functions\nReference information is provided in the services section.\n  "
   470  },
   471  {
   472  	"uri": "/reference/application/custom_lambda_resources/",
   473  	"title": "CloudFormation Resources",
   474  	"tags": [],
   475  	"description": "",
   476  	"content": "In addition to per-lambda custom resources, a service may benefit from the ability to include a service-scoped Lambda backed CustomResource.\nIncluding a custom service scoped resource is a multi-step process. The code excerpts below are from the SpartaCustomResource sample application.\n1. Resource Type The first step is to define a custom CloudFormation Resource Type\n//////////////////////////////////////////////////////////////////////////////// // 1 - Define the custom type const spartaHelloWorldResourceType = \u0026#34;Custom::sparta::HelloWorldResource\u0026#34; 2. Request Parameters The next step is to define the parameters that are supplied to the custom resource invocation. This is done via a struct that will be later embedded into the CustomResourceCommand.\n// SpartaCustomResourceRequest is what the UserProperties // should be set to in the CustomResource invocation type SpartaCustomResourceRequest struct { Message *gocf.StringExpr } 3. Command Handler With the parameters defined, define the CustomResourceCommand that is responsible for performing the external operations based on the specified request parameters.\n// SpartaHelloWorldResource is a simple POC showing how to create custom resources type SpartaHelloWorldResource struct { gocf.CloudFormationCustomResource SpartaCustomResourceRequest } // Create implements resource create func (command SpartaHelloWorldResource) Create(awsSession *session.Session, event *spartaAWSResource.CloudFormationLambdaEvent, logger *logrus.Logger) (map[string]interface{}, error) { requestPropsErr := json.Unmarshal(event.ResourceProperties, \u0026amp;command) if requestPropsErr != nil { return nil, requestPropsErr } logger.Info(\u0026#34;create: \u0026#34;, command.Message.Literal) return map[string]interface{}{ \u0026#34;Resource\u0026#34;: \u0026#34;Created message: \u0026#34; + command.Message.Literal, }, nil } // Update implements resource update func (command SpartaHelloWorldResource) 
Update(awsSession *session.Session, event *spartaAWSResource.CloudFormationLambdaEvent, logger *logrus.Logger) (map[string]interface{}, error) { return nil, nil } // Delete implements resource delete func (command SpartaHelloWorldResource) Delete(awsSession *session.Session, event *spartaAWSResource.CloudFormationLambdaEvent, logger *logrus.Logger) (map[string]interface{}, error) { return nil, nil } 4. Register Type Provider To make the new type available to Sparta\u0026rsquo;s internal CloudFormation template marshalling, register the new type via go-cloudformation.RegisterCustomResourceProvider:\nfunc init() { customResourceFactory := func(resourceType string) gocf.ResourceProperties { switch resourceType { case spartaHelloWorldResourceType: return \u0026amp;SpartaHelloWorldResource{} } return nil } gocf.RegisterCustomResourceProvider(customResourceFactory) } 5. Annotate Template The final step is to ensure the custom resource command is included in the Sparta binary that defines your service and then create an invocation of that command. The annotation is expressed as a ServiceDecoratorHookHandler that performs both operations as part of the general service build lifecycle\u0026hellip;\nfunc customResourceHooks() *sparta.WorkflowHooks { // Add the custom resource decorator  customResourceDecorator := func(context map[string]interface{}, serviceName string, template *gocf.Template, S3Bucket string, S3Key string, buildID string, awsSession *session.Session, noop bool, logger *logrus.Logger) error { // 1. Ensure the Lambda Function is registered  customResourceName, customResourceNameErr := sparta.EnsureCustomResourceHandler(serviceName, spartaHelloWorldResourceType, nil, // This custom action doesn\u0026#39;t need to access other AWS resources  []string{}, template, S3Bucket, S3Key, logger) if customResourceNameErr != nil { return customResourceNameErr } // 2. 
Create the request for the invocation of the lambda resource with  // parameters  spartaCustomResource := \u0026amp;SpartaHelloWorldResource{} spartaCustomResource.ServiceToken = gocf.GetAtt(customResourceName, \u0026#34;Arn\u0026#34;) spartaCustomResource.Message = gocf.String(\u0026#34;Custom resource activated!\u0026#34;) resourceInvokerName := sparta.CloudFormationResourceName(\u0026#34;SpartaCustomResource\u0026#34;, fmt.Sprintf(\u0026#34;%v\u0026#34;, S3Bucket), fmt.Sprintf(\u0026#34;%v\u0026#34;, S3Key)) // Add it  template.AddResource(resourceInvokerName, spartaCustomResource) return nil } // Add the decorator to the template  hooks := \u0026amp;sparta.WorkflowHooks{} hooks.ServiceDecorators = []sparta.ServiceDecoratorHookHandler{ sparta.ServiceDecoratorHookFunc(customResourceDecorator), } return hooks } Provide the hooks structure to MainEx to include this custom resource with your service\u0026rsquo;s provisioning lifecycle.\n"
   477  },
   478  {
   479  	"uri": "/reference/application/custom_resources/",
   480  	"title": "Custom Resources",
   481  	"tags": [],
   482  	"description": "",
   483  	"content": "In some circumstances your service may need to provision or access resources that fall outside the standard workflow. In this case you can use CloudFormation Lambda-backed CustomResources to create or access resources during your CloudFormation stack\u0026rsquo;s lifecycle.\nSparta provides unchecked access to the CloudFormation resource lifecycle via the RequireCustomResource function. This function registers an AWS Lambda Function as a CloudFormation custom resource lifecycle handler.\nIn this section we\u0026rsquo;ll walk through a sample user-defined custom resource and discuss how a custom resource\u0026rsquo;s outputs can be propagated to an application-level Sparta lambda function.\nComponents Defining a custom resource is a two-stage process, depending on whether your application-level lambda function requires access to the custom resource outputs:\n The user-defined AWS Lambda Function - This function defines your resource\u0026rsquo;s logic. The multiple return values are map[string]interface{}, error, which signify resource results and operation error, respectively. The LambdaAWSInfo struct which declares a dependency on your custom resource via the RequireCustomResource member function. Optional - A call to github.com/mweagle/Sparta/aws/cloudformation/resources.SendCloudFormationResponse to signal CloudFormation creation status. Optional - The template decorator that binds your CustomResource\u0026rsquo;s data results to the owning LambdaAWSInfo caller. Optional - A call from your standard Lambda\u0026rsquo;s function body to discover the CustomResource outputs via sparta.Discover().  Custom Resource Function A Custom Resource Function is a standard AWS Lambda Go function type that accepts a CloudFormationLambdaEvent input type. 
This type holds all information for the requested operation.\nThe multiple return values denote success with non-empty results, or an error.\nAs an example, we\u0026rsquo;ll use the following custom resource function:\nimport ( awsLambdaCtx \u0026#34;github.com/aws/aws-lambda-go/lambdacontext\u0026#34; spartaCFResources \u0026#34;github.com/mweagle/Sparta/aws/cloudformation/resources\u0026#34; ) // User defined λ-backed CloudFormation CustomResource func userDefinedCustomResource(ctx context.Context, event spartaCFResources.CloudFormationLambdaEvent) (map[string]interface{}, error) { logger, _ := ctx.Value(ContextKeyLogger).(*logrus.Logger) lambdaCtx, _ := awsLambdaCtx.FromContext(ctx) var opResults = map[string]interface{}{ \u0026#34;CustomResourceResult\u0026#34;: \u0026#34;Victory!\u0026#34;, } opErr := spartaCFResources.SendCloudFormationResponse(lambdaCtx, \u0026amp;event, opResults, nil, logger) return opResults, opErr } This function always succeeds and publishes a non-empty map consisting of a single key (CustomResourceResult) to CloudFormation. This value can be accessed by other CloudFormation resources.\nRequireCustomResource The next step is to associate this custom resource function with a previously created Sparta LambdaAWSInfo instance via RequireCustomResource. This function accepts:\n roleNameOrIAMRoleDefinition: The IAM role name or definition under which the custom resource function should be executed. Equivalent to the same argument in NewAWSLambda. userFunc: Custom resource function handler lambdaOptions: Lambda execution options. Equivalent to the same argument in NewAWSLambda. resourceProps: Arbitrary, optional properties that will be provided to the userFunc during execution.  
The multiple return values denote the logical, stable CloudFormation resource ID of the new custom resource, or an error if one occurred.\nFor example, our custom resource function above can be associated via:\n// Standard AWS λ function func helloWorld(ctx context.Context) (string, error) { return \u0026#34;Hello World\u0026#34;, nil } func ExampleLambdaAWSInfo_RequireCustomResource() { lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(helloWorld), helloWorld, sparta.IAMRoleDefinition{}) cfResName, _ := lambdaFn.RequireCustomResource(IAMRoleDefinition{}, userDefinedCustomResource, nil, nil) } Since our custom resource function doesn\u0026rsquo;t require any additional AWS resources, we provide an empty IAMRoleDefinition.\nThese two steps are sufficient to include your custom resource function in the CloudFormation stack lifecycle.\nIt\u0026rsquo;s possible to share state from the custom resource to a standard Sparta lambda function by annotating your Sparta lambda function\u0026rsquo;s metadata and then discovering it at execution time.\nOptional - Template Decorator To link these resources together, the first step is to include a TemplateDecorator that annotates your Sparta lambda function\u0026rsquo;s CloudFormation resource metadata. This function specifies which user defined output keys (CustomResourceResult in this example) you wish to make available during your lambda function\u0026rsquo;s execution.\nlambdaFn.Decorator = func(serviceName string, lambdaResourceName string, lambdaResource gocf.LambdaFunction, resourceMetadata map[string]interface{}, S3Bucket string, S3Key string, buildID string, cfTemplate *gocf.Template, context map[string]interface{}, logger *logrus.Logger) error { // Pass CustomResource outputs to the λ function  resourceMetadata[\u0026#34;CustomResource\u0026#34;] = gocf.GetAtt(cfResName, \u0026#34;CustomResourceResult\u0026#34;) return nil } The cfResName value is the CloudFormation resource name returned by RequireCustomResource. 
The template decorator specifies which of your CustomResourceFunction outputs should be discoverable during the parent lambda function\u0026rsquo;s execution time through a go-cloudformation version of Fn::GetAtt.\nOptional - Discovery Discovery is handled by sparta.Discover() which returns a DiscoveryInfo instance pointer containing the linked Custom Resource outputs. The calling Sparta lambda function can discover its own DiscoveryResource keyname via the top-level ResourceID field. Once found, the calling function then looks up the linked custom resource output via the Properties field using the keyname (CustomResource) provided in the previous template decorator.\nIn this example, the unmarshalled DiscoveryInfo struct looks like:\n{ \u0026#34;Discovery\u0026#34;: { \u0026#34;ResourceID\u0026#34;: \u0026#34;mainhelloWorldLambda837e49c53be175a0f75018a148ab6cd22841cbfb\u0026#34;, \u0026#34;Region\u0026#34;: \u0026#34;us-west-2\u0026#34;, \u0026#34;StackID\u0026#34;: \u0026#34;arn:aws:cloudformation:us-west-2:123412341234:stack/SpartaHelloWorld/70b28170-13f9-11e6-b0c7-50d5ca11b8d2\u0026#34;, \u0026#34;StackName\u0026#34;: \u0026#34;SpartaHelloWorld\u0026#34;, \u0026#34;Resources\u0026#34;: { \u0026#34;mainhelloWorldLambda837e49c53be175a0f75018a148ab6cd22841cbfb\u0026#34;: { \u0026#34;ResourceID\u0026#34;: \u0026#34;mainhelloWorldLambda837e49c53be175a0f75018a148ab6cd22841cbfb\u0026#34;, \u0026#34;Properties\u0026#34;: { \u0026#34;CustomResource\u0026#34;: \u0026#34;Victory!\u0026#34; }, \u0026#34;Tags\u0026#34;: {} } } }, \u0026#34;level\u0026#34;: \u0026#34;info\u0026#34;, \u0026#34;msg\u0026#34;: \u0026#34;Custom resource request\u0026#34;, \u0026#34;time\u0026#34;: \u0026#34;2016-05-07T14:13:37Z\u0026#34; } To look up the output, the calling function might do something like:\nconfiguration, _ := sparta.Discover() customResult := configuration.Resources[configuration.ResourceID].Properties[\u0026#34;CustomResource\u0026#34;] Wrapping Up CloudFormation Custom 
Resources are a powerful tool that can help pre-existing applications migrate to a Sparta application.\nNotes  Sparta uses Lambda-backed CustomResource functions, so they are subject to the same Lambda limits as application-level Sparta lambda functions. Returning an error from the CustomResourceFunction will result in a FAILED reason being returned in the CloudFormation response object.  "
   484  },
   485  {
   486  	"uri": "/reference/discovery/",
   487  	"title": "Discovery Service",
   488  	"tags": [],
   489  	"description": "",
   490  	"content": "The ability to provision dynamic infrastructure (see also the SES Event Source Example) as part of a Sparta application creates a need to discover those resources at lambda execution time.\nSparta exposes this functionality via sparta.Discover. This function returns information about the current stack (eg, name, region, ID) as well as metadata about the immediate dependencies of the calling go lambda function.\nThe following sections walk through provisioning a S3 bucket, declaring an explicit dependency on that resource, and then discovering the resource at lambda execution time. It is extracted from appendDynamicS3BucketLambda in the SpartaApplication source.\nIf you haven\u0026rsquo;t already done so, please review the Dynamic Infrastructure section for background on dynamic infrastructure provisioning.\nDiscovery For reference, we provision an S3 bucket and declare an explicit dependency with the code below. Because our gocf.S3Bucket{} struct uses a zero-length BucketName property, CloudFormation will dynamically assign one.\ns3BucketResourceName := sparta.CloudFormationResourceName(\u0026#34;S3DynamicBucket\u0026#34;) lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoS3DynamicBucketEvent), echoS3DynamicBucketEvent, sparta.IAMRoleDefinition{}) lambdaFn.Permissions = append(lambdaFn.Permissions, sparta.S3Permission{ BasePermission: sparta.BasePermission{ SourceArn: gocf.Ref(s3BucketResourceName), }, Events: []string{\u0026#34;s3:ObjectCreated:*\u0026#34;, \u0026#34;s3:ObjectRemoved:*\u0026#34;}, }) lambdaFn.DependsOn = append(lambdaFn.DependsOn, s3BucketResourceName) // Add permission s.t. 
the lambda function could read from the S3 bucket lambdaFn.RoleDefinition.Privileges = append(lambdaFn.RoleDefinition.Privileges, sparta.IAMRolePrivilege{ Actions: []string{\u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:HeadObject\u0026#34;}, Resource: spartaCF.S3AllKeysArnForBucket(gocf.Ref(s3BucketResourceName)), }) lambdaFn.Decorator = func(serviceName string, lambdaResourceName string, lambdaResource gocf.LambdaFunction, resourceMetadata map[string]interface{}, S3Bucket string, S3Key string, buildID string, template *gocf.Template, context map[string]interface{}, logger *logrus.Logger) error { cfResource := template.AddResource(s3BucketResourceName, \u0026amp;gocf.S3Bucket{ AccessControl: gocf.String(\u0026#34;PublicRead\u0026#34;), Tags: \u0026amp;gocf.TagList{gocf.Tag{ Key: gocf.String(\u0026#34;SpecialKey\u0026#34;), Value: gocf.String(\u0026#34;SpecialValue\u0026#34;), }, }, }) cfResource.DeletionPolicy = \u0026#34;Delete\u0026#34; return nil } The key to sparta.Discover() is the DependsOn slice value.\nTemplate Marshaling \u0026amp; Decoration By default, Sparta uses CloudFormation to update service state. During template marshaling, Sparta scans for DependsOn relationships and propagates information (immediate-children only) across CloudFormation resource definitions. Most importantly, this information includes Ref and any other outputs of referred resources. This information then becomes available as a DiscoveryInfo value returned by sparta.Discover(). 
Behind the scenes, Sparta\nSample DiscoveryInfo In our example, a DiscoveryInfo from a sample stack might be:\n{ \u0026#34;ResourceID\u0026#34;: \u0026#34;mainechoS3DynamicBucketEventLambda41ca034273726cf36154137cbf8d7e5bd45f863a\u0026#34;, \u0026#34;Region\u0026#34;: \u0026#34;us-west-2\u0026#34;, \u0026#34;StackID\u0026#34;: \u0026#34;arn:aws:cloudformation:us-west-2:123412341234:stack/SpartaApplication-mweagle/d4e07d80-03eb-11e8-b6fd-50d5ca789e4a\u0026#34;, \u0026#34;StackName\u0026#34;: \u0026#34;SpartaApplication-mweagle\u0026#34;, \u0026#34;Resources\u0026#34;: { \u0026#34;S3DynamicBucket62b0e7a664dc29c1c4fbe231fbcef30f8463aaa3\u0026#34;: { \u0026#34;ResourceID\u0026#34;: \u0026#34;S3DynamicBucket62b0e7a664dc29c1c4fbe231fbcef30f8463aaa3\u0026#34;, \u0026#34;ResourceRef\u0026#34;: \u0026#34;spartaapplication-mweagl-s3dynamicbucket62b0e7a66-194zzvtfk757a\u0026#34;, \u0026#34;ResourceType\u0026#34;: \u0026#34;AWS::S3::Bucket\u0026#34;, \u0026#34;Properties\u0026#34;: { \u0026#34;DomainName\u0026#34;: \u0026#34;spartaapplication-mweagl-s3dynamicbucket62b0e7a66-194zzvtfk757a.s3.amazonaws.com\u0026#34;, \u0026#34;WebsiteURL\u0026#34;: \u0026#34;http://spartaapplication-mweagl-s3dynamicbucket62b0e7a66-194zzvtfk757a.s3-website-us-west-2.amazonaws.com\u0026#34; } } } } This JSON data is Base64 encoded and published into the Lambda function\u0026rsquo;s Environment using the SPARTA_DISCOVERY_INFO key. 
The sparta.Discover() function is responsible for accessing the encoded discovery information:\nconfiguration, _ := sparta.Discover() bucketName := \u0026#34;\u0026#34; for _, eachResource := range configuration.Resources { if eachResource.ResourceType == \u0026#34;AWS::S3::Bucket\u0026#34; { bucketName = eachResource.ResourceRef } } The Properties object includes resource-specific Fn::GetAtt outputs (see each resource type\u0026rsquo;s documentation for the complete set).\nWrapping Up Combined with dynamic infrastructure, sparta.Discover() enables a Sparta service to define its entire AWS infrastructure requirements. Coupling application logic with infrastructure requirements moves a service towards being completely self-contained and in the direction of immutable infrastructure.\nNotes  sparta.Discover() only succeeds within a Sparta-compliant lambda function call block.  Call-site restrictions are validated in the discovery_tests.go tests.    "
   491  },
   492  {
   493  	"uri": "/reference/docker/",
   494  	"title": "Docker",
   495  	"tags": [],
   496  	"description": "",
   497  	"content": " TODO\n Docker Support Document the SpartaDocker project.\n"
   498  },
   499  {
   500  	"uri": "/reference/hybrid_topologies/",
   501  	"title": "Hybrid Topologies",
   502  	"tags": [],
   503  	"description": "",
   504  	"content": "At a broad level, AWS Lambda represents a new level of compute abstraction for services. Developers don\u0026rsquo;t immediately concern themselves with HA topologies, configuration management, capacity planning, or many of the other areas traditionally handled by operations. These are handled by the vendor supplied execution environment.\nHowever, Lambda is a relatively new technology and is not ideally suited to certain types of tasks. For example, given the current Lambda limits, the following task types might better be handled by \u0026ldquo;legacy\u0026rdquo; AWS services:\n Long running tasks Tasks with significant disk space requirements Large HTTP(S) I/O tasks  It may also make sense to integrate EC2 when:\n Applications are being gradually decomposed into Lambda functions Latency-sensitive request paths can\u0026rsquo;t afford cold container startup times Price/performance justifies using EC2 Using EC2 as a failover for system-wide Lambda outages  For such cases, Sparta supports running the exact same binary on EC2. This section describes how to create a single Sparta service that publishes a function via AWS Lambda and EC2 as part of the same application codebase. It\u0026rsquo;s based on the SpartaOmega project.\nMixed Topology Deploying your application to a mixed topology is accomplished by combining existing Sparta features. There is no \u0026ldquo;make mixed\u0026rdquo; command line option.\nAdd Custom Command Line Option The first step is to add a custom command line option. This command option will be used when your binary is running in \u0026ldquo;mixed topology\u0026rdquo; mode. 
The SpartaOmega project starts up a localhost HTTP server, so we\u0026rsquo;ll add a httpServer command line option with:\n// Custom command to startup a simple HelloWorld HTTP server httpServerCommand := \u0026amp;cobra.Command{ Use: \u0026#34;httpServer\u0026#34;, Short: \u0026#34;Sample HelloWorld HTTP server\u0026#34;, Long: `Sample HelloWorld HTTP server that binds to port: ` + HTTPServerPort, RunE: func(cmd *cobra.Command, args []string) error { http.HandleFunc(\u0026#34;/\u0026#34;, helloWorldResource) return http.ListenAndServe(fmt.Sprintf(\u0026#34;:%d\u0026#34;, HTTPServerPort), nil) }, } sparta.CommandLineOptions.Root.AddCommand(httpServerCommand) Our command doesn\u0026rsquo;t accept any additional flags. If your command needs additional user flags, consider adding a ParseOptions call to validate they are properly set.\nCreate CloudInit Userdata The next step is to write a user-data script that will be used to configure your EC2 instance(s) at startup. Your script is likely to differ from the one below, but at a minimum it will include code to download and unzip the archive containing your Sparta binary.\n#!/bin/bash -xe SPARTA_OMEGA_BINARY_PATH=/home/ubuntu/{{.SpartaBinaryName}} ################################################################################ # # Tested on Ubuntu 16.04 # # AMI: ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20160516.1 (ami-06b94666) if [ ! 
-f \u0026#34;/home/ubuntu/userdata.sh\u0026#34; ] then curl -vs http://169.254.169.254/latest/user-data -o /home/ubuntu/userdata.sh chmod +x /home/ubuntu/userdata.sh fi # Install everything service supervisor stop || true apt-get update -y apt-get upgrade -y apt-get install supervisor awscli unzip git -y ################################################################################ # Our own binary aws s3 cp s3://{{ .S3Bucket }}/{{ .S3Key }} /home/ubuntu/application.zip unzip -o /home/ubuntu/application.zip -d /home/ubuntu chmod +x $SPARTA_OMEGA_BINARY_PATH ################################################################################ # SUPERVISOR # REF: http://supervisord.org/ # Clean out secondary directory mkdir -pv /etc/supervisor/conf.d SPARTA_OMEGA_SUPERVISOR_CONF=\u0026#34;[program:spartaomega] command=$SPARTA_OMEGA_BINARY_PATH httpServer numprocs=1 directory=/tmp priority=999 autostart=true autorestart=unexpected startsecs=10 startretries=3 exitcodes=0,2 stopsignal=TERM stopwaitsecs=10 stopasgroup=false killasgroup=false user=ubuntu stdout_logfile=/var/log/spartaomega.log stdout_logfile_maxbytes=1MB stdout_logfile_backups=10 stdout_capture_maxbytes=1MB stdout_events_enabled=false redirect_stderr=false stderr_logfile=spartaomega.err.log stderr_logfile_maxbytes=1MB stderr_logfile_backups=10 stderr_capture_maxbytes=1MB stderr_events_enabled=false \u0026#34; echo \u0026#34;$SPARTA_OMEGA_SUPERVISOR_CONF\u0026#34; \u0026gt; /etc/supervisor/conf.d/spartaomega.conf # Patch up the directory chown -R ubuntu:ubuntu /home/ubuntu # Startup Supervisor service supervisor restart || service supervisor start The script uses the command line option (command=$SPARTA_OMEGA_BINARY_PATH httpServer) that was defined in the first step.\nIt also uses the S3Bucket and S3Key properties that Sparta creates during the build and provides to your decorator function (next section).\nNotes The script is using text/template markup to expand properties known at build time. 
Because this content will be parsed by ConvertToTemplateExpression (next section), it\u0026rsquo;s also possible to use Fn::Join compatible JSON serializations (single line only) to reference properties that are known only during CloudFormation provision time.\nFor example, if we were also provisioning a PostgreSQL instance and needed to dynamically discover the endpoint address, a shell script variable could be assigned via:\nPOSTGRES_ADDRESS={ \u0026#34;Fn::GetAtt\u0026#34; : [ \u0026#34;{{ .DBInstanceResourceName }}\u0026#34; , \u0026#34;Endpoint.Address\u0026#34; ] } This expression combines both a build-time variable (DBInstanceResourceName: the CloudFormation resource name) and a provision time one (Endpoint.Address: dynamically assigned by the CloudFormation RDS Resource).\nDecorate Topology The final step is to use a TemplateDecorator to tie everything together. A decorator can annotate the CloudFormation template with any supported go-cloudformation resource. For this example, we\u0026rsquo;ll create a single AutoScalingGroup and EC2 instance that\u0026rsquo;s bootstrapped with our custom userdata.sh script.\n// The CloudFormation template decorator that inserts all the other // AWS components we need to support this deployment... func lambdaDecorator(customResourceAMILookupName string) sparta.TemplateDecorator { return func(serviceName string, lambdaResourceName string, lambdaResource gocf.LambdaFunction, resourceMetadata map[string]interface{}, S3Bucket string, S3Key string, buildID string, template *gocf.Template, context map[string]interface{}, logger *logrus.Logger) error { // Create the launch configuration with Metadata to download the ZIP file, unzip it \u0026amp; launch the  // golang binary...  
ec2SecurityGroupResourceName := sparta.CloudFormationResourceName(\u0026#34;SpartaOmegaSecurityGroup\u0026#34;, \u0026#34;SpartaOmegaSecurityGroup\u0026#34;) asgLaunchConfigurationName := sparta.CloudFormationResourceName(\u0026#34;SpartaOmegaASGLaunchConfig\u0026#34;, \u0026#34;SpartaOmegaASGLaunchConfig\u0026#34;) asgResourceName := sparta.CloudFormationResourceName(\u0026#34;SpartaOmegaASG\u0026#34;, \u0026#34;SpartaOmegaASG\u0026#34;) ec2InstanceRoleName := sparta.CloudFormationResourceName(\u0026#34;SpartaOmegaEC2InstanceRole\u0026#34;, \u0026#34;SpartaOmegaEC2InstanceRole\u0026#34;) ec2InstanceProfileName := sparta.CloudFormationResourceName(\u0026#34;SpartaOmegaEC2InstanceProfile\u0026#34;, \u0026#34;SpartaOmegaEC2InstanceProfile\u0026#34;) //////////////////////////////////////////////////////////////////////////////  // 1 - Create the security group for the SpartaOmega EC2 instance  ec2SecurityGroup := \u0026amp;gocf.EC2SecurityGroup{ GroupDescription: gocf.String(\u0026#34;SpartaOmega Security Group\u0026#34;), SecurityGroupIngress: \u0026amp;gocf.EC2SecurityGroupRuleList{ gocf.EC2SecurityGroupRule{ CidrIp: gocf.String(\u0026#34;0.0.0.0/0\u0026#34;), IpProtocol: gocf.String(\u0026#34;tcp\u0026#34;), FromPort: gocf.Integer(HTTPServerPort), ToPort: gocf.Integer(HTTPServerPort), }, gocf.EC2SecurityGroupRule{ CidrIp: gocf.String(\u0026#34;0.0.0.0/0\u0026#34;), IpProtocol: gocf.String(\u0026#34;tcp\u0026#34;), FromPort: gocf.Integer(22), ToPort: gocf.Integer(22), }, }, } template.AddResource(ec2SecurityGroupResourceName, ec2SecurityGroup) //////////////////////////////////////////////////////////////////////////////  // 2 - Create the ASG and associate the userdata with the EC2 init  // EC2 Instance Role...  
statements := sparta.CommonIAMStatements.Core // Add the statement that allows us to fetch the S3 object with this compiled  // binary  statements = append(statements, spartaIAM.PolicyStatement{ Effect: \u0026#34;Allow\u0026#34;, Action: []string{\u0026#34;s3:GetObject\u0026#34;}, Resource: gocf.String(fmt.Sprintf(\u0026#34;arn:aws:s3:::%s/%s\u0026#34;, S3Bucket, S3Key)), }) iamPolicyList := gocf.IAMPoliciesList{} iamPolicyList = append(iamPolicyList, gocf.IAMPolicies{ PolicyDocument: sparta.ArbitraryJSONObject{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: statements, }, PolicyName: gocf.String(\u0026#34;EC2Policy\u0026#34;), }, ) ec2InstanceRole := \u0026amp;gocf.IAMRole{ AssumeRolePolicyDocument: sparta.AssumePolicyDocument, Policies: \u0026amp;iamPolicyList, } template.AddResource(ec2InstanceRoleName, ec2InstanceRole) // Create the instance profile  ec2InstanceProfile := \u0026amp;gocf.IAMInstanceProfile{ Path: gocf.String(\u0026#34;/\u0026#34;), Roles: []gocf.Stringable{gocf.Ref(ec2InstanceRoleName).String()}, } template.AddResource(ec2InstanceProfileName, ec2InstanceProfile) //Now setup the properties map, expand the userdata, and attach it...  
userDataProps := map[string]interface{}{ \u0026#34;S3Bucket\u0026#34;: S3Bucket, \u0026#34;S3Key\u0026#34;: S3Key, \u0026#34;ServiceName\u0026#34;: serviceName, } userDataTemplateInput, userDataTemplateInputErr := resources.FSString(false, \u0026#34;/resources/source/userdata.sh\u0026#34;) if nil != userDataTemplateInputErr { return userDataTemplateInputErr } templateReader := strings.NewReader(userDataTemplateInput) userDataExpression, userDataExpressionErr := spartaCF.ConvertToTemplateExpression(templateReader, userDataProps) if nil != userDataExpressionErr { return userDataExpressionErr } logger.WithFields(logrus.Fields{ \u0026#34;Parameters\u0026#34;: userDataProps, \u0026#34;Expanded\u0026#34;: userDataExpression, }).Debug(\u0026#34;Expanded userdata\u0026#34;) asgLaunchConfigurationResource := \u0026amp;gocf.AutoScalingLaunchConfiguration{ ImageId: gocf.GetAtt(customResourceAMILookupName, \u0026#34;HVM\u0026#34;), InstanceType: gocf.String(\u0026#34;t2.micro\u0026#34;), KeyName: gocf.String(SSHKeyName), IamInstanceProfile: gocf.Ref(ec2InstanceProfileName).String(), UserData: gocf.Base64(userDataExpression), SecurityGroups: gocf.StringList(gocf.GetAtt(ec2SecurityGroupResourceName, \u0026#34;GroupId\u0026#34;)), } launchConfigResource := template.AddResource(asgLaunchConfigurationName, asgLaunchConfigurationResource) launchConfigResource.DependsOn = append(launchConfigResource.DependsOn, customResourceAMILookupName) // Create the ASG  asgResource := \u0026amp;gocf.AutoScalingAutoScalingGroup{ // Empty Region is equivalent to all region AZs  // Ref: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-getavailabilityzones.html  AvailabilityZones: gocf.GetAZs(gocf.String(\u0026#34;\u0026#34;)), LaunchConfigurationName: gocf.Ref(asgLaunchConfigurationName).String(), MaxSize: gocf.String(\u0026#34;1\u0026#34;), MinSize: gocf.String(\u0026#34;1\u0026#34;), } template.AddResource(asgResourceName, asgResource) return nil } } There 
are a few things to point out in this function:\n Security Groups - The decorator adds an ingress rule so that the endpoint is publicly accessible:  gocf.EC2SecurityGroupRule{ CidrIp: gocf.String(\u0026#34;0.0.0.0/0\u0026#34;), IpProtocol: gocf.String(\u0026#34;tcp\u0026#34;), FromPort: gocf.Integer(HTTPServerPort), ToPort: gocf.Integer(HTTPServerPort), }  IAM Role - In order to download the S3 archive, the EC2 IAM Policy includes a custom privilege:  statements = append(statements, spartaIAM.PolicyStatement{ Effect: \u0026#34;Allow\u0026#34;, Action: []string{\u0026#34;s3:GetObject\u0026#34;}, Resource: gocf.String(fmt.Sprintf(\u0026#34;arn:aws:s3:::%s/%s\u0026#34;, S3Bucket, S3Key)), })  UserData Marshaling - Marshaling the userdata.sh script is handled by ConvertToTemplateExpression:  // Now setup the properties map, expand the userdata, and attach it...  userDataProps := map[string]interface{}{ \u0026#34;S3Bucket\u0026#34;: S3Bucket, \u0026#34;S3Key\u0026#34;: S3Key, \u0026#34;ServiceName\u0026#34;: serviceName, } // ...  templateReader := strings.NewReader(userDataTemplateInput) userDataExpression, userDataExpressionErr := spartaCF.ConvertToTemplateExpression(templateReader, userDataProps) // ...  asgLaunchConfigurationResource := \u0026amp;gocf.AutoScalingLaunchConfiguration{ // ...  UserData: gocf.Base64(userDataExpression), // ...  }  Custom Command Line Flags - To externalize the SSH Key Name, the binary expects a custom flag (not shown above):  // And add the SSHKeyName option to the provision step  sparta.CommandLineOptions.Provision.Flags().StringVarP(\u0026amp;SSHKeyName, \u0026#34;key\u0026#34;, \u0026#34;k\u0026#34;, \u0026#34;\u0026#34;, \u0026#34;SSH Key Name to use for EC2 instances\u0026#34;) This value is used as an input to the AutoScalingLaunchConfiguration value:\nasgLaunchConfigurationResource := \u0026amp;gocf.AutoScalingLaunchConfiguration{ // ...  KeyName: gocf.String(SSHKeyName), // ...  
} Result Deploying your Go application using a mixed topology enables your \u0026ldquo;Lambda\u0026rdquo; endpoint to be addressable via AWS Lambda and standard HTTP.\nHTTP Access $ curl -vs http://ec2-52-26-146-138.us-west-2.compute.amazonaws.com:9999/ * Trying 52.26.146.138... * Connected to ec2-52-26-146-138.us-west-2.compute.amazonaws.com (52.26.146.138) port 9999 (#0) \u0026gt; GET / HTTP/1.1 \u0026gt; Host: ec2-52-26-146-138.us-west-2.compute.amazonaws.com:9999 \u0026gt; User-Agent: curl/7.43.0 \u0026gt; Accept: */* \u0026gt; \u0026lt; HTTP/1.1 200 OK \u0026lt; Date: Fri, 10 Jun 2016 14:58:15 GMT \u0026lt; Content-Length: 29 \u0026lt; Content-Type: text/plain; charset=utf-8 \u0026lt; * Connection #0 to host ec2-52-26-146-138.us-west-2.compute.amazonaws.com left intact Hello world from SpartaOmega! Lambda Access Conclusion Mixed topology deployment is a powerful feature that enables your application to choose the right set of resources. It provides a way for services to non-destructively migrate to AWS Lambda or shift existing Lambda workloads to alternative compute resources.\nNotes  The SpartaOmega sample application uses supervisord for process monitoring. The current userdata.sh script isn\u0026rsquo;t sufficient to reconfigure in response to CloudFormation update events. Production systems should also include cfn-hup listeners. Production deployments may consider CodeDeploy to assist in HA binary rollover. Forwarding CloudWatch Logs is not handled by this sample. Consider using HTTPS \u0026amp; Let\u0026rsquo;s Encrypt on your EC2 instances.  "
   505  },
   506  {
   507  	"uri": "/reference/apigateway/s3site/",
   508  	"title": "S3 Sites with CORS",
   509  	"tags": [],
   510  	"description": "",
   511  	"content": "Sparta supports provisioning an S3-backed static website as part of provisioning. We\u0026rsquo;ll walk through provisioning a minimal Bootstrap website that accesses API Gateway lambda functions provisioned by a single service in this example.\nThe source for this is the SpartaHTML example application.\nLambda Definition We\u0026rsquo;ll start by creating a very simple lambda function:\nimport ( spartaAPIGateway \u0026#34;github.com/mweagle/Sparta/aws/apigateway\u0026#34; spartaAWSEvents \u0026#34;github.com/mweagle/Sparta/aws/events\u0026#34; ) type helloWorldResponse struct { Message string Request spartaAWSEvents.APIGatewayRequest } //////////////////////////////////////////////////////////////////////////////// // Hello world event handler func helloWorld(ctx context.Context, gatewayEvent spartaAWSEvents.APIGatewayRequest) (*spartaAPIGateway.Response, error) { logger, loggerOk := ctx.Value(sparta.ContextKeyLogger).(*logrus.Logger) if loggerOk { logger.Info(\u0026#34;Hello world structured log message\u0026#34;) } // Return a message, together with the incoming input...  return spartaAPIGateway.NewResponse(http.StatusOK, \u0026amp;helloWorldResponse{ Message: fmt.Sprintf(\u0026#34;Hello world 🌏\u0026#34;), Request: gatewayEvent, }), nil } This lambda function returns a reply that consists of the inbound request plus a sample message. See the API Gateway examples for more information.\nAPI Gateway The next step is to create an API Gateway instance and Stage, so that the API will be publicly available.\napiStage := sparta.NewStage(\u0026#34;v1\u0026#34;) apiGateway := sparta.NewAPIGateway(\u0026#34;SpartaHTML\u0026#34;, apiStage) Since we want to be able to access this API from another domain (the one provisioned by the S3 bucket), we\u0026rsquo;ll need to enable CORS as well:\n// Enable CORS s.t. 
the S3 site can access the resources apiGateway.CORSEnabled = true Finally, we register the helloWorld lambda function with an API Gateway resource:\nfunc spartaLambdaFunctions(api *sparta.API) []*sparta.LambdaAWSInfo { var lambdaFunctions []*sparta.LambdaAWSInfo lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(helloWorld), helloWorld, sparta.IAMRoleDefinition{}) if nil != api { apiGatewayResource, _ := api.NewResource(\u0026#34;/hello\u0026#34;, lambdaFn) _, err := apiGatewayResource.NewMethod(\u0026#34;GET\u0026#34;, http.StatusOK) if nil != err { panic(\u0026#34;Failed to create /hello resource\u0026#34;) } } return append(lambdaFunctions, lambdaFn) } S3 Site The next part is to define the S3 site resources via sparta.NewS3Site(localFilePath). The localFilePath parameter typically points to a directory, which will be:\n Recursively ZIP\u0026rsquo;d Posted to S3 alongside the Lambda code archive and CloudFormation Templates Dynamically unpacked by a CloudFormation CustomResource during provision to a new S3 bucket.  Provision Putting it all together, our main() function looks like:\n//////////////////////////////////////////////////////////////////////////////// // Main func main() { // Register the function with the API Gateway  apiStage := sparta.NewStage(\u0026#34;v1\u0026#34;) apiGateway := sparta.NewAPIGateway(\u0026#34;SpartaHTML\u0026#34;, apiStage) // Enable CORS s.t. 
the S3 site can access the resources  apiGateway.CORSEnabled = true // Provision a new S3 bucket with the resources in the supplied subdirectory  s3Site, _ := sparta.NewS3Site(\u0026#34;./resources\u0026#34;) // Deploy it  sparta.Main(\u0026#34;SpartaHTML\u0026#34;, fmt.Sprintf(\u0026#34;Sparta app that provisions a CORS-enabled API Gateway together with an S3 site\u0026#34;), spartaLambdaFunctions(apiGateway), apiGateway, s3Site) } which can be provisioned using the standard command line option.\nThe Outputs section of the provision command includes the hostname of our new S3 site:\nINFO[0092] ──────────────────────────────────────────────── INFO[0092] Stack Outputs INFO[0092] ──────────────────────────────────────────────── INFO[0092] S3SiteURL Description=\u0026quot;S3 Website URL\u0026quot; Value=\u0026quot;http://spartahtml-mweagle-s3site89c05c24a06599753eb3ae4e-9kil6qlqk0yt.s3-website-us-west-2.amazonaws.com\u0026quot; INFO[0092] APIGatewayURL Description=\u0026quot;API Gateway URL\u0026quot; Value=\u0026quot;https://ksuo0qlc3m.execute-api.us-west-2.amazonaws.com/v1\u0026quot; INFO[0092] ──────────────────────────────────────────────── Open your browser to the S3SiteURL value (eg: http://spartahtml-mweagle-s3site89c05c24a06599753eb3ae4e-9kil6qlqk0yt.s3-website-us-west-2.amazonaws.com) and view the deployed site.\nDiscover An open issue is how to communicate the dynamically assigned API Gateway hostname to the dynamically provisioned S3 site.\nAs part of expanding the ZIP archive to a target S3 bucket, Sparta also creates a MANIFEST.json discovery file with discovery information. 
If your application has provisioned an APIGateway this JSON file will include that dynamically assigned URL as in:\n MANIFEST.json  { \u0026#34;APIGatewayURL\u0026#34;: { \u0026#34;Description\u0026#34;: \u0026#34;API Gateway URL\u0026#34;, \u0026#34;Value\u0026#34;: \u0026#34;https://ksuo0qlc3m.execute-api.us-west-2.amazonaws.com/v1\u0026#34; } } Notes  See the Medium post for an additional walk through this sample.  "
   512  },
   513  {
   514  	"uri": "/reference/operations/",
   515  	"title": "Operations",
   516  	"tags": [],
   517  	"description": "",
   518  	"content": "Operations The real agenda of #serverless @AWSreInvent @awscloud #AWSreInvent2018 pic.twitter.com/wNHJYsbaTt\n\u0026mdash; Ran Ribenzaft (@ranrib) November 26, 2018   Magefiles  Magefile To support cross-platform development and usage, Sparta uses magefiles rather than Makefiles. Most projects can start with the magefile.go sample below. The Magefiles provide a discoverable CLI, but are entirely optional. go run main.go XXXX style invocation remains available as well. Default Sparta magefile.go This magefile.go can be used, unchanged, for most standard Sparta projects. // +build mage // File: magefile.go package main import ( spartaMage \u0026#34;github.\n CloudWatch Alarms  The CloudWatchErrorAlarmDecorator associates a CloudWatch alarm and destination with your Lambda function. Sample usage: lambdaFn, _ := sparta.NewAWSLambda(\u0026#34;Hello World\u0026#34;, helloWorld, sparta.IAMRoleDefinition{}) lambdaFn.Decorators = []sparta.TemplateDecoratorHandler{ spartaDecorators.CloudWatchErrorAlarmDecorator(1, 1, 1, gocf.String(\u0026#34;MY_SNS_ARN\u0026#34;)), }  CloudWatch Dashboard  The DashboardDecorator creates a single CloudWatch Dashboard that summarizes your stack\u0026rsquo;s behavior. Sample usage: func workflowHooks(connections *service.Connections, lambdaFunctions []*sparta.LambdaAWSInfo, websiteURL *gocf.StringExpr) *sparta.WorkflowHooks { // Setup the DashboardDecorator lambda hook workflowHooks := \u0026amp;sparta.WorkflowHooks{ ServiceDecorators: []sparta.ServiceDecoratorHookHandler{ spartaDecorators.DashboardDecorator(lambdaFunctions, 60), serviceResourceDecorator(connections, websiteURL), }, } return workflowHooks } A sample dashboard is generated for the SpartaGeekwire project. Related to this, see the recently announced AWS Lambda Application Dashboard.\n CodeDeploy Service Update   TODO: Document the CodeDeployServiceUpdateDecorator decorator. See also the Deployment Strategy page.  
CI/CD   TODO: Document the SpartaCodePipeline example. Also see the Medium Post  Deployment Strategies   Document the SpartaSafeDeploy example.  Metrics Publisher  AWS Lambda is tightly integrated with other AWS services and provides excellent opportunities for improving your service\u0026rsquo;s observability posture. Sparta includes a CloudWatch Metrics publisher that periodically publishes metrics to CloudWatch. This periodic task publishes environment-level metrics that have been detected by the gopsutil package. Metrics include: CPU Percent used Disk Percent used Host Uptime (milliseconds) Load Load1 (no units) Load5 (no units) Load15 (no units) Network NetBytesSent (bytes) NetBytesRecv (bytes) NetErrin (count) NetErrout (count) You can provide an optional map[string]string set of dimensions to which the metrics should be published.\n Profiling  One of Lambda\u0026rsquo;s biggest strengths, its ability to automatically scale across ephemeral containers in response to increased load, also creates one of its biggest problems: observability. The traditional set of tools used to identify performance bottlenecks are no longer valid, as there is no host into which one can SSH and interactively interrogate. Identifying performance bottlenecks is even more significant due to the Lambda pricing model, where idle time often directly translates into increased costs.\n "
   519  },
   520  {
   521  	"uri": "/reference/decorators/dynamic_infrastructure/",
   522  	"title": "Dynamic Infrastructure",
   523  	"tags": [],
   524  	"description": "",
   525  	"content": "In addition to provisioning AWS Lambda functions, Sparta supports the creation of other CloudFormation Resources. This enables a service to move towards immutable infrastructure, where the service and its infrastructure requirements are treated as a logical unit.\nFor instance, consider the case where two developers are working in the same AWS account.\n Developer 1 is working on analyzing text documents.  Their lambda code is triggered in response to uploading sample text documents to S3.   Developer 2 is working on image recognition.  Their lambda code is triggered in response to uploading sample images to S3.    graph LR sharedBucket[S3 Bucket] dev1Lambda[Dev1 LambdaCode] dev2Lambda[Dev2 LambdaCode] sharedBucket -- dev1Lambda sharedBucket -- dev2Lambda  Using a shared, externally provisioned S3 bucket has several impacts:\n Adding conditionals in each lambda codebase to scope valid processing targets. Ambiguity regarding which codebase handled an event. Infrastructure ownership/lifespan management. When a service is decommissioned, its infrastructure requirements may be automatically decommissioned as well.  Eg, \u0026ldquo;Is this S3 bucket in use by any service?\u0026rdquo;.   Overly permissive IAM roles due to static Arns.  Eg, \u0026ldquo;Arn hugging\u0026rdquo;.   Contention updating the shared bucket\u0026rsquo;s notification configuration.  Alternatively, each developer could provision and manage disjoint topologies:\ngraph LR dev1S3Bucket[Dev1 S3 Bucket] dev1Lambda[Dev1 LambdaCode] dev2S3Bucket[Dev2 S3 Bucket] dev2Lambda[Dev2 LambdaCode] dev1S3Bucket -- dev1Lambda dev2S3Bucket -- dev2Lambda  Enabling each developer to create other AWS resources also means more complex topologies can be expressed. 
These topologies can benefit from CloudWatch monitoring (eg, per-Lambda Metrics) without the need to add custom metrics.\ngraph LR dev1S3Bucket[Dev1 S3 Bucket] dev1Lambda[Dev1 LambdaCode] dev2S3Bucket[Dev2 S3 Images Bucket] dev2PNGLambda[Dev2 PNG LambdaCode] dev2JPGLambda[Dev2 JPEG LambdaCode] dev2TIFFLambda[Dev2 TIFF LambdaCode] dev2S3VideoBucket[Dev2 VideoBucket] dev2VideoLambda[Dev2 Video LambdaCode] dev1S3Bucket -- dev1Lambda dev2S3Bucket --|SuffixFilter=*.PNG|dev2PNGLambda dev2S3Bucket --|SuffixFilter=*.JPEG,*.JPG|dev2JPGLambda dev2S3Bucket --|SuffixFilter=*.TIFF|dev2TIFFLambda dev2S3VideoBucket --dev2VideoLambda  Sparta supports Dynamic Resources via types that satisfy the TemplateDecoratorHandler interface.\nTemplate Decorator Handler A template decorator is a Go interface:\ntype TemplateDecoratorHandler interface { DecorateTemplate(serviceName string, lambdaResourceName string, lambdaResource gocf.LambdaFunction, resourceMetadata map[string]interface{}, S3Bucket string, S3Key string, buildID string, template *gocf.Template, context map[string]interface{}, logger *logrus.Logger) error } Clients use go-cloudformation types for CloudFormation resources and template.AddResource to add them to the *template parameter. After a decorator is invoked, Sparta also verifies that the user-supplied function did not add entities that collide with the internally-generated ones.\nUnique Resource Names CloudFormation uses Logical IDs as resource key names.\nTo minimize collision likelihood, Sparta publishes CloudFormationResourceName(prefix, \u0026hellip;parts) to generate compliant identifiers. To produce content-based hash values, callers can provide a non-empty set of values as the ...parts variadic argument. 
This produces stable identifiers across Sparta execution (which may affect availability during updates).\nWhen called with only a single value (eg: CloudFormationResourceName(\u0026quot;myResource\u0026quot;)), Sparta will return a random resource name that is NOT stable across executions.\nExample - S3 Bucket Let\u0026rsquo;s work through an example to make things a bit more concrete. We have the following requirements:\n Our lambda function needs an immutable-infrastructure-compliant S3 bucket Our lambda function should be notified when items are created in or deleted from the bucket Our lambda function must be able to access the contents in the bucket (not shown below)  Lambda Function To start with, we\u0026rsquo;ll need a Sparta lambda function to expose:\nimport ( awsLambdaEvents \u0026#34;github.com/aws/aws-lambda-go/events\u0026#34; ) func echoS3DynamicBucketEvent(ctx context.Context, s3Event awsLambdaEvents.S3Event) (*awsLambdaEvents.S3Event, error) { logger, _ := ctx.Value(sparta.ContextKeyRequestLogger).(*logrus.Entry) discoveryInfo, discoveryInfoErr := sparta.Discover() logger.WithFields(logrus.Fields{ \u0026#34;Event\u0026#34;: s3Event, \u0026#34;Discovery\u0026#34;: discoveryInfo, \u0026#34;DiscoveryErr\u0026#34;: discoveryInfoErr, }).Info(\u0026#34;Event received\u0026#34;) return \u0026amp;s3Event, nil } For brevity our demo function doesn\u0026rsquo;t access the S3 bucket objects. 
To support that please see the sparta.Discover functionality.\nS3 Resource Name The next thing we need is a Logical ID for our bucket:\ns3BucketResourceName := sparta.CloudFormationResourceName(\u0026#34;S3DynamicBucket\u0026#34;, \u0026#34;myServiceBucket\u0026#34;) Sparta Integration With these two values we\u0026rsquo;re ready to get started building up the lambda function:\nlambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoS3DynamicBucketEvent), echoS3DynamicBucketEvent, sparta.IAMRoleDefinition{}) The open issue is how to publish the CloudFormation-defined S3 Arn to the compile-time application. Our lambda function needs to provide both:\n IAMRolePrivilege values that reference the (as yet) undefined Arn. S3Permission values to configure our lambda\u0026rsquo;s event triggers on the (as yet) undefined Arn.  The missing piece is gocf.Ref(), whose single argument is the Logical ID of the S3 resource we\u0026rsquo;ll be inserting in the decorator call.\nDynamic IAM Role Privilege The IAMRolePrivilege struct references the dynamically assigned S3 Arn, to which \u0026#34;/*\u0026#34; is appended to enable object read access:\nlambdaFn.RoleDefinition.Privileges = append(lambdaFn.RoleDefinition.Privileges, sparta.IAMRolePrivilege{ Actions: []string{\u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:HeadObject\u0026#34;}, Resource: spartaCF.S3AllKeysArnForBucket(gocf.Ref(s3BucketResourceName)), }) The spartaCF.S3AllKeysArnForBucket call is a convenience wrapper around gocf.Join to generate the concatenated, dynamic Arn expression.\nDynamic S3 Permissions The S3Permission struct also requires the dynamic Arn in order to configure our lambda\u0026rsquo;s event triggers:\nlambdaFn.Permissions = append(lambdaFn.Permissions, sparta.S3Permission{ BasePermission: sparta.BasePermission{ SourceArn: gocf.Ref(s3BucketResourceName), }, Events: []string{\u0026#34;s3:ObjectCreated:*\u0026#34;, \u0026#34;s3:ObjectRemoved:*\u0026#34;}, }) lambdaFn.DependsOn = append(lambdaFn.DependsOn, s3BucketResourceName)\nS3 Resource 
Insertion All that\u0026rsquo;s left to do is actually insert the S3 resource in our decorator:\ns3Decorator := func(serviceName string, lambdaResourceName string, lambdaResource gocf.LambdaFunction, resourceMetadata map[string]interface{}, S3Bucket string, S3Key string, buildID string, template *gocf.Template, context map[string]interface{}, logger *logrus.Logger) error { cfResource := template.AddResource(s3BucketResourceName, \u0026amp;gocf.S3Bucket{ AccessControl: gocf.String(\u0026#34;PublicRead\u0026#34;), Tags: \u0026amp;gocf.TagList{gocf.Tag{ Key: gocf.String(\u0026#34;SpecialKey\u0026#34;), Value: gocf.String(\u0026#34;SpecialValue\u0026#34;), }, }, }) cfResource.DeletionPolicy = \u0026#34;Delete\u0026#34; return nil } lambdaFn.Decorators = []sparta.TemplateDecoratorHandler{ sparta.TemplateDecoratorHookFunc(s3Decorator), } Dependencies In reality, we shouldn\u0026rsquo;t even attempt to create the AWS Lambda function if the S3 bucket creation fails. As application developers, we can help CloudFormation sequence infrastructure operations by stating this hard dependency on the S3 bucket via the DependsOn attribute:\nlambdaFn.DependsOn = append(lambdaFn.DependsOn, s3BucketResourceName) Code Listing Putting everything together, our Sparta lambda function with dynamic infrastructure is listed below.\ns3BucketResourceName := sparta.CloudFormationResourceName(\u0026#34;S3DynamicBucket\u0026#34;, \u0026#34;myServiceBucket\u0026#34;) lambdaFn, _ := sparta.NewAWSLambda(sparta.LambdaName(echoS3DynamicBucketEvent), echoS3DynamicBucketEvent, sparta.IAMRoleDefinition{}) // Our lambda function requires the S3 bucket lambdaFn.DependsOn = append(lambdaFn.DependsOn, s3BucketResourceName) // Add a permission s.t. 
the lambda function could read from the S3 bucket lambdaFn.RoleDefinition.Privileges = append(lambdaFn.RoleDefinition.Privileges, sparta.IAMRolePrivilege{ Actions: []string{\u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:HeadObject\u0026#34;}, Resource: spartaCF.S3AllKeysArnForBucket(gocf.Ref(s3BucketResourceName)), }) // Configure the S3 event source lambdaFn.Permissions = append(lambdaFn.Permissions, sparta.S3Permission{ BasePermission: sparta.BasePermission{ SourceArn: gocf.Ref(s3BucketResourceName), }, Events: []string{\u0026#34;s3:ObjectCreated:*\u0026#34;, \u0026#34;s3:ObjectRemoved:*\u0026#34;}, }) // Actually add the resource lambdaFn.Decorator = func(lambdaResourceName string, lambdaResource gocf.LambdaFunction, template *gocf.Template, logger *logrus.Logger) error { cfResource := template.AddResource(s3BucketResourceName, \u0026amp;gocf.S3Bucket{ AccessControl: gocf.String(\u0026#34;PublicRead\u0026#34;), }) cfResource.DeletionPolicy = \u0026#34;Delete\u0026#34; return nil } Wrapping Up Sparta provides an opportunity to bring infrastructure management into the application programming model. It\u0026rsquo;s still possible to use literal Arn strings, but the ability to include other infrastructure requirements brings a service closer to being self-contained and more operationally sustainable.\nNotes  The echoS3DynamicBucketEvent function can also access the bucket Arn via sparta.Discover. See the DeletionPolicy documentation regarding S3 management. CloudFormation resources also publish other outputs that can be retrieved via gocf.GetAtt. go-cloudformation exposes gocf.Join to create compound, dynamic expressions.  See the CloudWatch docs on Fn::Join for more information.    "
},
{
	"uri": "/presentations/",
	"title": "Presentations",
	"tags": [],
	"description": "",
	"content": "  Slides  Getting Started - February Overview - October 2017  Image courtesy of Ashley McNamara "
},
{
	"uri": "/reference/testing/",
	"title": "Testing",
	"tags": [],
	"description": "",
	"content": "While developing Sparta lambda functions it may be useful to test them locally without needing to provision each new code change. You can test your lambda functions using standard go test functionality.\nTo create proper event types, consider:\n AWS Lambda Go types Sparta types Use NewAPIGatewayMockRequest to generate API Gateway style requests.  "
},
{
	"uri": "/reference/supporting_packages/",
	"title": "Supporting Packages",
	"tags": [],
	"description": "",
	"content": "The following packages are part of the Sparta ecosystem and can be used in combination or standalone in other applications.\ngo-cloudcondensor The go-cloudcondensor package provides utilities to express CloudFormation templates as a set of go functions. Templates are evaluated and the resulting JSON can be integrated into existing CLI-based workflows.\ngo-cloudformation The go-cloudformation package provides a Go object model for the official CloudFormation JSON Schema.\nSpartaVault SpartaVault uses KMS to encrypt values into Go types that can be safely committed to source control. It includes a command line utility that produces an encrypted set of credentials that are statically compiled into your application.\nssm-cache The ssm-cache package provides an expiring cache for AWS Systems Manager. SSM is the preferred service to use for sharing credentials with your service.\nExamples There are also many Sparta example repos that demonstrate core concepts.\n"
},
{
	"uri": "/reference/limitations/",
	"title": "Limitations",
	"tags": [],
	"description": "",
	"content": "AWS Lambda Limitations  Lambda is not yet globally available. Please view the Global Infrastructure page for the latest deployment status. There are Lambda Limits that may affect your development It\u0026rsquo;s not possible to dynamically set HTTP response headers based on the Lambda response body:  https://forums.aws.amazon.com/thread.jspa?threadID=203889 https://forums.aws.amazon.com/thread.jspa?threadID=210826   Similarly, it\u0026rsquo;s not possible to set proper error response bodies.  "
},
{
	"uri": "/meta/",
	"title": "Meta",
	"tags": [],
	"description": "",
	"content": "Documentation contributions are most appreciated. The documentation is built using:\n Hugo Mage  All documentation is contained in the master branch of the Sparta repo.\n$ mage -l | grep docs docsBuild builds the public documentation site in the /docs folder docsCommit commits the current documentation with an autogenerated comment docsEdit starts a Hugo server and hot reloads the documentation at http://localhost:1313 docsInstallRequirements installs the required Hugo version Editing  go get -u -v github.com/magefile/mage mage -v docsEdit  The last step will launch a hot reloading documentation server and open a browser viewing http://localhost:1313.\nTo make changes to the documentation:\n Start the server: mage -v docsEdit Edit the markdown in /docs_source/content Check the changes in the browser (http://localhost:1313) Commit your changes Open a PR  Thanks!\nThe documentation site uses the Hugo Learn Theme. Please visit that site for additional documentation regarding shortcodes and included libraries.\n"
},
{
	"uri": "/reference/faq/",
	"title": "FAQ",
	"tags": [],
	"description": "",
	"content": "CloudFormation How do I create dynamic resource ARNs? Linking AWS resources together often requires creating dynamic ARN references. This can be achieved by using cloudformation.Join expressions.\nFor instance:\nimport ( gocf \u0026#34;github.com/mweagle/go-cloudformation\u0026#34; ) s3SiteBucketAllKeysResourceValue := gocf.Join(\u0026#34;\u0026#34;, gocf.String(\u0026#34;arn:aws:s3:::\u0026#34;), gocf.Ref(s3BucketResourceName), gocf.String(\u0026#34;/*\u0026#34;)) import ( gocf \u0026#34;github.com/mweagle/go-cloudformation\u0026#34; ) AuthorizerURI: gocf.Join(\u0026#34;\u0026#34;, gocf.String(\u0026#34;arn:aws:apigateway:\u0026#34;), gocf.Ref(\u0026#34;AWS::Region\u0026#34;).String(), gocf.String(\u0026#34;:lambda:path/2015-03-31/functions/\u0026#34;), gocf.GetAtt(myAWSLambdaInfo.LogicalResourceName(), \u0026#34;Arn\u0026#34;), gocf.String(\u0026#34;/invocations\u0026#34;)), See the CloudFormation Fn::GetAtt docs for the set of attributes created by each resource. CloudFormation pseudo-parameters can be included in dynamic expressions via gocf.Ref expressions. For instance:\ngocf.Ref(\u0026#34;AWS::Region\u0026#34;) gocf.Ref(\u0026#34;AWS::AccountId\u0026#34;) gocf.Ref(\u0026#34;AWS::StackId\u0026#34;) gocf.Ref(\u0026#34;AWS::StackName\u0026#34;) Development How do I configure AWS SDK settings? Sparta relies on standard AWS SDK configuration settings. See the official documentation for more information.\nDuring development, configuration is typically done through environment variables:\n AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_REGION  What are the Minimum IAM Privileges for Sparta developers? 
The absolute minimum set of privileges an account needs is the following IAM Policy:\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Sid\u0026#34;: \u0026#34;Stmt1505975332000\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;cloudformation:DescribeStacks\u0026#34;, \u0026#34;cloudformation:CreateStack\u0026#34;, \u0026#34;cloudformation:CreateChangeSet\u0026#34;, \u0026#34;cloudformation:DescribeChangeSet\u0026#34;, \u0026#34;cloudformation:ExecuteChangeSet\u0026#34;, \u0026#34;cloudformation:DeleteChangeSet\u0026#34;, \u0026#34;cloudformation:DeleteStack\u0026#34;, \u0026#34;iam:GetRole\u0026#34;, \u0026#34;iam:DeleteRole\u0026#34;, \u0026#34;iam:DeleteRolePolicy\u0026#34;, \u0026#34;iam:PutRolePolicy\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;*\u0026#34; ] }, { \u0026#34;Sid\u0026#34;: \u0026#34;Stmt1505975332001\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;s3:PutObject\u0026#34;, \u0026#34;s3:GetBucketVersioning\u0026#34;, \u0026#34;s3:DeleteObject\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;arn:aws:s3:::PROVISION_TARGET_BUCKETNAME\u0026#34; ] } ] } This set of privileges should be sufficient to deploy a Sparta application similar to SpartaHelloWorld. Additional privileges may be required to enable different datasources.\nYou can view the exact set of AWS API calls by enabling --level debug log verbosity. This log level includes all AWS API calls starting with release 0.20.0.\nWhat are the minimum IAM privileges to provision a Sparta app and API Gateway? Your AWS user must have the following privileges. 
Be sure to update the YOUR_S3_BUCKETNAME_HERE value with your own S3 bucket name.\n{ \u0026#34;Version\u0026#34;: \u0026#34;2012-10-17\u0026#34;, \u0026#34;Statement\u0026#34;: [ { \u0026#34;Sid\u0026#34;: \u0026#34;VisualEditor0\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;cloudformation:CreateChangeSet\u0026#34;, \u0026#34;cloudformation:DeleteChangeSet\u0026#34;, \u0026#34;cloudformation:DescribeStacks\u0026#34;, \u0026#34;cloudformation:DescribeStackEvents\u0026#34;, \u0026#34;cloudformation:CreateStack\u0026#34;, \u0026#34;cloudformation:DeleteStack\u0026#34;, \u0026#34;cloudformation:DescribeChangeSet\u0026#34;, \u0026#34;cloudformation:ExecuteChangeSet\u0026#34;, \u0026#34;iam:GetRole\u0026#34;, \u0026#34;iam:DeleteRole\u0026#34;, \u0026#34;iam:CreateRole\u0026#34;, \u0026#34;iam:PutRolePolicy\u0026#34;, \u0026#34;iam:PassRole\u0026#34;, \u0026#34;iam:DeleteRolePolicy\u0026#34;, \u0026#34;lambda:CreateFunction\u0026#34;, \u0026#34;lambda:GetFunction\u0026#34;, \u0026#34;lambda:GetFunctionConfiguration\u0026#34;, \u0026#34;lambda:AddPermission\u0026#34;, \u0026#34;lambda:DeleteFunction\u0026#34;, \u0026#34;lambda:RemovePermission\u0026#34; ], \u0026#34;Resource\u0026#34;: \u0026#34;*\u0026#34; }, { \u0026#34;Sid\u0026#34;: \u0026#34;VisualEditor1\u0026#34;, \u0026#34;Effect\u0026#34;: \u0026#34;Allow\u0026#34;, \u0026#34;Action\u0026#34;: [ \u0026#34;apigateway:DELETE\u0026#34;, \u0026#34;apigateway:PUT\u0026#34;, \u0026#34;apigateway:PATCH\u0026#34;, \u0026#34;apigateway:POST\u0026#34;, \u0026#34;apigateway:GET\u0026#34;, \u0026#34;s3:PutObject\u0026#34;, \u0026#34;s3:GetObject\u0026#34;, \u0026#34;s3:GetBucketVersioning\u0026#34;, \u0026#34;s3:DeleteObject\u0026#34; ], \u0026#34;Resource\u0026#34;: [ \u0026#34;arn:aws:apigateway:*::*\u0026#34;, \u0026#34;arn:aws:s3:::\u0026lt;YOUR_S3_BUCKETNAME_HERE\u0026gt;\u0026#34;, \u0026#34;arn:aws:s3:::\u0026lt;YOUR_S3_BUCKETNAME_HERE\u0026gt;/*\u0026#34; ] } 
] } What flags are defined during AWS Lambda compilation?  TAGS: -tags lambdabinary ENVIRONMENT: GOOS=linux GOARCH=amd64  What working directory should I use? Your working directory should be the root of your Sparta application. E.g., use\ngo run main.go provision --level info --s3Bucket $S3_BUCKET rather than\ngo run ./some/child/path/main.go provision --level info --s3Bucket $S3_BUCKET See GitHub for more details.\nHow can I make provision faster? Starting with Sparta v0.11.2, you can supply an optional --inplace argument to the provision command. If this is set when provisioning updates to an existing stack, your Sparta application will verify that the only updates to the CloudFormation stack are code-level updates. If only code updates are detected, your Sparta application will parallelize UpdateFunctionCode API calls directly to update the application code.\nWhether --inplace is valid is based on evaluating the ChangeSet results of the requested update operation.\nNOTE: The --inplace argument implies that your service state is not reflected in CloudFormation.\nEvent Sources - SES Where does the SpartaRuleSet come from? SES only permits a single active receipt rule. Additionally, it\u0026rsquo;s possible that multiple Sparta-based services are handling different SES recipients.\nAll Sparta-based services share the SpartaRuleSet SES ruleset, and uniquely identify their Rules by including the current service name as part of the SES ReceiptRule.\nWhy does provision not always enable the SpartaRuleSet? Initially provisioning the SpartaRuleSet will make it the active ruleset, but Sparta assumes that manual updates made outside of the context of the framework were done with good reason and doesn\u0026rsquo;t attempt to override the user setting.\nOperations How can I provision a service dashboard? Sparta v0.13.0 adds support for the provisioning of a CloudWatch Dashboard that\u0026rsquo;s dynamically created based on your service\u0026rsquo;s topology. 
The dashboard is attached to the standard Sparta workflow via a WorkflowHook as in:\n// Setup the DashboardDecorator lambda hook workflowHooks := \u0026amp;sparta.WorkflowHooks{ ServiceDecorator: sparta.DashboardDecorator(lambdaFunctions, 60), } See the SpartaXRay project for a complete example of provisioning a dashboard as below:\nHow can I monitor my Lambda function? If you plan on using your Lambdas in production, you\u0026rsquo;ll probably want to be made aware of any excessive errors.\nYou can easily do this by adding a CloudWatch alarm to your Lambda, in the decorator method.\nThis example will push a notification to an SNS topic, and you can configure whatever action is appropriate from there.\nfunc lambdaDecorator(serviceName string, lambdaResourceName string, lambdaResource gocf.LambdaFunction, resourceMetadata map[string]interface{}, S3Bucket string, S3Key string, buildID string, cfTemplate *gocf.Template, context map[string]interface{}, logger *logrus.Logger) error { // setup CloudWatch alarm \tvar alarmDimensions gocf.CloudWatchMetricDimensionList alarmDimension := gocf.CloudWatchMetricDimension{Name: gocf.String(\u0026#34;FunctionName\u0026#34;), Value: gocf.Ref(lambdaResourceName).String()} alarmDimensions = []gocf.CloudWatchMetricDimension{alarmDimension} lambdaErrorsAlarm := \u0026amp;gocf.CloudWatchAlarm{ ActionsEnabled: gocf.Bool(true), AlarmActions: gocf.StringList(gocf.String(\u0026#34;arn:aws:sns:us-east-1:123456789:SNSToNotifyMe\u0026#34;)), AlarmName: gocf.String(\u0026#34;LambdaErrorAlarm\u0026#34;), ComparisonOperator: gocf.String(\u0026#34;GreaterThanOrEqualToThreshold\u0026#34;), Dimensions: \u0026amp;alarmDimensions, EvaluationPeriods: gocf.String(\u0026#34;1\u0026#34;), Period: gocf.String(\u0026#34;300\u0026#34;), MetricName: gocf.String(\u0026#34;Errors\u0026#34;), Namespace: gocf.String(\u0026#34;AWS/Lambda\u0026#34;), Statistic: gocf.String(\u0026#34;Sum\u0026#34;), Threshold: gocf.String(\u0026#34;3.0\u0026#34;), Unit: 
gocf.String(\u0026#34;Count\u0026#34;), } cfTemplate.AddResource(\u0026#34;LambdaErrorAlarm\u0026#34;, lambdaErrorsAlarm) return nil } Where can I view my function\u0026rsquo;s *logger output? Each lambda function includes privileges to write to CloudWatch Logs. The *logrus.Logger output is written (with a brief delay) to a lambda-specific log group.\nThe CloudWatch log group name includes a sanitized version of your go function name \u0026amp; owning service name.\nWhere can I view Sparta\u0026rsquo;s golang spawn metrics? Visit the CloudWatch Metrics AWS console page and select the Sparta/{SERVICE_NAME} namespace:\nSparta publishes two counters:\n ProcessSpawned: A new go process was spawned to handle requests ProcessReused: An existing go process was used to handle requests. See also the discussion on AWS Lambda container reuse.  How can I include additional AWS resources as part of my Sparta application? Define a TemplateDecorator function and annotate the *gocf.Template with additional AWS resources.\nFor more flexibility, use a WorkflowHook.\nHow can I provide environment variables to lambda functions? Sparta uses conditional compilation rather than environment variables. See Managing Environments for more information.\nDoes Sparta support Versioning \u0026amp; Aliasing? Yes.\nDefine a TemplateDecorator function and annotate the *gocf.Template with an AutoIncrementingLambdaVersionInfo resource. During each provision operation, the AutoIncrementingLambdaVersionInfo resource will dynamically update the CloudFormation template with a new version.\nautoIncrementingInfo, autoIncrementingInfoErr := spartaCF.AddAutoIncrementingLambdaVersionResource(serviceName, lambdaResourceName, cfTemplate, logger) You can also move the \u0026ldquo;alias pointer\u0026rdquo; by referencing one or more of the versions available in the returned struct. 
For example, to set the alias pointer to the most recent version:\n// Add an alias to the version we\u0026#39;re publishing as part of this `provision` operation aliasResourceName := sparta.CloudFormationResourceName(\u0026#34;Alias\u0026#34;, lambdaResourceName) aliasResource := \u0026amp;gocf.LambdaAlias{ Name: gocf.String(\u0026#34;MostRecentVersion\u0026#34;), FunctionName: gocf.Ref(lambdaResourceName).String(), FunctionVersion: gocf.GetAtt(autoIncrementingInfo.CurrentVersionResourceName, \u0026#34;Version\u0026#34;).String(), } cfTemplate.AddResource(aliasResourceName, aliasResource) How do I forward additional metrics? Sparta-deployed AWS Lambda functions always operate with CloudWatch Metrics putMetric privileges. Your lambda code can call putMetric with application-specific data.\nHow do I setup alerts on additional metrics? Define a TemplateDecorator function and annotate the *gocf.Template with the needed AWS::CloudWatch::Alarm values. Use CloudFormationResourceName(prefix, \u0026hellip;parts) to help generate unique resource names.\nHow can I determine the outputs available in sparta.Discover() for dynamic AWS resources? The list of registered output provider types is defined by cloudformationTypeMapDiscoveryOutputs in cloudformation_resources.go. See the CloudFormation Resource Types Reference for information on interpreting the values.\nFuture "
},
{
	"uri": "/credits/",
	"title": "Credits",
	"tags": [],
	"description": "Sparta contributors",
	"content": "Thanks! 🎉🙏  Kyle Anderson James Brook Ryan Brown sdbeard Scott Raine Nick Scheiblauer Paul Seiffert Thom Shutt Patrick Steger  "
},
{
	"uri": "/reference/step/services/",
	"title": "",
	"tags": [],
	"description": "",
	"content": "Service Integrations The following Step Function service integrations are supported:\n Amazon DynamoDb   TODO: Document Dynamo integration.  Amazon SageMaker   TODO: Document SageMaker integration.  Amazon SNS   TODO: Document SNS integration.  Amazon SQS   TODO: Document SQS integration.  AWS Batch   TODO: Document Batch integration.  AWS Fargate   TODO: Document Fargate integration.  AWS Glue   TODO: Document Glue integration.  "
},
{
	"uri": "/",
	"title": "",
	"tags": [],
	"description": "",
	"content": "Serverless go microservices for AWS   Sparta is a framework that transforms a go application into a self-deploying AWS Lambda powered service.  All configuration and infrastructure requirements are expressed as go types for GitOps, repeatable, typesafe deployments.      Features  Unified Use a go monorepo to define your microservice's:  Application logic AWS infrastructure Operational metrics Alert conditions Security policies   Complete AWS Ecosystem Sparta enables your lambda-based service to seamlessly integrate with the entire set of AWS lambda event sources such as:  DynamoDB S3 Kinesis SNS SES CloudMap CloudWatch Events CloudWatch Logs Step Functions  Additionally, your service may provision any other CloudFormation supported resource and even your own CustomResources.    Security Define IAM Roles with limited privileges to minimize your service's attack surface. Both string literal and ARN expressions are supported in order to reference dynamically created resources. Sparta treats POLA and #SecOps as first-class goals.  Discovery A service may provision dynamic AWS infrastructure, and discover, at lambda execution time, the dependent resources' AWS-assigned outputs (Ref \u0026amp; Fn::GetAtt). Eliminate hardcoded Magic ARNs from your codebase and move towards immutable infrastructure\n   API Gateways Make your service HTTPS accessible by binding it to an API Gateway REST API during provisioning. Alternatively, expose a WebSocket APIV2Gateway API for an even more interactive experience.\n Static Sites Include a CORS-enabled S3-backed site with your service. S3-backed sites include API Gateway discovery information for turnkey deployment.\n    Sparta relies on CloudFormation to deploy and update your application. 
For resources that CloudFormation does not yet support, it uses Lambda-backed Custom Resources so that all service updates support both update and rollback semantics. Sparta\u0026rsquo;s automatically generated CloudFormation resources use content-based logical IDs whenever possible to preserve service availability and minimize resource churn during updates.\nGetting Started To get started using Sparta, begin with the Overview.\nAdministration  Problems? Please open an issue in GitHub.  \n Courtesy of gophers   Questions? Get in touch via:\n @mweagle Gophers: @mweagle  Signup page   Serverless: @mweagle  Signup page    Related Projects  go-cloudcondensor  Define AWS CloudFormation templates in go   go-cloudformation  go types for CloudFormation resources   ssm-cache  Lightweight cache for Systems Manager Parameter Store values    Support Sparta Help support continued Sparta development by becoming a Patreon patron!\nBecome a Patron!\nOther resources  AWS SAM Build an S3 website with API Gateway and AWS Lambda for Go using Sparta AWS blog post announcing Go support Sparta - A Go framework for AWS Lambda Other libraries \u0026amp; frameworks:  Serverless PAWS Apex lambda_proc go-lambda go-lambda (GRPC)   Supported AWS Lambda programming models Serverless Code Blog AWS Serverless Multi-Tier Architectures Whitepaper Lambda limits The Twelve Days of Lambda CloudCraft is a great tool for AWS architecture diagrams  "
},
{
	"uri": "/_footer/",
	"title": "",
	"tags": [],
	"description": "",
	"content": ""
},
{
	"uri": "/_header/",
	"title": "",
	"tags": [],
	"description": "",
	"content": ""
},
{
	"uri": "/categories/",
	"title": "Categories",
	"tags": [],
	"description": "",
	"content": ""
},
{
	"uri": "/tags/",
	"title": "Tags",
	"tags": [],
	"description": "",
	"content": ""
}]