---
layout: "aws"
page_title: "AWS: aws_kinesis_firehose_delivery_stream"
sidebar_current: "docs-aws-resource-kinesis-firehose-delivery-stream"
description: |-
  Provides an AWS Kinesis Firehose Delivery Stream
---

# aws\_kinesis\_firehose\_delivery\_stream

Provides a Kinesis Firehose Delivery Stream resource. Amazon Kinesis Firehose is a fully managed, elastic service to easily deliver real-time data streams to destinations such as Amazon S3 and Amazon Redshift.

For more details, see the [Amazon Kinesis Firehose Documentation][1].

## Example Usage

### S3 Destination
```
resource "aws_s3_bucket" "bucket" {
  bucket = "tf-test-bucket"
  acl = "private"
}

resource "aws_iam_role" "firehose_role" {
  name = "firehose_test_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
  name = "terraform-kinesis-firehose-test-stream"
  destination = "s3"
  s3_configuration {
    role_arn = "${aws_iam_role.firehose_role.arn}"
    bucket_arn = "${aws_s3_bucket.bucket.arn}"
  }
}
```

### Redshift Destination

```
resource "aws_redshift_cluster" "test_cluster" {
  cluster_identifier = "tf-redshift-cluster"
  database_name = "test"
  master_username = "testuser"
  master_password = "T3stPass"
  node_type = "dc1.large"
  cluster_type = "single-node"
}

resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
  name = "terraform-kinesis-firehose-test-stream"
  destination = "redshift"
  s3_configuration {
    role_arn = "${aws_iam_role.firehose_role.arn}"
    bucket_arn = "${aws_s3_bucket.bucket.arn}"
    buffer_size = 10
    buffer_interval = 400
    compression_format = "GZIP"
  }
  redshift_configuration {
    role_arn = "${aws_iam_role.firehose_role.arn}"
    cluster_jdbcurl = "jdbc:redshift://${aws_redshift_cluster.test_cluster.endpoint}/${aws_redshift_cluster.test_cluster.database_name}"
    username = "testuser"
    password = "T3stPass"
    data_table_name = "test-table"
    copy_options = "GZIP"
    data_table_columns = "test-col"
  }
}
```

~> **NOTE:** Kinesis Firehose is currently only supported in us-east-1, us-west-2 and eu-west-1.

## Argument Reference

The following arguments are supported:

* `name` - (Required) A name to identify the stream. This is unique to the
AWS account and region the Stream is created in.
* `destination` - (Required) The destination to which the data is delivered. The only options are `s3` & `redshift`.
* `s3_configuration` - (Required) Configuration options for the S3 destination (or the intermediate bucket if the destination
is redshift). More details are given below.
* `redshift_configuration` - (Optional) Configuration options if redshift is the destination.
Using `redshift_configuration` requires the user to also specify an
`s3_configuration` block. More details are given below.

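In practice, the role referenced by `role_arn` must also be granted access to the destination bucket; the assume-role policy in the examples above only lets Firehose assume the role. A minimal sketch, building on the `firehose_role` and `bucket` resources from the S3 example (the policy name is illustrative, not part of this resource):

```
resource "aws_iam_role_policy" "firehose_s3_access" {
  name = "firehose_s3_access"
  role = "${aws_iam_role.firehose_role.id}"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "${aws_s3_bucket.bucket.arn}",
        "${aws_s3_bucket.bucket.arn}/*"
      ]
    }
  ]
}
EOF
}
```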
The `s3_configuration` object supports the following:

* `role_arn` - (Required) The ARN of the role that grants Firehose access to the bucket.
* `bucket_arn` - (Required) The ARN of the S3 bucket.
* `prefix` - (Optional) The "YYYY/MM/DD/HH" time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket.
* `buffer_size` - (Optional) Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, set the value to 10 or higher.
* `buffer_interval` - (Optional) Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300.
* `compression_format` - (Optional) The compression format. If no value is specified, the default is NOCOMPRESSION. Other supported values are GZIP, ZIP & Snappy. If the destination is redshift you cannot use ZIP or Snappy.
* `kms_key_arn` - (Optional) If set, the stream will encrypt data using the specified key in KMS; otherwise, no encryption will
be used.

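Putting the optional arguments together, an `s3_configuration` block with a custom prefix, tuned buffering, compression and encryption might look like the following sketch (it assumes the role and bucket from the S3 example above, plus an `aws_kms_key` resource named `key` that is not shown):

```
resource "aws_kinesis_firehose_delivery_stream" "test_stream" {
  name = "terraform-kinesis-firehose-test-stream"
  destination = "s3"
  s3_configuration {
    role_arn = "${aws_iam_role.firehose_role.arn}"
    bucket_arn = "${aws_s3_bucket.bucket.arn}"
    prefix = "firehose/"
    buffer_size = 10
    buffer_interval = 400
    compression_format = "GZIP"
    kms_key_arn = "${aws_kms_key.key.arn}"
  }
}
```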
The `redshift_configuration` object supports the following:

* `cluster_jdbcurl` - (Required) The JDBC URL of the Redshift cluster.
* `username` - (Required) The username that the Firehose delivery stream will assume. It is strongly recommended that the username and password provided are used exclusively for Amazon Kinesis Firehose purposes, and that the permissions for the account are restricted to Amazon Redshift INSERT permissions.
* `password` - (Required) The password for the username above.
* `role_arn` - (Required) The ARN of the role the stream assumes.
* `data_table_name` - (Required) The name of the table in the Redshift cluster that the S3 bucket will copy to.
* `copy_options` - (Optional) Copy options for copying the data from the S3 intermediate bucket into Redshift.
* `data_table_columns` - (Optional) The data table columns that will be targeted by the copy command.

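`copy_options` is passed through to the Redshift `COPY` command, so any parameters that command accepts can be supplied. For example, a `redshift_configuration` block that loads gzipped JSON data might look like the following sketch (the table name is illustrative, and the cluster resource comes from the Redshift example above):

```
redshift_configuration {
  role_arn = "${aws_iam_role.firehose_role.arn}"
  cluster_jdbcurl = "jdbc:redshift://${aws_redshift_cluster.test_cluster.endpoint}/${aws_redshift_cluster.test_cluster.database_name}"
  username = "testuser"
  password = "T3stPass"
  data_table_name = "events"
  copy_options = "json 'auto' gzip"
}
```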
## Attributes Reference

* `arn` - The Amazon Resource Name (ARN) specifying the Stream.

[1]: https://aws.amazon.com/documentation/firehose/