aws_kinesis

Receive messages from one or more Kinesis streams.

Introduced in version 1.0.0.

# Common config fields, showing default values
input:
  label: ""
  aws_kinesis:
    streams: [] # No default (required)
    dynamodb:
      table: ""
      create: false
    checkpoint_limit: 1024
    auto_replay_nacks: true
    commit_period: 5s
    start_from_oldest: true
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""

Consumes messages from one or more Kinesis streams either by automatically balancing shards across other instances of this input, or by consuming shards listed explicitly. The latest message sequence consumed by this input is stored within a DynamoDB table, which allows it to resume at the correct sequence of the shard during restarts. This table is also used for coordination across distributed inputs when balancing shards.

Bento will not store a consumed sequence unless it is acknowledged at the output level, which ensures at-least-once delivery guarantees.

Ordering

By default messages of a shard can be processed in parallel, up to a limit determined by the field checkpoint_limit. However, if strict ordered processing is required then this value must be set to 1 in order to process shard messages in lock-step. When doing so, it is recommended that you perform batching at this component for performance, as it will not be possible to batch lock-stepped messages at the output level.
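
For example, a minimal sketch of lock-stepped consumption with batching performed at the input; the stream and table names here are hypothetical:

input:
  aws_kinesis:
    streams: [ foo ] # hypothetical stream name
    dynamodb:
      table: bento_kinesis_checkpoints # hypothetical table name
    checkpoint_limit: 1 # process shard messages in strict order
    batching:
      count: 50 # flush a batch at 50 messages
      period: 1s # or after one second, whichever comes first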

Table Schema

It's possible to configure Bento to create the DynamoDB table required for coordination if it does not already exist. However, if you wish to create this yourself (recommended) then create a table with a string HASH key StreamID and a string RANGE key ShardID.
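
For illustration, that schema can be expressed as a CloudFormation resource; the logical name and table name below are placeholders:

Resources:
  KinesisCheckpointTable: # placeholder logical name
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: bento_kinesis_checkpoints # placeholder table name
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: StreamID
          AttributeType: S
        - AttributeName: ShardID
          AttributeType: S
      KeySchema:
        - AttributeName: StreamID
          KeyType: HASH
        - AttributeName: ShardID
          KeyType: RANGE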

Batching

Use the batching fields to configure an optional batching policy. Each stream shard will be batched separately in order to ensure that acknowledgements aren't contaminated.

Fields

streams

One or more Kinesis data streams to consume from. Streams can either be specified by their name or full ARN. Shards of a stream are automatically balanced across consumers by coordinating through the provided DynamoDB table. Multiple comma separated streams can be listed in a single element. Alternatively, it's possible to specify an explicit shard to consume from by appending a colon and the shard ID after the stream name, e.g. foo:0 would consume shard 0 of the stream foo.

Type: array

# Examples

streams:
  - foo
  - arn:aws:kinesis:*:111122223333:stream/my-stream
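
A further hypothetical example, with two comma separated streams in one element and an explicit shard in another:

streams:
  - foo,bar # two streams listed in a single element
  - baz:0 # only shard 0 of the stream baz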

dynamodb

Determines the table used for storing and accessing the latest consumed sequence for shards, and for coordinating balanced consumers of streams.

Type: object

dynamodb.table

The name of the table to access.

Type: string
Default: ""

dynamodb.create

Whether the table should be created if it does not exist.

Type: bool
Default: false

dynamodb.billing_mode

The billing mode to use when creating the table.

Type: string
Default: "PAY_PER_REQUEST"
Options: PROVISIONED, PAY_PER_REQUEST.

dynamodb.read_capacity_units

Set the provisioned read capacity when creating the table with a billing_mode of PROVISIONED.

Type: int
Default: 0

dynamodb.write_capacity_units

Set the provisioned write capacity when creating the table with a billing_mode of PROVISIONED.

Type: int
Default: 0

checkpoint_limit

The maximum gap between the in-flight sequence and the latest acknowledged sequence at a given time. Increasing this limit enables parallel processing and batching at the output level to work on individual shards. Any given sequence will not be committed unless all messages under that offset are delivered, in order to preserve at-least-once delivery guarantees.

Type: int
Default: 1024

auto_replay_nacks

Whether messages that are rejected (nacked) at the output level should be automatically replayed indefinitely, eventually resulting in back pressure if the cause of the rejections is persistent. If set to false these messages will instead be deleted. Disabling auto replays can greatly improve memory efficiency of high throughput streams as the original shape of the data can be discarded immediately upon consumption and mutation.

Type: bool
Default: true

commit_period

The period of time between each update to the checkpoint table.

Type: string
Default: "5s"

rebalance_period

The period of time between each attempt to rebalance shards across clients.

Type: string
Default: "30s"

lease_period

The period of time after which a client that has failed to update a shard checkpoint is assumed to be inactive.

Type: string
Default: "30s"

start_from_oldest

Whether to consume from the oldest message when a sequence does not yet exist for the stream.

Type: bool
Default: true

region

The AWS region to target.

Type: string
Default: ""

endpoint

Allows you to specify a custom endpoint for the AWS API.

Type: string
Default: ""

credentials

Optional manual configuration of AWS credentials to use. More information can be found in this document.

Type: object

credentials.profile

A profile from ~/.aws/credentials to use.

Type: string
Default: ""

credentials.id

The ID of credentials to use.

Type: string
Default: ""

credentials.secret

The secret for the credentials being used.

Secret

This field contains sensitive information that usually shouldn't be added to a config directly, read our secrets page for more info.

Type: string
Default: ""

credentials.token

The token for the credentials being used, required when using short term credentials.

Type: string
Default: ""

credentials.from_ec2_role

Use the credentials of a host EC2 machine configured to assume an IAM role associated with the instance.

Type: bool
Default: false
Requires version 1.0.0 or newer

credentials.role

A role ARN to assume.

Type: string
Default: ""

credentials.role_external_id

An external ID to provide when assuming a role.

Type: string
Default: ""
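
To illustrate, a sketch of consuming with an assumed role; the region, role ARN and external ID are placeholders:

input:
  aws_kinesis:
    streams: [ foo ] # hypothetical stream name
    dynamodb:
      table: bento_kinesis_checkpoints # hypothetical table name
    region: us-east-1 # placeholder region
    credentials:
      role: arn:aws:iam::111122223333:role/bento-consumer # placeholder ARN
      role_external_id: my-external-id # placeholder ID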

batching

Allows you to configure a batching policy.

Type: object

# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m

batching.count

The number of messages at which the batch should be flushed. If 0, count-based batching is disabled.

Type: int
Default: 0

batching.byte_size

The number of bytes at which the batch should be flushed. If 0, size-based batching is disabled.

Type: int
Default: 0

batching.period

A period in which an incomplete batch should be flushed regardless of its size.

Type: string
Default: ""

# Examples

period: 1s

period: 1m

period: 500ms

batching.check

A Bloblang query that should return a boolean value indicating whether a message should end a batch.

Type: string
Default: ""

# Examples

check: this.type == "end_of_transaction"

batching.processors

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.

Type: array

# Examples

processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array