gcp_bigtable
This component is mostly stable but breaking changes could still be made outside of major version releases if a fundamental problem with the component is found.
Writes messages to a GCP Bigtable instance.
```yaml
# Common config fields, showing default values
output:
  label: ""
  gcp_bigtable:
    project: "" # No default (required)
    instance: "" # No default (required)
    table: my-table # No default (required)
    row_key: ${!metadata("kafka_key")} # No default (required)
    column: payload # No default (required)
    family: cf1 # No default (required)
    batching:
      count: 0
      byte_size: 0
      period: ""
      jitter: 0
      check: ""
    max_in_flight: 64
```
```yaml
# All config fields, showing default values
output:
  label: ""
  gcp_bigtable:
    project: "" # No default (required)
    instance: "" # No default (required)
    table: my-table # No default (required)
    row_key: ${!metadata("kafka_key")} # No default (required)
    column: payload # No default (required)
    family: cf1 # No default (required)
    batching:
      count: 0
      byte_size: 0
      period: ""
      jitter: 0
      check: ""
      processors: [] # No default (optional)
    max_in_flight: 64
```
Each message is written as a SetCell mutation into the specified column family and column qualifier.
The table, row_key, column, and family fields support interpolation functions, allowing values to be resolved dynamically. This enables routing messages to different tables, rows, column families, or column qualifiers based on message content or metadata.
The interpolation for the table field is resolved against the first message of each batch, and the resulting value is used for every message in that batch.
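As an illustrative sketch of this dynamic routing, a single output can fan writes out across tables and column families. The `table` and `family` metadata keys below are assumptions for the example, not fields the component defines:

```yaml
output:
  gcp_bigtable:
    project: my-project   # placeholder project ID
    instance: my-instance # placeholder instance ID
    table: ${!metadata("table")}   # resolved once per batch, from the first message
    family: ${!metadata("family")} # resolved per message
    column: payload
    row_key: ${!json("id")}
```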
Credentials
By default Bento will use a shared credentials file when connecting to GCP services. You can find out more in this document.
Batching
This output benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level. You can find out more in this doc.
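For example, a batching policy can be set directly on this output so that writes are flushed in groups; the project, instance, and threshold values below are placeholders:

```yaml
output:
  gcp_bigtable:
    project: my-project   # placeholder project ID
    instance: my-instance # placeholder instance ID
    table: my-table
    row_key: ${!json("id")}
    column: payload
    family: cf1
    batching:
      count: 100 # flush after 100 messages...
      period: 5s # ...or after 5 seconds, whichever comes first
```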
Fields
project
The GCP project ID that contains the Bigtable instance.
Type: string
instance
The Bigtable instance ID to connect to.
Type: string
table
The table to write messages to. This field supports interpolation functions.
Type: string
```yaml
# Examples

table: my-table

table: ${!metadata("table")}
```
row_key
The row key for each mutation. Row keys must be unique per message to avoid overwriting previous writes within the same batch. This field supports interpolation functions.
Type: string
```yaml
# Examples

row_key: ${!metadata("kafka_key")}

row_key: ${!json("id")}

row_key: ${!metadata("type")}#${!uuid_v4()}
```
column
The column qualifier to set within the column family. This field supports interpolation functions.
Type: string
```yaml
# Examples

column: payload

column: ${!metadata("column")}
```
family
The column family to write into. This field supports interpolation functions.
Type: string
```yaml
# Examples

family: cf1

family: ${!metadata("family")}
```
batching
Allows you to configure a batching policy.
Type: object
```yaml
# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m

batching:
  count: 10
  jitter: 0.1
  period: 10s
```
batching.count
The number of messages at which the batch should be flushed. Set to 0 to disable count-based batching.
Type: int
Default: 0
batching.byte_size
The number of bytes at which the batch should be flushed. Set to 0 to disable size-based batching.
Type: int
Default: 0
batching.period
A period in which an incomplete batch should be flushed regardless of its size.
Type: string
Default: ""
```yaml
# Examples

period: 1s

period: 1m

period: 500ms
```
batching.jitter
A non-negative factor that adds random delay to batch flush intervals, where delay is determined uniformly at random between 0 and jitter * period. For example, with period: 100ms and jitter: 0.1, each flush will be delayed by a random duration between 0-10ms.
Type: float
Default: 0
```yaml
# Examples

jitter: 0.01

jitter: 0.1

jitter: 1
```
batching.check
A Bloblang query that should return a boolean value indicating whether a message should end a batch.
Type: string
Default: ""
```yaml
# Examples

check: this.type == "end_of_transaction"
```
batching.processors
A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.
Type: array
```yaml
# Examples

processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array
```
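As a sketch of a processor that reshapes messages without splitting the batch, a `mapping` processor can trim each payload before the flush; the `debug` field below is hypothetical:

```yaml
batching:
  count: 50
  period: 1s
  processors:
    # Drop a hypothetical "debug" field from each payload before it is written.
    - mapping: root = this.without("debug")
```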
max_in_flight
The maximum number of messages to have in flight at a given time. Increase this to improve throughput.
Type: int
Default: 64