
datadog_logs

BETA

This component is mostly stable but breaking changes could still be made outside of major version releases if a fundamental problem with the component is found.

Sends log messages to the Datadog Logs API.

# Common config fields, showing default values
output:
  label: ""
  datadog_logs:
    api_key: "" # No default (optional)
    source: "" # No default (optional)
    tags: env:${!json("environment")},version:${!json("version")} # No default (optional)
    hostname: "" # No default (optional)
    service: "" # No default (optional)
    status: "" # No default (optional)
    timestamp: "" # No default (optional)
    content_encoding: gzip
    endpoint: "" # No default (optional)
    batching:
      count: 0
      byte_size: 0
      period: ""
      jitter: 0
      check: ""
    max_in_flight: 64

Submits log entries to Datadog using the HTTP Logs intake API.

Limits

  • Maximum payload size (uncompressed): 5 MB
  • Maximum size for a single log: 1 MB
  • Maximum number of logs per batch: 1,000 entries

Logs exceeding 1 MB are truncated by Datadog but still accepted (2xx). Payloads exceeding 5 MB are rejected with a 413.
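Given these limits, it can be useful to cap batches on both count and byte size so a payload never approaches the 5 MB ceiling. A sketch, with illustrative values rather than required ones:

```yaml
output:
  datadog_logs:
    batching:
      count: 1000        # Datadog accepts at most 1,000 logs per batch
      byte_size: 4500000 # keep the uncompressed payload safely under 5 MB
      period: 5s         # flush incomplete batches regardless of size
```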

warning

Log events with a timestamp older than 18 hours in the past will be rejected.

Authentication

Set api_key explicitly or via the DD_API_KEY environment variable.
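For example, assuming standard environment variable interpolation is available in the config, the key can be injected from the environment rather than written into the file:

```yaml
output:
  datadog_logs:
    api_key: "${DD_API_KEY}" # resolved from the environment at startup
```

Since the field already falls back to DD_API_KEY, omitting api_key entirely achieves the same result.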

Fields

api_key

The Datadog API key. If unset, falls back to the DD_API_KEY environment variable.

Secret

This field contains sensitive information that usually shouldn't be added to a config directly; read our secrets page for more info.

Type: string

site

The Datadog site to send logs to. If unset, falls back to the DD_SITE environment variable, then datadoghq.com.

Type: string

# Examples

site: datadoghq.com

site: datadoghq.eu

site: us3.datadoghq.com

site: us5.datadoghq.com

source

The source of the log, used for log processing rules. This field supports interpolation functions.

Type: string

tags

A comma-separated list of tags to attach to the log. This field supports interpolation functions.

Type: string

# Examples

tags: env:${!json("environment")},version:${!json("version")}

hostname

The hostname of the machine that produced the log. This field supports interpolation functions.

Type: string

service

The name of the service that generated the log. This field supports interpolation functions.

Type: string

status

The status of the log (e.g. info, warn, error). This field supports interpolation functions.

Type: string

timestamp

The timestamp of the log in epoch milliseconds. Defaults to the current time. This field supports interpolation functions.

Type: string
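Because this field supports interpolation, the timestamp can be extracted from the log document itself. A sketch, assuming each log is JSON with a hypothetical ts field holding epoch milliseconds:

```yaml
output:
  datadog_logs:
    timestamp: ${!json("ts")} # "ts" is a hypothetical field name
```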

content_encoding

HTTP content encoding used to compress log payloads.

Type: string
Default: "gzip"
Options: gzip, identity, deflate.

endpoint

Override the API's destination endpoint with a custom host. Protocol scheme defaults to 'http'.

Type: string
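A sketch pointing the output at a self-hosted intake proxy (the host shown is hypothetical):

```yaml
output:
  datadog_logs:
    endpoint: https://dd-proxy.internal.example.com # scheme defaults to http if omitted
```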

batching

Allows you to configure a batching policy.

Type: object

# Examples

batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m

batching:
  count: 10
  jitter: 0.1
  period: 10s

batching.count

The number of messages at which the batch should be flushed. A value of 0 disables count-based batching.

Type: int
Default: 0

batching.byte_size

The number of bytes at which the batch should be flushed. A value of 0 disables size-based batching.

Type: int
Default: 0

batching.period

A period in which an incomplete batch should be flushed regardless of its size.

Type: string
Default: ""

# Examples

period: 1s

period: 1m

period: 500ms

batching.jitter

A non-negative factor that adds random delay to batch flush intervals, where the delay is chosen uniformly at random between 0 and jitter * period. For example, with period: 100ms and jitter: 0.1, each flush is delayed by a random duration between 0 and 10ms.

Type: float
Default: 0

# Examples

jitter: 0.01

jitter: 0.1

jitter: 1

batching.check

A Bloblang query that should return a boolean value indicating whether a message should end a batch.

Type: string
Default: ""

# Examples

check: this.type == "end_of_transaction"

batching.processors

A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Note that all resulting messages are flushed as a single batch; splitting the batch into smaller batches using these processors is therefore a no-op.

Type: array

# Examples

processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array

max_in_flight

The maximum number of messages to have in flight at a given time. Increase this to improve throughput.

Type: int
Default: 64
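
Putting the fields together, a minimal end-to-end config might look like the following. All values are illustrative; the interpolated field names (level, environment) are assumptions about the shape of your log documents:

```yaml
output:
  label: dd_out
  datadog_logs:
    site: datadoghq.eu
    source: my_app
    service: checkout
    status: ${!json("level")}             # assumes logs carry a "level" field
    tags: env:${!json("environment")}     # assumes an "environment" field
    batching:
      count: 500
      period: 2s
    max_in_flight: 64
```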