Fluent Bit

Fluent Bit is an open source observability pipeline. Tenzir embeds Fluent Bit, exposing all its inputs via from_fluent_bit and outputs via to_fluent_bit.

This makes Tenzir effectively a superset of Fluent Bit.

Inputs available via from_fluent_bit: Collectd, CPU Log Based Metrics, Disk I/O Log Based Metrics, Docker Log Based Metrics, Docker Events, Dummy, Elasticsearch, Exec, Exec Wasi, Fluent Bit Metrics, Forward, Head, HTTP, Health, Kafka, Kernel Logs, Kubernetes Events, Memory Metrics, MQTT, Network I/O Log Based Metrics, NGINX Exporter Metrics, Node Exporter Metrics, Podman Metrics, Process Log Based Metrics, Prometheus Scrape Metrics, Random, Serial Interface, Splunk, Standard Input, StatsD, Syslog, Systemd, Tail, TCP, Thermal, UDP, OpenTelemetry, Windows Event Log, Windows Event Log (winevtlog), Windows Exporter Metrics.

Outputs available via to_fluent_bit: Amazon CloudWatch, Amazon Kinesis Data Firehose, Amazon Kinesis Data Streams, Amazon S3, Azure Blob, Azure Data Explorer, Azure Log Analytics, Azure Logs Ingestion API, Counter, Datadog, Elasticsearch, File, FlowCounter, Forward, GELF, Google Chronicle, Google Cloud BigQuery, HTTP, InfluxDB, Kafka, Kafka REST Proxy, LogDNA, Loki, NATS, New Relic, Observe, Oracle Log Analytics, OpenSearch, OpenTelemetry, PostgreSQL, Prometheus Exporter, Prometheus Remote Write, SkyWalking, Slack, Splunk, Stackdriver, Standard Output, Syslog, TCP & TLS, Treasure Data, Vivo Exporter, WebSocket.

Fluent Bit parsers map to Tenzir operators that accept bytes as input and produce events as output. Fluent Bit filters correspond to Tenzir operators that perform event-to-event transformations. Tenzir does not expose Fluent Bit parsers and filters, only inputs and outputs.
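
As a rough sketch, the same roles expressed natively in TQL might look like the following pipeline (the file path, the read_syslog parser, and the severity field are illustrative assumptions, not part of the Fluent Bit integration):

load_file "/var/log/syslog" // input: produces raw bytes
read_syslog // parser equivalent: turns bytes into events
where severity == "err" // filter equivalent: event-to-event transformation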

Internally, Fluent Bit encodes events as MsgPack, whereas Tenzir uses Arrow record batches. The from_fluent_bit operator converts MsgPack to Arrow, and to_fluent_bit performs the reverse conversion.

An invocation of the fluent-bit command-line utility

fluent-bit -i input_plugin -p key1=value1 -p key2=value2 -p …

translates to Tenzir’s from_fluent_bit operator as follows:

from_fluent_bit "input_plugin", options={key1: value1, key2: value2, …}

The to_fluent_bit operator works analogously for output plugins.
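
For example, a minimal sketch of the output direction, using Tenzir's version operator as an event source and Fluent Bit's stdout output plugin:

version
to_fluent_bit "stdout"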

Ingest OpenTelemetry logs, metrics, and traces

Use the opentelemetry input to accept logs, metrics, and traces via OTLP/HTTP:

from_fluent_bit "opentelemetry"

You can then send JSON-encoded log data to the freshly created API endpoint:

curl \
--header "Content-Type: application/json" \
--request POST \
--data '{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"1660296023390371588","body":{"stringValue":"{\"message\":\"dummy\"}"},"traceId":"","spanId":""}]}]}]}' \
http://0.0.0.0:4318/v1/logs
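
The opentelemetry input listens on port 4318 by default, which is why the request above targets that port. To listen elsewhere, pass a different port via the plugin options (a sketch, assuming the Fluent Bit opentelemetry input's port option):

from_fluent_bit "opentelemetry", options = {port: 4319}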
from_fluent_bit "splunk", options = {port: 8088}

Imitate an Elasticsearch & OpenSearch Bulk API endpoint

This allows you to ingest from Beats agents (e.g., Filebeat, Metricbeat, Winlogbeat).

from_fluent_bit "elasticsearch", options = {port: 9200}
to_fluent_bit "datadog", options = {apikey: "XXX"}
to_fluent_bit "es", options = {host: 192.168.2.3, port: 9200, index: "my_index", type: "my_type"}