
to

Saves to a URI, inferring the destination, compression and format.

to uri:string, [saver_args… { … }]

The to operator is an easy way to get data out of Tenzir. It tries to infer the connector, compression and format based on the given URI.
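As a minimal sketch (the file path here is a hypothetical example), writing events to a local JSON file needs nothing but the path:

```tql
to "/tmp/events.json"
```

Because the path ends in `.json` and carries no scheme, this writes JSON and saves to the local filesystem.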

The URI to save to.

An optional set of arguments passed to the saver. This can be used to e.g. pass credentials to a connector:

to "https://example.org/file.json", headers={Token: "XYZ"}

The optional pipeline argument allows for explicitly specifying how to compress and write data. By default, the pipeline is inferred based on a set of rules.

If inference is not possible, or not sufficient, this argument can be used to control compression and writing. Providing this pipeline disables the inference.
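For example, a sketch of an explicit pipeline argument (the bucket name is an assumption) that forces NDJSON output with Gzip compression regardless of the file ending:

```tql
to "s3://my-bucket/events.log" {
  write_ndjson
  compress_gzip
}
```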

Saving Tenzir data into some resource consists of three steps:

1. Writing the data in a format
2. Optionally compressing the output stream
3. Saving the stream via a connector

The to operator tries to infer all three steps from the given URI.

The format to write is inferred from the file ending. Supported file formats are the common file endings for our write_* operators.

If you want to provide additional arguments to the writer, you can use the pipeline argument to specify the writing manually.

The compression, just as the format, is inferred from the "file ending" in the URI. Under the hood, this uses the compress_* operators. Supported compressions can be found in the list of compression extensions.

The compression step is optional and will only happen if a compression could be inferred. If you want to write with specific compression settings, you can use the pipeline argument to specify the compression manually.
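As a sketch, assuming a hypothetical output name whose ending carries no format or compression hints, the pipeline argument can pick both explicitly:

```tql
to "backup.dat" {
  write_json
  compress_zstd
}
```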

The connector is inferred from the URI's `scheme://`. If no scheme is present, the operator attempts to save to the local filesystem.

| Scheme | Operator | Example |
| --- | --- | --- |
| `abfs`, `abfss` | `save_azure_blob_storage` | `to "abfs://path/to/file.json"` |
| `amqp` | `save_amqp` | `to "amqp://…` |
| `elasticsearch` | `to_opensearch` | `to "elasticsearch://…` |
| `file` | `save_file` | `to "file://path/to/file.json"` |
| `fluent-bit` | `to_fluent_bit` | `to "fluent-bit://elasticsearch"` |
| `ftp`, `ftps` | `save_ftp` | `to "ftp://example.com/file.json"` |
| `gcps` | `save_google_cloud_pubsub` | `to "gcps://project_id/topic_id" { … }` |
| `gs` | `save_gcs` | `to "gs://bucket/object.json"` |
| `http`, `https` | `save_http` | `to "http://example.com/file.json"` |
| `inproc` | `save_zmq` | `to "inproc://127.0.0.1:56789" { write_json }` |
| `kafka` | `save_kafka` | `to "kafka://topic" { write_json }` |
| `opensearch` | `to_opensearch` | `to "opensearch://…` |
| `s3` | `save_s3` | `to "s3://bucket/file.json"` |
| `sqs` | `save_sqs` | `to "sqs://my-queue" { write_json }` |
| `tcp` | `save_tcp` | `to "tcp://127.0.0.1:56789" { write_json }` |
| `udp` | `save_udp` | `to "udp://127.0.0.1:56789" { write_json }` |
| `zmq` | `save_zmq` | `to "zmq://127.0.0.1:56789" { write_json }` |

Please see the respective operator pages for details on the URI’s locator format.
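For instance, streaming NDJSON over TCP to a hypothetical local endpoint could look like:

```tql
to "tcp://127.0.0.1:56789" { write_ndjson }
```

Since a TCP endpoint has no file ending to infer a format from, the pipeline argument supplies the writer explicitly.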

The to operator can deduce the file format based on these file-endings:

| Format | File Endings | Operator |
| --- | --- | --- |
| CSV | `.csv` | `write_csv` |
| Feather | `.feather`, `.arrow` | `write_feather` |
| JSON | `.json` | `write_json` |
| NDJSON | `.ndjson`, `.jsonl` | `write_ndjson` |
| Parquet | `.parquet` | `write_parquet` |
| Pcap | `.pcap` | `write_pcap` |
| SSV | `.ssv` | `write_ssv` |
| TSV | `.tsv` | `write_tsv` |
| YAML | `.yaml` | `write_yaml` |

The to operator can deduce the following compressions based on these file-endings:

| Compression | File Endings |
| --- | --- |
| Brotli | `.br`, `.brotli` |
| Bzip2 | `.bz2` |
| Gzip | `.gz`, `.gzip` |
| LZ4 | `.lz4` |
| Zstd | `.zst`, `.zstd` |
For example, `to "myfile.json.gz"` expands to the effective pipeline:

```tql
write_json
compress_gzip
save_file "myfile.json.gz"
```

Similarly, `to "path/to/my/output.csv"` expands to `write_csv` followed by `save_file`, and `to "path/to/my/output.csv.bz2"` additionally inserts the Bzip2 compression step between writing and saving.
