Vector v0.50.0 release notes

The Vector team is excited to announce version 0.50.0!

Release highlights

  • The opentelemetry source can now decode data according to the standard OpenTelemetry protocol for all telemetry data types (logs, metrics and traces). This eliminates the need for complex event remapping. It greatly simplifies configuration for OTEL -> Vector -> OTEL use cases or when forwarding data to any system that expects OTLP-formatted telemetry.
  • A new varint_length_delimited framing option is now available which enables compatibility with standard protobuf streaming implementations and tools like ClickHouse.
  • Introduced a new incremental_to_absolute transform, useful when metric data might be lost in transit or for creating a historical record of the metric.
  • A new okta source for consuming Okta system logs is now available.
  • The exec secrets option now supports protocol version v1.1 which can be used with the Datadog Secret Backend.

Breaking Changes

  • The azure_blob sink now requires a connection_string, which is currently the only supported authentication method. For more details, see this pull request.

Upgrading Vector

When upgrading, we recommend stepping through minor versions, as each can contain breaking changes while Vector is pre-1.0. These breaking changes are noted in their respective upgrade guides.

Vector Changelog

7 new features

  • Introduced a new okta source for consuming Okta system logs
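
    A sketch of what a configuration might look like (the okta source type is from this release, but the domain and token option names are illustrative assumptions; check the source reference for the exact fields):

    sources:
      okta_logs:
        type: okta
        domain: "my-org.okta.com"
        token: "${OKTA_API_TOKEN}"
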
    Thanks to sonnens for contributing this change!
  • Added an optional ttl_field configuration option to the memory enrichment table, to override the global memory table TTL on a per event basis.
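
    For example (a sketch; the surrounding memory table options are assumptions based on the existing memory enrichment table configuration, with ttl_field naming the event field that carries the per-event TTL):

    enrichment_tables:
      shared_memory:
        type: memory
        ttl: 600
        ttl_field: "ttl"
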
    Thanks to esensar, Quad9DNS for contributing this change!
  • The request_retry_partial behavior for the aws_kinesis_streams sink was changed. Now only the failed records in a batch will be retried (instead of all records in the batch).
    Thanks to lht for contributing this change!
  • The exec secrets option now supports protocol version v1.1 and can be used with the datadog-secret-backend.

    Sample config:

    secret:
      exec_backend:
        type: "exec"
        command: ["/usr/bin/datadog-secret-backend"]
        protocol:
          version: v1_1
          backend_type: file.json
          backend_config:
            file_path: ~/secrets.json
    

    Thanks to graphcareful for contributing this change!
  • Added a new incremental_to_absolute transform which converts incremental metrics to absolute metrics. This is useful when delivery to a sink is lossy or when you want a historical record of metrics; with incremental metrics, any gap in delivery skews the computed ending value.
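
    A minimal sketch (the transform type is from this release; the input name is illustrative):

    transforms:
      to_absolute:
        type: incremental_to_absolute
        inputs: ["my_metrics_source"]
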
    Thanks to GreyLilac09 for contributing this change!
  • When config reload is aborted due to GlobalOptions changes, the specific top-level fields that differ are now logged to help debugging.
    Thanks to suikammd for contributing this change!
  • The prometheus_remote_write source now supports optional NaN value filtering via the skip_nan_values configuration option.

    When enabled, metric samples with NaN values are discarded during parsing, preventing downstream processing of invalid metrics. For counters and gauges, individual samples with NaN values are filtered. For histograms and summaries, the entire metric is filtered if any component contains NaN values (sum, bucket limits, or quantile values).

    This feature defaults to false to maintain backward compatibility.
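
    Sample config (a sketch; the address value is illustrative):

    sources:
      prometheus_rw:
        type: prometheus_remote_write
        address: "0.0.0.0:9090"
        skip_nan_values: true
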


    Thanks to elohmeier for contributing this change!

6 enhancements

  • The gelf encoding format now supports chunking when used with the socket sink in udp mode. The maximum chunk size can be configured using encoding.gelf.max_chunk_size.
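
    A sketch of a possible configuration (the input name, address, and chunk size value are illustrative; the nesting of the gelf options is an assumption to verify against the sink reference):

    sinks:
      graylog:
        type: socket
        inputs: ["my_logs"]
        mode: udp
        address: "graylog.example.com:12201"
        encoding:
          codec: gelf
          gelf:
            max_chunk_size: 8192
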
    Thanks to aramperes for contributing this change!
  • The nats source now drains subscriptions during shutdown, ensuring that in-flight and pending messages are processed.
    Thanks to benjamin-awd for contributing this change!
  • Added JetStream support to the nats source.
    Thanks to benjamin-awd for contributing this change!
  • Added an insert_namespace_fields config option which can be used to disable listing Kubernetes namespaces, reducing resource usage in clusters with many namespaces.
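
    A sketch (assuming this option lives on the kubernetes_logs source, which is an assumption to verify against the reference docs):

    sources:
      k8s:
        type: kubernetes_logs
        insert_namespace_fields: false
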
    Thanks to imbstack for contributing this change!
  • The opentelemetry source now supports a new decoding mode which can be enabled by setting use_otlp_decoding to true. In this mode, all events preserve the OTLP format. These events can be forwarded directly to the opentelemetry sink without modifications.

    Note: The OTLP metric format and the Vector metric format differ, so the opentelemetry source emits OTLP-formatted metrics as Vector log events. These events cannot be used with existing metrics transforms, but they can be ingested by OTEL collectors as metrics.
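
    A sketch of a source configured for passthrough (the listener addresses and grpc/http layout follow the existing opentelemetry source options and should be verified against the reference):

    sources:
      otel_in:
        type: opentelemetry
        use_otlp_decoding: true
        grpc:
          address: "0.0.0.0:4317"
        http:
          address: "0.0.0.0:4318"
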


    Thanks to pront for contributing this change!
  • Added support for varint length delimited framing for protobuf, which is compatible with standard protobuf streaming implementations and tools like ClickHouse.

    Users can now opt-in to varint framing by explicitly specifying framing.method: varint_length_delimited in their configuration. The default remains length-delimited framing for backward compatibility.
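
    Opting in might look like this (a sketch; the sink type and protobuf encoding options are illustrative assumptions, shown only to situate framing.method alongside an encoding block):

    sinks:
      out:
        type: console
        inputs: ["in"]
        encoding:
          codec: protobuf
          protobuf:
            desc_file: "/etc/vector/schema.desc"
            message_type: "my.package.MyMessage"
        framing:
          method: varint_length_delimited
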


    Thanks to modev2301 for contributing this change!

7 bug fixes

  • Fixed the splunk_hec sink to not use compression on indexer acknowledgement queries.
    Thanks to sbalmos for contributing this change!
  • Fixed a bug where certain floating-point values such as f64::NAN, f64::INFINITY, and similar would cause Vector to panic when sorting more than 20 items in some internal functions.
    Thanks to thomasqueirozb for contributing this change!
  • Prevented a panic in the file source during timing stats reporting.
    Thanks to mayuransw for contributing this change!
  • Fixed the default aws_s3 sink retry strategy. The default configuration now correctly retries common transient errors instead of requiring manual configuration.
    Thanks to pront for contributing this change!
  • Fixed disk buffer panics that occurred when both the reader and writer were on the last data file and it was corrupted. This scenario typically occurs when a node shuts down improperly, leaving the final data file in a corrupted state.
    Thanks to anil-db for contributing this change!
  • When there is an error encoding UDP and Unix socket datagrams, the event status is now updated correctly to indicate an error.
    Thanks to aramperes for contributing this change!
  • Added validation to ensure that a test expecting no output from a source does not perform operations on that source.
    Thanks to kalopsian-tz for contributing this change!

1 chore

  • The azure_blob sink now requires a connection_string. This simplifies configuration and ensures predictable behavior in production. Other authentication methods will not be supported at least until the azure_* crates mature.
    Thanks to pront for contributing this change!

VRL Changelog

0.27.0 (2025-09-18)

Breaking Changes & Upgrade Guide

  • The validate_json_schema functionality has been enhanced to collect and return validation error(s) in the error message return value, in addition to the existing primary Boolean true / false return value. (https://github.com/vectordotdev/vrl/pull/1483)

Using JSON schema test-schema.json below:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "test": {
      "type": "boolean"
    },
    "id": {
      "type": "integer"
    }
  },
  "required": ["test"],
  "additionalProperties": false
}

Before:

$ invalid_object = { "id": "123" }
{ "id": "123" }

$ valid, err = validate_json_schema(encode_json(invalid_object), "test-schema.json")
false

$ valid
false

$ err
null

After:

$ invalid_object = { "id": "123" }
{ "id": "123" }

$ valid, err = validate_json_schema(encode_json(invalid_object), "test-schema.json")
"function call error for "validate_json_schema" at (13:82): JSON schema validation failed: "123" is not of type "integer" at /id, "test" is a required property at /"

$ valid
false

$ err
"function call error for "validate_json_schema" at (13:82): JSON schema validation failed: "123" is not of type "integer" at /id, "test" is a required property at /"

New Features

  • Added a new xxhash function implementing xxh32/xxh64/xxh3_64/xxh3_128 hashing algorithms.
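
A usage sketch (the variant argument name and value here are assumptions; check the function reference for the exact signature):

hash = xxhash("content to hash", variant: "xxh3_64")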

authors: stigglor (https://github.com/vectordotdev/vrl/pull/1473)

  • Added an optional strict_mode parameter to parse_aws_alb_log. When set to false, the parser ignores any newly added/trailing fields in AWS ALB logs instead of failing. Defaults to true to preserve current behavior.
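
For example (a sketch using the strict_mode parameter named above):

parsed = parse_aws_alb_log!(.message, strict_mode: false)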

authors: anas-aso (https://github.com/vectordotdev/vrl/pull/1482)

  • Added a new array function pop that removes the last item from an array.
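
For example (a sketch; the assumption here, in line with VRL's immutable style, is that pop returns the array without its last item):

shortened = pop([1, 2, 3])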

authors: jlambatl (https://github.com/vectordotdev/vrl/pull/1501)

  • Added two new cryptographic functions encrypt_ip and decrypt_ip for IP address encryption

These functions use the IPCrypt specification and support both IPv4 and IPv6 addresses with two encryption modes: aes128 (IPCrypt deterministic, 16-byte key) and pfx (IPCryptPfx, 32-byte key). Both algorithms are format-preserving (output is a valid IP address) and deterministic. (https://github.com/vectordotdev/vrl/pull/1506)
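
A usage sketch (argument names and order are assumptions; per the description, aes128 mode expects a 16-byte key):

encrypted = encrypt_ip!("192.168.0.1", "sixteen byte key", "aes128")
decrypted = decrypt_ip!(encrypted, "sixteen byte key", "aes128")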

Enhancements

  • Added an optional body parameter to http_request. Best used when sending a POST or PUT request.

This does not perform automatic setting of Content-Type or Content-Length header(s). The caller should add these headers using the headers map parameter. (https://github.com/vectordotdev/vrl/pull/1502)
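
A usage sketch (the URL is illustrative; the argument names follow the body and headers parameters mentioned above and should be checked against the function reference):

response = http_request!("https://example.com/api", method: "POST", headers: {"Content-Type": "application/json"}, body: encode_json({"status": "ok"}))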

Fixes

  • The validate_json_schema function no longer panics if the JSON schema file cannot be accessed or is invalid. (https://github.com/vectordotdev/vrl/pull/1476)
  • Fixed the http_request function so it runs from the VRL CLI without panicking.

authors: sbalmos (https://github.com/vectordotdev/vrl/pull/1510)

Download Version 0.50.0