All notable changes to this project will be documented in this file.
- Improved statsd client with better cached aggregation.
- New `tls` fields for `amqp` input and output types (sketched below).
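As an illustration only (the broker URL and the child fields under `tls` are assumptions, not taken from this changelog), enabling TLS on an `amqp` input might look like:

```yaml
input:
  type: amqp
  amqp:
    url: amqp://guest:guest@localhost:5672/  # hypothetical broker address
    tls:
      enabled: true                  # assumed field
      root_cas_file: ./root_cas.pem  # assumed field
```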
- New `type` field for `elasticsearch` output.
- New `throttle` processor.
- New `less_than` and `greater_than` operators for `metadata` condition (sketched below).
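A minimal sketch of the new operators; the `key` and `arg` field names are assumptions based on the shape of other Benthos conditions:

```yaml
condition:
  type: metadata
  metadata:
    operator: less_than  # or greater_than
    key: priority        # hypothetical metadata key
    arg: 5
```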
- New `metadata` condition type.
- More metadata fields for `kafka` input.
- Field `commit_period_ms` for `kafka` and `kafka_balanced` inputs for specifying a commit period (sketched below).
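For example, a hedged sketch of a `kafka` input committing offsets once per second; the addresses and topic are placeholders:

```yaml
input:
  type: kafka
  kafka:
    addresses:
      - localhost:9092   # placeholder
    topic: benthos_topic # placeholder
    commit_period_ms: 1000
```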
- New `retries` field to `s3` input, to cap the number of download attempts made on the same bucket item.
- Added metadata-based mechanism to detect final message from a `read_until` input.
- Added field to `split` processor for specifying target batch sizes.
- Metadata fields are now per message part within a batch.
- New `metadata_json_object` function interpolation to return a JSON object of metadata key/value pairs.
- The `metadata` function interpolation now allows part indexing and no longer returns a JSON object when no key is specified; this behaviour can now be achieved using the `metadata_json_object` function.
- Fields for the `http` processor to enable parallel requests from message batches.
- Broker level output processors are now applied before the individual output processors.
- The `dynamic` input and output HTTP paths for CRUD operations are now `/inputs/{input_id}` and `/outputs/{output_id}` respectively.
- Removed deprecated `amazon_s3`, `amazon_sqs` and `scalability_protocols` input and output types.
- Removed deprecated `json_fields` field from the `dedupe` processor.
- Added conditions to `process_map` processor.
- TLS config fields have been cleaned up for multiple types. This affects the `kafka`, `kafka_balanced` and `http_client` input and output types, as well as the `http` processor type.
- New `delete_all` and `delete_prefix` operators for `metadata` processor (sketched below).
- More metadata fields extracted from the AMQP input.
- HTTP clients now support function interpolation on the URL and header values; this includes the `http_client` input and output as well as the `http` processor.
- New `key` field added to the `dedupe` processor, allowing you to deduplicate using function interpolation (sketched below). This deprecates the `json_paths` array field.
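A sketch of key-based deduplication, assuming a cache resource named `dedupe_cache` and the `json_field` interpolation listed later in this file; both names and the interpolation syntax are assumptions:

```yaml
pipeline:
  processors:
    - type: dedupe
      dedupe:
        cache: dedupe_cache        # hypothetical cache resource
        key: ${!json_field:doc.id} # assumed interpolation syntax
```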
- New `s3` and `sqs` input and output types; these replace the now deprecated `amazon_s3` and `amazon_sqs` types respectively, which will eventually be removed.
- New `nanomsg` input and output types; these replace the now deprecated `scalability_protocols` types, which will eventually be removed.
- Metadata fields are now collected from MQTT input.
- AMQP output writes all metadata as headers.
- AMQP output field `key` now supports function interpolation.
- New `metadata` processor and configuration interpolation function.
- New config interpolator function `json_field` for extracting parts of a JSON message into a config value.
- Log level config field no longer stutters; `logger.log_level` is now `logger.level` (sketched below).
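The rename in config form; the level value is a placeholder:

```yaml
# Before:
logger:
  log_level: INFO
# After:
logger:
  level: INFO
```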
- Ability to create batches via conditions on message payloads in the `batch` processor.
- New `--examples` flag for generating specific examples from Benthos.
- New `text` processor.
- Processor `process_map` replaced field `strict_premapping` with `premap_optional`.
- New `process_field` processor.
- New `process_map` processor.
- Removed mapping fields from the `http` processor; this behaviour has been moved into the new `process_map` processor instead.
- Renamed `content` condition type to `text` in order to clarify its purpose.
- Latency metrics for caches.
- TLS options for `kafka` and `kafka_partitions` inputs and outputs.
- Metrics for items configured within the `resources` section are now namespaced under their identifier.
- New `copy` and `move` operators for the `json` processor (sketched below).
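A hedged sketch of `move`; whether the destination path belongs in `value` is an assumption about this processor's schema, and both paths are placeholders:

```yaml
pipeline:
  processors:
    - type: json
      json:
        operator: move
        path: foo.bar   # placeholder source path
        value: baz.qux  # placeholder destination path (assumed field)
```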
- Metrics for recording `http` request latencies.
- Improved and rearranged fields for `http_client` input and output.
- More compression and decompression targets.
- New `lines` option for archive/unarchive processors.
- New `encode` and `decode` processors.
- New `period_ms` field for the `batch` processor.
- New `clean` operator for the `json` processor.
- New `http` processor, where payloads can be sent to arbitrary HTTP endpoints and the result constructed into a new payload.
- New `inproc` inputs and outputs for linking streams together.
- New streams endpoint `/streams/{id}/stats` for obtaining JSON metrics for a stream.
- Allow comma separated topics for `kafka_balanced`.
- Support for PATCH verb on the streams mode `/streams/{id}` endpoint.
- Sweeping changes were made to the environment variable configuration file. This file is now auto-generated along with its supporting document. This change will impact the Docker image.
- New `filter_parts` processor for filtering individual parts of a message batch.
- New field `open_message` for `websocket` input.
- No longer setting default input processor.
- New `root_path` field for service-wide `http` config.
- New `regexp_exact` and `regexp_partial` content condition operators.
- The `statsd` metrics target will now periodically report connection errors.
- The `json` processor will now `append` array values in expanded form.
- More granular config options in the `http_client` output for controlling retry logic.
- New `try` pattern for the output `broker` type, which can be used in order to configure fallback outputs (sketched below).
- New `json` processor; this replaces `delete_json`, `select_json` and `set_json`.
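A hedged sketch of the `try` pattern, where the second output acts as a fallback for the first; the child output configs are placeholders:

```yaml
output:
  type: broker
  broker:
    pattern: try
    outputs:
      - type: http_client
        http_client:
          url: http://localhost:4195/post  # placeholder primary target
      - type: file
        file:
          path: ./fallback.log             # placeholder fallback
```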
- The `streams` API endpoints have been changed to become more "RESTy".
- Removed the `delete_json`, `select_json` and `set_json` processors; please use the `json` processor instead.
- New `grok` processor for creating structured objects from unstructured data.
- New `files` input type for reading multiple files as discrete messages.
- Increase default `max_buffer` for `stdin`, `file` and `http_client` inputs.
- Command flags `--print-yaml` and `--print-json` changed to provide sanitised outputs unless accompanied by new `--all` flag.
- Badger based buffer option has been removed.
- New metrics wrapper for more basic interface implementations.
- New `delete_json` processor.
- New field `else_processors` for `conditional` processor (sketched below).
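A hedged sketch combining `conditional` with `else_processors`; the condition and child processors are placeholders built from types listed elsewhere in this file, and the `text` operators shown are assumptions:

```yaml
pipeline:
  processors:
    - type: conditional
      conditional:
        condition:
          type: text
          text:
            operator: contains
            arg: foo              # placeholder
        processors:
          - type: text
            text:
              operator: to_upper  # assumed text operator
        else_processors:
          - type: text
            text:
              operator: to_lower  # assumed text operator
```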
- New websocket endpoint for `http_server` input.
- New websocket endpoint for `http_server` output.
- New `websocket` input type.
- New `websocket` output type.
- Goreleaser config for generating release packages.
- Back to using Scratch as base for Docker image, instead taking ca-certificates from the build image.
- New `batch` processor for combining payloads up to a number of bytes.
- New `conditional` processor, which allows you to configure a chain of processors to only be run if the payload passes a `condition`.
- New `--stream` mode features:
  - POST verb for `/streams` path now supported.
  - New `--streams-dir` flag for parsing a directory of stream configs.
- The `condition` processor has been renamed `filter`.
- The `custom_delimiter` fields in any line reader types (`file`, `stdin`, `stdout`, etc.) have been renamed `delimiter`, where the behaviour is the same.
- Now using Alpine as base for Docker image, includes ca-certificates.