Update module github.com/twmb/franz-go to v1.18.0 #21
This PR contains the following updates:
| Package | Change |
| --- | --- |
| github.com/twmb/franz-go | `v1.15.2` -> `v1.18.0` |
Release Notes
twmb/franz-go (github.com/twmb/franz-go)
v1.18.0
Compare Source
===
This release adds support for Kafka 3.8, adds a few community-requested APIs,
some internal improvements, and fixes two bugs. One of the bugfixes is for a
deadlock; it is recommended to bump to this release to ensure you do not run
into the deadlock. The features in this release are relatively small.
This adds protocol support for KIP-890 and KIP-994, and adds further protocol
support for KIP-848. If you are using transactions, you may see a new
`kerr.TransactionAbortable` error, which signals that your ongoing transaction
should be aborted and will not be successful if you try to commit it.
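A minimal sketch of reacting to this error with the transactional client; the function name and retry strategy are illustrative assumptions, not code from the release:

```go
package example

import (
	"context"
	"errors"

	"github.com/twmb/franz-go/pkg/kerr"
	"github.com/twmb/franz-go/pkg/kgo"
)

// endTxn tries to commit an open transaction. If the broker signals the
// transaction is only abortable, it aborts so the caller can redo the
// work from scratch.
func endTxn(ctx context.Context, cl *kgo.Client) error {
	err := cl.EndTransaction(ctx, kgo.TryCommit)
	if errors.Is(err, kerr.TransactionAbortable) {
		return cl.EndTransaction(ctx, kgo.TryAbort)
	}
	return err
}
```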
Lastly, there have been a few improvements to `pkg/sr` that are not mentioned
in these changelog notes.
Bug fixes
If you canceled the context used while producing while your client was
at the maximum buffered records or bytes, it was possible to experience
deadlocks. This has been fixed. See #832 for more details.
Previously, if using `GetConsumeTopics` while regex consuming, the function
would return all topics ever discovered. It now returns only the topics that
are being consumed.
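For reference, a hedged sketch of the call in a regex-consuming client (the broker address and topic pattern are placeholders):

```go
package main

import (
	"context"
	"fmt"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder
		kgo.ConsumeRegex(),
		kgo.ConsumeTopics("^orders-.*"), // treated as a pattern with ConsumeRegex
	)
	if err != nil {
		panic(err)
	}
	defer cl.Close()

	cl.PollFetches(context.Background()) // let the client discover topics

	// As of v1.18.0, this returns only topics actually being consumed,
	// not every topic the regex ever matched.
	fmt.Println(cl.GetConsumeTopics())
}
```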
Improvements
Where possible, the client now internally handles an OutOfOrderSequenceNumber
error encountered when producing. If a producer produces very infrequently,
it is possible the broker forgets the producer by the next time the producer
produces. In this case, the producer receives an OutOfOrderSequenceNumber error.
The client now internally resets properly so that you do not see the error.
Features
- `AllowRebalance` and `CloseAllowingRebalance` have been added to `GroupTransactSession`.
- The `FetchTopic` type now includes the topic's `TopicID` (see the sketch after this list).
- The `ErrGroupSession` internal error field is now public, allowing you to test how you handle the internal error.
- You may now see a `kerr.TransactionAbortable` error from many functions while using transactions.
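A short sketch of reading the new field while polling (the helper is illustrative):

```go
package example

import (
	"context"
	"fmt"

	"github.com/twmb/franz-go/pkg/kgo"
)

// printTopicIDs polls once and prints each fetched topic's TopicID,
// which FetchTopic carries as of v1.18.0.
func printTopicIDs(ctx context.Context, cl *kgo.Client) {
	fetches := cl.PollFetches(ctx)
	fetches.EachTopic(func(ft kgo.FetchTopic) {
		fmt.Printf("topic %s id %v partitions %d\n",
			ft.Topic, ft.TopicID, len(ft.Partitions))
	})
}
```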
Relevant commits
- `0fd1959d` kgo: support Kafka 3.8's kip-890 modifications
- `68163c55` bugfix kgo: do not add all topics to internal tps map when regex consuming
- `3548d1f7` improvement kgo: ignore OOOSN where possible
- `6a759401` bugfix kgo: fix potential deadlock when reaching max buffered (records|bytes)
- `4bfb0c68` feature kgo: add TopicID to the FetchTopic type
- `06a9c47d` feature kgo: export the wrapped error from ErrGroupSession
- `4affe8ef` feature kgo: add AllowRebalance and CloseAllowingRebalance to GroupTransactSession

v1.17.1
Compare Source
===
This patch release fixes four bugs (two are fixed in one commit), contains two
internal improvements, and adds two other minor changes.
Bug fixes
If you were using the `MaxBufferedBytes` option and ever hit the max, odds are
that you would eventually experience a deadlock. That has been fixed.
If you ever produced a record with no topic field and without using `DefaultProduceTopic`,
or if you produced a transactional record while not in a transaction, AND if the client
was at the maximum buffered records, odds are you would eventually deadlock.
This has been fixed.
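As background, a hedged sketch of the produce path in question (broker, topic, and limit are placeholders):

```go
package main

import (
	"context"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder
		kgo.DefaultProduceTopic("events"), // placeholder topic
		kgo.MaxBufferedRecords(1_000),
	)
	if err != nil {
		panic(err)
	}
	defer cl.Close()

	// The record has no Topic, so DefaultProduceTopic applies. Without
	// that option the record fails immediately; before v1.17.1, failing
	// it while at max buffered records could deadlock.
	cl.Produce(context.Background(), &kgo.Record{Value: []byte("v")}, nil)
	cl.Flush(context.Background())
}
```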
It was previously not possible to set lz4 compression levels.
There was a data race on a boolean field if a produce request was being
written at the same time a metadata update happened, and if the metadata
update had an error on the topic or partition that was actively being written.
Note that the race was unlikely and if you experienced it, you would have noticed
an OutOfOrderSequenceNumber error. See this comment
for more details.
Improvements
Canceling the context you pass to `Produce` now propagates in two more areas:
the initial `InitProducerID` request that occurs the first time you produce,
and if the client is internally backing off due to a produce request failure.
Note that there is no guarantee on which context is used for cancelation if
you produce many records, and the client does not allow canceling if it is
currently unsafe to do so. However, this does mean that if your cluster is
somewhat down such that `InitProducerID` is failing on your new client, you
can now actually cause the `Produce` to quit. See this comment
for what it means for a record to be "safe" to fail.
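A minimal sketch of the now-cancelable path (the timeout and topic are illustrative):

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(kgo.SeedBrokers("localhost:9092")) // placeholder
	if err != nil {
		panic(err)
	}
	defer cl.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// If the cluster is down and InitProducerID keeps failing, this first
	// produce can now be canceled through ctx instead of hanging.
	cl.Produce(ctx, &kgo.Record{Topic: "events", Value: []byte("v")}, // placeholder topic
		func(_ *kgo.Record, err error) {
			if err != nil {
				log.Printf("produce failed: %v", err)
			}
		})
	cl.Flush(context.Background())
}
```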
The client now ignores aborted records while consuming only if you have
configured `FetchIsolationLevel(ReadCommitted())`. Previously, the client relied
entirely on the `FetchResponse` `AbortedTransactions` field, but it's possible
that brokers could send aborted transactions even when not using read committed.
Specifically, this was a behavior difference in Redpanda, and the KIP that introduced
transactions and all relevant documents do not mention what the broker behavior
actually should be here. Redpanda itself was also changed to not send aborted
transactions when using read committed, but franz-go has been improved as well.
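For reference, the relevant consumer option (broker and topic are placeholders):

```go
package example

import "github.com/twmb/franz-go/pkg/kgo"

// newReadCommittedClient builds a consumer for which the client filters
// aborted records; without ReadCommitted, it no longer does.
func newReadCommittedClient() (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder
		kgo.ConsumeTopics("txn-topic"),    // placeholder
		kgo.FetchIsolationLevel(kgo.ReadCommitted()),
	)
}
```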
Decompression now better reuses buffers under the hood, reducing allocations.
Brokers that return preferred replicas to fetch from now cause an info-level
log in the client.
Relevant commits
- `305d8dc` kgo: allow record ctx cancelation to propagate a bit more
- `24fbb0f` bugfix kgo: fix deadlock in Produce when using MaxBufferedBytes
- `1827add` bugfix kgo sink: fix read/write race for recBatch.canFailFromLoadErrs
- `d7ea2c3` bugfix fix setting lz4 compression levels (thanks @asg0451!)
- `5809dec` optimise: use byteBuffer pool in decompression (thanks @kalbhor!)
- `cda897d` kgo: add log for preferred replicas
- `e62b402` improvement kgo sink: do not back off on certain edge case
- `9e32bf9` kgo: ignore aborted txns if using READ_UNCOMMITTED
v1.17.0
Compare Source
===
This long-coming release, four months after v1.16.0, adds support for Kafka 3.7
and adds a few community added or requested APIs. There will be a kadm release
shortly following this one, and maybe a plugin release.
This adds full support for KIP-951, as well as protocol support for
KIP-919 (which has no client facing features) and KIP-848
(protocol only, not the feature!). KIP-951 should make the client faster at
handling when the broker moves partition leadership to a different broker.
There are two fairly minor bug fixes in the kgo package in this release, both
described below. There is also one bugfix in the independent (and currently
untagged) pkg/sr module. Because pkg/sr is untagged, the bugfix was released
a long time ago, but the relevant commit is still mentioned below.
Bug fixes
Previously, upgrading a consumer group from non-cooperative to cooperative
while the group was running did not work. This is now fixed (by @hamdanjaveed, thank you!).
Previously, if a cooperative consumer group member rebalanced while fetching
offsets for partitions, if those partitions were not lost in the rebalance,
the member would call OnPartitionsAssigned with those partitions again.
Now, those partitions are passed to OnPartitionsAssigned only once (the first time).
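A hedged sketch of the callback this affects (all names are placeholders):

```go
package example

import (
	"context"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

// newGroupClient registers OnPartitionsAssigned with a cooperative
// balancer. After the fix, a partition whose offset fetch was interrupted
// by a rebalance (but not lost) is not re-announced here a second time.
func newGroupClient() (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder
		kgo.ConsumerGroup("my-group"),     // placeholder
		kgo.ConsumeTopics("my-topic"),     // placeholder
		kgo.Balancers(kgo.CooperativeStickyBalancer()),
		kgo.OnPartitionsAssigned(func(_ context.Context, _ *kgo.Client, assigned map[string][]int32) {
			log.Printf("assigned: %v", assigned)
		}),
	)
}
```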
Improvements
The client will now stop lingering if you hit max buffered records or bytes.
Previously, if your linger was long enough, you could stall once you hit
either of the Max options; that is no longer the case.
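The options involved, for reference (the values are arbitrary):

```go
package example

import (
	"time"

	"github.com/twmb/franz-go/pkg/kgo"
)

// newLingeringProducer combines a linger with buffer caps. Previously a
// long linger could stall once a Max option was hit; the client now stops
// lingering and flushes instead.
func newLingeringProducer() (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder
		kgo.ProducerLinger(500*time.Millisecond),
		kgo.MaxBufferedRecords(10_000),
		kgo.MaxBufferedBytes(64<<20),
	)
}
```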
If you are issuing admin APIs on the same client you are using for consuming
or producing, you may see fewer metadata requests being issued.
There are a few other even more minor improvements in the commit list if you
wish to go spelunking :).
Features
- The `Offset` type now has a new method `AtCommitted()`, which causes the consumer to not fetch any partitions that do not have a previous commit. This mirrors Kafka's `auto.offset.reset=none` option (see the sketch after this list).
- KIP-951, linked above and the commit linked below, improves latency around partition leader transfers on brokers.
- `Client.GetConsumeTopics` allows you to query what topics the client is currently consuming. This may be useful if you are consuming via regex.
- `Client.MarkCommitOffsets` allows you to mark offsets to be committed in bulk, mirroring the non-mark API `CommitOffsets`.
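A hedged sketch of opting into the `auto.offset.reset=none`-like behavior (group and topic are placeholders):

```go
package example

import "github.com/twmb/franz-go/pkg/kgo"

// newNoResetConsumer only consumes partitions that already have a
// committed offset, mirroring Kafka's auto.offset.reset=none.
func newNoResetConsumer() (*kgo.Client, error) {
	return kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder
		kgo.ConsumerGroup("my-group"),     // placeholder
		kgo.ConsumeTopics("my-topic"),     // placeholder
		kgo.ConsumeResetOffset(kgo.NewOffset().AtCommitted()),
	)
}
```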
Relevant commits
franz-go
- `a7caf20` feature kgo.Offset: add AtCommitted()
- `55dc7a0` bugfix kgo: re-add fetch-canceled partitions AFTER the user callback
- `db24bbf` improvement kgo: avoid / wakeup lingering if we hit max bytes or max records
- `993544c` improvement kgo: Optimistically cache mapped metadata when cluster metadata is periodically refreshed (thanks @pracucci!)
- `1ed02eb` feature kgo: add support for KIP-951
- `2fbbda5` bugfix fix: clear lastAssigned when revoking eager consumer
- `d9c1a41` pkg/kerr: add new errors
- `54d3032` pkg/kversion: add 3.7
- `892db71` pkg/sr bugfix sr SubjectVersions calls pathSubjectVersion
- `ed26ed0` feature kgo: adds Client.GetConsumeTopics (thanks @UnaffiliatedCode!)
- `929d564` feature kgo: adds Client.MarkCommitOffsets (thanks @sudo-sturbia!)

kfake
kfake as well has a few improvements worth calling out:
- `18e2cc3` kfake: support committing to non-existing groups
- `b05c3b9` kfake: support KIP-951, fix OffsetForLeaderEpoch
- `5d8aa1c` kfake: fix handling ListOffsets with requested timestamp

v1.16.1
Compare Source
===
This patch release fixes one bug and un-deprecates SaramaHasher.
SaramaHasher, while not identical to Sarama's partitioner, actually is
identical to some other partitioners in the Kafka client ecosystem. So, the old
function is now un-deprecated, but the documentation correctly points you to
SaramaCompatHasher and mentions why you may still want to use SaramaHasher.
For the bug: if you tried using CommitOffsetsSync during a group rebalance, and
you canceled your context while the group was still rebalancing, then
CommitOffsetsSync would enter a deadlock and never return. That has been fixed.
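A sketch of the call that could previously hang (the timeout is illustrative):

```go
package example

import (
	"context"
	"time"

	"github.com/twmb/franz-go/pkg/kgo"
)

// commitSync commits all uncommitted offsets. Before v1.16.1, canceling
// ctx while the group was rebalancing could deadlock this call.
func commitSync(cl *kgo.Client) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	cl.CommitOffsetsSync(ctx, cl.UncommittedOffsets(), nil)
}
```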
- `cd65d77` and `99d6dfb` kgo: fix bug
- `d40ac19` kgo: un-deprecate SaramaHasher and add docs explaining why

v1.16.0
Compare Source
===
This release contains a few minor APIs and internal improvements and fixes two
minor bugs.
One new API that is introduced also fixes a bug. API-wise, the `SaramaHasher`
was actually not a 1:1 compatible hasher. The logic was identical, but there
was a rounding error because Sarama uses int32 modulo arithmetic, whereas kgo
used int (which is likely int64), which caused a different hash result. A new
`SaramaCompatHasher` has been introduced and the old `SaramaHasher` has been
deprecated.
The other bugfix is that `OptValue` on the `kgo.Logger` option panicked if you
were not using a logger. That has been fixed.
The only other APIs that are introduced are in the `kversion` package; they
are minor, see the commit list below.
If you issue a sharded request and any of the responses has a retryable error
in the response, this is no longer returned as a top-level shard error. The
shard error is now nil, and you can properly inspect the response fully.
Lastly (besides other internal minor improvements not worth mentioning),
metadata fetches can now inject fake fetches if the metadata response has topic
or partition load errors. This is unconditionally true for non-retryable
errors. If you use `KeepRetryableFetchErrors`, you can now also see when
metadata fetching is showing unknown topic errors or other retryable errors.
- `a2340eb` improvement pkg/kgo: inject fake fetches on metadata load errors
- `d07efd9` feature kversion: add VersionStrings, FromString, V3_6_0
- `8d30de0` bugfix pkg/kgo: fix OptValue with no logger set
- `012cd7c` improvement kgo: do not return response ErrorCode's as shard errors
- `1dc3d40` bugfix: actually have correct sarama compatible hasher (thanks @C-Pro)

v1.15.4
Compare Source
===
This patch release fixes a difficult-to-encounter but
fatal-for-group-consuming bug.
The sequence of events to trigger this bug: a Heartbeat request needs to
locate the group coordinator and issues a FindCoordinator request (for
example, because a NOT_COORDINATOR error was received), and the Heartbeat's
context is canceled while FindCoordinator is in flight.
In this sequence of events, FindCoordinator will fail with context.Canceled
and, importantly, also return that error to Heartbeat. In the guts of the
client, a context.Canceled error should only happen when a group is being
left, so this error is recognized as a group-is-leaving error and the group
management goroutine exits. Thus, the group is never rejoined.
This likely requires a system to be overloaded to begin with, because
FindCoordinator requests are usually very fast.
The fix is to use the client context when issuing FindCoordinator, rather than
the parent request's context. The parent request can still quit, but FindCoordinator
continues. No parent request can affect any other waiting request.
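The general shape of the fix, as a sketch (this is not franz-go's code; `issue` stands in for the real FindCoordinator call):

```go
package example

import "context"

// findCoordinator runs the shared sub-request on the client's context,
// so one caller canceling its own context stops only its wait, not the
// shared work that other requests may also be waiting on.
func findCoordinator(clientCtx, callerCtx context.Context, issue func(context.Context) error) error {
	done := make(chan error, 1)
	go func() { done <- issue(clientCtx) }()
	select {
	case err := <-done:
		return err
	case <-callerCtx.Done():
		return callerCtx.Err()
	}
}
```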
This patch also includes a dep bump for everything but klauspost/compress;
klauspost/compress changed go.mod to require go1.19, while this repo still
requires 1.18. v1.16 will change to require 1.19 and then this repo will bump
klauspost/compress.
There were multiple additions to the yet-unversioned kfake package, so that an
advanced "test" could be written to trigger the behavior for this patch and
then ensure it is fixed. To see the test, please check the comment on PR
650.
- `7d050fc` kgo: do not cancel FindCoordinator if the parent context cancels

v1.15.3
Compare Source
===
This patch release fixes one minor bug, reduces allocations on gzip and lz4
decompression, and contains a behavior improvement when OffsetOutOfRange is
received while consuming.
For the bugfix: previously, if the client was using a fetch session (as is the
default when consuming), and all partitions for a topic transfer to a different
broker, the client would not properly unregister the topic from the prior
broker's fetch session. This could result in more data being consumed and
discarded than necessary (although, it's possible the broker just reset the
fetch session anyway, I'm not entirely positive).
- `fdf371c` use bytes buffer instead of ReadAll (thanks @kalbhor!)
- `e6ed69f` consuming: reset to nearest if we receive OOOR while fetching
- `1b6a721` bugfix kgo source: use the proper topic-to-id map when forgetting topics

Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about this update again.
This PR was generated by Mend Renovate. View the repository job log.