Hi Team,

We just realized that we are receiving all blocks, including failed blocks with data. For example, in our Kafka we have data for both of these blocks: 293553856 and 293553861, where 293553861 is a failed block. Is there a way to filter these out?
I noticed that we have a filter_by_commitment option in the configuration, but I'm unsure how to use it in the grpc2kafka JSON config. Could you provide guidance on how to set this up?
293553861 (failed block, not real)
293553856 (valid block, real)
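My best guess is that the commitment level goes under the `request` section of `grpc2kafka`, and that `filter_by_commitment` belongs to the slots filter rather than the blocks filter, roughly like the sketch below. This is only a guess, not my actual config; the "client" label and the exact field spellings (including whether the commitment is given as a lowercase string) are assumptions on my part:

```json
{
  "request": {
    "commitment": "finalized",
    "slots": {
      "client": { "filter_by_commitment": true }
    },
    "blocks": {
      "client": {}
    }
  }
}
```

Is that the right place for it?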
With my TX config, is there a way to get only finalized blocks?
I'd not recommend streaming blocks, though: sometimes we are not able to construct them in the plugin. Streaming finalized transactions instead of blocks would be the preferred way.
@fanatid Thank you for your response and advice. You are awesome. Could you please provide a sample config for the transactions stream that filters out failed transactions? Does that stream include blocktime? The only reason we stream blocks is to have the blocktime associated with the slot.
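My guess at the request section for that stream is something along these lines; it is only a sketch based on the transactions filter fields, the "client" label, endpoint, and topic are placeholders, and the exact spellings may not match the actual grpc2kafka config schema:

```json
{
  "grpc2kafka": {
    "endpoint": "http://127.0.0.1:10000",
    "x_token": null,
    "request": {
      "commitment": "finalized",
      "transactions": {
        "client": {
          "failed": false
        }
      }
    },
    "kafka_topic": "grpc1"
  }
}
```

Please correct anything that is off.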