Fix typo in docs #3043

Merged 1 commit on Dec 4, 2024
2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/kafka/kafka-binder/partitions.adoc
@@ -73,7 +73,7 @@ You can override this default by using the `partitionSelectorExpression` or `par
Since partitions are natively handled by Kafka, no special configuration is needed on the consumer side.
Kafka allocates partitions across the instances.

- NOTE: The partitionCount for a kafka topic may change during runtime (e.g. due to an adminstration task).
+ NOTE: The partitionCount for a kafka topic may change during runtime (e.g. due to an administration task).
The calculated partitions will be different after that (e.g. new partitions will be used then).
Since 4.0.3 of Spring Cloud Stream runtime changes of partition count will be supported.
See also parameter 'spring.kafka.producer.properties.metadata.max.age.ms' to configure update interval.
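For illustration only (the interval value here is arbitrary, in milliseconds), lowering the producer's metadata refresh age makes a runtime partition-count change visible to the binder sooner:

```
spring.kafka.producer.properties.metadata.max.age.ms=60000
```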
2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/kafka/kafka-binder/retry-dlq.adoc
@@ -1,7 +1,7 @@
[[retry-and-dlq-processing]]
= Retry and Dead Letter Processing

- By default, when you configure retry (e.g. `maxAttemts`) and `enableDlq` in a consumer binding, these functions are performed within the binder, with no participation by the listener container or Kafka consumer.
+ By default, when you configure retry (e.g. `maxAttempts`) and `enableDlq` in a consumer binding, these functions are performed within the binder, with no participation by the listener container or Kafka consumer.
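As a hedged sketch of that binder-side setup (the binding name `uppercase-in-0` and the values are illustrative), retry and DLQ publishing can be enabled like this:

```
spring.cloud.stream.bindings.uppercase-in-0.consumer.maxAttempts=3
spring.cloud.stream.kafka.bindings.uppercase-in-0.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.uppercase-in-0.consumer.dlqName=uppercase-dlq
```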

There are situations where it is preferable to move this functionality to the listener container, such as:

@@ -53,7 +53,7 @@ spring.cloud.stream.kafka.bindings.lowercase-in-0.consumer.converterBeanName=ful
```

`lowercase-in-0` is the input binding name for our `lowercase` function.
- For the outbound (`lowecase-out-0`), we still use the regular `MessagingMessageConverter`.
+ For the outbound (`lowercase-out-0`), we still use the regular `MessagingMessageConverter`.

In the `toMessage` implementation above, we receive the raw `ConsumerRecord` (`ReceiverRecord` since we are in a reactive binder context) and then wrap it inside a `Message`.
Then that message payload which is the `ReceiverRecord` is provided to the user method.
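A minimal sketch of such a user method under those assumptions (byte-array keys and values, and the custom converter configured on the `lowercase-in-0` binding as above); the function receives the `ReceiverRecord` directly as the payload:

```
@Bean
public Function<Flux<ReceiverRecord<byte[], byte[]>>, Flux<String>> lowercase() {
    // The payload is the ReceiverRecord that the custom converter wrapped into a Message.
    return records -> records.map(rec -> new String(rec.value()).toLowerCase());
}
```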
@@ -50,7 +50,7 @@ In these cases, the acknowledgment header is not present.

IMPORTANT: 4.0.2 also provided `reactiveAutoCommit`, but the implementation was incorrect, it behaved similarly to `reactiveAtMostOnce`.

- The following is an example of how to use `reaciveAutoCommit`.
+ The following is an example of how to use `reactiveAutoCommit`.

[source, java]
----
@@ -2,4 +2,4 @@
= Destination is Pattern

Starting with version 4.0.3, the `destination-is-pattern` Kafka binding consumer property is now supported.
- The receiver options are conigured with a regex `Pattern`, allowing the binding to consume from any topic that matches the pattern.
+ The receiver options are configured with a regex `Pattern`, allowing the binding to consume from any topic that matches the pattern.
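A hedged example (the binding name and topic pattern are made up, and the property prefix assumes the standard Kafka binder conventions):

```
spring.cloud.stream.bindings.process-in-0.destination=orders.*
spring.cloud.stream.kafka.bindings.process-in-0.consumer.destination-is-pattern=true
```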
@@ -8,5 +8,5 @@ For Spring Boot version 2.2.x, the metrics support is provided through a custom
For Spring Boot version 2.3.x, the Kafka Streams metrics support is provided natively through Micrometer.

When accessing metrics through the Boot actuator endpoint, make sure to add `metrics` to the property `management.endpoints.web.exposure.include`.
- Then you can access `/acutator/metrics` to get a list of all the available metrics, which then can be individually accessed through the same URI (`/actuator/metrics/<metric-name>`).
+ Then you can access `/actuator/metrics` to get a list of all the available metrics, which then can be individually accessed through the same URI (`/actuator/metrics/<metric-name>`).
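For example, exposing the endpoint is a single property (shown here in `application.properties` form):

```
management.endpoints.web.exposure.include=metrics
```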

@@ -118,7 +118,7 @@ Default: See the discussion above on outbound partition support.
producedAs::
Custom name for the sink component to which the processor is producing to.
+
- Deafult: `none` (generated by Kafka Streams)
+ Default: `none` (generated by Kafka Streams)

[[kafka-streams-consumer-properties]]
== Kafka Streams Consumer Properties
@@ -32,7 +32,7 @@ For instance, if we want to change the header key on this binding to `my_event`,

`spring.cloud.stream.kafka.streams.bindings.process-in-0.consumer.eventTypeHeaderKey=my_event`.

- When using the event routing feature in Kafkfa Streams binder, it uses the byte array `Serde` to deserialze all incoming records.
+ When using the event routing feature in Kafka Streams binder, it uses the byte array `Serde` to deserialize all incoming records.
If the record headers match the event type, then only it uses the actual `Serde` to do a proper deserialization using either the configured or the inferred `Serde`.
This introduces issues if you set a deserialization exception handler on the binding as the expected deserialization only happens down the stack causing unexpected errors.
In order to address this issue, you can set the following property on the binding to force the binder to use the configured or inferred `Serde` instead of byte array `Serde`.
@@ -353,15 +353,15 @@ spring.cloud.function.definition=foo|bar;foo;bar

The composed function's default binding names in this example becomes `foobar-in-0` and `foobar-out-0`.

- [[limitations-of-functional-composition-in-kafka-streams-bincer]]
- ==== Limitations of functional composition in Kafka Streams bincer
+ [[limitations-of-functional-composition-in-kafka-streams-binder]]
+ ==== Limitations of functional composition in Kafka Streams binder

When you have `java.util.function.Function` bean, that can be composed with another function or multiple functions.
The same function bean can be composed with a `java.util.function.Consumer` as well. In this case, consumer is the last component composed.
A function can be composed with multiple functions, then end with a `java.util.function.Consumer` bean as well.

When composing the beans of type `java.util.function.BiFunction`, the `BiFunction` must be the first function in the definition.
- The composed entities must be either of type `java.util.function.Function` or `java.util.funciton.Consumer`.
+ The composed entities must be either of type `java.util.function.Function` or `java.util.function.Consumer`.
In other words, you cannot take a `BiFunction` bean and then compose with another `BiFunction`.

You cannot compose with types of `BiConsumer` or definitions where `Consumer` is the first component.
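A minimal sketch of a composition that satisfies these rules (bean names and logic are made up): two `Function` beans followed by a terminal `Consumer`, wired with `spring.cloud.function.definition=foo|bar|sink`, which by the naming convention above yields the input binding `foobarsink-in-0`.

```
@Bean
public Function<KStream<String, String>, KStream<String, String>> foo() {
    return input -> input.filter((key, value) -> value != null);
}

@Bean
public Function<KStream<String, String>, KStream<String, String>> bar() {
    return input -> input.mapValues(value -> value.toUpperCase());
}

@Bean
public Consumer<KStream<String, String>> sink() {
    // Terminal component: a Consumer may only appear last in the composition.
    return input -> input.foreach((key, value) -> System.out.println(key + " -> " + value));
}
```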
@@ -104,7 +104,7 @@ If you have multiple processors, you want to attach the global state store to th
== Using StreamsBuilderFactoryBeanConfigurer to register a production exception handler

In the error handling section, we indicated that the binder does not provide a first class way to deal with production exceptions.
- Though that is the case, you can still use the `StreamsBuilderFacotryBean` customizer to register production exception handlers. See below.
+ Though that is the case, you can still use the `StreamsBuilderFactoryBean` customizer to register production exception handlers. See below.

```
@Bean