///////////////////////////////////////////////////////////////////////////////

    Copyright (c) 2020, 2024 Oracle and/or its affiliates.

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.

///////////////////////////////////////////////////////////////////////////////

= Kafka Connector
:description: Reactive Messaging support for Kafka in Helidon MP
:keywords: helidon, mp, messaging, kafka
:feature-name: Reactive Kafka Connector
:microprofile-bundle: false
:rootdir: {docdir}/../..

include::{rootdir}/includes/mp.adoc[]

== Contents

- <<Overview, Overview>>
- <<Maven Coordinates, Maven Coordinates>>
- <<Config Example, Config Example>>
- <<NACK Strategy, NACK Strategy>>
- <<Examples, Examples>>

== Overview
Connecting streams to Kafka with Reactive Messaging is straightforward.
A standard Kafka client is used behind the scenes, so all of the link:{kafka-client-base-url}#producerconfigs[producer] and link:{kafka-client-base-url}#consumerconfigs[consumer] configuration properties
can be propagated through the messaging configuration.

include::{rootdir}/includes/dependencies.adoc[]

[source,xml]
----
<dependency>
    <groupId>io.helidon.messaging.kafka</groupId>
    <artifactId>helidon-messaging-kafka</artifactId>
</dependency>
----

== Config Example

[source,yaml]
.Example of connector config:
----
mp.messaging:

  incoming.from-kafka:
    connector: helidon-kafka
    topic: messaging-test-topic-1
    auto.offset.reset: latest # <1>
    enable.auto.commit: true
    group.id: example-group-id

  outgoing.to-kafka:
    connector: helidon-kafka
    topic: messaging-test-topic-1

  connector:
    helidon-kafka:
      bootstrap.servers: localhost:9092 # <2>
      key.serializer: org.apache.kafka.common.serialization.StringSerializer
      value.serializer: org.apache.kafka.common.serialization.StringSerializer
      key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
----

<1> Kafka consumer's `auto.offset.reset` property, configured for the `from-kafka` channel only
<2> Kafka client's link:{kafka-client-base-url}#consumerconfigs_bootstrap.servers[bootstrap.servers] property, configured for all channels using the connector
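
Helidon MP applications often keep their configuration in `META-INF/microprofile-config.properties` rather than YAML; the following sketch is a direct translation of the YAML example above using the standard MicroProfile Config key mapping:

[source,properties]
.Equivalent connector config in properties form:
----
mp.messaging.incoming.from-kafka.connector=helidon-kafka
mp.messaging.incoming.from-kafka.topic=messaging-test-topic-1
mp.messaging.incoming.from-kafka.auto.offset.reset=latest
mp.messaging.incoming.from-kafka.enable.auto.commit=true
mp.messaging.incoming.from-kafka.group.id=example-group-id

mp.messaging.outgoing.to-kafka.connector=helidon-kafka
mp.messaging.outgoing.to-kafka.topic=messaging-test-topic-1

mp.messaging.connector.helidon-kafka.bootstrap.servers=localhost:9092
mp.messaging.connector.helidon-kafka.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.connector.helidon-kafka.value.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.connector.helidon-kafka.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
mp.messaging.connector.helidon-kafka.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
----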

[source,java]
.Example of consuming from Kafka:
----
include::{sourcedir}/mp/reactivemessaging/KafkaSnippets.java[tag=snippet_1, indent=0]
----
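
For orientation, a minimal consuming bean could look like the following sketch; the class name is illustrative and the `from-kafka` channel matches the config above:

[source,java]
----
import jakarta.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;

@ApplicationScoped
public class KafkaConsumingBean {

    // Receives every payload published on the channel configured as incoming.from-kafka
    @Incoming("from-kafka")
    public void consumeKafka(String msg) {
        System.out.println("Kafka says: " + msg);
    }
}
----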


[source,java]
.Example of producing to Kafka:
----
include::{sourcedir}/mp/reactivemessaging/KafkaSnippets.java[tag=snippet_2, indent=0]
----
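
Similarly, a minimal producing bean could look like this sketch (class name illustrative, `to-kafka` channel from the config above):

[source,java]
----
import jakarta.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Outgoing;
import org.eclipse.microprofile.reactive.streams.operators.PublisherBuilder;
import org.eclipse.microprofile.reactive.streams.operators.ReactiveStreams;

@ApplicationScoped
public class KafkaProducingBean {

    // Publishes a short stream of payloads to the channel configured as outgoing.to-kafka
    @Outgoing("to-kafka")
    public PublisherBuilder<String> produceToKafka() {
        return ReactiveStreams.of("test1", "test2", "test3");
    }
}
----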

== NACK Strategy

|===
| Strategy | Description
| Kill channel | A nacked message sends an error signal and causes channel failure, so the messaging health check can report the channel as DOWN
| DLQ | Nacked messages are sent to the specified dead letter queue
| Log only | The nacked message is logged and the channel continues normally
|===
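
Whichever strategy is configured, it comes into play when a message is nacked. The following sketch shows one way a consuming method can nack a message explicitly through the MicroProfile Reactive Messaging `Message` API; the bean and its validation logic are illustrative, and `my-channel` matches the DLQ examples below:

[source,java]
----
import java.util.concurrent.CompletionStage;

import jakarta.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Message;

@ApplicationScoped
public class NackingBean {

    // Nacking hands the message over to the configured strategy:
    // kill channel (default), DLQ, or log only
    @Incoming("my-channel")
    public CompletionStage<Void> consume(Message<String> msg) {
        if (msg.getPayload().isBlank()) {
            return msg.nack(new IllegalArgumentException("Blank payload"));
        }
        System.out.println("Consumed: " + msg.getPayload());
        return msg.ack();
    }
}
----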

=== Kill channel
The default NACK strategy for the Kafka connector. When a message is nacked, an error signal is sent and the channel fails, so the messaging health check can report it as DOWN.

=== Dead Letter Queue

Sends nacked messages to an error topic. A link:https://en.wikipedia.org/wiki/Dead_letter_queue[dead letter queue (DLQ)] is a well-known pattern for dealing with unprocessed messages.

Helidon can derive the connection settings for the DLQ topic automatically if the error
topic resides on the same Kafka cluster as the consumed topic.
Serializers are derived from the deserializers used for consumption, for example
`org.apache.kafka.common.serialization.StringDeserializer` becomes
`org.apache.kafka.common.serialization.StringSerializer`.
In this case only the name of the error topic is needed.

[source,yaml]
.Example of derived DLQ config:
----
mp.messaging:
  incoming:
    my-channel:
      nack-dlq: dlq_topic_name
----

If a custom connection is needed, supply the full producer configuration under the `nack-dlq` key.

[source,yaml]
.Example of custom DLQ config:
----
mp.messaging:
  incoming:
    my-channel:
      nack-dlq:
        topic: dlq_topic_name
        bootstrap.servers: localhost:9092
        key.serializer: org.apache.kafka.common.serialization.StringSerializer
        value.serializer: org.apache.kafka.common.serialization.StringSerializer

----

=== Log only

Nacked messages are only logged and then discarded; the offset is committed and the channel
continues consuming subsequent messages normally.

[source,yaml]
.Example of log-only nack strategy:
----
mp.messaging:
  incoming:
    my-channel:
      nack-log-only: true
----

== Examples

Don't forget to check out the examples with a pre-configured Kafka Docker image for easy testing:

* {helidon-github-tree-url}/examples/messaging
