What are the key differences between existing approaches to mirroring Kafka topics?
Kafka MirrorMaker is the basic approach to mirroring Kafka topics from source to target brokers. Unfortunately, it isn't configurable enough for my requirements.
My requirements are very simple:
- the solution should be a JVM application
- if the destination topic doesn't exist, it should create it
- the solution should be able to add prefixes/suffixes to destination topic names
- it should reload and apply configurations on the fly if they're changed
According to this answer, there are several alternative solutions for this:
- MirrorTool-for-Kafka-Connect
- Salesforce Mirus (based on the Kafka Connect API)
- Confluent's Replicator
- Build my own application (based on Kafka Streams functionality)
Moreover, KIP-382 was created to make Mirror Maker more flexible and configurable.
So, my question is: what are the killer features of each of these solutions (compared to the others), and which one best fits the requirements above?
apache-kafka apache-kafka-connect
asked Nov 16 '18 at 8:15 by yevtsy · edited Nov 16 '18 at 17:27 by Matthias J. Sax
1 Answer
I see you are referring to my comment there...
As for your bullets:
the solution should be a JVM application
All of the listed options are Java-based.
if the destination topic doesn't exist, create it
This depends on the Kafka broker version supporting the AdminClient API. Otherwise, as the MirrorMaker documentation says, you should create the destination topic before mirroring; if you don't, you either (1) get denied produce requests because auto topic creation is disabled, or (2) see "inconsistent" data because a topic with default configuration was auto-created.
That being said, by default MirrorMaker doesn't propagate topic configurations on its own. When I looked, MirrorTool similarly did not. I have not looked thoroughly at Mirus.
Confluent Replicator does copy configurations, and it uses the AdminClient.
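For illustration, here is a minimal sketch of what create-if-missing with the AdminClient can look like in a custom mirroring application; the broker address, topic name, partition count, and configs are placeholders, not what any of the listed tools does internally:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

public class EnsureTopicExists {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap servers for the destination cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "dest-broker:9092");

        String destTopic = "mirror.orders"; // hypothetical destination topic name

        try (AdminClient admin = AdminClient.create(props)) {
            // Look up existing topic names on the destination cluster
            Set<String> existing = admin.listTopics().names().get();
            if (!existing.contains(destTopic)) {
                // Partition count, replication factor, and configs would normally be
                // copied from the source topic; hard-coded here for brevity.
                NewTopic topic = new NewTopic(destTopic, 6, (short) 3)
                        .configs(Map.of("cleanup.policy", "delete"));
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }
}
```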
Replicator, MirrorTool, and Mirus are all based on the Kafka Connect API, and KIP-382 will be as well.
Build my own application (based on Kafka Streams functionality)
Kafka Streams can only communicate from() and to() a single cluster.
You might as well just use MirrorMaker, because it is already a wrapper around a Producer/Consumer pair and supports mirroring one cluster to another. If you need custom features, that's what the MessageHandler interface is for.
the solution should be able to add prefixes/suffixes to destination topic names
Each one can do that, but MirrorMaker requires extra/custom code. See the example by @gwenshap.
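To make the "extra/custom code" point concrete, below is a minimal sketch of a hand-rolled mirror loop built on the plain consumer/producer clients (not MirrorMaker's MessageHandler interface itself) that renames topics with a hypothetical prefix; the broker addresses, group id, prefix, and topic pattern are all placeholders:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.ByteArraySerializer;

import java.time.Duration;
import java.util.Properties;
import java.util.regex.Pattern;

public class PrefixingMirror {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "source-broker:9092"); // placeholder
        consumerProps.put("group.id", "prefixing-mirror");            // placeholder
        consumerProps.put("key.deserializer", ByteArrayDeserializer.class.getName());
        consumerProps.put("value.deserializer", ByteArrayDeserializer.class.getName());

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "dest-broker:9092");   // placeholder
        producerProps.put("key.serializer", ByteArraySerializer.class.getName());
        producerProps.put("value.serializer", ByteArraySerializer.class.getName());

        String prefix = "dc1."; // hypothetical destination prefix

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Pattern.compile("orders.*")); // placeholder topic pattern
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    // Rename the topic on the way out; a MessageHandler implementation
                    // would encapsulate the same renaming inside MirrorMaker instead
                    // of running a standalone loop like this.
                    producer.send(new ProducerRecord<>(prefix + record.topic(), record.key(), record.value()));
                }
            }
        }
    }
}
```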
reload and apply configurations on the fly if they're changed
That's the tricky one... Usually, you just bounce the Java process, because most configurations are only loaded at startup. The exception is the whitelist or topics.regex setting used for finding new topics to consume.
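One thing that softens this for the Connect-based options is that a running connector's configuration can be updated through the Connect REST API (PUT /connectors/<name>/config) without restarting the worker process. A rough sketch, with a hypothetical worker URL, connector name, and config keys (requires Java 11+ for the HTTP client):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UpdateConnectorConfig {
    public static void main(String[] args) throws Exception {
        String connectUrl = "http://connect-worker:8083"; // placeholder worker URL
        String connector = "my-mirror-connector";         // placeholder connector name
        // Hypothetical config; the actual keys depend on which connector
        // (Replicator, MirrorTool, Mirus, ...) you run.
        String newConfig = "{"
                + "\"connector.class\": \"com.example.SomeMirrorSourceConnector\","
                + "\"topic.regex\": \"orders.*|payments.*\","
                + "\"tasks.max\": \"4\""
                + "}";

        // PUT /connectors/{name}/config creates or updates the connector config in place
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(connectUrl + "/connectors/" + connector + "/config"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(newConfig))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```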
KIP-382
Hard to say whether it'll be accepted. While it was thoroughly written, and I personally think it's reasonably scoped, it somewhat defeats the purpose of having Replicator for Confluent. The AvroConverter for the Confluent Schema Registry is open sourced, and with the large majority of Kafka commits and support happening out of Confluent, it's a conflict of interest: AFAICT, that's the only missing piece between Replicator and the other Connect implementations... Case in point, MirrorTool had a KIP, but it was seemingly rejected on the mailing list with the explanation of "Kafka Connect is a pluggable ecosystem, and anyone can go ahead and install this mirroring extension, but it shouldn't be part of the core Kafka Connect project", or at least that's how I read it.
What's "better" is a matter of opinion, and there are still other options (Apache Nifi or Streamsets come to mind). Even using kafkacat
and netcat
you can hack together cluster syncing.
If you need Avro support and are paying for an enterprise license, then you might as well use Replicator.
One thing that might be important to point out: if you are mirroring a topic that is not using the DefaultPartitioner, then the data will be reshuffled by the DefaultPartitioner on the destination cluster unless you configure the destination Kafka producer to use the same partition value or partitioner class as the source Kafka producer.
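As a sketch of one way a custom mirror could preserve the source partitioning (assuming the destination topic has at least as many partitions as the source), it can produce to the same partition number it consumed from rather than letting the destination producer's partitioner re-hash the key; the class and field names here are illustrative:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PartitionPreservingForwarder {
    private final KafkaProducer<byte[], byte[]> producer;
    private final String prefix; // hypothetical destination topic prefix

    public PartitionPreservingForwarder(KafkaProducer<byte[], byte[]> producer, String prefix) {
        this.producer = producer;
        this.prefix = prefix;
    }

    /**
     * Forward a consumed record to the same partition number on the destination cluster.
     * Assumes the destination topic has at least as many partitions as the source topic.
     */
    public void forward(ConsumerRecord<byte[], byte[]> record) {
        producer.send(new ProducerRecord<>(
                prefix + record.topic(),
                record.partition(), // keep the source partition instead of re-hashing the key
                record.key(),
                record.value()));
    }
}
```

Alternatively, setting partitioner.class on the destination producer to the same partitioner used at the source gives consistent key-based placement, though the resulting partition numbers will still differ if the partition counts differ.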
Is Replicator an option? It supports custom naming, it can talk to different brokers for source and sink, and you can even apply some transformations. Also, Kafka Connect will help with the configuration updates and such. What do you think?
– Renato Mefi, Nov 18 '18 at 13:49
@RenatoMefi thank you for the answer, but Replicator isn't an open-source solution, so it's not an option for me.
– yevtsy, Nov 20 '18 at 10:44
@yevtsy is there anything else you want me to address?
– cricket_007, Nov 20 '18 at 11:25
@cricket_007 thank you for the detailed description, and sorry for my delayed response. Yes, I have several more questions. The most valuable thing in the answer for me is understanding the key differences between the solutions. You'd mentioned the DefaultPartitioner implementation and working within a single cluster; that's something I was looking for. Do you know any other differences between solutions for this task based on the Kafka Connect API and Kafka Streams?
– yevtsy, Nov 20 '18 at 14:50
Well, as pointed out, Kafka Streams is not a mirroring solution. The Connect API, or a raw Producer/Consumer such as what MirrorMaker wraps, are your only building blocks for this problem. The implementation details of successful delivery (and ordering) and offset translation (if you care) are extra.
– cricket_007, Nov 20 '18 at 15:37
answered Nov 16 '18 at 9:08 by cricket_007 · edited Dec 18 '18 at 0:17