Hi again, Patrik,
You'll probably be interested in this recent Jira:
https://issues.apache.org/jira/browse/KAFKA-7658
You have a good point about the overhead of going through an intermediate
topic... I can see how explicit topic management is an operational burden,
and you're also right that the changelog topic only gets read on state
restoration. That was an oversight on my part.
I think that with KAFKA-7658 and https://github.com/apache/kafka/pull/5779,
you'll have two good options in the future.
To solve your problem *right now*, you can circumvent the null filtering by
wrapping the values of your stream. For example, immediately before the
reduce, you could mapValues and wrap the values with Optional. Then, your
reduce function can unwrap the Optional and return null if it's empty. Does
that make sense?
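To make that concrete, here is a rough, untested sketch. The topic name is made
up, and you'd have to supply a serde for the wrapper yourself; the inline one
below assumes a reasonably recent client (2.1+) where serializers/deserializers
have default configure()/close() and can be written as lambdas:

import static java.nio.charset.StandardCharsets.UTF_8;

import java.util.Arrays;
import java.util.Optional;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;

// Example serde for the wrapper: one presence byte, then the payload.
final Serde<Optional<String>> optionalSerde = Serdes.serdeFrom(
    (topic, opt) -> opt.map(v -> {
        final byte[] payload = v.getBytes(UTF_8);
        final byte[] out = new byte[payload.length + 1];
        out[0] = 1;
        System.arraycopy(payload, 0, out, 1, payload.length);
        return out;
    }).orElse(new byte[] {0}),
    (topic, bytes) -> bytes == null || bytes[0] == 0
        ? Optional.empty()
        : Optional.of(new String(Arrays.copyOfRange(bytes, 1, bytes.length), UTF_8)));

final StreamsBuilder builder = new StreamsBuilder();

final KTable<String, Optional<String>> latest = builder
    .stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
    // wrap every value, so nulls survive the null check in reduce
    .mapValues(v -> Optional.ofNullable(v))
    .groupByKey(Grouped.with(Serdes.String(), optionalSerde))
    // returning null from Reducer#apply deletes the key from the result KTable
    .reduce((oldVal, newVal) -> newVal.isPresent() ? newVal : null);

The last step is the key part: a null returned from the Reducer is interpreted
as a delete for the result KTable.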
This comes with an important caveat, though, which is part of the
motivation for this roadblock to begin with:
if your incoming data gets repartitioned in your topology, then the order
of records for the key is not deterministic. This would break the semantics
of your reduce-to-latest function, and, indeed, any non-commutative reduce
function.
For example, if you have a topic like:
dummykey1: {realkey: A, value: 4}
dummykey2: {realkey: A, value: 5}
and you do a groupBy( select realkey )
and then reduce( keep latest value)
Then, if dummykey1 and dummykey2 are in different partitions, the result
would be either A:4 or A:5, depending on which input partition happens to be
processed first.
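In code, that scenario is just something like this (again only a sketch; the
Payload class and the serde passed in are stand-ins for your actual types):

// imports as in the sketch above
public class Payload {
    public final String realKey;
    public final int value;

    public Payload(final String realKey, final int value) {
        this.realKey = realKey;
        this.value = value;
    }
}

static KTable<String, Payload> latestByRealKey(final StreamsBuilder builder,
                                               final Serde<Payload> payloadSerde) {
    return builder
        .stream("input-topic", Consumed.with(Serdes.String(), payloadSerde))
        // groupBy( select realkey ) forces a repartition on realKey
        .groupBy((dummyKey, p) -> p.realKey, Grouped.with(Serdes.String(), payloadSerde))
        // "keep latest" is order-sensitive: after the repartition, the two
        // records for realkey A can arrive in either order, so the result
        // is either 4 or 5
        .reduce((oldVal, newVal) -> newVal);
}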
We have discussed solutions to this issue several times, but it's quite
complex in the details.
Nevertheless, if you're careful and ensure that you don't have multiple
threads producing the same key into the input topic, and also that you
don't have a repartition in the middle, then this should work for you.
Hope this helps!
-john
Post by Guozhang Wang
Hi Patrik,
Thanks for explaining your use case to us. While we can still discuss how to
support stream-to-table conversion directly, if your deduplication logic can
be written as a transformValues operation, you could do:
builder.table("source-topic").transformValues(...
Materialized.as("store-name"))
Note that with a recent PR that we are merging
(https://github.com/apache/kafka/pull/5779), the source KTable from
builder.table() will not be materialized if users do not specify a
materialized store name; only the value-transformed KTable will be
materialized.
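For example, the shape could be something like the following sketch (the store
names are made up, and the BiPredicate is a placeholder for your own
"irrelevant change" logic):

// imports as above, plus:
import java.util.function.BiPredicate;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.ValueTransformerWithKey;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

static <V> KTable<String, V> dedupedTable(final StreamsBuilder builder,
                                          final Serde<V> valueSerde,
                                          final BiPredicate<V, V> isIrrelevantChange) {
    // the store the dedup transformer uses to remember the last value per key
    builder.addStateStore(
        Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore("dedup-store"),
            Serdes.String(),
            valueSerde));

    return builder
        .table("source-topic", Consumed.with(Serdes.String(), valueSerde))
        .transformValues(
            () -> new ValueTransformerWithKey<String, V, V>() {
                private KeyValueStore<String, V> seen;

                @SuppressWarnings("unchecked")
                @Override
                public void init(final ProcessorContext context) {
                    seen = (KeyValueStore<String, V>) context.getStateStore("dedup-store");
                }

                @Override
                public V transform(final String key, final V newValue) {
                    final V previous = seen.get(key);
                    seen.put(key, newValue);  // a null value deletes from the store
                    // keep the previous value when only irrelevant fields changed;
                    // nulls (deletes) pass through unchanged
                    return newValue != null && previous != null
                            && isIrrelevantChange.test(previous, newValue)
                        ? previous
                        : newValue;
                }

                @Override
                public void close() {}
            },
            Materialized.as("store-name"),
            "dedup-store");
}

One caveat: this sketch still forwards an update downstream even when it keeps
the previous value, so it reduces churn in the stored values rather than
suppressing update events entirely.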
Would that work for you?
Guozhang
Post by Patrik Kleindl
Hi John and Matthias
We are receiving CDC records (row-level insert/update/delete) in one topic
per table. The key is derived from the DB records, the value is null in case
of deletes. Those would be the immutable facts, I guess.
These topics are first streamed through a deduplication Transformer to drop
changes on irrelevant fields.
The results are translated to KTables and joined to each other to represent
the same result as the SQLs on the database, but faster. At this stage the
delete/null records matter, because if a record gets deleted then we want it
to drop out of the join too. -> Our reduce-approach produced unexpected
results here.
We took the deduplication step separately because in some cases we only need
the KStream for processing.
If you see a simpler/cleaner approach here, I'm open to suggestions, of
course.
1) Named topics create management/maintenance overhead because they have to
be created/treated separately (auto-create is not an option) and be
considered in future changes, topology changes/resets and so on. The internal
topic removes most of those issues.
2) One of our developers came up with the question whether the traffic
to/from the broker is actually the same in both scenarios. We expect that the
same data is written to the broker in the named-topic case as in the
reduce case, but if the KTable is maintained inside a streams topology, does
it have to read back everything it sends to the broker, or can it keep the
table internally? I hope it is understandable what I mean; otherwise I can
try to explain it more clearly.
best regards
Patrik
Post by John Roesler
Hi again Patrik,
Actually, this is a good question... Can you share some context about why you
need to convert a stream to a table (including nulls as retractions)?
Thanks,
-John
Post by Matthias J. Sax
I don't know your overall application setup. However, a KStream semantically
models immutable facts, and there is no update semantic. Thus, it seems
semantically questionable to allow changing the semantics from facts to
updates (the other way around is easier IMHO, and thus supported via
KTable#toStream()).
Does this make sense?
Having said this: you _can_ write a KStream into a topic and read it back as
a KTable. But it's semantically questionable to do so, IMHO. Maybe it makes
sense for your specific application, but in general I don't think it does.
-Matthias
Post by John Roesler
Hi Patrik,
Just to drop one observation in... Streaming to a topic and then consuming it
as a table does create overhead, but so does reducing a stream to a table,
and I think it's actually the same in either case.
They both require a store to collect the table state, and in both cases, the
stores need to have a changelog topic. For the "reduce" version, it's an
internal changelog topic, and for the "topic-to-table" version, the store can
use the intermediate topic as its changelog.
This doesn't address your ergonomic concern, but it seemed worth pointing out
that (as far as I can tell) there doesn't seem to be a difference in
overhead.
Hope this helps!
-John
Post by Patrik Kleindl
Hello Matthias,
thank you for the explanation.
Streaming back to a topic and consuming this as a KTable does respect the
null values as deletions, but it comes with some overhead. Did I understand
correctly that no simple one-step stream-to-table operation exists?
Best regards
Patrik
On 26.10.2018 00:07, Matthias J. Sax wrote:
Patrik,
`null` values in a KStream don't have delete semantics (it's not a changelog
stream). That's why we drop them in the KStream#reduce implementation.
If you want to explicitly remove results for a key from the result KTable,
the `Reducer#apply()` implementation can return `null` -- the result of
#apply() has changelog/KTable semantics, and `null` is interpreted as delete
for this case.
If you want to use `null` from your KStream to trigger reduce() to delete,
you will need to use a surrogate value for this, ie, do a mapValues() before
the groupByKey() call, and replace `null` values with a surrogate that you
can detect in `Reducer#apply()` to return `null` for this case.
Hope this helps.
-Matthias
Post by Patrik Kleindl
Hello
Recently we noticed a lot of warning messages in the logs which pointed to
this check in KStreamReduce:
public void process(final K key, final V value) {
    // If the key or value is null we don't need to proceed
    if (key == null || value == null) {
        LOG.warn(
            "Skipping record due to null key or value. key=[{}] value=[{}] topic=[{}] partition=[{}] offset=[{}]",
            key, value, context().topic(), context().partition(), context().offset()
        );
        metrics.skippedRecordsSensor().record();
        return;
    }
This was triggered for every record with a null value from a stream that we
reduce into a KTable. The workaround of writing the records to a separate
topic and reading that back as a KTable works, but it creates an extra topic
which also can't be cleaned up by the streams reset tool.
Did I miss anything relevant here?
Would it be possible to create a separate method for KStream to achieve this?