Discussion:
Create Topic Error and cannot write to console producer
Ascot Moss
2017-08-09 13:38:36 UTC
Hi,


I have set up Kafka 0.10.2.1 with SSL.


Check Status:

openssl s_client -debug -connect n1:9093 -tls1

New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA

... SSL-Session:

Protocol : TLSv1

PSK identity hint: None

Start Time: 1502285690

Timeout : 7200 (sec)

Verify return code: 19 (self signed certificate in certificate chain)


Create Topic:

kafka-topics.sh --create --zookeeper n1:2181,n2:2181,n3:2181
--replication-factor 3 --partitions 3 --topic test02

ERROR [ReplicaFetcherThread-2-111], Error for partition [test02,2] to
broker 1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException:
This server does not host this topic-partition.
(kafka.server.ReplicaFetcherThread)

However, if I run describe on the topic, I can see that it was created:



Describe Topic:

kafka-topics.sh --zookeeper n1:2181,n2:2181,n3:2181 --describe --topic
test02

Topic:test02 PartitionCount:3 ReplicationFactor:3 Configs:

Topic: test02 Partition: 0 Leader: 12 Replicas: 12,13,11 Isr: 12,13,11

Topic: test02 Partition: 1 Leader: 13 Replicas: 13,11,12 Isr: 13,11,12

Topic: test02 Partition: 2 Leader: 11 Replicas: 11,12,13 Isr: 11,12,13
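
(As a sanity check, the partition directories can also be listed on each broker under log.dirs, e.g.:

ls /usr/log/kafka | grep test02

On the replicas this should show test02-0, test02-1 and test02-2; /usr/log/kafka is the log.dirs from my server.properties.)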


Consumer:

kafka-console-consumer.sh --bootstrap-server n1:9093 --consumer.config
/home/kafka/config/consumer.n1.properties --topic test02 --from-beginning



Producer:

kafka-console-producer.sh --broker-list n1:9093 --producer.config
/homey/kafka/config/producer.n1.properties --sync --topic test02

ERROR Error when sending message to topic test02 with key: null, value: 0
bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
test02-1: 1506 ms has passed since batch creation plus linger time


How can I resolve it?

Regards
M. Manna
2017-08-09 14:29:03 UTC
Hi,

What's the status of your SSL? Have you verified that the setup is working?

You can enable verbose logging using the log4j.properties file supplied with Kafka
and set the root logging level to DEBUG. This prints out more info to trace
things. Also, you can enable SSL debug logging by adding
-Djavax.net.debug=all
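
For example, roughly (assuming a standard Kafka tarball layout; adjust the paths and the appender name to whatever your log4j.properties already uses):

# config/log4j.properties on the broker: raise the root logger to DEBUG
log4j.rootLogger=DEBUG, stdout

# pass the TLS debug flag to the broker JVM before starting it
export KAFKA_OPTS="-Djavax.net.debug=all"
bin/kafka-server-start.sh config/server.properties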

Please share your producer/broker configs with us.

Kindest Regards,
M. Manna
Ascot Moss
2017-08-09 20:17:32 UTC
Dear Manna,


What's the status of your SSL? Have you verified that the setup is working?
Yes, I used:

openssl s_client -debug -connect n1.test.com:9092 -tls1
Output:

CONNECTED(00000003)

write to 0x853e70 [0x89fd43] (155 bytes => 155 (0x9B))

0000 - 16 03 01 00 96 01 00 00-92 03 01 59 8b 6d 0d b1 ...........Y.m..
...

Server certificate

-----BEGIN CERTIFICATE-----

CwwCSEsxGT............

-----END CERTIFICATE-----

---

SSL handshake has read 2470 bytes and written 161 bytes

---

New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA

PSK identity hint: None

Start Time: 1502309645

Timeout : 7200 (sec)

Verify return code: 19 (self signed certificate in certificate chain)

---

Regards
Ascot Moss
2017-08-09 20:18:51 UTC
And here are my configs:

server.properties
######

broker.id=11

port=9092

host.name=n1

advertised.host.name=192.168.0.11

allow.everyone.if.no.acl.found=true

super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST

listeners=SSL://n1.test.com:9092

advertised.listeners=SSL://n1.test.com:9092

ssl.client.auth=required

ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1

ssl.keystore.type=JKS

ssl.truststore.type=JKS

security.inter.broker.protocol=SSL

ssl.keystore.location=/home/kafka/kafka.server.keystore.jks

ssl.keystore.password=Test2017

ssl.key.password=Test2017

ssl.truststore.location=/home/kafka/kafka.server.truststore.jks

ssl.truststore.password=Test2017

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder

num.replica.fetchers=4

replica.fetch.max.bytes=1048576

replica.fetch.wait.max.ms=500

replica.high.watermark.checkpoint.interval.ms=5000

replica.socket.timeout.ms=30000

replica.socket.receive.buffer.bytes=65536

replica.lag.time.max.ms=10000

controller.socket.timeout.ms=30000

controller.message.queue.size=10

default.replication.factor=3

log.dirs=/usr/log/kafka

kafka.logs.dir=/usr/log/kafka

num.partitions=20

message.max.bytes=1000000

auto.create.topics.enable=true

log.index.interval.bytes=4096

log.index.size.max.bytes=10485760

log.retention.hours=720

log.flush.interval.ms=10000

log.flush.interval.messages=20000

log.flush.scheduler.interval.ms=2000

log.roll.hours=168

log.retention.check.interval.ms=300000

log.segment.bytes=1073741824

delete.topic.enable=true

socket.request.max.bytes=104857600

socket.receive.buffer.bytes=1048576

socket.send.buffer.bytes=1048576

num.io.threads=8

num.network.threads=8

queued.max.requests=16

fetch.purgatory.purge.interval.requests=100

producer.purgatory.purge.interval.requests=100

zookeeper.connect=n1:2181,n2:2181,n3:2181

zookeeper.connection.timeout.ms=2000

zookeeper.sync.time.ms=2000
######




producer.properties
######

bootstrap.servers=n1.test.com:9092

security.protocol=SSL

ssl.truststore.location=/home/kafka/kafka.client.truststore.jks

ssl.truststore.password=testkafka

ssl.keystore.location=/home/kafka/kafka.client.keystore.jks

ssl.keystore.password=testkafka

ssl.key.password=testkafka
#####
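
For what it's worth, my understanding is that with ssl.client.auth=required the client certificate (or the CA that signed it) has to be trusted by the broker truststore, and the broker certificate by the client truststore. The stores can be inspected with keytool to confirm the chain is there, e.g. (paths as in the configs above):

keytool -list -v -keystore /home/kafka/kafka.server.truststore.jks
keytool -list -v -keystore /home/kafka/kafka.client.keystore.jks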
M. Manna
2017-08-09 20:28:26 UTC
Your openssl test shows a connection on port 9092, but your previous
messages show 9093. Is there a typo somewhere? Where is SSL actually running?

Please share the following and don't leave any details out; otherwise we will
only be making more assumptions.

1) server.properties
2) zookeeper.properties

Also, run the following command (when the cluster is running)
zookeeper-shell.sh localhost:2181
get /brokers/ids/11

Does it show that your broker #11 is connected?
Ascot Moss
2017-08-09 20:35:21 UTC
server.properties

######

broker.id=11

port=9093

host.name=n1

advertised.host.name=192.168.0.11

allow.everyone.if.no.acl.found=true

super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST

listeners=SSL://n1.test.com:9093

advertised.listeners=SSL://n1.test.com:9093

ssl.client.auth=required

ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1

ssl.keystore.type=JKS

ssl.truststore.type=JKS

security.inter.broker.protocol=SSL

ssl.keystore.location=/home/kafka/kafka.server.keystore.jks

ssl.keystore.password=Test2017

ssl.key.password=Test2017

ssl.truststore.location=/home/kafka/kafka.server.truststore.jks

ssl.truststore.password=Test2017

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder

num.replica.fetchers=4

replica.fetch.max.bytes=1048576

replica.fetch.wait.max.ms=500

replica.high.watermark.checkpoint.interval.ms=5000

replica.socket.timeout.ms=30000

replica.socket.receive.buffer.bytes=65536

replica.lag.time.max.ms=10000

controller.socket.timeout.ms=30000

controller.message.queue.size=10

default.replication.factor=3

log.dirs=/usr/log/kafka

kafka.logs.dir=/usr/log/kafka

num.partitions=20

message.max.bytes=1000000

auto.create.topics.enable=true

log.index.interval.bytes=4096

log.index.size.max.bytes=10485760

log.retention.hours=720

log.flush.interval.ms=10000

log.flush.interval.messages=20000

log.flush.scheduler.interval.ms=2000

log.roll.hours=168

log.retention.check.interval.ms=300000

log.segment.bytes=1073741824

delete.topic.enable=true

socket.request.max.bytes=104857600

socket.receive.buffer.bytes=1048576

socket.send.buffer.bytes=1048576

num.io.threads=8

num.network.threads=8

queued.max.requests=16

fetch.purgatory.purge.interval.requests=100

producer.purgatory.purge.interval.requests=100

zookeeper.connect=n1:2181,n2:2181,n3:2181

zookeeper.connection.timeout.ms=2000

zookeeper.sync.time.ms=2000

######





producer.properties

######

bootstrap.servers=n1.test.com:9093

security.protocol=SSL

ssl.truststore.location=/home/kafka/kafka.client.truststore.jks

ssl.truststore.password=testkafka

ssl.keystore.location=/home/kafka/kafka.client.keystore.jks

ssl.keystore.password=testkafka

ssl.key.password=testkafka
#####


(I had previously tried a different port; 9093 is the correct one.)
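
For completeness, the same openssl check can be repeated against 9093 (same command as before, only the port changed):

openssl s_client -debug -connect n1.test.com:9093 -tls1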
Ascot Moss
2017-08-09 20:43:18 UTC
FYI, for ZooKeeper I reused my existing ensemble (it was already up and
running, and is also used for HBase).

ZooKeeper version: 3.4.10

zoo.cfg
######

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/usr/local/zookeeper/data

dataLogDir=/usr/local/zookeeper/datalog

clientPort=2181

maxClientCnxns=60

server.1=n1.test.com:2888:3888

server.2=n2.test.com:2888:3888

server.3=n3.test.com:2888:3888

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

jaasLoginRenew=3600000

requireClientAuthScheme=sasl

zookeeper.allowSaslFailedClients=false

kerberos.removeHostFromPrincipal=true

######
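
FWIW, since this ZooKeeper enforces SASL, my understanding is that the Kafka brokers (and the CLI tools) need a JAAS file with a "Client" section in order to authenticate to it, roughly along these lines (the keytab path and principal below are placeholders, not my real values):

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/path/to/kafka.keytab"
  principal="kafka/n1.test.com@EXAMPLE.COM";
};

It gets passed to the JVM via -Djava.security.auth.login.config=/path/to/jaas.conf (e.g. through KAFKA_OPTS) when starting the broker.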
Ascot Moss
2017-08-09 20:46:45 UTC
About: zookeeper-shell.sh localhost:2181
get /brokers/ids/11

The result:

zookeeper-shell.sh n1.test.com:2181

Connecting to n1.test.com:2181

Welcome to ZooKeeper!

JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

WATCHER::

WatchedEvent state:SaslAuthenticated type:None path:null
Ascot Moss
2017-08-09 21:03:04 UTC
About:
zookeeper-shell.sh localhost:2181
get /brokers/ids/11


The result:

zookeeper-shell.sh n1.test.com:2181

Connecting to n1.test.com:2181

Welcome to ZooKeeper!

JLine support is disabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

WATCHER::




get /brokers/ids/11

WatchedEvent state:SaslAuthenticated type:None path:null

{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n1.test.com:9093
"],"jmx_port":-1,"host":null,"timestamp":"1502310695312","port":-1,"version":4}

cZxid = 0x40002787d

ctime = Thu Aug 10 04:31:37 HKT 2017

mZxid = 0x40002787d

mtime = Thu Aug 10 04:31:37 HKT 2017

pZxid = 0x40002787d

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x35d885c689c00a6

dataLength = 168

numChildren = 0
Ascot Moss
2017-08-09 21:08:23 UTC
(I have 3 test nodes.)

get /brokers/ids/11

{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n1.test.com:9093
"],"jmx_port":-1,"host":null,"timestamp":"1502310695312","port":-1,"version":4}

cZxid = 0x40002787d

ctime = Thu Aug 10 04:31:37 HKT 2017

mZxid = 0x40002787d

mtime = Thu Aug 10 04:31:37 HKT 2017

pZxid = 0x40002787d

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x35d885c689c00a6

dataLength = 168

numChildren = 0


get /brokers/ids/12

{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n2.test.com:9093
"],"jmx_port":-1,"host":null,"timestamp":"1502284073115","port":-1,"version":4}

cZxid = 0x400026c66

ctime = Wed Aug 09 21:07:53 HKT 2017

mZxid = 0x400026c66

mtime = Wed Aug 09 21:07:53 HKT 2017

pZxid = 0x400026c66

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x25d6b41469a0110

dataLength = 168

numChildren = 0


get /brokers/ids/13

{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n3.test.com:9093
"],"jmx_port":-1,"host":null,"timestamp":"1502284080461","port":-1,"version":4}

cZxid = 0x400026c6c

ctime = Wed Aug 09 21:07:59 HKT 2017

mZxid = 0x400026c6c

mtime = Wed Aug 09 21:07:59 HKT 2017

pZxid = 0x400026c6c

cversion = 0

dataVersion = 0

aclVersion = 0

ephemeralOwner = 0x35d885c689c00a2

dataLength = 168

numChildren = 0
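
To rule out inter-broker connectivity issues on the SSL listener, the handshake can also be checked from each node against the other two, e.g. from n1:

openssl s_client -connect n2.test.com:9093 -tls1
openssl s_client -connect n3.test.com:9093 -tls1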
Ascot Moss
2017-08-09 21:21:18 UTC
Dear Manna,

Where can I set "-Djavax.security.debug=all" for Kafka?

Regards
Post by Ascot Moss
( I have 3 test nodes)
get /brokers/ids/11
{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n1.test.com:9093"],"jmx_port":-1,"host":null,"timestamp":"1502310695312","
port":-1,"version":4}
cZxid = 0x40002787d
ctime = Thu Aug 10 04:31:37 HKT 2017
mZxid = 0x40002787d
mtime = Thu Aug 10 04:31:37 HKT 2017
pZxid = 0x40002787d
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x35d885c689c00a6
dataLength = 168
numChildren = 0
get /brokers/ids/12
{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n2.test.com:9093"],"jmx_port":-1,"host":null,"timestamp":"1502284073115","
port":-1,"version":4}
cZxid = 0x400026c66
ctime = Wed Aug 09 21:07:53 HKT 2017
mZxid = 0x400026c66
mtime = Wed Aug 09 21:07:53 HKT 2017
pZxid = 0x400026c66
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x25d6b41469a0110
dataLength = 168
numChildren = 0
get /brokers/ids/13
{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n3.test.com:9093"],"jmx_port":-1,"host":null,"timestamp":"1502284080461","
port":-1,"version":4}
cZxid = 0x400026c6c
ctime = Wed Aug 09 21:07:59 HKT 2017
mZxid = 0x400026c6c
mtime = Wed Aug 09 21:07:59 HKT 2017
pZxid = 0x400026c6c
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x35d885c689c00a2
dataLength = 168
numChildren = 0
Post by M. Manna
zookeeper-shell.sh localhost:2181
get /brokers/ids/11
zookeeper-shell.sh n1.test.com:2181
Connecting to n1.test.com:2181
Welcome to ZooKeeper!
JLine support is disabled
WatchedEvent state:SyncConnected type:None path:null
get /brokers/ids/11
WatchedEvent state:SaslAuthenticated type:None path:null
{"listener_security_protocol_map":{"SSL":"SSL"},"endpoints":["SSL://
n1.test.com:9093"],"jmx_port":-1,"host":null,"timest
amp":"1502310695312","port":-1,"version":4}
cZxid = 0x40002787d
ctime = Thu Aug 10 04:31:37 HKT 2017
mZxid = 0x40002787d
mtime = Thu Aug 10 04:31:37 HKT 2017
pZxid = 0x40002787d
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x35d885c689c00a6
dataLength = 168
numChildren = 0
Post by Ascot Moss
About: zookeeper-shell.sh localhost:2181
get /brokers/ids/11
zookeeper-shell.sh n1.test.com:2181
Connecting to n1.test.com:2181
Welcome to ZooKeeper!
JLine support is disabled
WatchedEvent state:SyncConnected type:None path:null
WatchedEvent state:SaslAuthenticated type:None path:null
Post by Ascot Moss
FYI, about zookeeper, I used my existing zookeeper (as I have existing
zookeeper up and running, which is also used for hbase)
zookeeper versoom: 3.4.10
zoo.cfg
######
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/datalog
clientPort=2181
maxClientCnxns=60
server.1=n1.test.com:2888:3888
server.2=n2.test.com:2888:3888
server.3=n3.test.com:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenti
cationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
zookeeper.allowSaslFailedClients=false
kerberos.removeHostFromPrincipal=true
######
Post by Ascot Moss
server.properties
######
broker.id=11
port=9093
host.name=n1
advertised.host.name=192.168.0.11
allow.everyone.if.no.acl.found=true
super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
listeners=SSL://n1.test.com:9093 <http://n1.test.com:9092/>
advertised.listeners=SSL://n1.test.com:9093 <http://n1.test.com:9092/>
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.inter.broker.protocol=SSL
ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
ssl.keystore.password=Test2017
ssl.key.password=Test2017
ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
ssl.truststore.password=Test2017
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
principal.builder.class=org.apache.kafka.common.security.aut
h.DefaultPrincipalBuilder
num.replica.fetchers=4
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.lag.time.max.ms=10000
controller.socket.timeout.ms=30000
controller.message.queue.size=10
default.replication.factor=3
log.dirs=/usr/log/kafka
kafka.logs.dir=/usr/log/kafka
num.partitions=20
message.max.bytes=1000000
auto.create.topics.enable=true
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.hours=720
log.flush.interval.ms=10000
log.flush.interval.messages=20000
log.flush.scheduler.interval.ms=2000
log.roll.hours=168
log.retention.check.interval.ms=300000
log.segment.bytes=1073741824
delete.topic.enable=true
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=1048576
socket.send.buffer.bytes=1048576
num.io.threads=8
num.network.threads=8
queued.max.requests=16
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
zookeeper.connect=n1:2181,n2:2181,n3:2181
zookeeper.connection.timeout.ms=2000
zookeeper.sync.time.ms=2000
######
producer.properties
######
bootstrap.servers=n1.test.com:9093 <http://n1.test.com:9092/>
security.protocol=SSL
ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
ssl.truststore.password=testkafka
ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
ssl.keystore.password=testkafka
ssl.key.password=testkafka
#####
(I had tried to switch to another port, 9093 is the correct port)
M. Manna
2017-08-09 21:25:06 UTC
Permalink
If you remove host.name, advertised.host.name and port from
server.properties, does it work for you?

I am using SSL without ACL. It seems to be working fine.
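In other words, roughly this in server.properties (a sketch based on the
config you posted - keep only the listeners entries):

#host.name=n1
#advertised.host.name=192.168.0.11
#port=9093
listeners=SSL://n1.test.com:9093
advertised.listeners=SSL://n1.test.com:9093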
Ascot Moss
2017-08-09 23:17:20 UTC
Permalink
I commented out both #host.name and #advertised.host.name.

(new server.properties)
broker.id=11
port=9093
#host.name=n1.test.com
#advertised.host.name=192.168.0.11
allow.everyone.if.no.acl.found=true
super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
listeners=SSL://n1.test.com:9093
advertised.listeners=SSL://n1.test.com:9093
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.inter.broker.protocol=SSL
ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
ssl.keystore.password=Test2017
ssl.key.password=Test2017
ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
ssl.truststore.password=Test2017
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
num.replica.fetchers=4
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.lag.time.max.ms=10000
controller.socket.timeout.ms=30000
controller.message.queue.size=10
default.replication.factor=3
log.dirs=/usr/log/kafka
kafka.logs.dir=/usr/log/kafka
num.partitions=20
message.max.bytes=1000000
auto.create.topics.enable=true
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.hours=720
log.flush.interval.ms=10000
log.flush.interval.messages=20000
log.flush.scheduler.interval.ms=2000
log.roll.hours=168
log.retention.check.interval.ms=300000
log.segment.bytes=1073741824
delete.topic.enable=true
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=1048576
socket.send.buffer.bytes=1048576
num.io.threads=8
num.network.threads=8
queued.max.requests=16
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
zookeeper.connect=n1:2181,n2:2181,n3:2181
zookeeper.connection.timeout.ms=2000
zookeeper.sync.time.ms=2000


(producer.properties)
bootstrap.servers=n1.test.com:9093
security.protocol=SSL
ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
ssl.truststore.password=testkafka
ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
ssl.keystore.password=testkafka
ssl.key.password=testkafka


(run producer)
./bin/kafka-console-producer.sh \
--broker-list n1:9093 \
--producer.config /home/kafka/config/producer.n1.properties \
--sync --topic test02


(got error)

[2017-08-10 07:10:31,881] ERROR Error when sending message to topic test02
with key: null, value: 0 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for
test02-0: 1518 ms has passed since batch creation plus linger time

[2017-08-10 07:10:32,230] ERROR Error when sending message to topic test02
with key: null, value: 0 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for
test02-1: 1543 ms has passed since batch creation plus linger time



By the way, where to set "-Djavax.security.debug=all" for Kafka?
M. Manna
2017-08-10 08:33:26 UTC
Permalink
You missed port - comment that out too.

Debugging can be enabled by:

1) Setting the root logger to DEBUG - more information on your cluster
2) SSL debugging - edit kafka-run-class to add -Djavax.security.debug=all
(see some examples of how some other values are configured; a minimal sketch follows)
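A sketch - the stock kafka-run-class.sh appends $KAFKA_OPTS to the java
command line, so exporting it before starting a broker or client tool should
be equivalent to editing the script:

# security debug output for whatever Kafka script is started from this shell
export KAFKA_OPTS="-Djavax.security.debug=all"

# root logger: in config/log4j.properties, change the log4j.rootLogger line from INFO to DEBUG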

Could you please set:
zookeeper.connection.timeout.ms = 15000
zookeeper.sync.time.ms=10000
retries=10

It seems that your group metadata is expiring all the time. Try with the above
and see if it improves.
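Presumably the zookeeper.* values go in server.properties (they are already
there with lower values) and retries goes in producer.properties - a sketch,
with 10 simply being the value suggested above:

# server.properties
zookeeper.connection.timeout.ms=15000
zookeeper.sync.time.ms=10000

# producer.properties
retries=10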
Ascot Moss
2017-08-10 10:37:08 UTC
Permalink
Works!
Many thanks
Ascot Moss
2017-08-10 10:42:00 UTC
Permalink
A question:

(input order)
test1
test2
test3
test 2017-08-10
2017-08-10 test1
2017-08-10 test2


If I get them using --from-beginning:
(received order)
test1
test 2017-08-10
2017-08-10 test1
test2
test3
2017-08-10 test2

Any idea how to get the messages in the correct order, as inputted?
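(Probably relevant: Kafka only guarantees ordering within a single partition,
and test02 has 3 partitions, so the console consumer interleaves them. A
minimal sketch of the usual workaround - a single-partition topic; the topic
name is only an example:

kafka-topics.sh --create --zookeeper n1:2181,n2:2181,n3:2181 \
  --replication-factor 3 --partitions 1 --topic test02-ordered

Alternatively, giving related messages the same key keeps them in one
partition, and therefore in order.)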
Post by Ascot Moss
Works!
Many thanks
Post by M. Manna
you missed port - comment that out too.
Debugging can enabled by
1) Setting root logger to DEBUG - more information on you cluster
2) SSL debugging - edit kafka-run-class - to add
-Djavax.security.debug=all
(see some examples of how some other values are configured)
zookeeper.connection.timeout.ms = 15000
zookeeper.sync.time.ms=10000
retries=10
It seems that your group metadata is expiring all time. Try with the above
and see if it improves.
Post by Ascot Moss
I commented out both #host.name, #advertised.host.nam
(new server.properties)
broker.id=11
port=9093
#host.name=n1.test.com
#advertised.host.name=192.168.0.11
allow.everyone.if.no.acl.found=true
super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
listeners=SSL://n1.test.com:9093
advertised.listeners=SSL://n1.test.com:9093
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.inter.broker.protocol=SSL
ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
ssl.keystore.password=Test2017
ssl.key.password=Test2017
ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
ssl.truststore.password=Test2017
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
principal.builder.class=org.apache.kafka.common.security.
auth.DefaultPrincipalBuilder
num.replica.fetchers=4
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.lag.time.max.ms=10000
controller.socket.timeout.ms=30000
controller.message.queue.size=10
default.replication.factor=3
log.dirs=/usr/log/kafka
kafka.logs.dir=/usr/log/kafka
num.partitions=20
message.max.bytes=1000000
auto.create.topics.enable=true
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.hours=720
log.flush.interval.ms=10000
log.flush.interval.messages=20000
log.flush.scheduler.interval.ms=2000
log.roll.hours=168
log.retention.check.interval.ms=300000
log.segment.bytes=1073741824
delete.topic.enable=true
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=1048576
socket.send.buffer.bytes=1048576
num.io.threads=8
num.network.threads=8
queued.max.requests=16
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
zookeeper.connect=n1:2181,n2:2181,n3:2181
zookeeper.connection.timeout.ms=2000
zookeeper.sync.time.ms=2000
(producer.properties)
bootstrap.servers=n1.test.com:9093
security.protocol=SSL
ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
ssl.truststore.password=testkafka
ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
ssl.keystore.password=testkafka
ssl.key.password=testkafka
(run producer)
./bin/kafka-console-producer.sh \
--broker-list n1:9093 \
--producer.config /home/kafka/config/producer.n1.properties \
--sync --topic test02
(got error)
[2017-08-10 07:10:31,881] ERROR Error when sending message to topic
test02
Post by Ascot Moss
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s)
for
Post by Ascot Moss
test02-0: 1518 ms has passed since batch creation plus linger time
[2017-08-10 07:10:32,230] ERROR Error when sending message to topic
test02
Post by Ascot Moss
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s)
for
Post by Ascot Moss
test02-1: 1543 ms has passed since batch creation plus linger time
By the way, where to set "-Djavax.security.debug=all" for Kafka?
Post by M. Manna
if you remove host.name, advertised.host.name and port from
server.properties, does it work for you?
I am using SSL without ACL. it seems to be working fine.
Post by M. Manna
zookeeper-shell.sh localhost:2181
get /brokers/ids/11
zookeeper-shell.sh n1.test.com:2181
Connecting to n1.test.com:2181
Welcome to ZooKeeper!
JLine support is disabled
WatchedEvent state:SyncConnected type:None path:null
get /brokers/ids/11
WatchedEvent state:SaslAuthenticated type:None path:null
["SSL://
Post by Ascot Moss
Post by M. Manna
Post by M. Manna
n1.test.com:9093
"],"jmx_port":-1,"host":null,"timestamp":"1502310695312","
port":-1,"version":4}
cZxid = 0x40002787d
ctime = Thu Aug 10 04:31:37 HKT 2017
mZxid = 0x40002787d
mtime = Thu Aug 10 04:31:37 HKT 2017
pZxid = 0x40002787d
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x35d885c689c00a6
dataLength = 168
numChildren = 0
Post by Ascot Moss
About: zookeeper-shell.sh localhost:2181
get /brokers/ids/11
zookeeper-shell.sh n1.test.com:2181
Connecting to n1.test.com:2181
Welcome to ZooKeeper!
JLine support is disabled
WatchedEvent state:SyncConnected type:None path:null
WatchedEvent state:SaslAuthenticated type:None path:null
Post by Ascot Moss
FYI, about zookeeper, I used my existing zookeeper (as I have
existing
Post by M. Manna
Post by M. Manna
Post by Ascot Moss
Post by Ascot Moss
zookeeper up and running, which is also used for hbase)
zookeeper versoom: 3.4.10
zoo.cfg
######
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/datalog
clientPort=2181
maxClientCnxns=60
server.1=n1.test.com:2888:3888
server.2=n2.test.com:2888:3888
server.3=n3.test.com:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
zookeeper.allowSaslFailedClients=false
kerberos.removeHostFromPrincipal=true
######
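One side effect of requireClientAuthScheme=sasl on this ensemble: clients that are expected to authenticate via SASL, including the Kafka brokers and the command-line tools that talk to ZooKeeper, typically need a JAAS file with a Client section. A minimal sketch for a Kerberos setup - the keytab path and principal below are placeholders, not values from this thread:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka.service.keytab"
  principal="kafka/n1.test.com@TEST.COM";
};

and point the broker JVM at it before starting, for example:

export KAFKA_OPTS="-Djava.security.auth.login.config=/home/kafka/config/kafka_jaas.conf"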
Post by Ascot Moss
server.properties
######
broker.id=11
port=9093
host.name=n1
advertised.host.name=192.168.0.11
allow.everyone.if.no.acl.found=true
super.users=User:CN=n1.test.com,OU=TEST,O=TEST,L=TEST,ST=TEST,C=TEST
listeners=SSL://n1.test.com:9093
advertised.listeners=SSL://n1.test.com:9093
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type=JKS
ssl.truststore.type=JKS
security.inter.broker.protocol=SSL
ssl.keystore.location=/home/kafka/kafka.server.keystore.jks
ssl.keystore.password=Test2017
ssl.key.password=Test2017
ssl.truststore.location=/home/kafka/kafka.server.truststore.jks
ssl.truststore.password=Test2017
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
principal.builder.class=org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
num.replica.fetchers=4
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.lag.time.max.ms=10000
controller.socket.timeout.ms=30000
controller.message.queue.size=10
default.replication.factor=3
log.dirs=/usr/log/kafka
kafka.logs.dir=/usr/log/kafka
num.partitions=20
message.max.bytes=1000000
auto.create.topics.enable=true
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.hours=720
log.flush.interval.ms=10000
log.flush.interval.messages=20000
log.flush.scheduler.interval.ms=2000
log.roll.hours=168
log.retention.check.interval.ms=300000
log.segment.bytes=1073741824
delete.topic.enable=true
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=1048576
socket.send.buffer.bytes=1048576
num.io.threads=8
num.network.threads=8
queued.max.requests=16
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
zookeeper.connect=n1:2181,n2:2181,n3:2181
zookeeper.connection.timeout.ms=2000
zookeeper.sync.time.ms=2000
######
producer.properties
######
bootstrap.servers=n1.test.com:9093
security.protocol=SSL
ssl.truststore.location=/home/kafka/kafka.client.truststore.jks
ssl.truststore.password=testkafka
ssl.keystore.location=/home/kafka/kafka.client.keystore.jks
ssl.keystore.password=testkafka
ssl.key.password=testkafka
######
(I had tried to switch to another port; 9093 is the correct port)
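If there is any doubt about which port the SSL listener is actually bound to, checking on the broker host itself is quick (this assumes you can log in to n1; the commands are generic, not from this thread):

ss -tlnp | grep 9093        # or: netstat -tlnp | grep 9093
openssl s_client -connect n1.test.com:9093 -tls1 </dev/null 2>/dev/null | grep -E "Cipher|Verify return"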
Post by M. Manna
Your openssl test is showing connected with port 9092, but your previous
messages show 9093 - is there some typo issue? Where is SSL running?
Please share the following and don't leave any details out; leaving details
out will only create more assumptions.
1) server.properties
2) Zookeeper.properties
Also, run the following command (when the cluster is running)
zookeeper-shell.sh localhost:2181
get /brokers/ids/11
Does it show that your broker #11 is connected?
Post by Ascot Moss
Dear Manna,
Post by M. Manna
What's the status of your SSL? Have you verified that the setup is working?
Yes, I used:
openssl s_client -debug -connect n1.test.com:9092 -tls1
CONNECTED(00000003)
write to 0x853e70 [0x89fd43] (155 bytes => 155 (0x9B))
0000 - 16 03 01 00 96 01 00 00-92 03 01 59 8b 6d 0d b1   ...........Y.m..
...
Server certificate
-----BEGIN CERTIFICATE-----
CwwCSEsxGT............
-----END CERTIFICATE-----
---
SSL handshake has read 2470 bytes and written 161 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
PSK identity hint: None
Start Time: 1502309645
Timeout : 7200 (sec)
Verify return code: 19 (self signed certificate in certificate chain)
---
Regards
Ascot Moss
2017-08-10 10:43:19 UTC
Permalink
A question:

(input order)
test1
test2
test3
test 2017-08-10
2017-08-10 test1
2017-08-10 test2

If I get them using *--from-beginning*
(received order)
test1
test 2017-08-10
2017-08-10 test1
test2
test3
2017-08-10 test2

Any idea how to get the messages in the original order as input?
M. Manna
2017-08-10 10:50:58 UTC
Permalink
This is due to the partitions you are consuming from. The documentation section
explains what needs to be done.
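Kafka only guarantees ordering within a single partition; since test02 has three partitions, the console consumer interleaves records from them. Two common ways around it, as a rough sketch (test03 is just an example topic name):

# Option 1: keep everything in one partition
kafka-topics.sh --create --zookeeper n1:2181,n2:2181,n3:2181 \
  --replication-factor 3 --partitions 1 --topic test03

# Option 2: key the records so related messages hash to the same partition
kafka-console-producer.sh --broker-list n1:9093 \
  --producer.config /home/kafka/config/producer.n1.properties \
  --property parse.key=true --property key.separator=: --topic test02
# then type lines such as  order42:test1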
Ascot Moss
2017-08-10 11:05:56 UTC
Permalink
Could you point me to where the documentation is?
Ascot Moss
2017-08-13 12:12:57 UTC
Permalink
Hi,


Without changing any configuration, got the error again now:

[2017-08-13 20:09:52,727] ERROR Error when sending message to topic test02
with key: null, value: 5 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
test02-1: 1542 ms has passed since batch creation plus linger time

[2017-08-13 20:09:53,835] ERROR Error when sending message to topic test02
with key: null, value: 5 bytes with error:
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for
test02-0: 1532 ms has passed since batch creation plus linger time


Producer:

kafka-console-producer.sh \
--broker-list n1:9093 \
--producer.config /homey/kafka/config/producer.n1.properties \
--sync --topic test02


Consumer:

kafka-console-consumer.sh \
--bootstrap-server n1:9093 \
--consumer.config /home/kafka/config/consumer.n1.properties \
--topic test02 --from-beginning
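If the SSL side checks out, it may also be worth ruling out simple timeout pressure while debugging by relaxing the producer timeouts in producer.n1.properties; these are standard producer settings, shown here with example values only:

request.timeout.ms=60000
max.block.ms=120000
retries=5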
