
No data flowing for Windows consumers #2025

Closed · 1 of 4 tasks

verc11 opened this issue Sep 25, 2018 · 2 comments
verc11 commented Sep 25, 2018

Description

No data flowing on the Windows client.

I have an environment with 5 Kafka brokers serving 2 topics. I can produce without issues, and I can consume without issues with the Linux librdkafka client. The problem is with consuming on Windows: when I subscribe, I never receive any messages. I'm using RdKafka::KafkaConsumer with subscribe() and a rebalance_cb set to a class that seeks, based on offset timestamps, to the start of the current day.

ExampleRebalanceCb ex_rebalance_cb(m_date);
conf->set("rebalance_cb", &ex_rebalance_cb, errstr);

I'm using a new consumer group each time I launch the application (only one instance).
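For reference, a minimal sketch of a timestamp-seeking rebalance callback along those lines (class and variable names are illustrative, not the reporter's actual code; offsetsForTimes() interprets each partition's offset field as a timestamp in milliseconds and replaces it in place with the earliest matching offset):

```cpp
#include <librdkafka/rdkafkacpp.h>
#include <iostream>
#include <vector>

// Illustrative sketch: on assignment, seek every partition to the first
// offset at or after a given timestamp, e.g. midnight of the current day.
class TimestampRebalanceCb : public RdKafka::RebalanceCb {
  int64_t start_ts_ms_;  // target timestamp, milliseconds since epoch
 public:
  explicit TimestampRebalanceCb(int64_t start_ts_ms)
      : start_ts_ms_(start_ts_ms) {}

  void rebalance_cb(RdKafka::KafkaConsumer *consumer,
                    RdKafka::ErrorCode err,
                    std::vector<RdKafka::TopicPartition *> &partitions) {
    if (err == RdKafka::ERR__ASSIGN_PARTITIONS) {
      /* offsetsForTimes() reads timestamps from the offset field and
       * replaces them with the earliest offsets at/after those times. */
      for (RdKafka::TopicPartition *tp : partitions)
        tp->set_offset(start_ts_ms_);
      RdKafka::ErrorCode e = consumer->offsetsForTimes(partitions, 10000);
      if (e != RdKafka::ERR_NO_ERROR)
        std::cerr << "offsetsForTimes: " << RdKafka::err2str(e) << std::endl;
      consumer->assign(partitions);
    } else {
      consumer->unassign();
    }
  }
};
```

The 10-second timeout passed to offsetsForTimes() is an arbitrary choice for the sketch.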

When I first enabled all debug output I noticed that the metadata requests never returned topic information. I was able to get around that by creating an RdKafka::Topic for each of my topics:

std::shared_ptr<RdKafka::Topic> topic = std::shared_ptr<RdKafka::Topic>(RdKafka::Topic::create(consumer, *it, tconf, errstr));

But then I just get fetch backoffs, and data never flows:

%7|1537893809.600|FETCH|rdkafka#consumer-1| [thrd:kafk0102:9092/bootstrap]: kafk0102:9092/43: Fetch backoff for 2631409797ms
%7|1537893809.600|FETCH|rdkafka#consumer-1| [thrd:kafk0101:9092/bootstrap]: kafk0101:9092/42: Fetch backoff for 2631409797ms
%7|1537893809.725|FETCH|rdkafka#consumer-1| [thrd:kafk0104:9092/bootstrap]: kafk0104:9092/40: Fetch backoff for 2631409672ms
%7|1537893809.725|FETCH|rdkafka#consumer-1| [thrd:kafk0105:9092/bootstrap]: kafk0105:9092/44: Fetch backoff for 2631409672ms
%7|1537893810.053|COMMIT|rdkafka#consumer-1| [thrd:main]: OffsetCommit internal error: Local: No offset stored
%7|1537893810.053|COMMIT|rdkafka#consumer-1| [thrd:main]: OffsetCommit for -1 partition(s): cgrp auto commit timer: returned: Local: No offset stored
%7|1537893810.053|UNASSIGN|rdkafka#consumer-1| [thrd:main]: Group "LBE_Group3882153046": unassign done in state wait-broker (join state init): without new assignment: OffsetCommit done (__NO_OFFSET)
%7|1537893810.616|FETCH|rdkafka#consumer-1| [thrd:kafk0102:9092/bootstrap]: kafk0102:9092/43: Fetch backoff for 2631408782ms
%7|1537893810.616|FETCH|rdkafka#consumer-1| [thrd:kafk0101:9092/bootstrap]: kafk0101:9092/42: Fetch backoff for 2631408782ms
%7|1537893810.616|FETCH|rdkafka#consumer-1| [thrd:kafk0103:9092/bootstrap]: kafk0103:9092/41: Fetch backoff for 2631408782ms

I'm attaching the full log snippet.
kafka_log_925.txt

Other details:
Not using Kerberos or any authentication. Not using compression. Compiling with LIBRDKAFKA_STATICLIB defined on Windows.

No errors or warnings in the broker logs:

kafka-broker-kafk0101.log.9:2018-09-19 13:06:33,272 DEBUG kafka.network.Acceptor: Accepted connection from /10.137.18.132:53849 on /10.137.18.156:9092 and assigned it to processor 2, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400]
kafka-broker-kafk0101.log.9:2018-09-19 13:06:33,272 DEBUG kafka.network.Processor: Processor 2 listening to new connection from /10.137.18.132:53849
kafka-broker-kafk0101.log.9:2018-09-19 13:06:33,272 DEBUG kafka.request.logger: Completed request:{api_key=18,api_version=0,correlation_id=1,client_id=rdkafka} -- {} from connection 10.137.18.156:9092-10.137.18.132:53849;totalTime:0.176000,requestQueueTime:0.021000,localTime:0.133000,remoteTime:0.000000,throttleTime:0.091000,responseQueueTime:0.013000,sendTime:0.015000,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT
kafka-broker-kafk0101.log.9:2018-09-19 13:06:33,273 DEBUG kafka.request.logger: Completed request:{api_key=3,api_version=2,correlation_id=2,client_id=rdkafka} -- {topics=[]} from connection 10.137.18.156:9092-10.137.18.132:53849;totalTime:0.433000,requestQueueTime:0.012000,localTime:0.405000,remoteTime:0.000000,throttleTime:0.199000,responseQueueTime:0.008000,sendTime:0.011000,securityProtocol:PLAINTEXT,principal:User:ANONYMOUS,listener:PLAINTEXT

contents of win32_config.h:

#ifndef _RD_WIN32_CONFIG_H_
#define _RD_WIN32_CONFIG_H_

#ifndef WITHOUT_WIN32_CONFIG
#define WITH_SSL 0
#define WITH_ZLIB 0
#define WITH_SNAPPY 0
#define WITH_SASL_SCRAM 0
#define ENABLE_DEVEL 0
#define WITH_PLUGINS 1
#define WITH_HDRHISTOGRAM 0
#endif
#define SOLIB_EXT ".dll"

#endif /* _RD_WIN32_CONFIG_H_ */
  • librdkafka version v0.11.5

  • Kafka 1.0.1

  • Operating system: Windows Server 2012 R2

  • librdkafka client configuration:

    conf->set("socket.send.buffer.bytes", "16000000", errstr);
    conf->set("socket.receive.buffer.bytes", "16000000", errstr);
    tconf->set("auto.offset.reset", "beginning", errstr);
    conf->set("group.id", group, errstr);
    conf->set("enable.auto.commit", "true", errstr);
    
    



edenhill (Contributor) commented

Is the client machine's uptime more than 7 days?
If so, you are probably hitting: #1980

verc11 (Author) commented Sep 25, 2018

Bingo, that was it. Works like a charm after restarting. Thank you!

@edenhill edenhill closed this as completed Dec 4, 2018