
The produced message seems never sent to the server; outq_len() is always non-zero and never decreases #2030

Closed

yandaren opened this issue Sep 29, 2018 · 6 comments


yandaren commented Sep 29, 2018

Read the FAQ first: https://github.com/edenhill/librdkafka/wiki/FAQ

Description

In my test code the topic is created successfully and metadata can be fetched successfully, but producing never succeeds: producer->outq_len() always stays the same and never decreases to 0. It seems the messages are never sent to the server and stay in the send queue forever.

#include <iostream>
#include <string>
#include <list>
#include <stdint.h>
#include <rdkafkacpp.h>

static bool run = true;
static bool exit_eof = false;

void dump_config(RdKafka::Conf* conf) {
    std::list<std::string> *dump = conf->dump();

    printf("config dump(%d):\n", (int32_t)dump->size());
    // Entries come in name,value pairs.
    for (auto it = dump->begin(); it != dump->end(); ) {
        std::string name = *it++;
        std::string value = *it++;
        printf("%s = %s\n", name.c_str(), value.c_str());
    }

    // conf->dump() allocates the list; the caller must free it.
    delete dump;

    printf("---------------------------------------------\n");
}

class my_event_cb : public RdKafka::EventCb {
public:
    void event_cb(RdKafka::Event &event) override {
        switch (event.type())
        {
        case RdKafka::Event::EVENT_ERROR:
            std::cerr << "ERROR (" << RdKafka::err2str(event.err()) << "): " <<
                event.str() << std::endl;
            if (event.err() == RdKafka::ERR__ALL_BROKERS_DOWN)
                run = false;
            break;

        case RdKafka::Event::EVENT_STATS:
            std::cerr << "\"STATS\": " << event.str() << std::endl;
            break;

        case RdKafka::Event::EVENT_LOG:
            fprintf(stderr, "LOG-%i-%s: %s\n",
                event.severity(), event.fac().c_str(), event.str().c_str());
            break;

        default:
            std::cerr << "EVENT " << event.type() <<
                " (" << RdKafka::err2str(event.err()) << "): " <<
                event.str() << std::endl;
            break;
        }
    }
};

class my_hash_partitioner_cb : public RdKafka::PartitionerCb {
public:
    int32_t partitioner_cb(const RdKafka::Topic *topic, const std::string *key,
        int32_t partition_cnt, void *msg_opaque) override {
        // key may be NULL for keyless messages (as produced below).
        if (!key)
            return 0;
        return djb_hash(key->c_str(), key->size()) % partition_cnt;
    }
private:
    static inline unsigned int djb_hash(const char *str, size_t len) {
        unsigned int hash = 5381;
        for (size_t i = 0; i < len; i++)
            hash = ((hash << 5) + hash) + str[i];
        return hash;
    }
};

namespace producer_ts {

class my_delivery_report_cb : public RdKafka::DeliveryReportCb {
public:
    void dr_cb(RdKafka::Message& message) override {
        printf("message delivery %d bytes, error:%s, key: %s\n",
            (int32_t)message.len(), message.errstr().c_str(), message.key() ? message.key()->c_str() : "");
    }
};

void producer_test() {
    printf("producer test\n");

    int32_t partition = RdKafka::Topic::PARTITION_UA;

    printf("input brokers list(127.0.0.1:9092;127.0.0.1:9093;127.0.0.1:9094):\n");
    std::string broker_list;

    //std::cin >> broker_list;
    broker_list = "127.0.0.1:9092";

    printf("input partition:");

    //std::cin >> partition;
    partition = 0;

    // config 
    RdKafka::Conf* global_conf = RdKafka::Conf::create(RdKafka::Conf::CONF_GLOBAL);
    RdKafka::Conf* topic_conf = RdKafka::Conf::create(RdKafka::Conf::CONF_TOPIC);

    my_hash_partitioner_cb          hash_partitioner;
    my_event_cb                     event_cb;
    my_delivery_report_cb           delivery_cb;
  

    std::string err_string;
    if (topic_conf->set("partitioner_cb", &hash_partitioner, err_string) != RdKafka::Conf::CONF_OK) {
        printf("set partitioner_cb error: %s\n", err_string.c_str());
        return;
    }

    global_conf->set("metadata.broker.list", broker_list, err_string);
    global_conf->set("event_cb", &event_cb, err_string);
    global_conf->set("dr_cb", &delivery_cb, err_string);
    global_conf->set("debug", "all", err_string);

    //dump_config(global_conf);
    //dump_config(topic_conf);


    // create producer
    RdKafka::Producer* producer = RdKafka::Producer::create(global_conf, err_string);
    if (!producer) {
        printf("failed to create producer, %s\n", err_string.c_str());
        return;
    }

    printf("created producer %s\n", producer->name().c_str());

    std::string topic_name;
    while (true) {

        printf("input topic to create:\n");
        std::cin >> topic_name;

        // create topic
        RdKafka::Topic* topic =
            RdKafka::Topic::create(producer, topic_name, topic_conf, err_string);

        if (!topic) {
            printf("try create topic[%s] failed, %s\n",
                topic_name.c_str(), err_string.c_str());
            return;
        }

        printf(">");
        for (std::string line; run && std::getline(std::cin, line); ) {
            if (line.empty()) {
                producer->poll(0);
                continue;
            }

            if (line == "quit") {
                break;
            }

            RdKafka::ErrorCode res = producer->produce(topic, partition,
                RdKafka::Producer::RK_MSG_COPY,
                (char*)line.c_str(), line.size(), NULL, NULL);

            if (res != RdKafka::ERR_NO_ERROR) {
                printf("produce failed, %s\n", RdKafka::err2str(res).c_str());
            }
            else {
                printf("produced msg, bytes %d\n", (int32_t)line.size());
            }

            // do socket io
            producer->poll(0);

            printf("outq_len: %d\n", producer->outq_len());

            //producer->flush(1000);

            //while (run && producer->outq_len()) {
            //    printf("wait for write queue( size %d) write finish\n", producer->outq_len());
            //    producer->poll(1000);
            //}

            printf(">");
        }

        delete topic;

        if (!run) {
            break;
        }
    }

    run = true;

    while (run && producer->outq_len()) {
        printf("waiting for write queue (size %d) to drain\n", producer->outq_len());
        producer->poll(1000);
    }

    delete producer;
}
}

================
The logs look like this:

>1
produced msg, bytes 1
outq_len: 1
>2
produced msg, bytes 1
outq_len: 2
>3
produced msg, bytes 1
outq_len: 3
LOG-7-TOPPAR: [thrd:HSH-D-3146.wowshga.internal:9092/0]: xxx:9092/0: kafka_cpp_test [0] 3 message(s) in xmit queue (0 added from partition queue)
LOG-7-TOPPAR: [thrd:HSH-D-3146.wowshga.internal:9092/0]: xxx:9092/0: kafka_cpp_test [0] 3 message(s) in xmit queue (0 added from partition queue)
LOG-7-TOPPAR: [thrd:HSH-D-3146.wowshga.internal:9092/0]: xxx:9092/0: kafka_cpp_test [0] 3 message(s) in xmit queue (0 added from partition queue)

The outq_len just increases when I send a produce request, and then stays the same.
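
For reference, RdKafka::Producer::flush(timeout_ms) wraps the same serve-until-drained wait as the commented-out poll loop in the repro; a minimal sketch of what I would expect to succeed here (instead it always times out):

    // flush() serves delivery report callbacks and waits up to the
    // timeout for all queued messages to be delivered.
    RdKafka::ErrorCode err = producer->flush(5000);
    if (err == RdKafka::ERR__TIMED_OUT)
        printf("flush timed out, outq_len: %d\n", producer->outq_len());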

Checklist

IMPORTANT: We will close issues where the checklist has not been completed.

Please provide the following information:

  • librdkafka version (release number or git tag): v0.11.5
  • Apache Kafka version: 0.11.0.3
  • librdkafka client configuration: <REPLACE with e.g., message.timeout.ms=123, auto.reset.offset=earliest, ..>
  • Operating system: Win64
  • Provide logs (with debug=.. as necessary) from librdkafka
  • Provide broker log excerpts
  • Critical issue
yandaren (Author) commented:

The delivery report callback is never triggered.

yandaren (Author) commented:

config

config dump(138):
builtin.features = gzip,snappy,ssl,sasl,regex,lz4,sasl_gssapi,sasl_plain,sasl_scram,plugins
client.id = rdkafka
metadata.broker.list = 127.0.0.1:9092
message.max.bytes = 1000000
message.copy.max.bytes = 65535
receive.message.max.bytes = 100000000
max.in.flight.requests.per.connection = 1000000
metadata.request.timeout.ms = 60000
topic.metadata.refresh.interval.ms = 300000
metadata.max.age.ms = -1
topic.metadata.refresh.fast.interval.ms = 250
topic.metadata.refresh.fast.cnt = 10
topic.metadata.refresh.sparse = true
debug =
socket.timeout.ms = 60000
socket.blocking.max.ms = 1000
socket.send.buffer.bytes = 0
socket.receive.buffer.bytes = 0
socket.keepalive.enable = false
socket.nagle.disable = false
socket.max.fails = 1
broker.address.ttl = 1000
broker.address.family = any
reconnect.backoff.jitter.ms = 500
statistics.interval.ms = 0
enabled_events = 0
log_cb = 000000013FBAA4E2
log_level = 6
log.queue = false
log.thread.name = true
log.connection.close = true
socket_cb = 000000013FBA6275
open_cb = 000000013FBA934E
internal.termination.signal = 0
api.version.request = true
api.version.request.timeout.ms = 10000
api.version.fallback.ms = 1200000
broker.version.fallback = 0.9.0
security.protocol = plaintext
sasl.mechanisms = GSSAPI
sasl.kerberos.service.name = kafka
sasl.kerberos.principal = kafkaclient
partition.assignment.strategy = range,roundrobin
session.timeout.ms = 30000
heartbeat.interval.ms = 1000
group.protocol.type = consumer
coordinator.query.interval.ms = 600000
enable.auto.commit = true
auto.commit.interval.ms = 5000
enable.auto.offset.store = true
queued.min.messages = 100000
queued.max.messages.kbytes = 1048576
fetch.wait.max.ms = 100
fetch.message.max.bytes = 1048576
fetch.max.bytes = 52428800
fetch.min.bytes = 1
fetch.error.backoff.ms = 500
offset.store.method = broker
enable.partition.eof = true
check.crcs = false
queue.buffering.max.messages = 100000
queue.buffering.max.kbytes = 1048576
queue.buffering.max.ms = 0
message.send.max.retries = 2
retry.backoff.ms = 100
queue.buffering.backpressure.threshold = 1
compression.codec = none
batch.num.messages = 10000
delivery.report.only.error = false
---------------------------------------------
config dump(30):
request.required.acks = 1
request.timeout.ms = 5000
message.timeout.ms = 300000
queuing.strategy = fifo
produce.offset.report = false
partitioner = consistent_random
compression.codec = inherit
compression.level = -1
auto.commit.enable = true
auto.commit.interval.ms = 60000
auto.offset.reset = largest
offset.store.path = .
offset.store.sync.interval.ms = -1
offset.store.method = broker
consume.callback.max.messages = 0
---------------------------------------------

edenhill (Contributor) commented Oct 5, 2018

Set the debug=topic,msg config property to see what is going on under the hood.

nit: You should only create a topic once.
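
For reference, with the C++ API used in the repro, narrowing the debug contexts would look something like this (a minimal sketch reusing the global_conf and err_string variables from the repro):

    // Trace only topic and message-level activity instead of "all".
    if (global_conf->set("debug", "topic,msg", err_string) != RdKafka::Conf::CONF_OK)
        printf("failed to set debug: %s\n", err_string.c_str());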

yandaren (Author) commented Oct 8, 2018

Maybe I found the problem, in the function 'rd_kafka_toppar_producer_serve':

        /* Honour retry.backoff.ms. */
        if (unlikely(rkm->rkm_u.producer.ts_backoff > now)) {
                *next_wakeup = rkm->rkm_u.producer.ts_backoff;
                /* Wait for backoff to expire */
                return 0;
        }

        /* Send Produce requests for this toppar */
        while (1) {
                r = rd_kafka_ProduceRequest(rkb, rktp);
                if (likely(r > 0))
                        cnt += r;
                else
                        break;
        }

rkm->rkm_u.producer.ts_backoff defaults to 0, now = rd_clock(), and both are of type rd_ts_t (int64_t). In my Win64 VS project the value of 'now' always seems to be negative, so the condition 'rkm->rkm_u.producer.ts_backoff > now' always hits and rd_kafka_ProduceRequest never gets a chance to execute. When I changed the type of rd_ts_t to uint64_t it seemed to work: I can produce and consume topic messages successfully.
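
A minimal standalone illustration of how a multiply-before-divide counter-to-microseconds conversion can come out negative in int64_t (made-up values, not librdkafka's actual code):

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Made-up values: a large raw performance-counter reading and
        // a 10 MHz counter frequency.
        int64_t counter = 1LL << 45;   // ~3.5e13 ticks
        int64_t freq = 10000000;

        // Multiply first: counter * 1000000 exceeds INT64_MAX and wraps,
        // which in practice shows up as a negative timestamp.
        int64_t us_bad = counter * 1000000 / freq;

        // Divide first to stay in range (at the cost of sub-tick precision).
        int64_t us_ok = counter / freq * 1000000;

        printf("bad: %lld us, ok: %lld us\n",
            (long long)us_bad, (long long)us_ok);
        return 0;
    }

That would also explain why changing rd_ts_t to uint64_t hides the symptom: the wrapped value becomes a large positive number, so 'ts_backoff > now' no longer holds.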

edenhill (Contributor) commented Oct 8, 2018

You're most likely hitting the clock bug on Windows: #1980
It is fixed in the upcoming v0.11.6 release.

yandaren (Author) commented Oct 8, 2018

OK, thanks, I'll try it later.
