
No effect of KAFKA_BATCH_NUM_MESSAGES variable. #93

@psreekrishnan

Description

Hi There,

Setting KAFKA_BATCH_NUM_MESSAGES to different values has no effect on the adapter. I tried values from 10K (the default) up to 50K.

I was looking for a way to avoid the error below by increasing the BATCH_NUM_MESSAGES value, hoping that it would give the adapter more room to accommodate messages in the queue.

```
{"error":"Local: Queue full","level":"error","msg":"couldn't produce message in kafka topic prometheus-metrics-pft","time":"2022-04-20T10:37:46Z"}
```

I configured the variables below in addition to the broker list:

```
KAFKA_COMPRESSION=snappy
KAFKA_BATCH_NUM_MESSAGES=30000
```
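
If I understand correctly, these end up as librdkafka producer properties. A minimal sketch of the mapping I am assuming (using confluent-kafka-go, which wraps librdkafka; this is not the adapter's actual code, and the broker address is a placeholder):

```go
package main

import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	// Assumed mapping of the adapter's environment variables onto
	// librdkafka producer properties (sketch only, not the adapter's code).
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers":  "broker1:9092", // broker list (placeholder)
		"compression.codec":  "snappy",       // KAFKA_COMPRESSION
		"batch.num.messages": 30000,          // KAFKA_BATCH_NUM_MESSAGES
	})
	if err != nil {
		log.Fatalf("failed to create producer: %v", err)
	}
	defer p.Close()
}
```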

Also, based on the librdkafka documentation, "queue.buffering.max.ms" is the delay in milliseconds to wait for messages to accumulate in the producer queue. I am worried that its default value (set to 0 here) bypasses the KAFKA_BATCH_NUM_MESSAGES setting.

| Property | C/P | Range | Default | Description |
| --- | --- | --- | --- | --- |
| queue.buffering.max.ms | P | 0 .. 900000 | 0 | Delay in milliseconds to wait for messages in the producer queue to accumulate before constructing message batches (MessageSets) to transmit to brokers. A higher value allows larger and more effective (less overhead, improved compression) batches of messages to accumulate at the expense of increased message delivery latency. Type: integer |
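
For reference, my (possibly wrong) reading of the librdkafka configuration docs is that the local queue behind the "Queue full" error is bounded by queue.buffering.max.messages and queue.buffering.max.kbytes rather than batch.num.messages, and that the error can in principle be handled at produce time. A rough sketch of what I mean, assuming confluent-kafka-go (placeholder broker, topic, and values; not the adapter's actual code):

```go
package main

import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// produceWithRetry retries once after draining the local queue when
// Produce reports kafka.ErrQueueFull ("Local: Queue full").
func produceWithRetry(p *kafka.Producer, topic string, payload []byte) error {
	msg := &kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          payload,
	}
	err := p.Produce(msg, nil)
	if ke, ok := err.(kafka.Error); ok && ke.Code() == kafka.ErrQueueFull {
		// The local queue is bounded by queue.buffering.max.messages /
		// queue.buffering.max.kbytes; wait for it to drain, then retry once.
		p.Flush(5000)
		err = p.Produce(msg, nil)
	}
	return err
}

func main() {
	p, err := kafka.NewProducer(&kafka.ConfigMap{
		"bootstrap.servers":            "broker1:9092", // placeholder
		"queue.buffering.max.messages": 200000,         // local queue bound (librdkafka default: 100000)
		"batch.num.messages":           30000,          // max messages per batch (MessageSet)
		"queue.buffering.max.ms":       100,            // linger before building a batch (illustrative value)
	})
	if err != nil {
		log.Fatalf("failed to create producer: %v", err)
	}
	defer p.Close()

	if err := produceWithRetry(p, "prometheus-metrics-pft", []byte("sample payload")); err != nil {
		log.Printf("produce failed: %v", err)
	}
	p.Flush(15000)
}
```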

Is KAFKA_BATCH_NUM_MESSAGES not the right variable for working around the "Queue full" error?
