CommitFailedError when there are very few messages to be read from topic #2610

@berrfred

Description

This CommitFailedError arises when there are very few messages to be read from the topic ... I've been experiencing it since my early use of kafka-python (release 2.0.2), and it is still present with the latest 2.2.3.
Since I need to process every message exactly once, this led me to disable autocommit and instead do a manual commit after each poll that returns messages, then process the read messages.

In the example below, "no record available" is printed after each poll that returns no messages, so you can see that poll() is called every 5 seconds (or immediately after processing the last message).

02/05/2025; 18:49:02.94; [I] $GMT22 ; no record available
02/05/2025; 18:49:08.09; [I] $GMT22 ; no record available
02/05/2025; 18:49:13.23; [I] $GMT22 ; no record available
02/05/2025; 18:49:18.33; [I] $GMT22 ; no record available
02/05/2025; 18:49:23.53; [I] $GMT22 ; no record available
02/05/2025; 18:49:28.70; [I] $GMT22 ; no record available
02/05/2025; 18:49:34.29; [E] $GMT22 ; read records - error handling polled messages - CommitFailedError: Commit cannot be completed since the group has already
            rebalanced and assigned the partitions to another member.
            This means that the time between subsequent calls to poll()
            was longer than the configured max_poll_interval_ms, which
            typically implies that the poll loop is spending too much
            time message processing. You can address this either by
            increasing the rebalance timeout with max_poll_interval_ms,
            or by reducing the maximum size of batches returned in poll()
            with max_poll_records.

02/05/2025; 18:49:39.39; [I] $GMT22 ; no record available
02/05/2025; 18:49:44.50; [I] $GMT22 ; no record available
02/05/2025; 18:49:49.51; [W] $GMT22 ; read records - committed (1)
02/05/2025; 18:49:49.51; [W] $GMT22 ; read|oamds-mtonline|1|189230|b'no_check_1000200ffff33gfff'|10000
02/05/2025; 18:49:49.68; [I] $GMT22 ; nowait response for tag 10000
02/05/2025; 18:49:49.79; [I] $GMT22 ; no record available
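For reference, the poll-then-commit loop described above might look like the following. This is a minimal sketch assuming kafka-python's `KafkaConsumer`; the broker address, group id, and record handler are placeholders, not taken from the original setup:

```python
# Sketch of the poll-then-commit loop described in this issue.
# Assumptions (not from the original post): broker address, group id,
# and the handle_record callback are placeholders.

def poll_and_commit(consumer, handle_record, timeout_ms=5000):
    """Poll once; if records arrived, commit the fetched offsets, then process.

    Committing before processing mirrors the flow described above
    (manual commit after each poll that returns messages, then process);
    note this gives at-most-once delivery if processing can fail.
    """
    batches = consumer.poll(timeout_ms=timeout_ms)
    if not batches:
        return 0  # "no record available"
    consumer.commit()  # synchronous commit of the just-fetched offsets
    count = 0
    for _tp, records in batches.items():
        for record in records:
            handle_record(record)
            count += 1
    return count


def make_consumer():
    # kafka-python import deferred so poll_and_commit can be exercised
    # without a broker; all connection parameters below are placeholders.
    from kafka import KafkaConsumer
    return KafkaConsumer(
        "oamds-mtonline",                    # topic name from the log above
        bootstrap_servers="localhost:9092",  # placeholder broker
        group_id="my-group",                 # placeholder group id
        enable_auto_commit=False,            # manual commits only
    )
```

A driver would call `poll_and_commit(make_consumer(), process)` in a loop, which matches the roughly 5-second cadence visible in the log.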

max_poll_interval_ms is left at its default value (300 seconds, I believe) and is not even passed as a parameter to my Consumer.
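The error text itself suggests two mitigations: raising `max_poll_interval_ms` or lowering `max_poll_records` (even though, as noted, the observed poll interval here is well under the default). Both are accepted as `KafkaConsumer` keyword arguments in kafka-python; the values below are purely illustrative, and the broker address and group id are placeholders:

```python
def make_tuned_consumer():
    # Sketch only: values are illustrative, not recommendations.
    # In kafka-python, max_poll_interval_ms defaults to 300000 (300 s)
    # and max_poll_records defaults to 500.
    from kafka import KafkaConsumer
    return KafkaConsumer(
        "oamds-mtonline",
        bootstrap_servers="localhost:9092",  # placeholder broker
        group_id="my-group",                 # placeholder group id
        enable_auto_commit=False,
        max_poll_interval_ms=600000,  # allow up to 10 min between polls
        max_poll_records=100,         # smaller batches per poll
    )
```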

Not sure if it is related to the behaviour above, but at least two Kafka broker administrators have told me they do not see my client registered ... even though my client is regularly polling and processing messages.
