DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=42) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=42,topics=[],forgotten_topics_data=[]} with correlation id 80 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=43) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=43,topics=[],forgotten_topics_data=[]} with correlation id 81 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 82 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=44) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=44,topics=[],forgotten_topics_data=[]} with correlation id 83 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=45) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=45,topics=[],forgotten_topics_data=[]} with correlation id 84 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 85 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=46) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=46,topics=[],forgotten_topics_data=[]} with correlation id 86 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=47) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=47,topics=[],forgotten_topics_data=[]} with correlation id 87 to node 0
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending Heartbeat request to coordinator localhost:9092 (id: 2147483647 rack: null)
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v2 to send HEARTBEAT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55} with correlation id 88 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Received successful Heartbeat response
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 89 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=48) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=48,topics=[],forgotten_topics_data=[]} with correlation id 90 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=49) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=49,topics=[],forgotten_topics_data=[]} with correlation id 91 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 92 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=50) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=50,topics=[],forgotten_topics_data=[]} with correlation id 93 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=51) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=51,topics=[],forgotten_topics_data=[]} with correlation id 94 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 95 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=52) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=52,topics=[],forgotten_topics_data=[]} with correlation id 96 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=53) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=53,topics=[],forgotten_topics_data=[]} with correlation id 97 to node 0
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending Heartbeat request to coordinator localhost:9092 (id: 2147483647 rack: null)
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v2 to send HEARTBEAT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55} with correlation id 98 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Received successful Heartbeat response
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 99 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=54) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=54,topics=[],forgotten_topics_data=[]} with correlation id 100 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=55) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=55,topics=[],forgotten_topics_data=[]} with correlation id 101 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 102 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=56) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=56,topics=[],forgotten_topics_data=[]} with correlation id 103 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=57) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=57,topics=[],forgotten_topics_data=[]} with correlation id 104 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 105 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=58) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=58,topics=[],forgotten_topics_data=[]} with correlation id 106 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=59) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=59,topics=[],forgotten_topics_data=[]} with correlation id 107 to node 0
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending Heartbeat request to coordinator localhost:9092 (id: 2147483647 rack: null)
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v2 to send HEARTBEAT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55} with correlation id 108 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Received successful Heartbeat response
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 109 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=60) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=60,topics=[],forgotten_topics_data=[]} with correlation id 110 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=61) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=61,topics=[],forgotten_topics_data=[]} with correlation id 111 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 112 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=62) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=62,topics=[],forgotten_topics_data=[]} with correlation id 113 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=63) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=63,topics=[],forgotten_topics_data=[]} with correlation id 114 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 115 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=64) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=64,topics=[],forgotten_topics_data=[]} with correlation id 116 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=65) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=65,topics=[],forgotten_topics_data=[]} with correlation id 117 to node 0
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending Heartbeat request to coordinator localhost:9092 (id: 2147483647 rack: null)
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v2 to send HEARTBEAT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55} with correlation id 118 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Received successful Heartbeat response
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 119 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=66) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=66,topics=[],forgotten_topics_data=[]} with correlation id 120 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=67) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=67,topics=[],forgotten_topics_data=[]} with correlation id 121 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 122 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=68) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=68,topics=[],forgotten_topics_data=[]} with correlation id 123 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=69) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=69,topics=[],forgotten_topics_data=[]} with correlation id 124 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 125 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=70) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=70,topics=[],forgotten_topics_data=[]} with correlation id 126 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=71) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=71,topics=[],forgotten_topics_data=[]} with correlation id 127 to node 0
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending Heartbeat request to coordinator localhost:9092 (id: 2147483647 rack: null)
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v2 to send HEARTBEAT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55} with correlation id 128 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Received successful Heartbeat response
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 129 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=72) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=72,topics=[],forgotten_topics_data=[]} with correlation id 130 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=73) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=73,topics=[],forgotten_topics_data=[]} with correlation id 131 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=13,member_id=consumer-1-92fc6a4c-bc5c-4914-8481-013046954b55,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 132 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1850410725 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1850410725, epoch=74) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1850410725,session_epoch=74,topics=[],forgotten_topics_data=[]} with correlation id 133 to node 0
INFO main org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 
	allow.auto.create.topics = true
	auto.commit.interval.ms = 1000
	auto.offset.reset = latest
	bootstrap.servers = [127.0.0.1:9092]
	check.crcs = true
	client.dns.lookup = default
	client.id = 
	client.rack = 
	connections.max.idle.ms = 540000
	default.api.timeout.ms = 60000
	enable.auto.commit = true
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = 1
	group.instance.id = null
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = true
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 300000
	max.poll.records = 500
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 30000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer

DEBUG main org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=1] Initializing the Kafka consumer
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-throttle-time
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-reauthentication:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication-no-reauth:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-authentication:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-reauthentication:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name reauthentication-latency:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name heartbeat-latency
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name join-latency
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name sync-latency
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name commit-latency
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-fetched
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name records-fetched
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name fetch-latency
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name records-lag
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name records-lead
INFO main org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.3.0
INFO main org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: fc1aaa116b661c8a
INFO main org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1590667416409
DEBUG main org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=1] Kafka consumer initialized
INFO main org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=1] Subscribed to topic(s): topicDemo
DEBUG main org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Initiating connection to node 127.0.0.1:9092 (id: -1 rack: null) using address /127.0.0.1
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received
DEBUG main org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency
DEBUG main org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-1, groupId=1] Created socket with SO_RCVBUF = 342972, SO_SNDBUF = 146988, SO_TIMEOUT = 0 to node -1
DEBUG main org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Completed connection to node -1. Fetching API versions.
DEBUG main org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Initiating API versions fetch from node -1.
DEBUG main org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Recorded API versions for node -1: (Produce(0): 0 to 7 [usable: 7], Fetch(1): 0 to 10 [usable: 10], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 7 [usable: 7], LeaderAndIsr(4): 0 to 2 [usable: 2], StopReplica(5): 0 to 1 [usable: 1], UpdateMetadata(6): 0 to 5 [usable: 5], ControlledShutdown(7): 0 to 2 [usable: 2], OffsetCommit(8): 0 to 6 [usable: 6], OffsetFetch(9): 0 to 5 [usable: 5], FindCoordinator(10): 0 to 2 [usable: 2], JoinGroup(11): 0 to 4 [usable: 4], Heartbeat(12): 0 to 2 [usable: 2], LeaveGroup(13): 0 to 2 [usable: 2], SyncGroup(14): 0 to 2 [usable: 2], DescribeGroups(15): 0 to 2 [usable: 2], ListGroups(16): 0 to 2 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 2 [usable: 2], CreateTopics(19): 0 to 3 [usable: 3], DeleteTopics(20): 0 to 3 [usable: 3], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 1 [usable: 1], OffsetForLeaderEpoch(23): 0 to 2 [usable: 2], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 1 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 1 [usable: 1], ElectPreferredLeaders(43): 0 [usable: 0], IncrementalAlterConfigs(44): UNSUPPORTED)
DEBUG main org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(name='topicDemo')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node 127.0.0.1:9092 (id: -1 rack: null)
DEBUG main org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v7 to send METADATA {topics=[{name=topicDemo}],allow_auto_topic_creation=true} with correlation id 2 to node -1
DEBUG main org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v7 to send METADATA {topics=null,allow_auto_topic_creation=true} with correlation id 0 to node -1
DEBUG main org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-1, groupId=1] Updating last seen epoch from null to 0 for partition topicDemo-0
INFO main org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-1, groupId=1] Cluster ID: vLctt2CRS9i3HcEnmv1apQ
DEBUG main org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-1, groupId=1] Updated cluster metadata updateVersion 2 to MetadataCache{cluster=Cluster(id = vLctt2CRS9i3HcEnmv1apQ, nodes = [localhost:9092 (id: 0 rack: null)], partitions = [Partition(topic = topicDemo, partition = 0, leader = 0, replicas = [0], isr = [0], offlineReplicas = [])], controller = localhost:9092 (id: 0 rack: null))}
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending FindCoordinator request to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Initiating connection to node localhost:9092 (id: 0 rack: null) using address localhost/127.0.0.1
DEBUG Thread-1 org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-sent
DEBUG Thread-1 org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.bytes-received
DEBUG Thread-1 org.apache.kafka.common.metrics.Metrics - Added sensor with name node-0.latency
DEBUG Thread-1 org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-1, groupId=1] Created socket with SO_RCVBUF = 342972, SO_SNDBUF = 146988, SO_TIMEOUT = 0 to node 0
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Completed connection to node 0. Fetching API versions.
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Initiating API versions fetch from node 0.
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Recorded API versions for node 0: (Produce(0): 0 to 7 [usable: 7], Fetch(1): 0 to 10 [usable: 10], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 7 [usable: 7], LeaderAndIsr(4): 0 to 2 [usable: 2], StopReplica(5): 0 to 1 [usable: 1], UpdateMetadata(6): 0 to 5 [usable: 5], ControlledShutdown(7): 0 to 2 [usable: 2], OffsetCommit(8): 0 to 6 [usable: 6], OffsetFetch(9): 0 to 5 [usable: 5], FindCoordinator(10): 0 to 2 [usable: 2], JoinGroup(11): 0 to 4 [usable: 4], Heartbeat(12): 0 to 2 [usable: 2], LeaveGroup(13): 0 to 2 [usable: 2], SyncGroup(14): 0 to 2 [usable: 2], DescribeGroups(15): 0 to 2 [usable: 2], ListGroups(16): 0 to 2 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 2 [usable: 2], CreateTopics(19): 0 to 3 [usable: 3], DeleteTopics(20): 0 to 3 [usable: 3], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 1 [usable: 1], OffsetForLeaderEpoch(23): 0 to 2 [usable: 2], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 1 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 1 [usable: 1], ElectPreferredLeaders(43): 0 [usable: 0], IncrementalAlterConfigs(44): UNSUPPORTED)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Received FindCoordinator response ClientResponse(receivedTimeMs=1590667416762, latencyMs=9, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=2, clientId=consumer-1, correlationId=3), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='null', nodeId=0, host='localhost', port=9092))
INFO Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Discovered group coordinator localhost:9092 (id: 2147483647 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Initiating connection to node localhost:9092 (id: 2147483647 rack: null) using address localhost/127.0.0.1
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Heartbeat thread started
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending synchronous auto-commit of offsets {}
INFO Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Revoking previously assigned partitions []
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Disabling heartbeat thread
INFO Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] (Re-)joining group
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Joining group with current subscription: [topicDemo]
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending JoinGroup (JoinGroupRequestData(groupId='1', sessionTimeoutMs=30000, rebalanceTimeoutMs=300000, memberId='', groupInstanceId='null', protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 0, 0, 0, 0, 1, 0, 9, 116, 111, 112, 105, 99, 68, 101, 109, 111, 0, 0, 0, 0])])) to coordinator localhost:9092 (id: 2147483647 rack: null)
DEBUG Thread-1 org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.bytes-sent
DEBUG Thread-1 org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.bytes-received
DEBUG Thread-1 org.apache.kafka.common.metrics.Metrics - Added sensor with name node-2147483647.latency
DEBUG Thread-1 org.apache.kafka.common.network.Selector - [Consumer clientId=consumer-1, groupId=1] Created socket with SO_RCVBUF = 342972, SO_SNDBUF = 146988, SO_TIMEOUT = 0 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Completed connection to node 2147483647. Fetching API versions.
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Initiating API versions fetch from node 2147483647.
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Recorded API versions for node 2147483647: (Produce(0): 0 to 7 [usable: 7], Fetch(1): 0 to 10 [usable: 10], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 7 [usable: 7], LeaderAndIsr(4): 0 to 2 [usable: 2], StopReplica(5): 0 to 1 [usable: 1], UpdateMetadata(6): 0 to 5 [usable: 5], ControlledShutdown(7): 0 to 2 [usable: 2], OffsetCommit(8): 0 to 6 [usable: 6], OffsetFetch(9): 0 to 5 [usable: 5], FindCoordinator(10): 0 to 2 [usable: 2], JoinGroup(11): 0 to 4 [usable: 4], Heartbeat(12): 0 to 2 [usable: 2], LeaveGroup(13): 0 to 2 [usable: 2], SyncGroup(14): 0 to 2 [usable: 2], DescribeGroups(15): 0 to 2 [usable: 2], ListGroups(16): 0 to 2 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 2 [usable: 2], CreateTopics(19): 0 to 3 [usable: 3], DeleteTopics(20): 0 to 3 [usable: 3], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 1 [usable: 1], OffsetForLeaderEpoch(23): 0 to 2 [usable: 2], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 1 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 1 [usable: 1], ElectPreferredLeaders(43): 0 [usable: 0], IncrementalAlterConfigs(44): UNSUPPORTED)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v4 to send JOIN_GROUP {group_id=1,session_timeout_ms=30000,rebalance_timeout_ms=300000,member_id=,protocol_type=consumer,protocols=[{name=range,metadata=java.nio.HeapByteBuffer[pos=0 lim=21 cap=21]}]} with correlation id 5 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Disabling heartbeat thread
INFO Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] (Re-)joining group
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Joining group with current subscription: [topicDemo]
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending JoinGroup (JoinGroupRequestData(groupId='1', sessionTimeoutMs=30000, rebalanceTimeoutMs=300000, memberId='consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76', groupInstanceId='null', protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 0, 0, 0, 0, 1, 0, 9, 116, 111, 112, 105, 99, 68, 101, 109, 111, 0, 0, 0, 0])])) to coordinator localhost:9092 (id: 2147483647 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v4 to send JOIN_GROUP {group_id=1,session_timeout_ms=30000,rebalance_timeout_ms=300000,member_id=consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76,protocol_type=consumer,protocols=[{name=range,metadata=java.nio.HeapByteBuffer[pos=0 lim=21 cap=21]}]} with correlation id 7 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Received successful JoinGroup response: JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=15, protocolName='range', leader='consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76', memberId='consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76', members=[JoinGroupResponseMember(memberId='consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76', groupInstanceId='null', metadata=[0, 0, 0, 0, 0, 1, 0, 9, 116, 111, 112, 105, 99, 68, 101, 109, 111, 0, 0, 0, 0])])
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Performing assignment using strategy range with subscriptions {consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76=Subscription(topics=[topicDemo])}
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Finished assignment for group: {consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76=Assignment(partitions=[topicDemo-0])}
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending leader SyncGroup to coordinator localhost:9092 (id: 2147483647 rack: null): SyncGroupRequestData(groupId='1', generationId=15, memberId='consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76', groupInstanceId='null', assignments=[SyncGroupRequestAssignment(memberId='consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76', assignment=[0, 0, 0, 0, 0, 1, 0, 9, 116, 111, 112, 105, 99, 68, 101, 109, 111, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0])])
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v2 to send SYNC_GROUP {group_id=1,generation_id=15,member_id=consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76,assignments=[{member_id=consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76,assignment=java.nio.HeapByteBuffer[pos=0 lim=29 cap=29]}]} with correlation id 8 to node 2147483647
INFO Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Successfully joined group with generation 15
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Enabling heartbeat thread
INFO Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Setting newly assigned partitions: topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Fetching committed offsets for partitions: [topicDemo-0]
INFO Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Setting offset for partition topicDemo-0 to the committed offset FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}}
DEBUG Thread-1 org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-1, groupId=1] Not replacing existing epoch 0 with new epoch 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Skipping validation of fetch offsets for partitions [topicDemo-0] since the broker does not support the required protocol version (introduced in Kafka 2.3)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built full fetch (sessionId=INVALID, epoch=INITIAL) for node 0 with 1 partition(s).
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED FullFetchRequest(topicDemo-0) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=0,session_epoch=0,topics=[{topic=topicDemo,partitions=[{partition=0,current_leader_epoch=0,fetch_offset=1613,log_start_offset=-1,partition_max_bytes=1048576}]}],forgotten_topics_data=[]} with correlation id 10 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent a full fetch response that created a new incremental fetch session 1839206013 with 1 response partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Fetch READ_UNCOMMITTED at offset 1613 for partition topicDemo-0 returned fetch data (error=NONE, highWaterMark=1613, lastStableOffset = 1613, logStartOffset = 0, preferredReadReplica = absent, abortedTransactions = null, recordsSizeInBytes=0)
DEBUG Thread-1 org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.topicDemo.bytes-fetched
DEBUG Thread-1 org.apache.kafka.common.metrics.Metrics - Added sensor with name topic.topicDemo.records-fetched
DEBUG Thread-1 org.apache.kafka.common.metrics.Metrics - Added sensor with name topicDemo-0.records-lag
DEBUG Thread-1 org.apache.kafka.common.metrics.Metrics - Added sensor with name topicDemo-0.records-lead
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1839206013, epoch=1) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1839206013,session_epoch=1,topics=[],forgotten_topics_data=[]} with correlation id 11 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=15,member_id=consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 12 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1839206013 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1839206013, epoch=2) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1839206013,session_epoch=2,topics=[],forgotten_topics_data=[]} with correlation id 13 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1839206013 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1839206013, epoch=3) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1839206013,session_epoch=3,topics=[],forgotten_topics_data=[]} with correlation id 14 to node 0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=15,member_id=consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 15 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Committed offset 1613 for partition topicDemo-0
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Completed asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1839206013 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1839206013, epoch=4) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1839206013,session_epoch=4,topics=[],forgotten_topics_data=[]} with correlation id 16 to node 0
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Node 0 sent an incremental fetch response for session 1839206013 with 0 response partition(s), 1 implied partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Added READ_UNCOMMITTED fetch request for partition topicDemo-0 at position FetchPosition{offset=1613, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=localhost:9092 (id: 0 rack: null), epoch=0}} to node localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=consumer-1, groupId=1] Built incremental fetch (sessionId=1839206013, epoch=5) for node 0. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s) out of 1 partition(s)
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=consumer-1, groupId=1] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), implied=(topicDemo-0)) to broker localhost:9092 (id: 0 rack: null)
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v10 to send FETCH {replica_id=-1,max_wait_time=500,min_bytes=1,max_bytes=52428800,isolation_level=0,session_id=1839206013,session_epoch=5,topics=[],forgotten_topics_data=[]} with correlation id 17 to node 0
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending Heartbeat request to coordinator localhost:9092 (id: 2147483647 rack: null)
DEBUG kafka-coordinator-heartbeat-thread | 1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v2 to send HEARTBEAT {group_id=1,generation_id=15,member_id=consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76} with correlation id 18 to node 2147483647
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=1] Received successful Heartbeat response
DEBUG Thread-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=consumer-1, groupId=1] Sending asynchronous auto-commit of offsets {topicDemo-0=OffsetAndMetadata{offset=1613, leaderEpoch=0, metadata=''}}
DEBUG Thread-1 org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=1] Using older server API v6 to send OFFSET_COMMIT {group_id=1,generation_id=15,member_id=consumer-1-5e573d5a-1142-4585-b0bc-4d393a1c9d76,topics=[{name=topicDemo,partitions=[{partition_index=0,committed_offset=1613,committed_leader_epoch=0,committed_metadata=}]}]} with correlation id 19 to node 2147483647
