__consumer_offsets topic with very big partitions #75

Open
emy-lee opened this issue Jun 3, 2021 · 0 comments

I am using Kafka 2.0.0.
Some partitions of the __consumer_offsets topic have grown to 500-700 GB, with more than 5000-7000 segments each, and some of those segments are 2-3 months old. There are no errors in the logs, and the topic still has its default cleanup.policy=compact.

What could be the problem?
Could it be a configuration issue or a consumer problem, or maybe a bug in Kafka 2.0.0?
What checks could I do?
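Since __consumer_offsets depends on the log cleaner for compaction, one check worth scripting is whether the cleaner thread is still alive: if it has died, compaction silently stops and the partitions grow without bound. Below is a minimal sketch that reads the broker's kafka.log:type=LogCleanerManager,name=time-since-last-run-ms metric over JMX; the host and port (localhost:9999) are deployment-specific assumptions, not values from this issue.

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CleanerHealthCheck {
    public static void main(String[] args) throws Exception {
        // Assumes the broker exposes JMX on localhost:9999 (deployment-specific).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            ObjectName cleaner = new ObjectName(
                    "kafka.log:type=LogCleanerManager,name=time-since-last-run-ms");
            // Kafka gauges expose their reading through the "Value" attribute.
            // A value that only ever grows suggests the cleaner thread has died.
            Object ms = conn.getAttribute(cleaner, "Value");
            System.out.println("time since last cleaner run: " + ms + " ms");
        }
    }
}
```

If this value keeps climbing indefinitely, the broker's log-cleaner.log should contain the exception that killed the cleaner thread.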

My settings:

log.cleaner.enable=true
log.cleanup.policy = [delete]
log.retention.bytes = -1
log.segment.bytes = 268435456
log.retention.hours = 72
log.retention.check.interval.ms = 300000
...
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
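Since the broker-level log.cleanup.policy above is [delete], another quick sanity check is to confirm that __consumer_offsets still carries its per-topic cleanup.policy=compact override. A minimal sketch using the Kafka AdminClient; the bootstrap address is a placeholder:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckOffsetsTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; replace with a real bootstrap server.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config config = admin.describeConfigs(Collections.singleton(topic))
                                 .all().get().get(topic);
            // __consumer_offsets should report cleanup.policy=compact;
            // anything else would leave the broker default ([delete]) in force.
            System.out.println("cleanup.policy = " + config.get("cleanup.policy").value());
            System.out.println("segment.bytes  = " + config.get("segment.bytes").value());
        }
    }
}
```

If it does report compact and the partitions keep growing anyway, the cleaner-health check above becomes the more likely lead.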