consumer: KeyError: 'NextShardIterator' after shard scale-up #29
Comments
Yep, known issue. WIP, see #26.
@zarnovican This issue will correct itself in 24 hours, or based on your shard retention time after the creation of the new shards. Here is why: when you bumped your stream from 2 to 4 shards, there are really 6 shards in total now: 2 old parent shards, 2 child shards that point back to the parents, and 2 shards that are completely new with no parent. Those 2 old parent shards will stay in the available shard list until their retention period expires. As hampsters mentioned, #26 combats this problem by adding a shard lifecycle manager to help detect this situation. It has been drawn out due to lack of time on my side and issues with the kinesalite library for testing, so I have to use a real stream to fix it. That said, if you want to get involved in fixing it, I am more than happy to take pull requests on #26. Also worth mentioning: this only affects consumers; producers are unaffected.
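The parent/child bookkeeping described above can be sketched in a few lines. This is an illustrative sketch only: the shard dicts mirror the shape returned by the Kinesis `DescribeStream`/`ListShards` APIs (`ShardId`, `ParentShardId`), and `split_closed_parents` is a hypothetical helper, not part of this library.

```python
def split_closed_parents(shards):
    """Split a shard list into closed parents and still-readable shards.

    After a reshard, any shard that appears as another shard's
    ParentShardId is a closed parent: reading it to the end yields a
    final GetRecords response with no 'NextShardIterator' key.
    """
    parent_ids = {s.get('ParentShardId') for s in shards} - {None}
    closed = [s for s in shards if s['ShardId'] in parent_ids]
    live = [s for s in shards if s['ShardId'] not in parent_ids]
    return closed, live

# Example reflecting the 2 -> 4 scale-up described above: the API
# reports 6 shards until the old parents age out of retention.
shards = [
    {'ShardId': 'shardId-000'},                                # old parent
    {'ShardId': 'shardId-001'},                                # old parent
    {'ShardId': 'shardId-002', 'ParentShardId': 'shardId-000'},  # child
    {'ShardId': 'shardId-003', 'ParentShardId': 'shardId-001'},  # child
    {'ShardId': 'shardId-004'},                                # new, no parent
    {'ShardId': 'shardId-005'},                                # new, no parent
]
closed, live = split_closed_parents(shards)
print(len(closed), len(live))  # → 2 4
```

A lifecycle manager along the lines of #26 would read the closed parents to completion before moving on to their children, rather than treating every listed shard as open-ended.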
Thank you both for timely response and John for the great explanation.
No pressure. I have destroyed and re-created the affected Kinesis streams, which solved the problem for me. Fortunately, I'm using Kinesis only for CloudFront logging, so the gap in the graphs is no problem for me, and I won't be needing resharding for a while anyway. I just thought to let you know about the problem. Since you are already aware of it, I'm happy to close this as a duplicate.
Duplicate of #26.
Hi,
I have scaled up my Kinesis stream from 2 to 4 shards. Since then, I'm not able to read anything because of this error:
When I added a bit of logging, the `result` variable contained this:

Restarting the consumer does not help.
I haven't tried it yet, but I guess, destroying and recreating the stream would solve the problem.
Info: