ThrottlingException - Rate limit exceeded #69
Comments
Hi! There should really be code that handles these limit errors, though. It could be as simple as allowing the error to happen here, which should trigger a retry. Of course, without logging a warning it could just keep retrying indefinitely, meaning you'd still lose the logs.
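To illustrate what handling could look like: retry throttled calls with backoff, warn each time, and give up after a bounded number of attempts so the failure stays visible. This is just a sketch, not the module's actual code or retry path; it assumes the AWS SDK v2 promise API, and putWithRetry / maxAttempts are names made up for illustration.

async function putWithRetry (client, params, maxAttempts = 5) {
  for (let attempt = 1; ; attempt++) {
    try {
      // Attempt the PutLogEvents call
      return await client.putLogEvents(params).promise()
    } catch (err) {
      // Give up on non-throttling errors, or once the attempts run out
      if (err.code !== 'ThrottlingException' || attempt >= maxAttempts) throw err
      // Warn so throttling is visible instead of being retried silently
      console.warn(`CloudWatch throttled PutLogEvents, retrying (attempt ${attempt})`)
      // Exponential backoff: 200 ms, 400 ms, 800 ms, ...
      await new Promise(resolve => setTimeout(resolve, 100 * 2 ** attempt))
    }
  }
}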
Sorry, mis-clicked.
Not sure what's wrong... Tried to change
If I got it right, I shouldn't have this problem with:
Since it would send 1 log every 5 seconds to CloudWatch. The only problem would be a giant queue holding the messages before they're properly sent, which would consume a lot of RAM, but it would still work. Do you have any idea or suggestion?
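For reference, this is roughly the shape of configuration I mean. The option names are an assumption on my part (I believe the module accepts batchSize and submissionInterval, but please correct me), and the group/stream names are placeholders:

// Assumed transport options: with batchSize 1 and a 5000 ms submissionInterval,
// at most one log event would be submitted to CloudWatch every 5 seconds.
const options = {
  logGroupName: 'my-group',      // placeholder
  logStreamName: 'my-stream',    // placeholder
  batchSize: 1,
  submissionInterval: 5000
}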
Well, I fixed the problem. I can open 2 PRs, if that's OK with you:
Just let me know if you're interested in that.

TL;DR: Just add this to the options you pass to the constructor: maxSequenceTokenAge: 999999

Here's what I found: Internally, you're calling a method to fetch the sequence token for the log stream, and that call counts against a CloudWatch Logs rate limit. You can check it here:

Since my solution has more than 6 servers and all of them use this module, together they were exceeding that limit. This explains the ThrottlingException. In your code, that fetch happens through _getSequenceToken, and I saw that you're properly caching the sequenceToken.
The problem was in the method you use to check whether the cached sequenceToken is still valid. It seems like the implementation of that method is always saying to fetch again, no matter what.

export default class CloudWatchClient {
  constructor (logGroupName, logStreamName, options) {
    debug('constructor', {logGroupName, logStreamName, options})
    this._logGroupName = logGroupName
    this._logStreamName = logStreamName
    this._options = defaults(options, {
      awsConfig: null,
      // XXX: This should be a positive value, I think. I used it as a work-around to fix my problem
      maxSequenceTokenAge: -1,
      formatLog: null,
      formatLogItem: null,
      createLogGroup: false,
      createLogStream: false
    })
    this._formatter = new CloudWatchEventFormatter(this._options)
    this._sequenceTokenInfo = null
    this._client = new AWS.CloudWatchLogs(this._options.awsConfig)
    this._initializing = null
  }

  // ...

  // XXX: I think you could just check if we have a sequenceToken cached here
  _getSequenceToken () {
    const now = +new Date()
    // XXX: With the default maxSequenceTokenAge of -1 this is always true,
    // so the cached token is never reused
    const isStale = (!this._sequenceTokenInfo ||
      this._sequenceTokenInfo.date + this._options.maxSequenceTokenAge < now)
    return isStale ? this._fetchAndStoreSequenceToken()
      : Promise.resolve(this._sequenceTokenInfo.sequenceToken)
  }

  // ...
}

That's why passing maxSequenceTokenAge: 999999 works as a work-around: the cached sequenceToken is then reused for a long time instead of being re-fetched on every batch.
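To be concrete, the kind of fix I have in mind is something like this, replacing _getSequenceToken in the class above. It's only a sketch, not the actual PR:

  // Sketch of a possible fix: reuse the cached token whenever we have one,
  // and only go back to AWS when we don't.
  _getSequenceToken () {
    return this._sequenceTokenInfo
      ? Promise.resolve(this._sequenceTokenInfo.sequenceToken)
      : this._fetchAndStoreSequenceToken()
  }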
PR to make
I've also opened a PR that tries to fix this issue.
Thanks for the hard work! I haven't had time to look at the PRs yet, but they're at the top of my list.
So is this issue fixed in the current version (3.0.0), or why is it still open? I'm still getting frequent "Rate exceeded for logStreamName ..." errors. I have rather large logGroups with about 50+ logStreams each. Normally, 2-8 processes share a single logStream.
Hey!
How are you?
Sometimes I get this error when the module tries to send a lot of logs to CloudWatch.
It comes as an "Unhandled Rejection", as we saw in #32, but let's not focus on that subject for now.
In fact, CloudWatch has some limits when we send data to it, as we can see here:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html
I would like to know if this module already works in the "best way it can": making batch operations and respecting the 5 requests / second restriction per log stream (I've sketched below what I mean by that).
My concern is losing logs in this process.
If that happens, I just can't trust CloudWatch when debugging an issue.
Do you have any suggestions?
It seems that temporarily decreasing the log level doesn't ease the overload on CloudWatch.
Thanks in advance!
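To make that concrete, here is roughly the kind of batching and client-side throttling I mean. It's only a sketch under assumptions of mine (AWS SDK v2, placeholder group/stream names, simplified sequence-token handling), not how the module actually works:

const AWS = require('aws-sdk')

const client = new AWS.CloudWatchLogs({region: 'us-east-1'})  // placeholder region
const queue = []
let sequenceToken = null  // would normally be obtained and kept up to date elsewhere

// Logs are queued here instead of being sent immediately
function enqueue (message) {
  queue.push({message, timestamp: Date.now()})
}

// Flush at most once every 200 ms, i.e. at most 5 PutLogEvents calls per
// second per stream, each call carrying one batch of queued events.
setInterval(async () => {
  if (queue.length === 0) return
  const logEvents = queue.splice(0, 10000)  // PutLogEvents accepts up to 10,000 events per call
  const params = {
    logGroupName: 'my-group',    // placeholder
    logStreamName: 'my-stream',  // placeholder
    logEvents
  }
  if (sequenceToken) params.sequenceToken = sequenceToken
  try {
    const res = await client.putLogEvents(params).promise()
    sequenceToken = res.nextSequenceToken
  } catch (err) {
    queue.unshift(...logEvents)  // put the events back so they are not lost
  }
}, 200)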