Rate limiting requirement #501
@smulube please PR this back here, this is definitely something of interest!

If you're going to proceed with this, please allow 0 to default to the original unlimited behavior.

OK, will clean things up on our side and open a PR back here for you to have a look at 👍

Yep, this was exactly how we implemented it; I think we must have had the same thought process around this 😄

Pull request opened for review here: #506
That works for backing off, but usually you then want to apply backpressure further upstream to prevent your producer from building a deeper and deeper backlog. If it's market data, you might want to coalesce the data at an instrument level and probably prioritise the latest updates. If it's order data, you need to rethink things if you can't handle the data flow. Using a goroutine with an atomic queue depth is pretty much what I've also done for a basic handler, but it's quite naive in the long run and I know I will need something smarter.
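The instrument-level coalescing mentioned above can be sketched roughly as follows. This is a hypothetical illustration, not part of quickfixgo; the `Quote` and `Conflater` types are invented for the example:

```go
package main

import (
	"fmt"
	"sync"
)

// Quote is a stand-in for an instrument-level market data update.
type Quote struct {
	Symbol string
	Bid    float64
	Ask    float64
}

// Conflater keeps only the latest update per instrument, so a slow
// consumer drains a bounded map instead of an ever-growing backlog.
type Conflater struct {
	mu     sync.Mutex
	latest map[string]Quote
}

func NewConflater() *Conflater {
	return &Conflater{latest: make(map[string]Quote)}
}

// Publish overwrites any pending update for the same symbol.
func (c *Conflater) Publish(q Quote) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.latest[q.Symbol] = q
}

// Drain hands the pending updates to the consumer and resets the map.
func (c *Conflater) Drain() []Quote {
	c.mu.Lock()
	defer c.mu.Unlock()
	out := make([]Quote, 0, len(c.latest))
	for _, q := range c.latest {
		out = append(out, q)
	}
	c.latest = make(map[string]Quote)
	return out
}

func main() {
	c := NewConflater()
	c.Publish(Quote{Symbol: "VOD.L", Bid: 72.10, Ask: 72.12})
	c.Publish(Quote{Symbol: "VOD.L", Bid: 72.14, Ask: 72.16}) // supersedes the first
	c.Publish(Quote{Symbol: "BARC.L", Bid: 185.00, Ask: 185.04})
	for _, q := range c.Drain() {
		fmt.Println(q.Symbol, q.Bid, q.Ask)
	}
}
```

Prioritising the latest updates falls out naturally here: superseded quotes for the same symbol are simply overwritten before the consumer ever sees them. This is fine for market data, but as noted, order flow cannot be conflated this way.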
Do we still plan to merge the diff above?
Hi, we're currently in the middle of going live with a project that involves FIX connections to a very large stock exchange. We are currently testing in their UAT environment, but will be going live to production systems very shortly. In the exchange's UAT environment, clients using the trading facility are limited to no more than 10 msg/second/session; once we activate in production this limit will increase to 1000 msg/second/session (under the agreement we currently have with them). If we breach this limit when sending messages to the exchange, they temporarily disconnect us, meaning an interruption to our processing, and we would have concerns about losing messages in this situation.

This is not so much a concern for production, as I think there is little danger of us breaching the 1000 msg/sec/session limit in the near future, but while testing in UAT we have been experiencing rate limit disconnections despite the precautions we put in place to avoid calling `quickfix.Send` faster than the limit.

What we think is happening, after looking at the library, is that when we call `Send` from our application code, the quickfixgo library actually buffers those messages in an internal slice of byte slices, which are then sent out via the `writeLoop` inside `connection.go`. We suspect that this buffer sometimes captures enough messages that, when they are sent, they go out in a burst that pushes us over the limit, causing us to be disconnected.

We have forked the library and patched in a rate limit at the lowest level (i.e. inside `connection.go`) to ensure that we never accidentally burst over the specified limit, but I'd like to ask if there would be any interest in us cleaning up our fork and submitting a PR with these changes?
Here's an outline of how we've solved this on our side:

- We added a `MaxMessagesPerSecond` setting to the `internal.Settings` struct, so that when building sessions from config we are able to specify per-environment/per-session limits.
- In the `writeLoop` function in `connection.go`, we pass in the `MaxMessagesPerSecond` config value; inside the loop, after we pull a message from the channel to send, we either immediately proceed (as is done at the moment) or use a rate limiter to ensure we never burst above the specified session limit.

These changes seemed to work well in testing: we have stopped seeing any occurrences of rate limit disconnections. Is this a change that would be of any interest to be pull-requested back here, as I can't believe we're the only people to face this restriction, or should we just look at continuing to maintain our own fork?
Many thanks.