TRSS query efficiency #839
A couple of thoughts:
Noting that I have mitigated this on the Adoptium TRSS server by rate-limiting requests on the nginx front end, but that should be considered a temporary workaround for the underlying issues with TRSS. A change in architecture to use a single query would definitely be preferable if possible, or at least combining the queries somehow so as not to overload the database.
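For reference, the temporary rate limit described above can be expressed with nginx's limit_req module. This is only a sketch of that kind of workaround; the zone name, rate, host name, and backend port below are illustrative assumptions, not the actual Adoptium configuration:

```nginx
# In the http block: define a shared zone keyed on client IP,
# allowing roughly 10 requests/second per client.
# (Zone name, size, and rate are illustrative, not the real Adoptium settings.)
limit_req_zone $binary_remote_addr zone=trss_api:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name trss.example.org;          # placeholder host name

    location /api/ {
        # Queue short bursts; reject anything beyond that instead of
        # passing it through to the TRSS backend.
        limit_req zone=trss_api burst=20 nodelay;
        proxy_pass http://localhost:3001;  # assumed TRSS backend port
    }
}
```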
This is not a database overload issue. All changes have been delivered, and performance has been boosted by approximately 35x. This issue will be closed. Rate-limiting requests on nginx is not a way to fix a performance issue.
Does that mean the problem you screenshotted in the original description has been resolved and we just need to get the update onto the Adoptium TRSS instance?
I completely agree, but I wasn't aware that anyone had been working on the issue - I'd be delighted if the performance issue has been fixed and I can remove the limit again :-)
Perhaps I failed to describe clearly enough in the recent scrum or on Slack that my intention/priority is to update the sync job (#856) so I can pull in the 3 recent perf improvements committed into aqa-test-tools from Lan. I am working on it now, but it has taken longer than expected due to the recent removal of local Docker tools and my wanting to test locally. I've finally resolved that barrier and will hopefully be able to test my updates shortly. Noting we had 2 different issues:
1) Lan has vastly improved TRSS perf, but we have not pulled the changes into our prod server yet.
Thanks - I knew you were working on getting the sync job working again, but I wasn't aware until now that it was because some of the underlying issues we'd been seeing here - the ones that had been mitigated temporarily with the nginx "hack" - had been resolved. That's great to hear, so thanks Lan! I think for (2) we still need to understand what can be done to reduce the output (although that's separate from this issue). It would be good to know whether other TRSS instances are seeing this with a default configuration, to indicate if it's something we've done. A cleanup on sync might be adequate but is more of a sticking plaster (similar to what I did with nginx!)
As we monitor more and more test builds, we need to look into TRSS query efficiency. I have seen cases where TRSS uses anywhere from 100% to 600% CPU when loading the page.
Also, depending on the number of builds that are monitored, loading the main page can take a long time.
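As a rough illustration of the consolidation discussed above (combining many per-build queries into one), the sketch below folds the lookups for all monitored builds into a single MongoDB round trip. The collection and field names ("testResults", "buildName", "timestamp") and the connection details are assumptions for the example, not TRSS's real schema or code:

```javascript
// Sketch only: schema, collection name, and connection string are assumed, not TRSS's actual ones.
const { MongoClient } = require('mongodb');

async function loadDashboard(buildNames) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    const coll = client.db('exampledb').collection('testResults');

    // One aggregation instead of N individual find() calls (one per monitored build):
    // filter to the monitored builds, then keep only the latest result per build.
    return await coll.aggregate([
      { $match: { buildName: { $in: buildNames } } },
      { $sort: { timestamp: -1 } },
      { $group: { _id: '$buildName', latest: { $first: '$$ROOT' } } },
    ]).toArray();
  } finally {
    await client.close();
  }
}
```

Whether this maps directly onto the actual TRSS queries would need checking against the code, but it shows why a single combined query puts far less load on the database than issuing one query per monitored build.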