
Has there been some load testing on the ws-backend? #12

Open
junoriosity opened this issue Aug 6, 2023 · 9 comments

Comments

@junoriosity
Collaborator

@kapv89 I did some load testing for a single instance of the ws-backend, where I have tested the operation in 2 cases:

  • an average number of calls (5) to different documents (5) - the first peak in the graphs
  • a large number of connections (35) to one document - the second peak in the graphs

I used a document of 24618 characters, so a large number of messages need to be exchanged between these sockets.

(Two attached screenshots, image-20230522-112608 and image-20230522-112631, showing the CPU and memory graphs during the tests.)

I am a little concerned:

  • Why does the load go up so heavily if we all use the very same document? In my opinion the load should be very low, because there is just one state … for everyone.
  • Also, the vCPU usage is above 0.6 and memory increases by 300 MB for just 35 users on just one document … which is quite a lot.

A small calculation makes this hard for me to understand: 25k characters is around 25 kB, so a really small document.
Hence, 35 users * 25 kB ≈ 875 kB of raw payload, a small load, yet we see such a dramatic jump in memory.

Hence, is there some way to do it more efficiently? 🙂

@kapv89
Owner

kapv89 commented Aug 13, 2023

Hey .. very heavy load at work .. will try to look into this in the coming week.

And no .. no load testing has been done by me ..

@junoriosity
Collaborator Author

junoriosity commented Aug 21, 2023

@kapv89 Many thanks for letting me know 🙂

Please let me know, once you have some updates. 🙂

@junoriosity
Collaborator Author

junoriosity commented Aug 26, 2023

@kapv89 Have you had a chance to look into this issue yet?

Please let me know, if I can be of any help. 🙂

@kapv89
Owner

kapv89 commented Oct 26, 2023

@junoriosity ... @sbalikondwar on Discord was doing some load testing on code from this repo for their company ... I don't know how far they got, but if they are using code from this repo they might have some helpful insights for you

@junoriosity
Collaborator Author

@kapv89 Sounds very interesting, could you tell me more?

@kapv89
Owner

kapv89 commented Oct 27, 2023

They were doing some load testing and were observing increasing memory usage .. the suggestion given was that it might be because of the undo-log, and that it might clear up after connections close .. we didn't connect after that .. you could probably reach out to him

@junoriosity
Collaborator Author

@sbalikondwar Could you give us some input on that matter?

@zyhzsh

zyhzsh commented Nov 3, 2023

@junoriosity Hey, I am curious about how you conducted the load testing and which tool you used.

How did you emulate concurrent users sending WebSocket messages to the server?

I tried using k6 for this purpose, but I couldn't manage to send the correct message, or at least it didn't work as expected.

Could you suggest an alternative way to perform the testing?

@junoriosity
Collaborator Author

@zyhzsh What I did is the following:

  • I spun up a K8s cluster
  • I deployed the pod in the K8s cluster
  • I opened many connections to one document
  • I inserted a large number of images into that document
  • I repeated the last two steps.

That was what caused the issues I mentioned.
