Support Futures / Asynchronous Results #6
I just found, to my surprise, that Cask doesn't seem to have any support for returning a Future of a response instead of a response. TBH, I'm shocked. Is the idea that I block the thread until I have my response ready? Or am I missing something? I can't see anything about this in the docs.
@malcolmredheron Yes, blocking a thread is generally fine. What RPS and how many concurrent requests are you seeing that make this a problem for you?
Well, we aren't in production yet, so right now anything would be fine. We have a server that uses Akka Streams and actors, and I hate the result. While converting it to Cask and Castor I noticed that we can't return a Future and got very confused. Everything else of yours that I have seen has been so thoughtfully designed -- is there a principle behind this decision? I understand that I could tie up a thread while I talk to my backends, but I have Future-based APIs for those things and Scala makes Futures pretty easy. Why do you prefer blocking in this case?
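For concreteness, here is roughly what the blocking version would look like on our side -- a minimal sketch, assuming the standard cask.MainRoutes API, where `fetchUser` is a made-up stand-in for the Future-based backend client we already have:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

object BlockingExample extends cask.MainRoutes {

  // Hypothetical stand-in for an existing Future-based backend client
  def fetchUser(id: String): Future[String] = Future.successful(s"user-$id")

  @cask.get("/user/:id")
  def user(id: String): String = {
    // Block the request thread until the backend Future completes
    Await.result(fetchUser(id), 5.seconds)
  }

  initialize()
}
```

So the question is whether `Await.result` on every request is really the intended pattern.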
Yes, the principle is that when discussing performance you need profiles and benchmarks. If you do not know your performance bottlenecks, performance requirements, or performance limits, worrying about tying up threads is a 100% waste of time. Just spawn more threads, and when that starts causing problems come back and we can have a more productive discussion on how your service can be optimized.
I totally agree about the importance of profiling and tuning real systems, not imaginary ones. And I want to use Cask; so far it's been clean and simple. At the same time, I'm trying to make decisions that at least roughly set me up for future success. A thread, with its own stack, takes much more memory than a Future. Has that increased memory usage never been a problem in your experience?
It has not caused problems in my experience, but everyone's circumstances are different, so it's hard to generalize. For example, my work has largely been in B2B and internal-facing systems, which have very different performance characteristics than consumer-facing web, ad-tech, and other vertical industries. What I can generalize is that for the longest time, Youtube and Dropbox were mostly written in Python, Stripe and Github and Shopify are in Ruby, and Facebook was in vanilla PHP: all simple blocking tech stacks, in addition to being 20-40x slower than the JVM. If you think you're going to have more traffic than Youtube/Shopify/Facebook then maybe you need to worry about asynchrony and threading overhead, but for the bulk of folks who aren't going to be the next Youtube/Shopify/Facebook, worrying about thread blocking overhead is a complete waste of time.
I think lihaoyi is right here; at least at Alibaba, we mostly write blocking code. In a recent project, where we use Netty, fastparse, and Pekko as a gateway to translate Taobao into Taobao English, the code is all async. Still, it is really very hard to reason about. Yes, performance is good, but it is not easy for others to maintain. I think we can use virtual threads, where your blocking code runs in a virtual thread. At least, it will not be a problem if you migrate to JDK 21/24, or to JDK 21 if your company has a JDK team that backports JEP 491 to it.
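A minimal sketch of what I mean, assuming JDK 21+ and nothing Cask-specific: just back an ExecutionContext with `Executors.newVirtualThreadPerTaskExecutor()`, so each blocked call parks a cheap virtual thread instead of an OS thread (the `blockingBackendCall` here is made up for illustration):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object VirtualThreadExample {
  // JDK 21+: one virtual thread per task, so blocking only parks a virtual thread
  implicit val virtualThreadEc: ExecutionContext =
    ExecutionContext.fromExecutorService(Executors.newVirtualThreadPerTaskExecutor())

  // Stands in for a blocking backend call (JDBC, blocking HTTP client, etc.)
  def blockingBackendCall(): String = {
    Thread.sleep(100)
    "response"
  }

  def main(args: Array[String]): Unit = {
    val result = Future(blockingBackendCall()) // runs on a virtual thread
    println(Await.result(result, 5.seconds))
  }
}
```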
OK, thanks everyone. I'll set my fears aside for now :-) @He-Pin, one question, though: was your hard-to-maintain code using Futures or callbacks? I have so much Future-oriented code in the browser already that I assume anyone on my team will be familiar with it.
@malcolmredheron no, at least one promise is a single queue, order is a pain.
@shanielh Hi, the virtual thread support is in 0.9.6 and is quite simple; I think you can take a look at it.