This repository has been archived by the owner on Mar 26, 2024. It is now read-only.

Batch metrics discussion #26

Open
waisbrot opened this issue Jan 13, 2017 · 3 comments

Comments

@waisbrot (Contributor)

@wk8 @JoshRagem

Per #23, I think it'd be nice if all metrics followed a path that didn't get bottlenecked by worker_pool.

I didn't do any testing when I put the pool in -- we were just writing a bunch of other code that needed to be pooled to constrain output rate and it felt natural to write the same pattern.

My first thought is to drop the pool and just have a single gen_server sending all the UDP traffic. The only reason to keep the pool would be if the UDP send itself were a bottleneck. I'll try to do a little benchmarking of that this weekend if nobody beats me to it.
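A minimal sketch of the single-gen_server idea (module and function names here are illustrative, not from this repo): one process owns the UDP socket, and callers fire metrics with a cast so they never block on the send path.

```erlang
-module(metrics_sender).
-behaviour(gen_server).

-export([start_link/2, send/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link(Host, Port) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, {Host, Port}, []).

%% Fire-and-forget: no worker checkout, no reply to wait for.
send(Packet) ->
    gen_server:cast(?MODULE, {send, Packet}).

init({Host, Port}) ->
    %% Open one UDP socket on an ephemeral local port, owned by this process.
    {ok, Socket} = gen_udp:open(0, [binary]),
    {ok, #{socket => Socket, host => Host, port => Port}}.

handle_call(_Req, _From, State) ->
    {reply, ok, State}.

handle_cast({send, Packet}, #{socket := S, host := H, port := P} = State) ->
    %% For metrics, a failed UDP send is deliberately ignored.
    _ = gen_udp:send(S, H, P, Packet),
    {noreply, State}.
```

The trade-off versus a pool: all sends serialize through one mailbox, which is exactly what the benchmarking would need to measure.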

@JoshRagem

I will not beat you to it, but I think it would be useful to understand why wpool was a bottleneck. Was it because the pool became exhausted, so requests got queued up waiting for the next available worker? It looks like the pool is hardcoded to 10 workers; perhaps boosting that number is a simpler solution.
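Making the hardcoded size configurable is a small change; a sketch using wpool's standard `{workers, N}` start option (the application and pool names below are hypothetical, not taken from this repo):

```erlang
start_pool() ->
    %% Read the pool size from the app environment, defaulting to the
    %% current hardcoded value of 10.
    Workers = application:get_env(my_app, metrics_pool_size, 10),
    wpool:start_pool(metrics_pool, [{workers, Workers}]).
```

That said, if the overhead is per-checkout rather than pool exhaustion, a bigger pool won't help.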

@wk8 (Contributor) commented Jan 13, 2017

From what I've seen, the bottleneck really is the constant checking of workers in and out of the pool. I'm not entirely sure wpool was designed for such lightweight work; here the "actual work" requires less processing than the wpool overhead itself.

@wk8 (Contributor) commented Jan 20, 2017

@JoshRagem @waisbrot I've been playing with using a NIF for that (https://github.com/wk8/erlang-dogstatsd); it seems to be working great in our use case.
