Using aedes-persistence-redis with Aedes results in slow client connections #10
Can you upload a Node.js script to reproduce the problem?
I don't have a Node.js script; I have tried with a Java-based client program. The program has a while loop that creates a new client, connects to the server, and after connecting subscribes to a topic whose value is the same as the clientId.
Anything that can reproduce the problem is good, otherwise it will be extremely hard to fix.
cc @GavinDmello
Yea. This is an issue. We have a lot of anonymous functions in the persistence which are preventing optimizations, I think. Also, ResultsHolder from fastseries always seems to be un-optimized.
@GavinDmello removing the anonymous functions should be feasible, do you want to send a PR? Most are forEach/reduce etc. that need to be changed into straight for loops. ResultsHolder should be optimized if the rest is optimized. There might be a small issue somewhere in there too. At any rate, we need a script to verify this problem is fixed.
@mcollina Yea. That would be cool. I'll make a PR.
@mcollina, kindly share the client script you used to test 900k connections, so that I can modify it for my use case and test.
@rasalasantosh as said elsewhere, I did not do that test myself.
@rasalasantosh can you try version v2.6.1 and see if it improves things?
@mcollina, is it Redis v2.6.1? If so, I will try and let you know.
@rasalasantosh v2.6.1 of this module
@GavinDmello, I installed aedes using npm i aedes [email protected] --save and tried again. It was still taking around 11 minutes for 25k connections, but without persistence it was taking less than a minute for 50k connections. I also tried Mosca with persistence; it was taking around 3 minutes for 25k connections.
@rasalasantosh what options are you connecting with? Are you also subscribing to a topic?
@mcollina, yes, after the connection is successful I am subscribing to a topic as well.
@rasalasantosh you should check v2.6.3, it has Lua script caching, which should improve the throughput.
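For context, Lua script caching generally means loading a script into Redis once and invoking it by its SHA (EVALSHA) afterwards, instead of resending the script body on every call. The thread doesn't show the module's actual scripts, but a minimal sketch of that pattern with ioredis (the client this module uses), with a hypothetical script and key, would be:
var Redis = require('ioredis')
var redis = new Redis()
// defineCommand loads the script once and then calls it via EVALSHA,
// so the script body is not re-sent on every invocation
redis.defineCommand('getSubscription', {
  numberOfKeys: 1,
  lua: 'return redis.call("HGET", KEYS[1], ARGV[1])'
})
// hypothetical usage: read one field from a subscription hash
redis.getSubscription('client:client-1', 'some/topic', function (err, value) {
  if (err) throw err
  console.log(value)
})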
@mcollina, I have tested again, but it gives the same results, i.e. 11 minutes for 25k connections.
@rasalasantosh what options are you connecting with? Are you also subscribing to a topic?
@mcollina, I am connecting with clean session = true, QoS = 1, keepalive = 60 seconds. After connecting we subscribe to a topic which is the same as the clientId. Please also share the test script, so I can verify on my side with your script as well.
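Not the original test script, but a minimal sketch of the connection pattern described above, using the mqtt.js client and assuming a local broker and a single hypothetical clientId, would look roughly like this:
var mqtt = require('mqtt')
var clientId = 'client-1' // hypothetical; the real test loops over many ids
var client = mqtt.connect('mqtt://localhost:1883', {
  clientId: clientId,
  clean: true, // clean session = true
  keepalive: 60 // seconds
})
client.on('connect', function () {
  // subscribe to a topic equal to the clientId, at QoS 1
  client.subscribe(clientId, { qos: 1 }, function (err) {
    if (err) console.error('subscribe failed', err)
  })
})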
@rasalasantosh Could you try it with this? https://gist.github.com/GavinDmello/dda612661df4db11ba84b41097470c95
@GavinDmello, Aedes is taking 142 seconds to connect 50k connections without subscriptions, but 1360 seconds to connect with subscriptions. Please find below the script used to test subscriptions.
@rasalasantosh with Redis configured as persistence, is that correct? I think the
@rasalasantosh check v2.6.4.
@mcollina with v2.6.4 I am getting the same results.
@rasalasantosh Try removing all unwanted keys from Redis. Also, is Redis running on the same box? Maybe you're running out of memory.
@GavinDmello, I tried after flushing Redis, but I am getting the same results. Moreover, Redis, Aedes and the client are running on separate instances.
Mosca is using a hashtable for the retained messages, while we are just using keys and h here. https://github.com/mcollina/mosca/blob/master/lib/persistence/redis.js#L241-L267. Maybe that's the approach we should take here as well. I thought using SCAN was smart.
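To illustrate the two approaches being compared (not the module's actual key layout; the key names here are made up): with a single hash, all retained messages live under one key and can be fetched in one call, whereas the per-key approach has to walk the keyspace with SCAN. A rough ioredis sketch:
var Redis = require('ioredis')
var redis = new Redis()
// hash approach (Mosca-style): one hash, one field per topic
redis.hset('retained', 'some/topic', 'payload')
redis.hgetall('retained', function (err, all) {
  // all = { 'some/topic': 'payload', ... } in a single round trip
})
// per-key approach: one key per topic, rediscovered by scanning the keyspace
redis.set('retained:some/topic', 'payload')
var scan = redis.scanStream({ match: 'retained:*', count: 100 })
scan.on('data', function (keys) {
  // each 'data' event carries a batch of matching key names
})
scan.on('end', function () {
  // scan finished
})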
@mcollina I'm wondering if this is due to the Lua script. We eventually run into issues in production once the subscription count hits a certain point. I missed this section in the Redis docs when testing the Lua changes: "...while the script is running no other client can execute commands."
@toddtreece that is probably the reason.
I'm ok with changes, as long as they are ported to the other persistences as well.
(sorry, I was on mobile)
@behrad How many subscriptions are you dealing with? I did not use any SCAN/HSCAN implementation of Redis because, if COUNT is not set to an optimal value, the performance isn't good. Also, the Redis server was doing a lot of heavy lifting in terms of CPU when many Aedes instances were connected.
Maybe a few million in production; however, the current implementation of aedes-persistence-redis will raise issues around a few hundred thousand.
Please check Mosca's Redis implementation here: https://github.com/mcollina/mosca/blob/master/lib/persistence/redis.js#L137 We can't set an upper limit for the client subscription count.
This may be an issue with the core architecture/design of a multi-process/multi-instance setup around a single shared Redis model, I believe. Do you think your concern is the same as mcollina/mqemitter-child-process#1 @GavinDmello?
I think you can fetch all keys in a list with a -1 arg. My concern is using SCAN in hot paths. The COUNT of 25000 which is present in Mosca may be good enough for a few million keys, but as the subscriptions increase the number will have to be bumped.
You mean
That is the chunk buffer size, and my tests show that it should be lowered even to 5000 or lower.
regarding
I'm 👍 on removing MULTI, I do not think it would cause any issue in practice.
A Set would've been a good option. I wasn't aware of the handleSubscribe issue.
I'm okay with this if nutcracker support is added.
@behrad SCAN is not supported on nutcracker either.
SSCAN, ZSCAN and HSCAN are supported though.
Unfortunately yes, and it seems we SHOULD have a container for all subs/clients:
Currently, this module stores both: e.g. a client's subscription is stored both inside the client's hash and inside the topic hash (only the clientId would suffice in the latter).
How do you think SSCAN will compare to SMEMBERS for a large (over a million entries) set @GavinDmello?
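For reference, a sketch of the two calls on a set like the sub:client set used in the snippet below (SMEMBERS returns the whole set in one potentially huge reply, SSCAN walks it cursor-style in batches), using ioredis:
var Redis = require('ioredis')
var redis = new Redis()
// SMEMBERS: one command, one reply; on a million-entry set the server is
// busy for the whole call and the client allocates the full result at once
redis.smembers('sub:client', function (err, members) {
  console.log('got', members.length, 'members in one shot')
})
// SSCAN: cursor-based, roughly `count` members per round trip, so the
// server is never blocked for long on a single call
var scan = redis.sscanStream('sub:client', { count: 5000 })
var total = 0
scan.on('data', function (members) {
  total += members.length
})
scan.on('end', function () {
  console.log('got', total, 'members in batches')
})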
To test further, I migrated my current actual subscription data (under Mosca) to the Aedes format under Redis. When I launch Aedes, memory grows until the process halts. I noticed it with the following:
// setup assumed by the snippet below (through2, throughv, pump, ioredis)
var through = require('through2')
var throughv = require('throughv')
var pump = require('pump')
var Redis = require('ioredis')
var ioredisClient = new Redis()
console.time('@@@@@@@@@@@@@@@@@@ READY');
// push each key from the SMEMBERS result downstream as an individual chunk
function split (keys, enc, cb) {
  for (var i = 0, l = keys.length; i < l; i++) {
    this.push(keys[i])
  }
  cb()
}
var splitStream = through.obj(split);
var count = 0; var result = 0;
// fetch the full hash behind every subscription key, in parallel via throughv
var hgetallStream = throughv.obj(function getStream (chunk, enc, cb) {
  ioredisClient.hgetallBuffer(chunk, function (e, r) { result++; cb(); });
  count++;
}, function emitReady (cb) {
  console.timeEnd('@@@@@@@@@@@@@@@@@@ READY');
  console.log('count ', count);
  cb()
});
console.time('smembers');
// load the whole key set in one SMEMBERS call, then feed it to the pipeline
ioredisClient.smembers('sub:client', function lrangeResult (err, results) {
  console.timeEnd('smembers');
  setInterval(() => { console.log('Count=' + count, ' Result=' + result) }, 1000);
  if (err) {
    splitStream.emit('error', err)
  } else {
    splitStream.write(results)
    splitStream.end()
  }
});
pump(splitStream, hgetallStream, function pumpStream (err) {})
Surprisingly, when I moved the same code above (which is not processing keys) inside
My heap dump shows it's inside
You don't need to use streams here. You will get some speed bump from removing them.
@mcollina Could it be Qlobber? We're adding all the decoded packets to Qlobber versus the topic.
@GavinDmello not sure, but I don't think so.
So strange!!! I figured it out:
No, the Qlobber part of the code was commented out.
by removing streams and putting
You can use something like http://npm.im/fastq.
My benchmarks ran just now:
So it seems changing to fastq won't help that much :) Before I go and test with the Qlobber code activated, there are some optimizations I want to mention:
On your results: can I see your code? On 1: go ahead! On 2: I'm really against going multi-process; this is very deployment-specific. On 3: Aedes does not listen automatically. That part of the logic in Mosca was super complicated, so it's up to you. If you want to send an update to the README, go ahead. If we are missing some events for this, please send a PR and we'll add them.
// this snippet runs inside the persistence, so `that`/`this` refer to the
// persistence instance and `_db` is its ioredis client
console.time('@@@@@@@@@@@@@@@@@@ READY')
var count = 0
var result = 0
var queue = require('fastq')(worker, 100)
queue.drain = function () {
  console.timeEnd('@@@@@@@@@@@@@@@@@@ READY')
  console.log('count ', count)
}
// each worker fetches one subscription hash from Redis
function worker (arg, cb) {
  that._db.hgetallBuffer(arg, function (e, r) { result++; cb(); })
  count++;
}
this._db.smembers(subscriptionsKey, function lrangeResult (err, results) {
  // push every subscription key onto the queue; the workers fetch the hashes
  for (var i = 0, l = results.length; i < l; i++) {
    queue.push(results[i])
  }
})
So I'll suspend this, and go for higher-priority tasks like number 1.
Great. I should check if the persistence API already exposes
I think 1. might be the path that leads to the best benefits.
Yes, sure. Can you also clarify these extra details for me, @mcollina?
throughv parallelizes, through2 does things in series. You can play with throughv concurrency by setting a highWaterMark. pump makes sure all streams are closed if one errors.
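A small sketch of that pipeline shape (hypothetical transform, not the module's actual code), assuming throughv accepts the same options object as through2: the parallel stage keeps up to highWaterMark chunks in flight, the through2 stage handles one chunk at a time, and pump tears both down if either errors:
var throughv = require('throughv')
var through = require('through2')
var pump = require('pump')
// parallel transform: up to 64 chunks processed concurrently
var parallel = throughv.obj({ highWaterMark: 64 }, function (chunk, enc, cb) {
  // hypothetical async work per chunk (e.g. a Redis lookup)
  setImmediate(function () { cb(null, chunk) })
})
// serial stage: chunks arrive here one at a time
var sink = through.obj(function (chunk, enc, cb) {
  cb()
})
pump(parallel, sink, function (err) {
  // pump destroys both streams if either one errors
  if (err) console.error('pipeline failed', err)
})
parallel.write('a-chunk')
parallel.end()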
Thank you man 👍 and have you got any clues why
This issue should be resolved by #31
When aedes-persistence-redis is used as the persistence with Aedes, client connections are very slow. For 20k client connections it was taking around 10 minutes, but without persistence configured connections are very fast: around 50k connections in less than a minute.
Please find below the code used to run the Aedes server.
var redis = require('mqemitter-redis');
var aedesPersistenceRedis = require('aedes-persistence-redis');

var mq = redis({
  port: 6379,
  host: '172.31.38.96',
  db: 12
});

var persistence = aedesPersistenceRedis({
  port: 6379,
  host: '172.31.38.96'
});

var aedes = require('aedes')({
  mq: mq,
  persistence: persistence
})

var server = require('net').createServer(aedes.handle)
var port = 1883

server.listen(port, function () {
  console.log('server listening on port', port)
})