Hi -- My team is looking to examine the impact of network latency on database performance. To do that, we have two database servers ("Targets") set up: one near the client and one at a distance away, network-wise. I would like to ramp up transaction rates to compare performance as the number of queries sent from client to Target increases. Can HammerDB rate-limit the number of transactions it sends to the target database? For example, start with 1 request per second, then 10, then 100, and so on. In this manner we can see how the compounding effect of network latency builds up and affects total run time.
Replies: 1 comment
Yes, there is a built-in rate limiter in the TPROC-C workload called keying and thinking time. When this is enabled, keying time is a fixed delay and thinking time is a random delay up to the specified value. The defaults are taken from the TPC-C specification; for example, New Order has a keying time of 18 seconds and a thinking time of 12 seconds. You can of course modify the values in the script to limit the rate of transactions however you wish.
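As a rough illustration, the relevant fragment of the generated TPROC-C driver script looks something like the sketch below (based on the PostgreSQL script; proc names and exact values can differ between databases and HammerDB versions). Lowering the arguments to `keytime` and `thinktime` raises the per-session transaction rate:

```tcl
# Sketch of the New Order section of the TPROC-C driver script: when
# KEYANDTHINK is true, each transaction is wrapped in keying and thinking time.
if { $KEYANDTHINK } { keytime 18 }          ;# fixed keying time before New Order
neword $lda $w_id $w_id_input $RAISEERROR   ;# run the New Order transaction
if { $KEYANDTHINK } { thinktime 12 }        ;# random think time up to 12 seconds afterwards
```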
It is important to be aware, however, that typical transaction times are in the low milliseconds, so adding any keying and thinking time to the default workload, even 1 second, will considerably lower throughput. You will therefore need a larger number of sessions running at this lower rate to put the database under load. For this purpose there is the event-driven scaling feature https://www.hammerdb.com/docs/ch04s06.html#d0e1716, which enables one virtual user to manage multiple sessions with the keying and thinking time handled asynchronously. Using the default keying and thinking time you can expect approximately 1 NOPM per connection (the specification calls for 10 connections per warehouse and 9-12.86 NOPM per warehouse, which works out to roughly 1 NOPM per connection). As an example, HammerDB on PostgreSQL with 10,000 total connections and 1,000 warehouses drove 10,000 NOPM at 1-2% database server CPU utilization. Scaling up from this point will need considerably more storage as you increase the virtual users and warehouses. With event-driven scaling you can also reduce the keying and thinking time to increase throughput, but be aware that the asynchronous handling applies only to the keying and thinking time, so it is not applicable to scale up by setting keying and thinking time to 0. Nevertheless, as a general concept it is this keying and thinking time approach that gives us 'fixed throughput' that increases predictably as warehouses and virtual users are added. See the CLI sketch below.
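For reference, here is a minimal hammerdbcli sketch of switching on event-driven scaling for a PostgreSQL target. The `pg_async_scale` and `pg_async_client` parameter names are assumptions to be checked against the linked documentation, and other databases use their own prefix:

```tcl
# hammerdbcli sketch: 10 virtual users each managing 100 asynchronous client
# sessions (1,000 connections total), with keying and thinking time handled
# asynchronously per session.
dbset db pg
diset tpcc pg_driver timed
diset tpcc pg_keyandthink true    ;# keying and thinking time must stay enabled
diset tpcc pg_async_scale true    ;# assumed parameter: enables event-driven scaling
diset tpcc pg_async_client 100    ;# assumed parameter: sessions per virtual user
loadscript
vuset vu 10
vucreate
vurun
```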