PAYMENT results much lower than expected? #112
-
As you can see in the image, the PAYMENT column results are far, far lower than the others. This doesn't seem right to me, as the column referenced is pulled from aggregate data of tests running a million transactions. Is this an error on my end, or am I just misunderstanding something? Any help would be great.
Replies: 9 comments
-
Hi, you need to provide a lot more information about your test environment to start troubleshooting. In particular, the database, its version, and the storage engine you are testing are a great start. Then the operating system of the client and server, the HammerDB version, the database server hardware (CPU, memory and disk), how you are running the tests, and the sort of performance you are seeing. The more information you can provide the better.

On the HammerDB side, have you set the parameter "RAISEERROR"? This is a common parameter for all databases. By default errors are ignored; with this parameter set, errors will be caught and reported. Response times are measured from the client side, so if a stored procedure errors the response time will drop. You can also use the percentile response times to track a run over time and see if there is a point where the response time changes, indicating an error. It is also well worth checking the database logs and system logs to see if there are any errors reported on the database side, such as deadlocks or timeouts (Payment does a lot of updates and also inserts into the history table, which will continue to grow).
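If you are scripting with the CLI rather than the GUI, a minimal sketch along these lines should turn it on; the raiseerror and timeprofile key names assume the Oracle TPROC-C dictionary and may differ by version, so confirm them with print dict first:

```tcl
# HammerDB CLI (Tcl) sketch -- key names assume the Oracle TPROC-C
# dictionary; run "print dict" to confirm them on your build.
dbset db ora                 ;# select the Oracle workload
diset tpcc raiseerror true   ;# report stored procedure errors instead of ignoring them
diset tpcc timeprofile true  ;# log percentile response times per procedure
print dict                   ;# verify the settings took effect
```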
-
Thank you for the response. I am totally new to benchmark testing and HammerDB, so it is likely something I have done incorrectly. I am using Oracle 18c and the latest HammerDB version, both running locally on a Windows 10 PC with the following specs:
The million-transaction test using a single user took roughly 45 minutes, and as you can see from the graph the performance is what I expected, with the exception of the PAYMENT procedure. I have not been logging CPU or disk usage as I have been unable to get the HammerDB agent to connect, though I have noticed via Task Manager that both peak at roughly 80% usage with no other programs or background processes running. Looking at the log files, the 99th percentile for the PAYMENT procedure stays at a consistent rate from the beginning of the test, and the MAX is only ever a couple of hundred microseconds slower throughout; this is true for each of the five separate, identical tests I ran. Also, if you could describe what each procedure does in relation to TPC-C/CRUD attributes, that would be excellent, as I have not been able to locate this in the documentation. After enabling the RAISEERROR parameter, any tests now fail with the error code ORA-01438.
-
Yes, this would account for a zero response time for a stored procedure: ORA-01438: value larger than specified precision allowed for this column. (You may get more information from the Oracle alert log.) There is a previous discussion of this error here: https://sourceforge.net/p/hammerdb/discussion/292313/thread/cd11ac69/
-
Hello Steve, thanks for the swift reply. There is just a single warehouse being created. The full error message is as follows:
Should I be using more than one warehouse for tests this large?
-
As a side note, because I am using the default schema, would it be easier to fix if I just dropped all the tables and started again?
-
Hi, yes you should be creating more than one warehouse; this blog post should help you:
https://www.hammerdb.com/blog/uncategorized/how-many-warehouses-for-the-hammerdb-tpc-c-test/

Clearly, with all of the inserts and updates focusing on a single warehouse, there has been an overflow somewhere. Creating more warehouses will fix this.

Also yes, for Oracle the best way is "drop user tpcc cascade" and then just recreate the schema a bit larger, using the advice from the blog post above.
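If you prefer the CLI, a rebuild sketch under the same assumptions as before (Oracle dictionary key names, to be confirmed with print dict):

```tcl
# Hypothetical rebuild of the Oracle TPROC-C schema from the HammerDB
# CLI (Tcl), after "drop user tpcc cascade" has been run in SQL*Plus.
dbset db ora
diset tpcc count_ware 50   ;# e.g. 50 warehouses rather than the default 1
diset tpcc num_vu 4        ;# load the schema with 4 virtual users in parallel
buildschema                ;# creates the user, tables and stored procedures
```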
-
I see; I think I must have confused warehouses in the HammerDB context with data warehouses, and thus never thought to change it. This does mean that I will have to redo all of my tests from the previous two days, but it's good to know why the tests failed. I will likely drop everything and start fresh just to be on the safe side. I currently have it set to 100GB; do you believe that is sufficient, or would you suggest more? Thank you for all the help; this benchmark is being used for my dissertation, so I desperately needed to fix this problem. As a side note, are you willing to give me permission to quote/reference you and your solution in my summary?
-
For a database on a 4-core server, a minimum of 25 warehouses up to around 50 should be enough to reach maximum performance with the default configuration (without keyandthink time). It is typically 100MB per warehouse, but the schema will grow. The key bottleneck is often the redo logs, so make sure these are sized sufficiently so that you do not see 'checkpoint not complete' in the alert log. The graphical Oracle metrics view in HammerDB will show you the bottlenecks. Yes, of course it is fine to quote/reference, and good luck with the dissertation.
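Once the schema is rebuilt, a timed run from the CLI would look roughly like this; the ora_driver key name is again an assumption to verify with print dict:

```tcl
# Sketch of a timed TPROC-C run in the HammerDB CLI (Tcl).
diset tpcc ora_driver timed  ;# timed driver reports transaction rates at the end
diset tpcc rampup 2          ;# minutes of rampup before measurement starts
diset tpcc duration 5        ;# minutes of measured run time
vuset vu 4                   ;# one virtual user per core is a reasonable start
vucreate
vurun                        ;# start the virtual users
runtimer 600                 ;# keep the session alive until the run completes
vudestroy
```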
-
The system is working now with the warehouse increase, shown in green. Thank you again for the help.