Benchmark results: HerdDB 0.1.0

Alessandro Luccaroni edited this page Oct 11, 2017 · 1 revision

See this page for more details about the benchmark tool used (YCSB) and how to reproduce the benchmark.

Tools and system used

This benchmark compares the throughput of HerdDB 0.1.0 and MySQL 5.6.32 on different disk subsystems (SSD and 15K SAS HDD). To keep results repeatable, each benchmark was run multiple times on the same machine (a physical CentOS 7 server with 32GB of RAM and 8 CPU cores), starting each time with an empty table. The data set fits entirely in memory for both systems (recordcount and operationcount of 1000000).

Please note that this benchmark was used as a tool during the development of HerdDB; a more rigorous analysis with real-world data is needed to correlate these results to real-world scenarios. For example, the access pattern of your application may differ heavily from the ones used in these benchmarks.

You will find the my.cnf and the other configuration files used for HerdDB, YCSB and MySQL in the sections below.
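As a rough sketch of how such a run might be reproduced with YCSB's JDBC binding: the driver class, JDBC URL, credentials and paths below are illustrative assumptions, not taken from this page.

```shell
# Hypothetical YCSB invocation against a local HerdDB server via JDBC.
# All connection settings here are assumptions for illustration.
DB_PROPS="-p db.driver=herddb.jdbc.Driver \
          -p db.url=jdbc:herddb:server:localhost:7000 \
          -p db.user=sa -p db.passwd=hdb"

# Guarded so the sketch is a no-op when YCSB is not installed.
if [ -x bin/ycsb ]; then
  # load phase: 100% INSERT of recordcount rows into an empty usertable
  bin/ycsb load jdbc -P workloads/workloada $DB_PROPS -threads 200
  # run phase: execute operationcount operations with the workload mix
  bin/ycsb run jdbc -P workloads/workloada $DB_PROPS -threads 200
fi
```

The `-threads 200` flag matches the 200-thread parallelism used in the result tables below.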

Results

Data saved on SSD (in ops/sec using 200 thread parallelism):

| Workload | HerdDB 0.1.0 | MySQL 5.6.32 | % ratio | Notes |
|----------|--------------|--------------|---------|-------|
| load phase SSD | 7980 | 4538 | 176 | 100% INSERT |
| workloadA SSD | 9867 | 12024 | 82 | 50% READ - 50% UPDATE |
| workloadB SSD | 18101 | 23686 | 76 | 95% READ - 5% UPDATE |
| workloadC SSD | 20026 | 27249 | 73 | 100% READ |
| workloadD SSD | 19015 | 22074 | 86 | 95% READ (latest rows) - 5% INSERT |
| workloadE SSD | 1480 | 6339 | 23 | 95% SCAN (short range of records) - 5% INSERT |
| workloadF SSD | 8056 | 10059 | 80 | 50% READ - 50% READ,MODIFY,WRITE |
| workloadems1 SSD | 8329 | 7522 | 111 | 32% INSERT - 64% UPDATE - 4% READ |

Data saved on HDD (in ops/sec using 200 thread parallelism):

| Workload | HerdDB 0.1.0 | MySQL 5.6.32 | % ratio | Notes |
|----------|--------------|--------------|---------|-------|
| load phase HDD | 1985 | 465 | 427 | 100% INSERT |
| workloadA HDD | 1768 | 942 | 188 | 50% READ - 50% UPDATE |
| workloadB HDD | 16515 | 9215 | 179 | 95% READ - 5% UPDATE |
| workloadC HDD | 19619 | 25671 | 76 | 100% READ |
| workloadD HDD | 16711 | 9531 | 175 | 95% READ (latest rows) - 5% INSERT |
| workloadE HDD | 1485 | 5831 | 25 | 95% SCAN (short range of records) - 5% INSERT |
| workloadF HDD | 1647 | 1002 | 164 | 50% READ - 50% READ,MODIFY,WRITE |
| workloadems1 HDD | 1856 | 534 | 348 | 32% INSERT - 64% UPDATE - 4% READ |
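The "% ratio" column is HerdDB's throughput expressed as a percentage of MySQL's, so values above 100 mean HerdDB was faster. A quick check against the SSD load phase row:

```shell
# % ratio = HerdDB ops/sec / MySQL ops/sec * 100, rounded to an integer.
# SSD load phase: HerdDB at 7980 ops/sec vs MySQL at 4538 ops/sec.
awk 'BEGIN { printf "%.0f\n", 7980 / 4538 * 100 }'   # prints 176
```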

Configuration files

YCSB ems1 custom workload

workload=com.yahoo.ycsb.workloads.CoreWorkload
recordcount=1000000
operationcount=1000000
insertstart=0
fieldcount=10
fieldlength=100
readallfields=true
writeallfields=false
fieldlengthdistribution=constant
readproportion=0.02
updateproportion=0.64
insertproportion=0.32
readmodifywriteproportion=0.02
scanproportion=0
maxscanlength=1000
scanlengthdistribution=uniform
insertorder=ordered
requestdistribution=latest
hotspotdatafraction=0.2
hotspotopnfraction=0.8
table=usertable
histogram.buckets=1000
timeseries.granularity=1000
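In YCSB the operation proportions of a workload must sum to 1.0. Note that what the result tables above label "4% READ" for ems1 is, according to this file, 2% plain READ plus 2% READ-MODIFY-WRITE. A quick sanity check on the mix:

```shell
# ems1 operation mix: read + update + insert + readmodifywrite
# (scanproportion is 0) should sum to exactly 1.00.
awk 'BEGIN { printf "%.2f\n", 0.02 + 0.64 + 0.32 + 0.02 }'   # prints 1.00
```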

MySQL my.cnf

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

[client]
port    = 3307
socket = /data2/mysql-5.6/data/mysqld.sock
default-character-set=utf8mb4

[mysqld]

# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M

# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin

# These are commonly set, remove the # and set as required.
# basedir = .....
# datadir = .....
# port = .....
# server_id = .....
# socket = .....

# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M

port = 3307

socket = /data2/mysql-5.6/data/mysqld.sock
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
lower_case_table_names=1
back_log = 1000
max_connections = 1000
max_connect_errors = 10
table_open_cache = 8192
max_allowed_packet = 16M
max_heap_table_size = 512M
read_buffer_size = 8M
read_rnd_buffer_size = 64M
sort_buffer_size = 32M
join_buffer_size = 32M
thread_cache_size = 200
thread_concurrency = 32
query_cache_size = 256M
query_cache_limit = 8M
ft_min_word_len = 4
default-storage-engine = InnoDB
thread_stack = 512K
transaction_isolation = REPEATABLE-READ
tmp_table_size = 1G

innodb_buffer_pool_size = 8G
#innodb_buffer_pool_size = 300MB
innodb_data_file_path = ibdata1:10M:autoextend
innodb_write_io_threads = 16
innodb_read_io_threads = 16

innodb_thread_concurrency = 32
innodb_flush_log_at_trx_commit = 1
innodb_log_buffer_size = 32M
innodb_log_file_size = 512M
innodb_log_files_in_group = 4
innodb_max_dirty_pages_pct = 90
innodb_lock_wait_timeout = 50

explicit_defaults_for_timestamp=1
innodb_file_per_table=1
innodb_flush_method=O_DIRECT
innodb_buffer_pool_instances=8
innodb_read_ahead_threshold=0
innodb_io_capacity=20000

sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

[mysqldump]
# Do not buffer the whole result set in memory before writing it to
# file. Required for dumping very large tables
quick
max_allowed_packet = 16M

[mysqld_safe]
# Increase the amount of open files allowed per process. Warning: Make
# sure you have set the global system limit high enough! The high value
# is required for a large number of opened tables
open-files-limit = 16384

HerdDB setenv.sh

# Basic Environment and Java variables

JAVA_HOME=/usr/java/jdk1.8.0_121/
JAVA_OPTS="-XX:+UseG1GC -Duser.language=en -Xmx8g -Xms8g -XX:+AggressiveOpts -XX:+UseFastAccessorMethods -Djava.net.preferIPv4Stack=true -XX:MaxDirectMemorySize=4g -Djava.util.logging.config.file=conf/logging.properties"

if [ -z "$JAVA_HOME" ]; then
  JAVA_PATH=`which java 2>/dev/null`
  if [ "x$JAVA_PATH" != "x" ]; then
    JAVA_BIN=`dirname $JAVA_PATH 2>/dev/null`
    JAVA_HOME=`dirname $JAVA_BIN 2>/dev/null`
  fi
  if [ -z "$JAVA_HOME" ]; then
    echo "JAVA_HOME environment variable is not defined and is needed to run this program"
    exit 1
  fi
fi

HerdDB server.properties

# standalone|cluster
server.mode=standalone

# listening endpoint for client and server-to-server communications
# if you leave host empty, discovery of the local hostname will be attempted
server.host=
# if you set port to zero, a random port will be used at every boot
server.port=7000

# id of the node in the cluster
# when empty the system will choose a random id and write it to server.data.dir/nodeid file
server.node.id=

# main directory, other directories by default are relative to this directory
server.base.dir=dbdata

# directory to use to store data
server.data.dir=data

# temporary directory for swap
server.tmp.dir=tmp

# metadata directory. in clustered mode metadata is stored on ZooKeeper, not in this directory
server.metadata.dir=metadata

# this directory contains the transaction log
server.log.dir=txlog


# SSL configuration
# if no file is configured a self-signed certificate will be generated at every boot
server.ssl=false

# for server.mode=cluster
server.zookeeper.address=localhost:2181
server.zookeeper.session.timeout=40000
server.zookeeper.path=/herd


# bookkeeper client parameters
server.bookkeeper.ensemble.size=1
server.bookkeeper.write.quorum.size=1
server.bookkeeper.ack.quorum.size=1

# bookkeeper client parameters. for a fault-tolerant system you should have at least 3 bookies and configure these values
#server.bookkeeper.ensemble.size=3
#server.bookkeeper.write.quorum.size=2
#server.bookkeeper.ack.quorum.size=2

# retention period, in milliseconds, of bookkeeper ledgers
server.bookkeeper.ledgers.retention.period=34560000

# start a bookie inside the same JVM (if the server is started in cluster mode)
server.bookkeeper.start=true
# if you set port to zero, a random port will be used and then persisted to the bookie_port file
# bookkeeper uses local hostname and this port to identify bookies
server.bookkeeper.port=-1

# max "logical" size in bytes of a data page. Defaults to 1MB
#server.memory.page.size=

# period of time in milliseconds between forced checkpoints. Defaults to 15 minutes
#server.checkpoint.period=

# Maximum target time in milliseconds to spend during standard checkpoint operations. A checkpoint
# may still run longer than this in order to complete page flushes. If set to -1 checkpoints have
# no time limit. Be aware that setting this parameter too low can hurt performance in the long run
# by increasing page pollution with dirty, unreclaimed records: in many cases it is safer to
# configure a wider dirty-page threshold.
#server.checkpoint.duration=

# Maximum target time in milliseconds to spend, during standard checkpoint operations, on
# compacting smaller pages. It should be less than the maximum checkpoint duration configured by
# "server.checkpoint.duration". If set to -1 checkpoints have no time limit. Regardless of its
# value, at least one page will be compacted for each checkpoint.
#server.checkpoint.compaction=

# Maximum dirty bytes percentage at which a page will be considered for rebuild during a checkpoint.
# This value must be between 0 and 1.0.
#server.checkpoint.page.dirty.max.threshold=

# Minimum byte fill percentage at which a page will be considered for rebuild during a checkpoint.
# This value must be between 0 and 1.0.
#server.checkpoint.page.fill.min.threshold=

# option to halt the JVM in case of error during the boot of a tablespace
# use this option to debug boot problems
server.halt.on.tablespace.boot.error=true

# users file; you should set its UNIX permissions properly
server.users.file=../conf/users

# overall limit on memory usage. it defaults to the maximum heap size configured on the JVM
#server.memory.max.limit=

# maximum amount of memory (in bytes) used for data. Defaults to 30% of server.memory.max.limit
#server.memory.data.limit=
#server.memory.data.limit=104857600
server.memory.data.limit=4294967296

# maximum amount of memory (in bytes) used for primary indexes. Defaults to 20% of server.memory.max.limit
#server.memory.pk.limit=
#server.memory.pk.limit=2147483648
server.memory.pk.limit=2147483648

# enable/disable JMX
#server.jmx.enable=true