fixing multi runner #9

Open · wants to merge 10 commits into base: master
12 changes: 6 additions & 6 deletions README.md
@@ -26,7 +26,7 @@ Multi-node client testing can be used to either test a single namespace solution

## Prerequisite

The script assumes that [kdb+ is installed](https://code.kx.com/q/learn/install/). If the q binary is not on the path or `QHOME` differs from `$HOME/q` then you need to set `QHOME` in `config/kdbenv`.
The script assumes that [kdb+ is installed](https://code.kx.com/q/learn/install/). If the q home differs from `$HOME/q` then you need to set `QHOME` in `config/kdbenv`.

The script assumes that the following commands are available - see `Dockerfile` for more information
* yq
@@ -139,7 +139,7 @@ $ ./multihost.sh $(nproc) full delete
```

### Running several tests with different process count
If you are interested in how the storage medium scales with the number of parallel requests, you can run `runSeveral.sh`. It simply calls `mthread.sh` with different process counts and processes the logs to generate a result CSV file. The results are saved in `results/aggr_total.csv`, but this can be overridden by a command-line parameter.
If you are interested in how the storage medium scales with the number of parallel requests, you can run `runSeveral.sh`. It simply calls `mthread.sh` with different process counts and processes the logs to generate a result CSV file. The results are saved in `results/throughput_total.csv`, but this can be overridden by a command-line parameter.
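For example, assuming the script is run from the repository root, the aggregated CSV can be redirected by passing the target path as the first argument (the path below is only illustrative):

```bash
# collect the scaling results into a custom file instead of the default
$ ./runSeveral.sh /mnt/results/throughput_total.csv
```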


### Results
@@ -162,8 +162,8 @@ The bash script starts a controller kdb+ process that is responsible to start each
A Docker image is available for nano on GitLab and on Nexus:

```bash
$ docker pull registry.gitlab.com/kxdev/benchmarking/nano/nano:2.4.2
$ docker pull ext-dev-registry.kxi-dev.kx.com/benchmarking/nano:2.4.2
$ docker pull registry.gitlab.com/kxdev/benchmarking/nano/nano:2.5.4
$ docker pull ext-dev-registry.kxi-dev.kx.com/benchmarking/nano:2.5.4
```

The nano scripts are placed in the docker directory `/opt/kx/app` - see `Dockerfile`
@@ -180,8 +180,8 @@ By default `flush/directmount.sh` is selected as the flush script which requires
Example usages:

```bash
$ docker run --rm -it -v $QHOME:/tmp/qlic:ro -v /mnt/$USER/nano:/appdir -v /mnt/$USER/nanodata:/data --privileged ext-dev-registry.kxi-dev.kx.com/benchmarking/nano:2.5.1 4 full delete
$ docker run --rm -it -v $QHOME:/tmp/qlic:ro -v /mnt/$USER/nano:/appdir -v /mnt/storage1/nanodata:/data1 -v /mnt/storage2/nanodata:/data2 -v ${PWD}/partitions_2disks:/opt/kx/app/partitions:ro -e FLUSH=/opt/kx/app/flush/noflush.sh -e THREADNR=5 ext-dev-registry.kxi-dev.kx.com/benchmarking/nano:2.5.1 4 full delete
$ docker run --rm -it -v $QHOME:/tmp/qlic:ro -v /mnt/$USER/nano:/appdir -v /mnt/$USER/nanodata:/data --privileged ext-dev-registry.kxi-dev.kx.com/benchmarking/nano:2.5.4 4 full delete
$ docker run --rm -it -v $QHOME:/tmp/qlic:ro -v /mnt/$USER/nano:/appdir -v /mnt/storage1/nanodata:/data1 -v /mnt/storage2/nanodata:/data2 -v ${PWD}/partitions_2disks:/opt/kx/app/partitions:ro -e FLUSH=/opt/kx/app/flush/noflush.sh -e THREADNR=5 ext-dev-registry.kxi-dev.kx.com/benchmarking/nano:2.5.4 4 full delete
```

## Technical Details
16 changes: 6 additions & 10 deletions config/kdbenv
@@ -1,16 +1,12 @@
if ! command -v q &> /dev/null
then
export QHOME=$HOME/q # SET QHOME MANUALLY
export QHOME=$HOME/q # SET QHOME MANUALLY

if [ `uname -s` = "Darwin" ]; then
QSUBDIR=w64
else
QSUBDIR=l64
fi
export QBIN="$QHOME/$QSUBDIR/q"

if [ `uname -s` = "Darwin" ]; then
QSUBDIR=m64
else
export QBIN=$(which q)
QSUBDIR=l64
fi

export QBIN="$QHOME/$QSUBDIR/q"
echo "QBIN is set to $QBIN"

37 changes: 22 additions & 15 deletions mthread.sh
@@ -50,7 +50,7 @@ mkdir -p ${CURRENTLOGDIR}

RESFILEPREFIX=${RESDIR}/detailed-${HOST}-
IOSTATFILE=${RESDIR}/iostat-${HOST}.psv
THROUGHPUTFILE=${RESDIR}/throughput-${HOST}.psv
AGGRFILEPREFIX=${RESDIR}/${HOST}-

LOGFILEPREFIX="${CURRENTLOGDIR}/${HOST}-${NUMPROCESSES}t-"

@@ -65,6 +65,12 @@ function notObjStore {
if [[ $1 != s3://* && $1 != gs://* && $1 != ms://* ]]; then return 0; else return 1; fi
}

if [[ $(uname) == "Linux" ]]; then
CORECOUNT=$(nproc)
else
CORECOUNT=$(sysctl -n hw.ncpu)
fi

echo "Persisting config"
CONFIG=${RESDIR}/config.yaml
echo "Persisting config to $CONFIG"
@@ -82,7 +88,8 @@ yq -i ".dbize.MEMUSAGEVALUE=$MEMUSAGEVALUE" $CONFIG
yq -i ".dbize.RANDOMREADFILESIZETYPE=\"$RANDOMREADFILESIZETYPE\"" $CONFIG
yq -i ".dbize.RANDOMREADFILESIZEVALUE=$RANDOMREADFILESIZEVALUE" $CONFIG
yq -i ".dbize.DBSIZE=\"$DBSIZE\"" $CONFIG
yq -i ".system.cpunr=$(nproc)" ${CONFIG}
yq -i ".system.os=\"$(uname)\"" $CONFIG
yq -i ".system.cpunr=$CORECOUNT" ${CONFIG}
yq -i ".system.memsize=\"$(grep MemTotal /proc/meminfo |tr -s ' ' | cut -d ' ' -f 2,3)\"" ${CONFIG}

CONTROLLERPORT=6000
@@ -113,10 +120,10 @@ if [ "$SCOPE" = "full" ]; then
echo
echo "STARTING WRITE TEST"

${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} >> ${CURRENTLOGDIR}/controller 2>&1 &
${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} > ${CURRENTLOGDIR}/controller_prepare.log 2>&1 &
j=0
for i in `seq $NUMPROCESSES`; do
${QBIN} ./src/prepare.q -processes $NUMPROCESSES -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -s ${THREADNR} -q -p $((WORKERBASEPORT + i)) >> ${LOGFILEPREFIX}${i} 2>&1 &
${QBIN} ./src/prepare.q -processes $NUMPROCESSES -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -s ${THREADNR} -q -p $((WORKERBASEPORT + i)) > ${LOGFILEPREFIX}${i}_prepare.log 2>&1 &
j=$(( ($j + 1) % $NUMSEGS ))
done

@@ -136,10 +143,10 @@ echo "STARTING SEQUENTIAL READ TEST"
${FLUSH}
touch ${CURRENTLOGDIR}/sync-$HOST

${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} >> ${CURRENTLOGDIR}/controller 2>&1 &
${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} > ${CURRENTLOGDIR}/controller_read.log 2>&1 &
j=0
for i in `seq $NUMPROCESSES`; do
${QBIN} ./src/read.q -processes $NUMPROCESSES -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -s ${THREADNR} -p $((WORKERBASEPORT + i)) >> ${LOGFILEPREFIX}${i} 2>&1 &
${QBIN} ./src/read.q -processes $NUMPROCESSES -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -s ${THREADNR} -p $((WORKERBASEPORT + i)) > ${LOGFILEPREFIX}${i}_read.log 2>&1 &
j=$(( ($j + 1) % $NUMSEGS ))
done
wait -n
@@ -157,10 +164,10 @@ echo
echo "STARTING SEQUENTIAL RE-READ (CACHE) TEST"

touch ${CURRENTLOGDIR}/sync-$HOST
${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} >> ${CURRENTLOGDIR}/controller 2>&1 &
${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} > ${CURRENTLOGDIR}/controller_reread.log 2>&1 &
j=0
for i in `seq $NUMPROCESSES`; do
${QBIN} ./src/reread.q -processes $NUMPROCESSES -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -s ${THREADNR} -p $((WORKERBASEPORT + i)) >> ${LOGFILEPREFIX}${i} 2>&1 &
${QBIN} ./src/reread.q -processes $NUMPROCESSES -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -s ${THREADNR} -p $((WORKERBASEPORT + i)) > ${LOGFILEPREFIX}${i}_reread.log 2>&1 &
j=$(( ($j + 1) % $NUMSEGS ))
done
wait
@@ -177,10 +184,10 @@ if [ "$SCOPE" = "full" ]; then
${FLUSH}

touch ${CURRENTLOGDIR}/sync-$HOST
${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} >> ${CURRENTLOGDIR}/controller 2>&1 &
${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} > ${CURRENTLOGDIR}/controller_meta.log 2>&1 &
j=0
for i in `seq $NUMPROCESSES`; do
${QBIN} ./src/meta.q -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -s ${THREADNR} -p $((WORKERBASEPORT + i)) >> ${LOGFILEPREFIX}${i} 2>&1 &
${QBIN} ./src/meta.q -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -s ${THREADNR} -p $((WORKERBASEPORT + i)) > ${LOGFILEPREFIX}${i}_meta.log 2>&1 &
j=$(( ($j + 1) % $NUMSEGS ))
done

@@ -197,19 +204,19 @@ function runrandomread {
echo "test${mmap} with block size ${listsize}"

touch ${CURRENTLOGDIR}/sync-$HOST
${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} >> ${CURRENTLOGDIR}/controller 2>&1 &
${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} >> ${CURRENTLOGDIR}/controller_randomread_$listsize.log 2>&1 &
j=0
sleep 5
for i in `seq $NUMPROCESSES`; do
${QBIN} ./src/randomread.q -testname randomread -listsize ${listsize} ${mmap} -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -testtype "read disk" -s ${THREADNR} -S ${SEED} -p $((WORKERBASEPORT + i)) >> ${LOGFILEPREFIX}${i} 2>&1 &
${QBIN} ./src/randomread.q -testname randomread -listsize ${listsize} ${mmap} -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -testtype "read disk" -s ${THREADNR} -S ${SEED} -p $((WORKERBASEPORT + i)) >> ${LOGFILEPREFIX}${i}_randomread_$listsize.log 2>&1 &
j=$(( ($j + 1) % $NUMSEGS ))
done
wait

${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} >> ${CURRENTLOGDIR}/controller 2>&1 &
${QBIN} ./src/controller.q -iostatfile ${IOSTATFILE} -s $NUMPROCESSES -q -p ${CONTROLLERPORT} >> ${CURRENTLOGDIR}/controller_randomreread_$listsize.log 2>&1 &
j=0
for i in `seq $NUMPROCESSES`; do
${QBIN} ./src/randomread.q -testname randomreread -listsize ${listsize} ${mmap} -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -testtype "read mem" -s ${THREADNR} -S ${SEED} -p $((WORKERBASEPORT + i)) >> ${LOGFILEPREFIX}${i} 2>&1 &
${QBIN} ./src/randomread.q -testname randomreread -listsize ${listsize} ${mmap} -db ${array[$j]}/${HOST}.${i}/${DATE} -result ${RESFILEPREFIX}${i}.psv -controller ${CONTROLLERPORT} -testtype "read mem" -s ${THREADNR} -S ${SEED} -p $((WORKERBASEPORT + i)) >> ${LOGFILEPREFIX}${i}_randomreread_$listsize.log 2>&1 &
j=$(( ($j + 1) % $NUMSEGS ))
done
wait
@@ -247,7 +254,7 @@ wait
syncAcrossHosts

echo "Aggregating results"
${QBIN} ./src/postproc.q -inputs ${RESFILEPREFIX} -iostatfile ${IOSTATFILE} -processes ${NUMPROCESSES} -output ${THROUGHPUTFILE} -q
${QBIN} ./src/postproc.q -inputs ${RESFILEPREFIX} -iostatfile ${IOSTATFILE} -processes ${NUMPROCESSES} -outputprefix ${AGGRFILEPREFIX} -q

#
# an air gap for any storage stats gathering before unlinks go out ...
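The `yq` calls in `mthread.sh` now record the operating system alongside the detected core count, and the persisted run configuration can be inspected afterwards. A sketch, with an assumed results path and illustrative values:

```bash
# <DATE> stands for the run timestamp used by mthread.sh
$ yq '.system' results/<DATE>-<DATE>/config.yaml
os: Linux
cpunr: 96
memsize: "196608308 kB"
```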
8 changes: 4 additions & 4 deletions runSeveral.sh
@@ -5,22 +5,22 @@ set -euo pipefail
if [ $# -gt 0 ]; then
OUTPUT=$1
else
OUTPUT=./results/aggr_total.csv
OUTPUT=./results/throughput_total.csv
fi

HOST=$(uname -n)
DATES=()

for i in {1,2,4,8,16,32,64,96}; do
DATE=$(date +%m%dD%H%M)
DATE=$(date +%m%d_%H%M)
DATES+=($DATE)
./mthread.sh $i full delete ${DATE}
done

head -n 1 results/${DATES[1]}-${DATES[1]}/aggregate-$HOST.psv > ${OUTPUT}
head -n 1 results/${DATES[1]}-${DATES[1]}/$HOST-throughput.psv > ${OUTPUT}
TMP="$(mktemp)"
for DATE in ${DATES[@]}; do
tail -n +2 results/${DATE}-${DATE}/aggregate-$HOST.psv >> ${TMP}
tail -n +2 results/${DATE}-${DATE}/$HOST-throughput.psv >> ${TMP}
done

sort ${TMP} -t '|' -k 3,3 >> ${OUTPUT}
11 changes: 10 additions & 1 deletion src/common.q
@@ -23,7 +23,16 @@ writeRes: {[testtype; test; qexpression; repeat; length; times; result; unit]
controller: `$"::",argv `controller;

msstring:{(string x)," ms"}
getDisk: {first " " vs last system "df ", DB}
// df returns partition like /dev/nvme0n1p1
getPartition: {first " " vs last system "df ", DB}

// disk is looked up from partition by e.g. /sys/class/block/nvme0n1p1
getDisk:{
p: ssr[;"/dev/";""] getPartition[];
l:first system "readlink /sys/class/block/", p;
"/dev",deltas[-2#l ss "/"] sublist l
}

getTests: {[ns] .Q.dd[ns;] each except[; `] key ns}

fRead: hsym `$DB, fReadFileName: "/seqread"
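The new `getDisk` maps the partition that backs the database directory to its physical disk via sysfs, so the controller's `iostat` call is pointed at the whole device rather than a partition. A rough shell equivalent of the lookup (device names are examples only):

```bash
# partition backing the data directory, as reported by df
$ df /data | tail -1 | cut -d' ' -f1
/dev/nvme0n1p1
# resolve the partition to its parent device through /sys/class/block
$ readlink /sys/class/block/nvme0n1p1
../../devices/pci0000:00/0000:3c:00.0/nvme/nvme0/nvme0n1/nvme0n1p1
# the parent device (nvme0n1, i.e. /dev/nvme0n1) is what gets handed to iostat
```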
44 changes: 26 additions & 18 deletions src/controller.q
@@ -5,40 +5,47 @@ system "l src/util.q";
workerNr: system "s" // We assume that each worker has its own thread
iostatH: hopen ":", argv `iostatfile

`alltest set ();
`workers set ();
`disks set ();
Alltest:Workers:Disks: ();

iostatError: `kB_read`kB_wrtn!2#0Nj;
iostatError: `kB_read`kB_wrtn!2#0Nj
Start: 0Np

getKBReadMac: {[x] iostatError}
getKBReadLinux: {[disks]
iostatcmd: "iostat -dk -o JSON ", (" " sv disks), " 2>&1";
r: @[system; iostatcmd; .qlog.error];
:$[10h ~ type r; [
:$[0h ~ type r; [
iostats: @[; `disk] first @[; `statistics] first first first value flip value .j.k raze r;
$[count iostats; exec `long$sum kB_read, `long$sum kB_wrtn from iostats; iostatError]];
iostatError]
}

getKBRead: $["Darwin" ~ first system "uname -s"; getKBReadMac; getKBReadLinux]
getKBRead: $[.z.o ~ `m64; getKBReadMac; getKBReadLinux]

finish: {[x]
.qlog.info "Sending exit message to workers";
@[; "exit 0"; ::] each Workers;
exit x
}

TIMEOUT: 0D00:01
executeTest: {[dontcare]
if[workerNr = count workers;
if[TIMEOUT < .z.p - Start;
.qlog.error "Waiting for workers timed out.";
finish 3];
if[workerNr = count Workers;
system "t 0";
if[ any 1_differ alltest; .qlog.error "Not all tests are the same!"; exit 1];
if[ any 1_differ Alltest; .qlog.error "Not all tests are the same!"; exit 1];
{[t]
.qlog.info "Executing test ", string t;
ddisks: distinct disks;
ddisks: distinct Disks;
sS: getKBRead[ddisks]; sT: .z.n;
@[; (t; ::)] peach workers;
@[; (t; ::)] peach Workers;
eT: .z.n; eS: getKBRead[ddisks];
iostatH string[t], SEP, (SEP sv value fix[2; (eS-sS)%1000*tsToSec eT-sT]),"\n";
} each first[alltest] except exclusetests;
.qlog.info "All tests were executed. Sending exit message to workers.";
if[not `debug in argvk;
@[; "exit 0"; ::] each workers;
exit 0];
} each first[Alltest] except exclusetests;
.qlog.info "All tests were executed.";
if[not `debug in argvk; finish 0];
];
}

@@ -53,9 +60,10 @@ handleToIP: (`int$())!()
addWorker: {[port; disk; tests]
addr:handleToIP[.z.w],":",string port;
.qlog.info "adding tests from address ", addr, " using disk ", disk;
alltest,: enlist tests;
workers,: hsym `$addr;
disks,: enlist disk;
if[0=count Workers; Start:: .z.p];
Alltest,: enlist tests;
Workers,: hsym `$addr;
Disks,: enlist disk;
}


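One practical effect of the new `Start`/`TIMEOUT` handling: if not all workers register within one minute, the controller logs an error and exits with code 3 instead of waiting forever. Combined with the per-phase controller logs introduced in `mthread.sh`, this is easy to spot (the log directory layout below is assumed):

```bash
# find the phase whose controller gave up waiting for workers
$ grep -l "Waiting for workers timed out" logs/*/controller_*.log
```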
5 changes: 4 additions & 1 deletion src/meta.q
@@ -48,4 +48,7 @@ $[OBJSTORE;
writeRes["read disk";".meta.get|mmap";"get"; N; count get fmmap; sT, eT; fix[4;1000 * tsToSec[eT-sT]%N];"ms\n"];
}

controller (`addWorker; system "p"; getDisk[]; getTests[`.meta]);
@[controller; (`addWorker; system "p"; getDisk[]; getTests[`.meta]);
{.qlog.error "Error sending meta tests to the controller: ", x; exit 1}]

.qlog.info "Ready for test execution";
15 changes: 9 additions & 6 deletions src/postproc.q
@@ -5,18 +5,21 @@ argvk:key argv:first each .Q.opt .z.x
resfileprefix: "," vs argv `inputs;
iostatfile: argv `iostatfile
nproc: "I"$argv `processes;
output: hsym `$argv `output;

outputprefix: argv `outputprefix;
outthroughput: hsym `$outputprefix,"throughput.psv"
outlatency: hsym `$outputprefix,"latency.psv"

results: raze {("SSS*IJNNFS"; enlist "|") 0:x} each `$resfileprefix cross string[1+til nproc] ,\: ".psv";
aggregate: select numproc: count result, accuracy: 5 sublist string 100*1- (max[starttime] - min starttime) % avg endtime-starttime, throughput: sum result, first unit by testid, testtype, test, qexpression from results where not unit = `ms;
throughput: select numproc: count result, accuracy: 5 sublist string 100*1- (max[starttime] - min starttime) % avg endtime-starttime, throughput: sum result, first unit by testid, testtype, test, qexpression from results where not unit = `ms;
latency: select numproc: count result, accuracy: 5 sublist string 100*1- (max[starttime] - min starttime) % avg endtime-starttime, avgLatency: avg result, maxLatency: max result, first unit by testid, testtype, test, qexpression from results where unit = `ms;
iostat: ("SFF"; enlist "|") 0: `$iostatfile;

output 0: "|" 0: `numproc xcols delete testid from 0!aggregate lj `testid xkey iostat;
outthroughput 0: "|" 0: `numproc xcols delete testid from 0!throughput lj `testid xkey iostat;
outlatency 0: "|" 0: `numproc xcols delete testid from 0!latency;

if[ 0 < exec count i from aggregate where not numproc = nproc;
if[ 0 < exec count i from throughput where not numproc = nproc;
.qlog.error "The following tests were not executed by all processes: ",
"," sv string exec distinct test from aggregate where not numproc = nproc;
"," sv string exec distinct test from throughput where not numproc = nproc;
exit 1
];

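Because `postproc.q` now takes `-outputprefix` (set to `${RESDIR}/${HOST}-` in `mthread.sh`), each run produces separate throughput and latency files next to the per-process results. A sketch of the expected result directory (directory and host names are examples):

```bash
$ ls results/0312_1030-0312_1030/
config.yaml  detailed-myhost-1.psv  iostat-myhost.psv  myhost-latency.psv  myhost-throughput.psv
```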
3 changes: 2 additions & 1 deletion src/prepare.q
@@ -278,6 +278,7 @@ write:{[file]
exitcustom: {[]
if[OBJSTORE; hdel tmpdirH]};

controller (`addWorker; system "p"; getDisk[]; getTests[`.prepare]);
@[controller; (`addWorker; system "p"; getDisk[]; getTests[`.prepare]);
{.qlog.error "Error sending prepare tests to the controller. Error: ", x; exit 1}]

.qlog.info "Ready for test execution";
5 changes: 4 additions & 1 deletion src/randomread.q
@@ -48,4 +48,7 @@ randomreadwithmmap:{[blocksize]
fn: $[`withmmap in argvk; randomreadwithmmap; randomread]
.Q.dd[`.randomread; `$argv[`testname]] set fn "I"$argv `listsize;

controller (`addWorker; system "p"; getDisk[]; getTests[`.randomread]);
@[controller; (`addWorker; system "p"; getDisk[]; getTests[`.randomread]);
{.qlog.error "Error sending randomread tests to the controller: ", x; exit 1}]

.qlog.info "Ready for test execution";
5 changes: 4 additions & 1 deletion src/read.q
@@ -25,4 +25,7 @@ system "l src/common.q";
writeRes["read disk";".read.readbinary|sequential read binary";"read1"; 1; hcount fReadBinary; sT, eT; fix[2;getMBPerSec[div[; 8] -16+hcount fReadBinary; eT-sT]]; "MB/sec\n"]; // TODO: avoid recalculating theoretical read binary file size
}

controller (`addWorker; system "p"; getDisk[]; getTests[`.read]);
@[controller; (`addWorker; system "p"; getDisk[]; getTests[`.read]);
{.qlog.error "Error sending read tests to the controller: ", x; exit 1}]

.qlog.info "Ready for test execution";
5 changes: 4 additions & 1 deletion src/reread.q
@@ -17,4 +17,7 @@ system "l src/common.q";
writeRes["read mem";".reread.readbinary|sequential read binary";"read1"; 1; hcount fReadBinary; sT, eT; fix[2;getMBPerSec[div[; 8] -16+hcount fReadBinary; eT-sT]]; "MB/sec\n"];
}

controller (`addWorker; system "p"; getDisk[]; getTests[`.reread])
@[controller; (`addWorker; system "p"; getDisk[]; getTests[`.reread]);
{.qlog.error "Error sending reread tests to the controller: ", x; exit 1}]

.qlog.info "Ready for test execution";
5 changes: 4 additions & 1 deletion src/xasc.q
@@ -17,4 +17,7 @@ system "l src/common.q";
writeRes["read write disk";".xasc.phash|add attribute";"@[; `sym; `p#]"; 1; count get KDBTBL; sT, eT; fix[2;getMBPerSec[count get KDBTBL; eT-sT]]; "MB/sec\n"];
}

controller (`addWorker; system "p"; getDisk[]; getTests[`.xasc])
@[controller; (`addWorker; system "p"; getDisk[]; getTests[`.xasc]);
{.qlog.error "Error sending xasc tests to the controller: ", x; exit 1}]

.qlog.info "Ready for test execution";
4 changes: 2 additions & 2 deletions version.yaml
@@ -1,2 +1,2 @@
pub: 2.5.2
dev: 2.5.3
pub: 2.5.4
dev: 2.5.4