The RNN-T CmdGen is a work in progress. We started it for the v0.7 submission round, but eventually did not submit due to a belatedly discovered postprocessing issue. Not surprisingly, it needs more love to get into shape.
Currently, the usage is:
$ ck run cmdgen:benchmark.speech-recognition-loadgen --model=rnnt \
--scenario=singlestream --mode=accuracy \
--sut=aws-g4dn.4xlarge
Future improvements:
The --model parameter should be optional. (We only support one model after all.)
The --sut parameter should allow any SUT name. At the moment, it is restricted to a handful, and other values result in an error (CK error: [cmdgen] build_map[sut] is missing both 'aws-g4dn.4xsmall' and '###' values!.)
The record name (e.g. mlperf-closed-aws-g4dn.4xlarge-pytorch-v1.15.1-rnnt-singlestream) must include the --mode to allow keeping both performance and accuracy experiment entries simultaneously. At the moment, a previously recorded experiment entry for one mode must be removed (ck rm local:experiment:mlperf-closed*rnnt* -f) to allow for the other mode; see the sketch after this list.
The record name should not include a bogus inference engine version (v1.15.1). The default inference engine name (pytorch) should be customizable according to the plugins used.
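Until the record name includes the mode, switching from an accuracy run to a performance run goes roughly like this (a sketch combining the commands above; --mode=performance is assumed to be the accepted value for performance runs, since only --mode=accuracy is shown):

# remove the previously recorded accuracy entry
$ ck rm local:experiment:mlperf-closed*rnnt* -f
# rerun in the other mode
$ ck run cmdgen:benchmark.speech-recognition-loadgen --model=rnnt \
--scenario=singlestream --mode=performance \
--sut=aws-g4dn.4xlarge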
By the way, the above-mentioned postprocessing issue also needs to be fixed, but there is a quick workaround: installing the LoadGen Python module to userspace.
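A minimal sketch of such a userspace installation, assuming a checkout of the MLPerf Inference repository (the repository URL, path and build flags here are assumptions; the actual workaround may have differed):

$ git clone https://github.com/mlcommons/inference.git
$ cd inference/loadgen
# build and install the LoadGen Python bindings into the user site-packages
$ CFLAGS="-std=c++14" python3 setup.py develop --user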