RNN-T CmdGen improvements #59

Open · 4 tasks
psyhtest opened this issue Dec 4, 2020 · 1 comment

psyhtest commented Dec 4, 2020

The RNN-T CmdGen is work-in-progress. We started it for the v0.7 submission round, but eventually did not submit due to a belatedly discovered postprocessing issue. Not surprisingly, it needs more love to get into shape.

Currently, the usage is:

$ ck run cmdgen:benchmark.speech-recognition-loadgen --model=rnnt \
--scenario=singlestream --mode=accuracy \
--sut=aws-g4dn.4xlarge

Future improvements:

  • The --model parameter should be optional. (We only support one model after all.)
  • The --sut parameter should allow any SUT name. At the moment, it is restricted to a handful of SUTs, and other names result in an error (CK error: [cmdgen] build_map[sut] is missing both 'aws-g4dn.4xsmall' and '###' values!).
  • The record name (e.g. mlperf-closed-aws-g4dn.4xlarge-pytorch-v1.15.1-rnnt-singlestream) must include the --mode to allow keeping both performance and accuracy experiment entries simultaneously. At the moment, a previously recorded experiment entry for one mode must be removed (ck rm local:experiment:mlperf-closed*rnnt* -f) to allow for the other mode (see the sketch after this list).
  • The record name should not include a bogus inference engine version (v1.15.1). The default inference engine name (pytorch) should be customizable according to the plugins used.
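As a rough sketch of the current workaround for the mode limitation above (reusing the record-name pattern and SUT from the examples; exact flags may differ on a given setup):

# Remove the previously recorded entry for the other mode (pattern from above).
$ ck rm local:experiment:mlperf-closed*rnnt* -f
# Re-run in performance mode; flags mirror the accuracy example above.
$ ck run cmdgen:benchmark.speech-recognition-loadgen --model=rnnt \
--scenario=singlestream --mode=performance \
--sut=aws-g4dn.4xlarge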
@psyhtest
Copy link
Contributor Author

psyhtest commented Dec 4, 2020

By the way, the above-mentioned postprocessing issue also needs to be fixed, but here's a quick workaround (installing the LoadGen Python module into the user site-packages):

$ python -m pip install --user $(find $(ck locate env --tags=mlperf,inference,dividiti.rnnt) -name *.whl)
Processing ./dist/mlperf_loadgen-0.5a0-cp38-cp38-linux_x86_64.whl
Installing collected packages: mlperf-loadgen
Successfully installed mlperf-loadgen-0.5a0
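A quick sanity check that the workaround took effect is to import the module (assuming it is importable as mlperf_loadgen, as the wheel name suggests):

$ python -c "import mlperf_loadgen; print(mlperf_loadgen.__file__)"

If the import succeeds, LoadGen is visible to the Python interpreter used for the benchmark.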
