Running ResNet50 in Docker without the precision parameter succeeds, and the result (measurements.json) shows the data type is int8.
Running ResNet50 again with precision=float16, as shown below, also succeeds in Docker, but the result (measurements.json) still shows int8.
It seems the precision=float16 parameter didn't take effect. How can I conveniently run the model at different precisions in MLPerf?
cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=resnet50 \
   --precision=float16 \
   --implementation=nvidia \
   --framework=tensorrt \
   --category=edge \
   --scenario=Offline \
   --execution_mode=valid \
   --device=cuda \
   --division=closed \
   --rerun \
   --quiet
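One way to see which precision a run actually used is to inspect the data-type fields recorded in measurements.json. The sketch below uses a canned sample file in place of the real one under the results directory; the field names ("weight_data_types", "input_data_types") follow the MLPerf inference submission format but may vary between versions.

```shell
# Stand-in for the real measurements.json produced by the run
# (field names follow the MLPerf inference submission format).
cat > /tmp/measurements_sample.json <<'EOF'
{"weight_data_types": "int8", "input_data_types": "int8"}
EOF
# Print the recorded weight data type
grep -o '"weight_data_types": *"[^"]*"' /tmp/measurements_sample.json
```

On a real system, point the grep at the measurements.json inside your results directory instead of the sample file.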
Hi @Bob123Yang, for the NVIDIA implementation this is expected behaviour: the precision is chosen automatically by the implementation, typically the best one that still satisfies the MLPerf accuracy requirement. There is no option to change it.
You're welcome @Bob123Yang. What I said also holds for other vendor implementations such as Intel, AMD, and Qualcomm. The reference implementations, however, usually do offer fp16 and fp32 options, especially for the PyTorch models.
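For illustration, a run against the reference implementation with an explicit precision might look like the command below. This is a hedged sketch: the `--implementation=reference` and `--framework=pytorch` values, and whether the precision flag is spelled `fp16` or `float16`, depend on your CM version and should be checked against the CM documentation.

```shell
# Hypothetical sketch of a reference-implementation run with fp16;
# exact flag values may differ between CM releases.
cm run script --tags=run-mlperf,inference,_r4.1-dev \
   --model=resnet50 \
   --implementation=reference \
   --framework=pytorch \
   --precision=fp16 \
   --scenario=Offline \
   --device=cuda \
   --quiet
```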
Could you help confirm one more question about the NVIDIA multi-GPU scenario: how do I run MLPerf inference on multiple GPUs connected via NVLink? Is there a dedicated parameter for that, or is it enough to have the physical connection (e.g. NVLink) working, with MLPerf then automatically using all available GPU resources?
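Before running, you can at least verify that NVLink connectivity is visible to the driver: `nvidia-smi topo -m` prints a GPU connectivity matrix in which NVLink pairs appear as NV1, NV2, and so on. The sketch below counts NVLink cells in a canned sample matrix so the parsing logic is visible even on a machine without GPUs; on a real host you would pipe the live `nvidia-smi topo -m` output instead.

```shell
# Canned two-GPU topology matrix (tab-separated), standing in for
# the live output of `nvidia-smi topo -m`.
sample_topo='GPU0	X	NV2
GPU1	NV2	X'
# NVLink connections show up as NV1, NV2, ...; count the matching lines.
printf '%s\n' "$sample_topo" | grep -c 'NV[0-9]'
```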