GPU version: file model_blade.torchscript not found #2201
After deleting /workspace/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch,
is the version too high?
Notice: In order to resolve issues more efficiently, please raise the issue following the template.
❓ Questions and Help
What is your question?
Running the WSL-Ubuntu-Docker GPU version reports that the model directory is missing the file model_blade.torchscript.
I followed the official guide at https://github.com/modelscope/FunASR/blob/main/runtime/docs/SDK_advanced_guide_offline_gpu_zh.md up to the last step:
nohup bash run_server.sh \
  --download-model-dir /workspace/models \
  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
  --model-dir damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
  --punc-dir damo/punc_ct-transformer_cn-en-common-vocab471067-large-onnx \
  --lm-dir damo/speech_ngram_lm_zh-cn-ai-wesp-fst \
  --itn-dir thuduj12/fst_itn_zh \
  --hotword /workspace/models/hotwords.txt > log.txt 2>&1 &
After it runs, inspect log.txt:
tail -f log.txt
If you want to use ffmpeg backend to load audio, please install it by:
sudo apt install ffmpeg # ubuntu
# brew install ffmpeg # mac
transformer is not installed, please install it if you want to use related modules
model is not exist, begin to export /workspace/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model_blade.torchscript
funasr version: 1.1.8.
Check update of funasr, and it would cost few times. You may disable it by set
disable_update=True
in AutoModel
New version is available: 1.1.14.
Please use the command "pip install -U funasr" to upgrade.
Downloading: 100%|██████████| 6.00k/6.00k [00:00<00:00, 18.3kB/s]
Downloading: 100%|██████████| 10.9k/10.9k [00:00<00:00, 38.9kB/s]
Downloading: 100%|██████████| 408k/408k [00:00<00:00, 897kB/s]
Downloading: 100%|██████████| 2.83k/2.83k [00:00<00:00, 9.47kB/s]
Downloading: 100%|██████████| 693/693 [00:00<00:00, 2.22kB/s]
Downloading: 100%|██████████| 859M/859M [01:21<00:00, 11.0MB/s]
Downloading: 100%|██████████| 19.2k/19.2k [00:00<00:00, 55.9kB/s]
Downloading: 100%|██████████| 7.90M/7.90M [00:01<00:00, 6.52MB/s]
Downloading: 100%|██████████| 48.7k/48.7k [00:00<00:00, 130kB/s]
Downloading: 100%|██████████| 91.5k/91.5k [00:00<00:00, 196kB/s]
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/workspace/FunASR/funasr/download/runtime_sdk_download_tool.py", line 58, in <module>
main()
File "/workspace/FunASR/funasr/download/runtime_sdk_download_tool.py", line 51, in main
export_model.export(
File "/workspace/FunASR/funasr/auto/auto_model.py", line 662, in export
export_dir = export_utils.export(model=model, data_in=data_list, **kwargs)
File "/workspace/FunASR/funasr/utils/export_utils.py", line 31, in export
assert (
AssertionError: Currently bladedisc optimization for FunASR only supports GPU
I20241108 06:19:07.329227 161 funasr-wss-server.cpp:308] Failed to download model from modelscope. If you set local asr model path, you can ignore the errors.
E20241108 06:19:07.330305 161 funasr-wss-server.cpp:312] /workspace/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model_blade.torchscript do not exists.
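The AssertionError above suggests the export step refuses to build model_blade.torchscript unless PyTorch can see a CUDA device inside the container. A minimal diagnostic sketch (my own, not part of FunASR; the function name `cuda_status` is hypothetical) to confirm what PyTorch actually sees:

```python
# Hypothetical diagnostic: report whether PyTorch can see a CUDA device.
# If it cannot, FunASR's bladedisc export path will fail with the
# "only supports GPU" assertion seen in the traceback above.
def cuda_status():
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        return f"CUDA available: {torch.cuda.get_device_name(0)}"
    return "CUDA not visible (check --gpus all / NVIDIA Container Toolkit)"

if __name__ == "__main__":
    print(cuda_status())
```

If this prints that CUDA is not visible, the export failure is a GPU-passthrough problem rather than a missing model file.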
What have you tried?
I already have the damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch model and tried to re-download it manually, but the corresponding model folder on ModelScope does not contain a file named model_blade.torchscript.
What's your environment?
WSL-Ubuntu-Docker
WSL2.0
Ubuntu22.04
Docker version 24.0.7, build 24.0.7-0ubuntu2~22.04.1
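Given the WSL2 Docker environment above, one common cause of the failed export is starting the container without GPU passthrough. A hedged preflight sketch (my own assumption, not from the FunASR docs) to run inside the container before launching run_server.sh:

```shell
# Check whether a GPU is visible inside the container. If nvidia-smi is
# missing, the container was probably started without "--gpus all", or the
# NVIDIA Container Toolkit is not set up for WSL2.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi -L || echo "nvidia-smi present, but no GPU is visible"
else
  echo "nvidia-smi not found: start the container with --gpus all"
fi
```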