From 6c3f60f04bbc01f3a316316df2d207190d32eba9 Mon Sep 17 00:00:00 2001
From: ZhengkunMei <113121334+ZhengkunMei@users.noreply.github.com>
Date: Fri, 11 Nov 2022 15:39:05 +0100
Subject: [PATCH 1/3] Create 1

---
 Mei_run _inference/1 | 1 +
 1 file changed, 1 insertion(+)
 create mode 100644 Mei_run _inference/1

diff --git a/Mei_run _inference/1 b/Mei_run _inference/1
new file mode 100644
index 000000000..8b1378917
--- /dev/null
+++ b/Mei_run _inference/1
@@ -0,0 +1 @@
+

From 24811d4d14be7118ee459c7bde528d869a0521f4 Mon Sep 17 00:00:00 2001
From: ZhengkunMei <113121334+ZhengkunMei@users.noreply.github.com>
Date: Fri, 11 Nov 2022 15:40:29 +0100
Subject: [PATCH 2/3] Create README.rst

---
 Mei_run _inference/README.rst | 69 +++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)
 create mode 100644 Mei_run _inference/README.rst

diff --git a/Mei_run _inference/README.rst b/Mei_run _inference/README.rst
new file mode 100644
index 000000000..5d7c2fd91
--- /dev/null
+++ b/Mei_run _inference/README.rst
@@ -0,0 +1,69 @@
+🐸How to run inference with a pre-trained model
+-----------------------------------------------
+
+
+🐸Download a trained Coqui STT model and run inference on it. The original README explains how to do this, but I still hit several errors along the way, and that guidance lacks picture examples and troubleshooting advice, so I wrote this README for anyone who runs into the same errors I did.
+
+
+* You can use the 🐸STT Model Manager by following these steps.
+
+  # Create a virtual environment
+
+  $ python3 -m venv venv-stt
+
+  $ source venv-stt/bin/activate
+
+  # Install the 🐸STT model manager
+
+  $ python -m pip install -U pip
+
+  $ python -m pip install coqui-stt-model-manager
+
+  # Run the model manager. A browser tab will open, and you can then download and test models from the Model Zoo.
+
+  $ stt-model-manager
+
+  # Problems that occurred when I used this method:
+
+  * When I created the virtual environment the documented way, the bin file could not be found.
+    So I switched to ``mkvirtualenv`` instead.
+  * After creating the environment, an error still occurred when I tried to install the STT Model Manager.
+
+.. |doc-img| image:: https://github.com/ZhengkunMei/STT/blob/main/images/virtual%20environment.png
+   :target: https://github.com/ZhengkunMei/STT/blob/main/images/virtual%20environment.png
+   :alt: Documentation
+
+
+.. |covenant-img| image:: https://github.com/ZhengkunMei/STT/blob/main/images/STT%20manager%20(2).png
+   :target: https://github.com/ZhengkunMei/STT/blob/main/images/STT%20manager%20(2).png
+   :alt: Contributor Covenant
+
+
+|doc-img| |covenant-img|
+
+
+* If you face the same errors I did, you can get the model a second way instead.
+
+  * Use `STT model `_ to download your model.
+
+
+* Then install ``stt`` into the virtual environment:
+
+  (coqui-stt-venv)$ python -m pip install -U pip && python -m pip install stt
+
+* Use the command below to run inference:
+
+  (coqui-stt-venv)$ stt --model model.tflite --scorer huge-vocabulary.scorer --audio my_audio_file.wav
+
+
+* Missing-SoX error and its solution
+
+  * When I ran the command above, an error reported that SoX was not installed.
+  * Solution and result: the audio file needs a 16000 Hz sample rate instead of 44100 Hz, so I recorded my own voice saying "Hello world" and tested that.
+  * The transcription was slightly different from what I expected, but still close.
+
+.. |result-img| image:: https://github.com/ZhengkunMei/STT/blob/main/images/output.png
+   :target: https://github.com/ZhengkunMei/STT/blob/main/images/output.png
+   :alt: Documentation
+
+|result-img|

From dfc9ec5cb4eecf190ad33cfaaeb1aa023e533263 Mon Sep 17 00:00:00 2001
From: ZhengkunMei <113121334+ZhengkunMei@users.noreply.github.com>
Date: Fri, 11 Nov 2022 15:41:24 +0100
Subject: [PATCH 3/3] Delete 1

---
 Mei_run _inference/1 | 1 -
 1 file changed, 1 deletion(-)
 delete mode 100644 Mei_run _inference/1

diff --git a/Mei_run _inference/1 b/Mei_run _inference/1
deleted file mode 100644
index 8b1378917..000000000
--- a/Mei_run _inference/1
+++ /dev/null
@@ -1 +0,0 @@
-
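
The SoX trouble described in the README above boils down to a sample-rate mismatch: the model expects 16000 Hz audio, while typical recordings are 44100 Hz. Besides re-recording, the file can be downsampled before calling ``stt``. Below is a minimal standard-library Python sketch; the ``resample_wav`` helper and its linear-interpolation approach are my own illustration, not part of Coqui STT, and a real project should prefer SoX or a dedicated resampler for audio quality.

```python
import array
import wave

TARGET_RATE = 16000  # Coqui STT models expect 16 kHz mono input


def resample_wav(src_path, dst_path, target_rate=TARGET_RATE):
    """Resample a mono 16-bit PCM WAV file to target_rate.

    Uses naive linear interpolation between neighbouring samples;
    returns the number of frames written.
    """
    with wave.open(src_path, "rb") as src:
        if src.getnchannels() != 1 or src.getsampwidth() != 2:
            raise ValueError("this sketch handles mono 16-bit PCM only")
        src_rate = src.getframerate()
        samples = array.array("h")  # signed 16-bit
        samples.frombytes(src.readframes(src.getnframes()))

    if src_rate == target_rate:
        out = samples
    else:
        ratio = src_rate / target_rate  # e.g. 44100 / 16000 = 2.75625
        out = array.array("h")
        for i in range(int(len(samples) / ratio)):
            pos = i * ratio            # fractional position in the source
            j = int(pos)
            frac = pos - j
            a = samples[j]
            b = samples[min(j + 1, len(samples) - 1)]
            out.append(int(a + (b - a) * frac))  # linear interpolation

    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(target_rate)
        dst.writeframes(out.tobytes())
    return len(out)
```

Calling something like ``resample_wav("my_recording_44k.wav", "my_audio_file.wav")`` first (both file names are hypothetical) should let the ``stt --audio my_audio_file.wav`` command from the README run without the sample-rate complaint.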