In OpenAI’s Whisper model, setting the language to “auto” allows for automatic language detection, enabling multilingual transcription. However, in WhisperKit, this “auto” setting doesn’t seem to function as expected. Could you advise on how to implement automatic language recognition within WhisperKit?
When you do not set the language, WhisperKit lets Whisper infer the language from the input. Could you please share an example of the unexpected behavior you are observing? (Ideally an input audio file and the model version used.)
Note that it could also be Whisper itself failing to detect the language correctly.
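For reference, a minimal sketch of a transcription call that leaves language detection to Whisper, based on the `transcribe(audioPath:decodeOptions:)` usage shown in the WhisperKit README; the exact option fields and return shape vary between WhisperKit versions, so verify against the one you are using:

```swift
import WhisperKit

Task {
    // Load the default model bundle; pass a model name to WhisperKit() if you need a specific one.
    let pipe = try await WhisperKit()

    // Leave `language` unset (nil) so Whisper infers the spoken language itself.
    let options = DecodingOptions(task: .transcribe)

    let results = try await pipe.transcribe(
        audioPath: "path/to/audio.wav",
        decodeOptions: options
    )

    // The detected language is expected to be reported on the transcription
    // result alongside the text; the exact property names may differ by version.
    print(results.map(\.text).joined(separator: " "))
}
```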
Hi, I have a question related to this issue.
As mentioned by atriorh, I've managed to detect the language, but I'm having difficulty setting a specific language for a transcribe task.
Any help with that would be appreciated.
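For completeness, forcing a fixed language would presumably look like the sketch below, assuming `DecodingOptions` accepts a `language` parameter taking an ISO 639-1 code (e.g. "fr"); both the parameter name and the code format are assumptions to check against your WhisperKit version:

```swift
import WhisperKit

Task {
    let pipe = try await WhisperKit()

    // Pin decoding to a specific language (French here) rather than letting
    // Whisper detect it. "fr" is an assumed ISO 639-1 language code.
    let options = DecodingOptions(task: .transcribe, language: "fr")

    let results = try await pipe.transcribe(
        audioPath: "path/to/audio.wav",
        decodeOptions: options
    )
    print(results.map(\.text).joined(separator: " "))
}
```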