This repository was archived by the owner on Jul 22, 2024. It is now read-only.

Adding support for speech library 1.0.5 containing multi-lingual capabilities and the possibility to set a tag to identify the product using the speech API. #633

Merged
1 commit merged on Oct 17, 2018

Conversation

andrenatal
Contributor

No description provided.

@andrenatal
Contributor Author

TC is failing because the build system can't yet see the 1.0.5 speech library on Maven. It seems it's still being propagated.

Contributor

@keianhzo keianhzo left a comment


So this PR adds support for multi-lingual speech recognition, but for the moment we force it to "en-US" only, is that right?

@andrenatal
Contributor Author

andrenatal commented Oct 16, 2018

Yes, but the intent of this PR is to land support only for the new version of the API. To add support for new languages we'll need to design and develop some UI so the user can choose the language they want, so for that reason I'd do it in two different PRs.

@larsbergstrom

Would the user select the language or would we just pick up the system language? Or do you mean in the keyboard/voice system we should select the language? Actually, I'll open up a separate design issue for that...

@andrenatal
Contributor Author

Speech API's version 1.0.5 also introduces two new methods:

            mMozillaSpeechService.storeSamples(bool);
            mMozillaSpeechService.storeTranscriptions(bool);

The former tells the backend to save the audio sample, and the latter to save the transcribed sentence.
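A minimal sketch of how a client might gate both flags on a single user consent setting. Note this is an illustration, not the library's API: the `SpeechService` interface below is a stand-in, and only the two method names `storeSamples` and `storeTranscriptions` come from the 1.0.5 API described above.

```java
// Sketch only: SpeechService stands in for the real MozillaSpeechService.
// Only the method names storeSamples/storeTranscriptions come from the
// 1.0.5 speech API; the opt-in wrapper is hypothetical.
public class SpeechDataOptIn {
    interface SpeechService {
        void storeSamples(boolean store);        // save the raw audio sample
        void storeTranscriptions(boolean store); // save the transcribed sentence
    }

    private final SpeechService service;
    private boolean optedIn;

    public SpeechDataOptIn(SpeechService service) {
        this.service = service;
    }

    /** Apply a single user consent flag to both backend storage switches. */
    public void setOptIn(boolean optedIn) {
        this.optedIn = optedIn;
        service.storeSamples(optedIn);
        service.storeTranscriptions(optedIn);
    }

    public boolean isOptedIn() {
        return optedIn;
    }
}
```

Tying both switches to one consent flag keeps the privacy decision in a single place instead of scattering two booleans through the UI code.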

@andrenatal
Contributor Author

andrenatal commented Oct 16, 2018

@larsbergstrom The way I'll do it on VoiceFill is to create a panel where the user can pick their preferred language, so maybe we can do the same on Fxr.

Take my situation for example: my system locale and language is English, but I prefer to do voice search in Brazilian-Portuguese.

In addition to that, there's the issue that the UA can return some very weird locale codes (my Firefox returns en-US,en;q=0.5 for example), and if we pass that to the backend, it will return a header error since we validate the languages against this set: https://github.com/mozilla/speech-proxy/blob/master/languages.json

So for these reasons, just passing the system's locale code over to the API without any validation can be problematic and lead to issues.
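The validation step described above could be sketched roughly like this: reduce a raw UA locale string (possibly a full Accept-Language value such as "en-US,en;q=0.5") to its primary tag, canonicalize the case, and check it against the supported set. The hard-coded list here is an illustrative subset only; the real list lives in the speech-proxy's languages.json linked above.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class SpeechLocale {
    // Illustrative subset; the authoritative list is speech-proxy's languages.json.
    private static final Set<String> SUPPORTED = new HashSet<>(
            Arrays.asList("en-US", "pt-BR", "es-ES", "fr-FR"));

    /**
     * Reduce a raw UA locale string (e.g. "en-US,en;q=0.5") to its primary
     * tag and validate it. Returns the normalized tag, or null if unsupported.
     */
    public static String normalize(String raw) {
        if (raw == null || raw.isEmpty()) return null;
        // Keep only the first entry and drop any quality weight (";q=...").
        String first = raw.split(",")[0].split(";")[0].trim();
        // Canonicalize case: language lowercase, region uppercase.
        String[] parts = first.split("-");
        String tag = parts.length >= 2
                ? parts[0].toLowerCase(Locale.ROOT) + "-" + parts[1].toUpperCase(Locale.ROOT)
                : parts[0].toLowerCase(Locale.ROOT);
        return SUPPORTED.contains(tag) ? tag : null;
    }
}
```

With this, `normalize("en-US,en;q=0.5")` yields "en-US" instead of forwarding the full header string, and an unsupported tag comes back as null so the client can fall back to a default rather than trigger a header error on the backend.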

@andrenatal
Contributor Author

andrenatal commented Oct 16, 2018

Ok, Taskcluster is now passing, so this is good to merge whenever you decide. It is important to have this landed soon so we can properly identify the requests coming from Fxr in our speech backend's metrics system.

@andrenatal andrenatal self-assigned this Oct 16, 2018
@keianhzo keianhzo merged commit ec364b0 into MozillaReality:master Oct 17, 2018
3 participants