
Add support for metadata #79

Open
taitems opened this issue Dec 4, 2023 · 0 comments

taitems commented Dec 4, 2023

Is your feature request related to a problem? Please describe.
It is hard to know when a user has finished talking and their input can safely be sent for processing. On mobile we handle this with a backoff that sends after 500 ms of silence, sometimes with extra buffer depending on the context. We also do some simple text matching to gauge whether the sentence is complete, and add a further delay when filler words are present (a rough sketch of the backoff follows).
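
A minimal sketch of that silence backoff in Swift; the `onPartialResult` entry point and the way transcripts reach it are assumptions for illustration, not part of any plugin API:

```swift
import Foundation

/// Debounces partial transcripts: each new partial result resets the timer,
/// and the transcript is only sent once `window` seconds pass in silence.
final class SilenceDebouncer {
    private var pending: DispatchWorkItem?
    private let window: TimeInterval

    init(window: TimeInterval = 0.5) {   // the 500 ms default described above
        self.window = window
    }

    // Hypothetical hook: call this with every partial transcript the
    // recognizer emits.
    func onPartialResult(_ transcript: String, send: @escaping (String) -> Void) {
        pending?.cancel()
        let work = DispatchWorkItem { send(transcript) }
        pending = work
        DispatchQueue.main.asyncAfter(deadline: .now() + window, execute: work)
    }
}
```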

Describe the solution you'd like
The plugin should consider exposing more metadata from either iOS or Android, giving developers more power to interpret the user's input. On iOS that would mean including SFSpeechRecognitionMetadata, which carries fields such as rate of speech and average pause duration. This could help us detect when a user is pausing longer than usual.
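
For illustration, here is roughly how that metadata could be read if the plugin forwarded the raw recognition result; per Apple's docs, `speechRecognitionMetadata` is typically only populated on final results:

```swift
import Speech

func inspectMetadata(of result: SFSpeechRecognitionResult) {
    guard let meta = result.speechRecognitionMetadata else { return }

    // Number of words spoken per minute over the utterance.
    let rate = meta.speakingRate
    // Average pause between words, in seconds. Comparing the current
    // silence against this baseline could signal the user has finished.
    let baselinePause = meta.averagePauseDuration

    print("speakingRate: \(rate) wpm, averagePauseDuration: \(baselinePause) s")
}
```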

Describe alternatives you've considered
We are already trying to infer context based on the text content (a hypothetical sketch below).
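
Something along these lines; the filler list and delay values are made up for illustration, and (as noted below) true fillers like "umm" rarely survive transcription, so the list leans on connective words:

```swift
import Foundation

/// Returns an extra delay to add on top of the silence backoff when the
/// transcript looks unfinished.
func extraDelay(for transcript: String) -> TimeInterval {
    let fillers: Set<String> = ["so", "and", "but", "like"]
    let trimmed = transcript.trimmingCharacters(in: .whitespacesAndNewlines)
    let lastWord = trimmed.split(separator: " ").last?.lowercased() ?? ""

    if fillers.contains(lastWord) {
        return 0.75   // trailing connective: the user is likely mid-thought
    }
    let looksComplete = trimmed.hasSuffix(".") || trimmed.hasSuffix("?") || trimmed.hasSuffix("!")
    return looksComplete ? 0.0 : 0.25   // unfinished sentence: small extra buffer
}
```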

Ideally, another solution would make filler words such as "ahh" and "umm" appear in the transcribed results, but I believe they are removed by the AI/ML layer and cannot be force-included.

Additional context
Link to the metadata docs: https://developer.apple.com/documentation/speech/sfspeechrecognitionmetadata?changes=_9
