Is your feature request related to a problem? Please describe.
It is hard to know when users have finished talking and we can safely send their input for processing. On mobile we handle this with a backoff that sends the input after 500 ms of silence; depending on the context, we may add extra buffer for particular situations. We also do some simple text matching to gauge whether the sentence is complete and add a further delay when filler words are present.
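To illustrate, here is roughly what our current backoff looks like. This is only a sketch: the `onPartialResult` entry point and its wiring are placeholders, not the plugin's actual API.

```ts
// Sketch of the silence backoff described above.
// `onPartialResult` and its call site are illustrative placeholders.
const BASE_SILENCE_MS = 500;

let silenceTimer: ReturnType<typeof setTimeout> | undefined;
let latestTranscript = '';

function onPartialResult(transcript: string, extraBufferMs = 0): void {
  latestTranscript = transcript;

  // Reset the timer on every partial result; if nothing new arrives
  // within the window, treat the utterance as finished.
  if (silenceTimer !== undefined) {
    clearTimeout(silenceTimer);
  }
  silenceTimer = setTimeout(() => {
    sendForProcessing(latestTranscript);
  }, BASE_SILENCE_MS + extraBufferMs);
}

function sendForProcessing(text: string): void {
  // Hand the finished utterance off for processing.
  console.log('Sending:', text);
}
```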
Describe the solution you'd like
The plugin should consider making more metadata available from either iOS or Android to give developers more power to interpret the user's input. On iOS this would mean exposing SFSpeechRecognitionMetadata, which includes information such as speaking rate and average pause duration. This could help us tell when a user is pausing longer than usual.
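To make the request concrete, here is a hypothetical shape the plugin's result could take if that metadata were surfaced, and how we would use it to adapt our backoff. The field names are only a suggestion, mirroring SFSpeechRecognitionMetadata's speakingRate and averagePauseDuration properties on iOS.

```ts
// Hypothetical result shape if the plugin surfaced iOS recognition metadata.
interface RecognitionResultWithMetadata {
  transcript: string;
  metadata?: {
    speakingRate?: number;         // words per minute (iOS: speakingRate)
    averagePauseDuration?: number; // seconds (iOS: averagePauseDuration)
  };
}

function silenceWindowMs(result: RecognitionResultWithMetadata): number {
  const base = 500;
  const avgPauseMs = (result.metadata?.averagePauseDuration ?? 0) * 1000;
  // If this speaker naturally pauses longer than our base window,
  // stretch the backoff so we don't cut them off mid-thought.
  return Math.max(base, avgPauseMs * 1.5);
}
```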
Describe alternatives you've considered
We are already trying to infer context based on text content.
Ideally, another solution would involve making filler words such as "ahh" and "umm" appear in the transcribed results, but I believe they are stripped out by the recognition models and cannot be force-included.
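For completeness, the text-based inference we do today looks roughly like the following. Because fillers are usually stripped from the transcript, in practice it mostly catches utterances without terminal punctuation; the word list and thresholds are illustrative only.

```ts
// Rough sketch of the text-matching heuristic: a trailing filler word or
// missing terminal punctuation suggests the user may still be talking.
const FILLER_WORDS = ['uh', 'um', 'umm', 'ahh', 'er', 'like'];

function extraDelayMs(transcript: string): number {
  const trimmed = transcript.trim();
  const words = trimmed.toLowerCase().split(/\s+/);
  const lastWord = words[words.length - 1] ?? '';

  const endsWithFiller = FILLER_WORDS.includes(lastWord);
  const looksComplete = /[.!?]$/.test(trimmed);

  if (endsWithFiller) return 1000; // likely still mid-thought
  if (!looksComplete) return 300;  // no terminal punctuation yet
  return 0;
}
```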
Additional context
Link to metadata docs https://developer.apple.com/documentation/speech/sfspeechrecognitionmetadata?changes=_9