Get raw data from MediaStream #327
Comments
There's no such thing as "the raw data". There are many different possible internal representations, and we do not constrain which of them browsers use.
I believe MediaRecorder only gives you the media once the stream has finished or you stop the recorder. I would like the media as it is produced, and as far as I am aware that is not possible with MediaRecorder.
See the "requestData" method in https://rawgit.com/w3c/mediacapture-record/master/MediaRecorder.html#methods - you can get the so-far-available data at any time.
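For anyone landing here later, that flow can be sketched roughly as follows. This is a hedged sketch, not spec text: `startPeriodicCapture` and the `onChunk` callback are hypothetical names, and real code should also check `MediaRecorder.isTypeSupported` before picking a MIME type.

```javascript
// Sketch: flush the recorder's so-far-available data on a timer instead of
// waiting for stop(). Function and callback names here are hypothetical.
function startPeriodicCapture(stream, onChunk, intervalMs = 5000) {
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) onChunk(e.data); // e.data is a Blob of container data
  };
  recorder.start();
  const timer = setInterval(() => recorder.requestData(), intervalMs);
  // Return a teardown function; stop() fires one final dataavailable event.
  return () => {
    clearInterval(timer);
    recorder.stop();
  };
}
```

Alternatively, starting the recorder with a timeslice (`recorder.start(intervalMs)`) makes the browser fire `dataavailable` periodically without manual `requestData()` calls.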
Thank you! I should have done some proper research.
Seems that no action is needed. Closing.
There's also https://w3c.github.io/mediacapture-worker/ being drafted.
Hey @alvestrand, sorry to resuscitate an old thread, but I currently have a use case for getting the raw data, and a scenario that doesn't seem to be currently covered: I have an application with fairly long-lived streams (e.g. 1 hour long at a time), and I'd like to provide users with a way to capture short clips of the most recent, say, 15 seconds. Currently it seems that creating a buffer of the most recent data directly from the MediaStream is not supported.
What do you think of supporting this scenario, or is there an interface I'm missing? Thank you!
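A rolling "last 15 seconds" buffer along these lines can be approximated today with MediaRecorder's timeslice, keeping only recent chunks. Below is a minimal sketch; the class and method names are hypothetical, and note the caveat that container chunks after the first are not standalone files, so real implementations often rotate between two recorders to get independently playable clips.

```javascript
// Sliding-window buffer for "clip the last N seconds" (hypothetical names).
// Feed it from a MediaRecorder started with a timeslice, e.g. recorder.start(1000).
class RecentChunkBuffer {
  constructor(windowMs) {
    this.windowMs = windowMs;
    this.chunks = []; // { data, t } in arrival order
  }
  push(data, now = Date.now()) {
    this.chunks.push({ data, t: now });
    // Evict chunks that have fallen out of the time window.
    while (this.chunks.length && now - this.chunks[0].t > this.windowMs) {
      this.chunks.shift();
    }
  }
  snapshot() {
    return this.chunks.map((c) => c.data);
  }
}
```

In a browser you would wire it up with something like `recorder.start(1000)` and `recorder.ondataavailable = (e) => buf.push(e.data)`, then build a clip via `new Blob(buf.snapshot(), { type: recorder.mimeType })`.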
@embirico did you end up finding a solution? I have a similar use case.
@bc @embirico As Harald has noted, access to raw video is already supported. Therefore the focus for new APIs is on scenarios requiring higher performance than is currently achievable via Canvas or MediaRecorder. We have the following WebRTC-NV use cases for access to raw video (raw audio access is already provided via WebAudio Worklets):

So far, there are two specifications under development which could conceivably address these use cases. In the current Origin Trial, Insertable Streams provides access to encoded video, but only a minor API change would be required to add an insertion point prior to frame encoding (sender) or after frame decoding (receiver). Since raw video is much larger than encoded video, it is not clear that the existing Insertable Streams API would provide sufficient performance for the Machine Learning use case in particular. WebCodecs is still early in its incubation, but it may provide access to encoded bitstreams and raw (decoded) video.
Capturing microphone data to perform some processing and/or upload it over the network. For some applications, implementing WebRTC is more trouble than it's worth, and reusing an existing HTTP/WebSocket interface makes the most sense. The only way to obtain the raw data currently is with a ScriptProcessorNode on an AudioContext, which can capture input data. This definitely feels like the wrong way to do it. MediaRecorder is not sufficient, as the output formats supported by different browsers are too restrictive to be useful in most cases.
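On the upload path described above, the Float32 samples that a ScriptProcessorNode (or, nowadays, an AudioWorklet) hands you are commonly converted to 16-bit PCM before being sent over a WebSocket. A sketch, assuming mono samples in [-1, 1]; the function name is ours, not a platform API:

```javascript
// Convert Float32 samples in [-1, 1] (as delivered by a ScriptProcessorNode
// or AudioWorklet) to 16-bit signed PCM for network upload.
function floatTo16BitPCM(float32) {
  const out = new Int16Array(float32.length);
  for (let i = 0; i < float32.length; i++) {
    const s = Math.max(-1, Math.min(1, float32[i])); // clamp out-of-range samples
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;        // scale to Int16 range
  }
  return out;
}
```

In a browser this would be used roughly as `processor.onaudioprocess = (e) => ws.send(floatTo16BitPCM(e.inputBuffer.getChannelData(0)).buffer)`.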
Won't that still create a round-trip of generation loss? https://en.wikipedia.org/wiki/Generation_loss#Transcoding And even if it doesn't, that still only covers video, not audio... |
I think a lot of the problem with more modern web APIs is that they are designed high-level and don't interoperate with existing lower-level APIs. Actually, the problem isn't that these APIs are high-level; it's that they were often designed quite narrow-mindedly with a single use case in mind, take for example
What all of these have in common is that they were designed with at most a few use cases in mind, and therefore the specs often unintentionally prohibit more creative use cases entirely, creating the need for new APIs later on and adding to already-heavy browser engines. For this specific issue, it will not get solved with a TL;DR: we suck at making specifications, and that's why we have issues like this.
This particular case was solved a while ago by MediaStreamTrackProcessor, no?
Neat, didn't see that! Can't wait to start using it for my apps in... ~4 years? |
@BlobTheKat Yeah, that's assuming it even gets built in another browser before someone decides to kill the spec because there's only one implementation (Chromium). (e.g. #958 ... even if something is marked "at risk", you can bet that nobody is going to bother with it at that point.) At the moment, there's debate as to whether to even allow the track processor for audio. (w3c/mediacapture-transform#29) This is the nature of it. We benefit from all this platform development, but give up a lot of influence over it in the process. |
Safari's latest beta supports the track processor and video track generator, as standardized. Please give it a try.
It will still be a few years before that version and above have a large enough market share to make this a viable option, not to mention we're still waiting on Firefox...
Just for the comfort of anyone stumbling across this thread in the future, this is being tracked browser-side at https://bugzilla.mozilla.org/show_bug.cgi?id=1749532 |
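Also for future readers: the MediaStreamTrackProcessor pattern discussed above boils down to reading VideoFrames off a WHATWG ReadableStream. A hedged sketch follows; the loop itself depends only on the Streams API, and in a browser `readable` would come from `new MediaStreamTrackProcessor({ track }).readable` (the function name and callback are our own):

```javascript
// Drain raw frames from a track processor's ReadableStream.
async function consumeFrames(readable, onFrame) {
  const reader = readable.getReader();
  let count = 0;
  for (;;) {
    const { value: frame, done } = await reader.read();
    if (done) break;
    onFrame(frame);  // e.g. draw to a canvas or run ML inference
    frame.close?.(); // VideoFrames must be closed to release their resources
    count += 1;
  }
  return count;
}
```

Remember to close each frame: unclosed VideoFrames hold on to decoder/GPU resources and will eventually stall the stream.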
Please excuse me if this is the wrong place to propose this. It would be really useful to have access to the raw data coming out of the MediaStream so it could be processed before being shipped off to a remote peer or displayed to the user. The data could perhaps be emitted in an `ondata` event which would emit raw data similar to that of the MediaRecorder. Would anyone else find something along these lines useful?