I am so incredibly excited for WebRTC broadcasting. I wrote up some reasons in the Broadcast Box [0] README and the OBS PR [1].
Now that GStreamer, OBS and FFmpeg all have WHIP support, we finally have a ubiquitous protocol for video broadcasting across all platforms (mobile, web, embedded, broadcasting software, etc.).
I have been working on Open Source + WebRTC Broadcasting for years now. This is a huge milestone :)
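For the curious, publishing with the new muxer is essentially a one-liner. A sketch, assuming a build of FFmpeg recent enough to include the WHIP muxer (the endpoint URL is a placeholder):

```shell
# Push a local file to a WHIP endpoint as a live broadcast.
# "https://broadcast.example.com/api/whip" is a placeholder URL.
ffmpeg -re -i input.mp4 \
  -c:v libx264 -preset ultrafast -tune zerolatency -g 60 -b:v 2500k \
  -c:a libopus -b:a 96k \
  -f whip "https://broadcast.example.com/api/whip"
```

`-re` paces the file at real time, and the low-latency x264 flags plus Opus audio match what WebRTC endpoints generally expect.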
When are you coming back to the WebRTC space? There's lots more cool stuff you could be doing :) I really loved [0]; it's so cool that a user can access a server behind a firewall/NAT without setting up a VPN or having SSH constantly listening.
Working in the events broadcasting space, this opens up OBS to being a viable alternative to professional software like vMix. Especially the P2P support and support for broadcasting multiple scenes seem extremely valuable to have.
WebRTC is normally used in bidirectional use cases like video chat with text options, so I don't think it so odd that VLC doesn't outright support it. VLC does not support dialing into an Asterisk server, either.
Maybe I'm wrong, but in this case couldn't you create your own middleware server that consumes the WebRTC stream feed and then streams it out as a regular VLC-consumable feed? I'm guessing there will be some transcoding on the fly, but that should be trivial.
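As a sketch of that middleware idea: if your WebRTC gateway can re-expose the feed in a format ffmpeg can read (many gateways/SFUs can relay to RTMP, for example), ffmpeg can remux it without transcoding into MPEG-TS over UDP, which VLC plays natively. The gateway URL below is a placeholder:

```shell
# Remux a gateway-relayed feed to MPEG-TS over UDP for VLC.
# "rtmp://gateway.example/live/stream" is a hypothetical relay URL.
ffmpeg -i rtmp://gateway.example/live/stream \
  -c copy -f mpegts "udp://127.0.0.1:5004?pkt_size=1316"
# Then in VLC: Media -> Open Network Stream -> udp://@:5004
```

With `-c copy` there's no transcode at all, just a container change, so the overhead is minimal.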
Any plans to add multipath/failover-bonding support? e.g. mobile streaming unit connected with several 5G modems. Some people use a modified SRT to send H.265 over multiple links.
i was using vnc for remote dosbox gaming on the phone. now i can sink an infinite amount of time into building an input-handler webapp and using this + obs instead! thanks!
I've also been trying (and mostly failing) to build such a setup over the last few weeks. What are you thinking in terms of the overall building blocks to get this to work?
I've been struggling to get a proper low-latency screen+audio recording going (on macos) and streaming that over WebRTC. Either the audio gets de-sync, or the streaming latency is too high.
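One possible starting point on macOS is ffmpeg's avfoundation input. This is a sketch, not a known-good recipe: the device indices are examples (take them from the listing), and the `aresample=async=1` filter is one common way to fight audio drift:

```shell
# List capture devices first; screens and microphones get numeric indices.
ffmpeg -f avfoundation -list_devices true -i ""

# Capture screen index 1 + audio index 0 and push to a placeholder WHIP URL.
ffmpeg -f avfoundation -framerate 30 -capture_cursor 1 -i "1:0" \
  -c:v libx264 -preset ultrafast -tune zerolatency -g 60 \
  -c:a libopus -af aresample=async=1 \
  -f whip "https://example.com/api/whip"
```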
Not the SCTP parts! It's implementing WebRTC-HTTP Ingestion Protocol (WHIP), a commonly used low-latency HTTP protocol for talking to a gateway that talks actual WebRTC to peers over WebRTC's SCTP-based protocol. https://www.ietf.org/archive/id/draft-ietf-wish-whip-01.html
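The WHIP exchange itself is tiny: per the draft, ingestion is a single HTTP POST carrying an SDP offer, answered with an SDP answer. A sketch with curl (endpoint URL and token are placeholders):

```shell
# WHIP handshake sketch: POST an SDP offer, receive an SDP answer.
# "https://gateway.example/whip" and MY_TOKEN are placeholders.
curl -i -X POST "https://gateway.example/whip" \
  -H "Content-Type: application/sdp" \
  -H "Authorization: Bearer MY_TOKEN" \
  --data-binary @offer.sdp
# Expect: 201 Created, a Location header identifying the session
# (sending DELETE to it hangs up), and the SDP answer in the body.
```

All the hard parts (ICE, DTLS, RTP) happen after this handshake, which is why WHIP is so easy to bolt onto existing tools.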
I hope some day we can switch to a QUIC or WebTransport based p2p protocol, rather than use SCTP. QUIC does the SCTP job very well atop existing UDP, rather than adding such wild complexity & variance. One candidate is Media-over-QUIC (MoQ), but the browser doesn't have p2p QUIC & progress on that stalled out years ago. https://quic.video/ https://datatracker.ietf.org/group/moq/about/
WebRTC's actual complexity is very high. WHIP seems to be the standard path for most apps to integrate, but it does rely on an exterior service to actually do anything.
Hypothetically ffmpeg could be an ICE server for peer-connecting, do SDP for stream negotiation, possibly with a side of WHEP (the egress protocol) as well, and could do SCTP for the actual stream transfer, such that it could sort of act as a standalone peer rather than offload that work to a gateway service.
Worth noting that gstreamer & OBS are also WHIP-based and rely on an external gateway for their WebRTC support. There's no one clear way to do a bunch of the WebRTC layer cake (albeit WHEP is fairly popular at this point, I think?), so WHIP is a good way to support sending video without having to make a bunch of other decisions that may or may not jive with how someone wants to implement WebRTC in their system; those decisions all live in the WHIP gateway. It may be better to decouple and not try to do it all, which would require specific opinionated approaches.
I still don't understand any practical use cases. Can you give some examples? (I'm not being obtuse here I'm genuinely curious what this can enable now.)
Are there any popular/well-known WebRTC senders (or servers)? I'm pretty sure this is not for YouTube etc., right? So what would I watch through WebRTC?
LLMs really know how to use it incredibly well. You can ask them to do just about any video related task and they can give you an ffmpeg one liner to do it.
Wow, you are not wrong. I just asked Gemini "how can I use ffmpeg to apply a lower third image to a video?" and it gave a very detailed explanation of using an overlay filter. Have not tested its answer yet but on its face it looks legit.
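For reference, the overlay-filter approach looks like this (filenames are examples; the 40px margins are arbitrary):

```shell
# Place a lower-third PNG 40px from the left edge and 40px above the bottom.
# main_h/overlay_h are built-in overlay-filter variables for the two heights.
ffmpeg -i input.mp4 -i lower_third.png \
  -filter_complex "overlay=x=40:y=main_h-overlay_h-40" \
  -c:a copy output.mp4
```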
Gajim, the XMPP client, has been awaiting this for a long time! Their Audio/Video calling features fell into deprecation, and they've been patiently waiting for FFmpeg to make it much easier for them to add Audio/Video calling features back again.
Hopefully this doesn't make it more dangerous to keep ffmpeg on our systems. WebRTC security flaws are responsible for a lot of compromises. It's one of the first things I disable after installing a browser
You're right that the biggest reason people usually recommend disabling it is to prevent your IP from leaking when using a VPN (https://www.techradar.com/vpn/webrtc-leaks), but not having to worry about RCE or DoS is a nice bonus.
I'm not sure how much this will impact ffmpeg users. Considering that WebRTC has a bad track record in terms of security, though, I do worry a little that its inclusion in one more place on our systems could increase the attack surface.
I assume autoexec is referring to the plethora of WebRTC vulnerabilities which have affected browsers, messengers, and any other software which implements WebRTC for client use. Its full implementation is seemingly difficult to get right.
Of course, you're right that this implementation is very small. It's very different from a typical client implementation, so I don't share the same concerns. It's also only the WHIP portion of WebRTC, and anyone processing user input through ffmpeg is hopefully compiling a version that enables only the features they use, or at least passing "--disable-muxer=whip" and the like at configure time. Or, you know, you could specify everything explicitly at runtime so ffmpeg won't load features based on variable user input.
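A sketch of that build-time trimming, starting from nothing and enabling only what a given pipeline uses (the component list here is just an example; check `./configure --help` on your tree for exact names):

```shell
# Minimal-surface FFmpeg build: disable every component, re-enable a whitelist.
./configure --disable-everything \
  --enable-protocol=file \
  --enable-demuxer=mov --enable-muxer=mp4 \
  --enable-decoder=h264 --enable-decoder=aac \
  --enable-encoder=aac \
  --enable-libx264 --enable-encoder=libx264 --enable-gpl
```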
> Hopefully this doesn't make it more dangerous to keep ffmpeg on our systems.
ffmpeg has had so many issues in the past [1], it's best practice anyway to keep it well contained when dealing with user input. Create a docker image with nothing but ffmpeg and its dependencies installed and do a "docker run" for every transcode job you get. Or maybe add ClamAV, OpenOffice and ImageMagick to the image as well if you also need to create thumbnails of images and documents.
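A sketch of that per-job containment ("my-ffmpeg" is a hypothetical locally built image containing just ffmpeg and its deps):

```shell
# One throwaway container per transcode job: no network, read-only input,
# writable output only via an explicit mount.
docker run --rm --network none --read-only --tmpfs /tmp \
  -v "$PWD/in:/in:ro" -v "$PWD/out:/out" \
  my-ffmpeg \
  ffmpeg -i /in/upload.bin -c:v libx264 -c:a aac /out/result.mp4
```

Even if a malicious file achieves code execution inside the container, `--network none` and the read-only rootfs limit what it can reach.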
And personally, I'd go a step further and keep the servers that deal with user-generated files in any way beyond accepting and serving them in their own, heavily locked-down VLAN (or Security Group if you're on AWS).
That's not a criticism of any of the projects mentioned, by the way. Security is hard, especially when dealing with binary formats that have inherited a lot of sometimes questionably reverse-engineered garbage. It's wise to recognize this before getting fucked over like 4chan was.
OMG YEEEEES. I'm building web based remote control and if this allows me to do ffmpeg gdigrab, have that become a WebRTC stream and be consumed by a client without the ExpressJS gymnastics I do right now, I'll be over the moon.
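If the new muxer works for this, the whole pipeline could plausibly collapse to one command; a sketch, with the WHIP endpoint URL as a placeholder:

```shell
# Windows: grab the desktop with gdigrab and push it straight to WHIP.
# No audio (-an) since gdigrab only captures video.
ffmpeg -f gdigrab -framerate 30 -i desktop \
  -c:v libx264 -preset ultrafast -tune zerolatency -pix_fmt yuv420p \
  -an -f whip "https://example.com/api/whip"
```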
[0] https://github.com/Glimesh/broadcast-box?tab=readme-ov-file#...
[1] https://github.com/obsproject/obs-studio/pull/7926
[0] https://github.com/maxmcd/webtty
Are we entering an era where you don't need Amazon's budget to host something like Twitch?
[1] https://gstreamer.freedesktop.org/documentation/rswebrtc/web...
ICE (protocol for networking) supports this today. It just needs to get into the software.
Most 'WHIP Providers' also support DataChannel. But it isn't a standardized thing yet
Phoronix has a somewhat more informative page: https://www.phoronix.com/news/FFmpeg-Lands-WHIP-Muxer
If you know how to use it, ffmpeg is such an amazing standalone/plug-and-play piece of media software.
Especially with Simulcast it will make it SO cheap/easy for people.
I made https://github.com/Glimesh/broadcast-box in a hope to make self-hosting + WebRTC a lot easier :)
Now it is all walled garden/app-per-service.
This implementation is very small. I feel 100% confident we are giving users the best thing possible.
[1] https://ffmpeg.org/security.html