I previously wrote about my WebRTC SFU library:
Developing a WebRTC SFU library in Rust
I’ve recently added a recording feature to this project. This update brings new capabilities for capturing and processing RTP streams in a flexible, server-friendly manner.
The v0.5.0 release introduces the recording feature, and I have also published a documentation site where the feature is explained:
https://h3poteto.github.io/rheomesh//pages/04_recording/
Design Philosophy: Raw RTP Packet Forwarding
Rather than creating a monolithic function that handles everything from capture to file generation, I opted for a more modular approach. The recording functionality simply forwards raw RTP packets, leaving encoding and file creation to external tools like FFmpeg or GStreamer.
This design decision prioritizes flexibility, especially for server deployments where different use cases might require different encoding strategies. Both FFmpeg and GStreamer can generate video files from raw RTP packets, making this delegation approach both practical and powerful.
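Concretely, everything a downstream tool needs is carried in the RTP header itself. As a minimal illustration (this is not rheomesh code, just the fixed header layout from RFC 3550), the fields FFmpeg or GStreamer read from each forwarded packet can be extracted like this:

```rust
// Parse the fixed 12-byte RTP header (RFC 3550) from a raw packet.
// Returns (payload type, sequence number, timestamp, SSRC), or None
// if the buffer is too short to contain a header.
fn parse_rtp_header(packet: &[u8]) -> Option<(u8, u16, u32, u32)> {
    if packet.len() < 12 {
        return None;
    }
    let payload_type = packet[1] & 0x7f; // low 7 bits of the second byte
    let sequence = u16::from_be_bytes([packet[2], packet[3]]);
    let timestamp = u32::from_be_bytes([packet[4], packet[5], packet[6], packet[7]]);
    let ssrc = u32::from_be_bytes([packet[8], packet[9], packet[10], packet[11]]);
    Some((payload_type, sequence, timestamp, ssrc))
}

fn main() {
    // A fabricated header for illustration: version 2, payload type 103,
    // sequence 1, timestamp 1000, SSRC 42.
    let packet = [
        0x80, 0x67, 0x00, 0x01, // V=2, PT=103, seq=1
        0x00, 0x00, 0x03, 0xe8, // timestamp=1000
        0x00, 0x00, 0x00, 0x2a, // SSRC=42
    ];
    let (pt, seq, ts, ssrc) = parse_rtp_header(&packet).unwrap();
    println!("pt={pt} seq={seq} ts={ts} ssrc={ssrc}");
}
```

Because these fields plus the SDP description are enough for an external tool to reassemble the stream, the SFU never needs to decode media itself.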
SDP Generation
While the core functionality focuses on packet forwarding, I recognized that Session Description Protocol (SDP) information would be essential for most recording workflows. The library includes SDP generation capabilities, particularly important for FFmpeg-based recording setups.
You can generate SDP strings using the generate_sdp method on the RecordingTransport struct.
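An SDP description for this kind of setup is only a few lines of text. As a rough illustration (not necessarily the exact output of generate_sdp), an SDP describing an H.264 stream on port 30001 with payload type 103, matching the GStreamer example below, looks like this:

```
v=0
o=- 0 0 IN IP4 127.0.0.1
s=rheomesh recording
c=IN IP4 127.0.0.1
t=0 0
m=video 30001 RTP/AVP 103
a=rtpmap:103 H264/90000
```

Save this as stream.sdp and pass it to FFmpeg or FFplay with -f sdp.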
Recording Workflow Examples
Using FFmpeg
To start recording with FFmpeg, first generate the SDP file, then launch FFplay to wait for the incoming stream:
$ ffplay -protocol_whitelist file,rtp,udp -analyzeduration 10000000 -probesize 50000000 -f sdp -i stream.sdp
Then, initiate recording using the start_recording method. This begins raw RTP packet transmission, and you should see the video stream in FFplay.
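Putting the two calls together, the flow looks roughly like the following sketch. generate_sdp and start_recording are the documented method names, but how the RecordingTransport instance is obtained, whether the calls are async, and the error handling are all assumptions here, so check the documentation site for the real signatures:

```rust
// Sketch only — not the exact rheomesh API.
let sdp = transport.generate_sdp();  // SDP describing the outgoing RTP stream
std::fs::write("stream.sdp", sdp)?;  // hand this file to FFplay/FFmpeg
// ... start FFplay/FFmpeg against stream.sdp here ...
transport.start_recording();         // begin forwarding raw RTP packets
```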
To save the recording to a file instead of just viewing it:
$ ffmpeg -protocol_whitelist file,rtp,udp -i stream.sdp -c copy output.mkv
This command records the stream directly to output.mkv.
Using GStreamer
For GStreamer users, you’ll need to parse the SDP information and construct an appropriate pipeline. Here’s an example command for H.264 streams:
$ gst-launch-1.0 udpsrc port=30001 ! \
    'application/x-rtp,payload=103,encoding-name=H264' ! \
    rtph264depay ! \
    h264parse ! \
    avdec_h264 ! \
    videoconvert ! \
    autovideosink
The specific parameters will vary depending on your encoding format. For detailed information about RTP and parser elements, consult the official GStreamer documentation.
Make sure your pipeline parameters match the encoding specified in the generated SDP.
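To save the stream to a file instead of displaying it, the decoding-and-display tail of the pipeline can be swapped for a muxer and a filesink. A sketch, assuming the same port and payload type as above (the -e flag makes GStreamer send EOS on Ctrl-C so the muxer finalizes the file cleanly):

```
$ gst-launch-1.0 -e udpsrc port=30001 ! \
    'application/x-rtp,payload=103,encoding-name=H264' ! \
    rtph264depay ! \
    h264parse ! \
    matroskamux ! \
    filesink location=output.mkv
```

Note that no decoder is needed here: the H.264 stream is depayloaded and muxed as-is, mirroring FFmpeg's -c copy behavior.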
Try It Yourself
The project includes practical examples to help you get started. Check out the examples section in the documentation.
The camera example demonstrates the recording functionality in action. It displays the generated SDP on screen, which you can copy and use with FFmpeg or GStreamer to receive and process the stream.
What’s Next
Looking ahead, I plan to focus on improving the relay functionality to make the library even more robust and feature-complete.