How to stream another camera's video

There isn’t a plug-and-play method of adding an extra camera stream, but it’s also not particularly difficult if you’ve done some programming before, especially if you’ve worked with gstreamer before. The following assumes you’re using a non-networked camera (e.g. a USB camera, or one on the CSI bus like the RPi camera).

Getting an H264 Stream

Blue Robotics USB cameras already provide an h264-encoded stream as an access option, so if you’re using one of those (or a similar camera) you can skip this step. RPi cameras are a bit of an outlier: the camera itself doesn’t send h264, but the RPi provides an h264 interface to it so it can be used as though it does. The only thing to note is that the RPi has to work a bit harder when using the RPi camera, which might slow some things down, although for the most part it shouldn’t be an issue because most of those extra operations happen on the GPU.
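As a rough sketch of what that looks like in practice, one common way to get the RPi camera's hardware-encoded h264 into a gstreamer pipeline is to pipe raspivid's output into gst-launch. The resolution, bitrate, host, and port below are placeholder values, not requirements:

```
# Hardware-encoded h264 from the RPi camera, piped into a gstreamer UDP stream.
# 192.168.2.1 is the usual topside address in a companion setup; adjust to suit.
raspivid -n -t 0 -w 1280 -h 720 -fps 30 -b 2000000 -o - | \
gst-launch-1.0 fdsrc ! h264parse ! \
    rtph264pay config-interval=1 pt=96 ! \
    udpsink host=192.168.2.1 port=5600
```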

All other cameras will need to have their stream converted into an h264 stream by gstreamer, which is slow and not recommended - if possible it's better to use a camera that provides h264 natively.
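If you're stuck with a raw-only camera, a software-encoding pipeline looks something like the sketch below. x264enc is gstreamer's software h264 encoder; the device path, resolution, and bitrate here are assumptions you'd replace with your own:

```
# Software-encode a raw (non-h264) camera with x264enc - expect high CPU usage.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
    video/x-raw,width=640,height=480,framerate=30/1 ! \
    videoconvert ! x264enc tune=zerolatency bitrate=1000 ! \
    rtph264pay config-interval=1 pt=96 ! \
    udpsink host=192.168.2.1 port=5600
```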

Forwarding the Stream over UDP (Gstreamer)

Getting the Device ID

I’d suggest using gst-inspect-1.0 or gst-device-monitor-1.0 to find the device ids of the cameras you’ve connected. These let you tell gstreamer which device you’re trying to connect to.
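For example (the Video/Source filter limits the output to cameras; v4l2-ctl, from the Linux v4l-utils package, is an alternative, and /dev/video2 is just a placeholder):

```
# List video capture devices and the formats/caps each one offers
gst-device-monitor-1.0 Video/Source

# Linux alternative: list devices, then the formats of a specific one
v4l2-ctl --list-devices
v4l2-ctl --device=/dev/video2 --list-formats-ext
```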

Note that each camera often appears as a pair of ids, and audio is often set up as a video device when available. To get the correct ids:

  1. Plug in any cameras/audio devices that will be connected during operation
  2. Turn on the ROV/RPi
  3. Check the available ids and write down the ones with an h264 option
  4. Once the pipeline is set up (next section), create your parameter files to specify the relevant id, and swap the ids as required to get the cameras aligned to the desired ports

Setting up the pipeline

Existing pipeline

It’s relevant here to look at how the Blue Robotics companion computer sets up the existing pipeline.

The file .companion.rc is run on startup. Video is started on line 18 (if relevant, audio is started on line 22). Line 18 starts a ‘screen session’ (basically opens a new terminal and gives it a name so you can access it again later), and runs a script called streamer.py, which starts the video stream and monitors it - restarting the stream at most 5 seconds after it fails. Note that stream failures are rare, so most of the time this is just chilling and making sure the stream is still available.
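I haven't copied the exact line here, but the general pattern is a detached, named screen session that runs the monitoring script - roughly like this sketch (the paths are illustrative, not taken from the actual file):

```
# Illustrative only: '-dm' starts the session detached, '-S video' names it
# so you can re-attach later with 'screen -r video'
screen -dm -S video python /home/pi/companion/scripts/streamer.py
```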

To actually start the video stream, streamer.py runs a bash script, start_video.sh, using either parameters passed in as command-line arguments or parameters read from the file vidformat.param. start_video.sh is basically set up to (see the pipeline sketch after this list)

  1. parse the input parameters
  2. check if those parameters are for a valid h264-encoded camera
  3. if not, try to find a valid h264-encoded camera
  4. start a gstreamer stream with the specified camera and parameters
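The end result of those steps is a gst-launch pipeline along the lines of the following sketch - the device id, caps, host, and port are illustrative, with the real values coming from your parameters:

```
# Take the camera's already-encoded h264, packetise it as RTP, send it over UDP
gst-launch-1.0 v4l2src device=/dev/video2 ! \
    video/x-h264,width=1920,height=1080,framerate=30/1 ! \
    h264parse ! queue ! \
    rtph264pay config-interval=1 pt=96 ! \
    udpsink host=192.168.2.1 port=5600
```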

Your pipeline

To set up your own pipeline, you can either piggy-back off the existing scripts with modifications to handle your extra camera(s), or copy the relevant components and run them from .companion.rc. The most important factor is making sure the different pipelines can’t get confused with each other: you’ll want a separate parameter file for each camera, and you’ll need to make sure each stream goes to a unique port so they’re not competing for the same receiver on the other end.
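Conceptually, the per-camera setup amounts to two copies of the earlier pipeline that differ only in device id and destination port - something like this (device paths are placeholders):

```
# Front camera -> port 5600
gst-launch-1.0 v4l2src device=/dev/video2 ! video/x-h264 ! h264parse ! \
    rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.2.1 port=5600 &

# Top camera -> port 5601 (a unique port, so the streams can't collide)
gst-launch-1.0 v4l2src device=/dev/video4 ! video/x-h264 ! h264parse ! \
    rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.2.1 port=5601 &
```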

This is one way of doing the piggy-back approach. It replaces the file gstreamer2.param from the home directory (/home/pi/) with gstreamer2_BRF.param for the Front camera and gstreamer2_BRT.param for the Top camera (both cameras were Blue Robotics (BR) cameras). It has the downside that if one stream fails, both/all of them are restarted, but it was simpler to program at the time than properly handling individual streams in a scalable manner (although if you implement that feature it’d be great to make a pull-request to the bluerobotics companion repo). Given how rarely streams fail, that likely isn’t much of a problem.

EDIT: there’s now another approach covered here, which keeps the cameras separated, and makes it easier to add additional streams.

Receiving the stream

QGroundControl is currently only set up to receive a single video stream, so if you're adding one or more extra cameras, the following are some possible ways of receiving the incoming UDP stream(s):

  • obs-gstreamer
    general instructions
    1. install Open Broadcast Studio (OBS)
    2. install gstreamer
      • read the prebuilt section of the obs-gstreamer README first - it provides some useful, simpler install links
      • read the normal gstreamer install instructions for extra requirements, like updating the PATH environment variable
    3. install the obs-gstreamer plugin
      1. download the latest release (.zip) from the releases
      2. move the plugin file to the obs-studio plugins folder, e.g. C:\Program Files\obs-studio\obs-plugins\64bit\
    4. Open OBS, add a Gstreamer Source, and use `udpsrc port=5600 ! application/x-rtp, payload=96 ! rtph264depay ! h264parse ! avdec_h264 ! video.` for an h264 stream on port 5600 (see here for an option with audio)
  • OpenCV with ffmpeg/gstreamer (python, C++, Java, etc.)
  • VideoLan (VLC), possibly with a VLC Mosaic
  • gstreamer command-line interface (see the example below)
  • ffmpeg command-line interface
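As an example of the gstreamer command-line option, this receiving pipeline matches the sending pipelines sketched earlier (port 5600 assumed; run one instance per stream/port):

```
# Receive the RTP/h264 stream on port 5600 and display it
gst-launch-1.0 udpsrc port=5600 ! \
    application/x-rtp,payload=96 ! \
    rtph264depay ! h264parse ! avdec_h264 ! \
    videoconvert ! autovideosink sync=false
```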