OpenCV Python with Gstreamer Backend

Hi everyone,

I’ve been playing around with getting gstreamer functionality into opencv, and decided I’d prefer to use gstreamer as an opencv VideoCapture/VideoWriter backend rather than using the python gstreamer library and creating the frames manually (which is the approach taken in the ardusub docs).

By default the opencv-python library doesn’t come with the gstreamer backend integrated, so it needs to be built from source. Thankfully the opencv-python GitHub repo does most of the heavy lifting, and also provides a simple way to specify modifications while keeping the defaults for everything else.

Gstreamer Installation

To start with, gstreamer will need to be installed. There are instructions on the gstreamer website, although on Mac you may want to use homebrew instead (e.g. brew install gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly gst-libav).

OpenCV Compilation

If you’ve already got opencv-python installed you’ll want to either uninstall it (pip uninstall opencv-python) or create a virtual environment for this version. Next, open a terminal or command prompt and navigate to where you want to put the opencv source code, then proceed as below

# <navigate to where you want the opencv-python repo to be stored>
git clone --recursive https://github.com/skvark/opencv-python.git
cd opencv-python
export CMAKE_ARGS="-DWITH_GSTREAMER=ON"
pip install --upgrade pip wheel
# this is the build step - the repo estimates it can take from 5
#   mins to > 2 hrs depending on your computer hardware
pip wheel . --verbose
# note: the wheel may be generated in the dist/ directory,
#   in which case cd into dist/ before installing
pip install opencv_python*.whl

The build probably took a while, but that’s it!
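Once the install finishes you can confirm the gstreamer backend actually made it into the build by looking for a “GStreamer: YES” line in the output of cv2.getBuildInformation(). The snippet below shows that check run against a sample of the build-information dump (the has_gstreamer helper name is my own, just for illustration); in a real session you’d pass cv2.getBuildInformation() directly:

```python
import re

def has_gstreamer(build_info: str) -> bool:
    """Return True if an OpenCV build-information dump reports GStreamer support."""
    return re.search(r'GStreamer:\s*YES', build_info) is not None

# Mimics the relevant section of a cv2.getBuildInformation() dump.
sample_build_info = """
  Video I/O:
    FFMPEG:                      YES
    GStreamer:                   YES (1.22.0)
"""

print(has_gstreamer(sample_build_info))  # True
```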

Usage

Now that it’s installed, you can use cv2.VideoCapture and cv2.VideoWriter by passing in a gstreamer pipeline string and specifying the gstreamer backend (e.g. cv2.VideoCapture('videotestsrc ! appsink', cv2.CAP_GSTREAMER)). Note that VideoCapture strings must end with the appsink element, and VideoWriter strings must start with the appsrc element, so that frames can be passed between the pipeline and your application.
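As a quick sanity check on pipeline strings, the small helpers below (the names are my own, purely illustrative) build capture and writer pipelines with appsink and appsrc in the positions the backend requires:

```python
def capture_pipeline(*elements: str) -> str:
    """Join gstreamer elements and terminate with appsink, as VideoCapture requires."""
    return ' ! '.join([*elements, 'appsink'])

def writer_pipeline(*elements: str) -> str:
    """Start with appsrc, as VideoWriter requires, then join the remaining elements."""
    return ' ! '.join(['appsrc', *elements])

read_cmd = capture_pipeline('videotestsrc', 'videoconvert')
write_cmd = writer_pipeline('videoconvert', 'autovideosink')
print(read_cmd)   # videotestsrc ! videoconvert ! appsink
print(write_cmd)  # appsrc ! videoconvert ! autovideosink
```

The results can then be passed to cv2.VideoCapture(read_cmd, cv2.CAP_GSTREAMER) or cv2.VideoWriter(write_cmd, cv2.CAP_GSTREAMER, ...) as described above.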

Personally I prefer to use the wrapper functionality from my library pythonic-cv, which automatically uses threading for speed, context managers for cleanup, and allows iteration over the video stream. It also makes processing convenient. Here’s an example class for a BlueROV camera:

import cv2
from pcv.vidIO import LockedCamera

class BlueROVCamera(LockedCamera):
    ''' A camera class handling h264-encoded UDP stream. '''
    command = ('udpsrc port={} ! '
               'application/x-rtp, payload=96 ! '
               '{}rtph264depay ! '
               'decodebin ! videoconvert ! '
               'appsink')
    def __init__(self, port=5600, buffer=True, **kwargs):
        '''
        'port' is the UDP port of the stream. 
        'buffer' is a boolean specifying whether to buffer rtp packets
            -> reduces jitter, but adds latency
        '''
        jitter_buffer = 'rtpjitterbuffer ! ' if buffer else ''
        super().__init__(self.command.format(port, jitter_buffer),
                         cv2.CAP_GSTREAMER, **kwargs)

if __name__ == '__main__':
    # plain stream to measure framerate
    with BlueROVCamera() as cam:
        print(f'fps of pure stream = {cam.measure_framerate(5):.3f}')

    # do some processing (display all colour channels and conversions)
    from pcv.process import channel_options

    print("press 'q' to close stream")
    with BlueROVCamera(process=lambda img: channel_options(img)) as cam:
        print(f'fps of processed stream = {cam.measure_framerate(5):.3f}')
        cam.stream()

EDIT: added frame-rate measurement example
EDIT2: adjusted command to a more reliable one, made buffering optional


Hey, is there any simple example code you have for cv2.VideoCapture('videotestsrc ! appsink', cv2.CAP_GSTREAMER)? I just want to receive a stream. Also, what do I put instead of videotestsrc ! appsink if I’m using the blue os beta companion with h.264 compression?

The appropriate gstreamer pipeline/command depends on the stream being received. The main difference from a normal gstreamer pipeline is that feeding it into an application uses appsink instead of other sinks like autovideosink.

The command/pipeline for that should be the same as the one in my BlueROVCamera example, e.g.

import cv2

port = 5600
pipeline = ('udpsrc port={} ! '
            'application/x-rtp, payload=96 ! '
            'rtpjitterbuffer ! rtph264depay ! '
            'decodebin ! videoconvert ! '
            'appsink').format(port)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

If you want to use pure OpenCV rather than pythonic-cv you can look at one of the many tutorials available online for streaming video (e.g. this one from the OpenCV documentation), although there’s a bit more to manage, and the performance may not be as good.

Thank you so much! Isn’t it supposed to be rtph264"delay" and not depay?

Nope. rtph264depay is an element which extracts (de-payloads) a h264-encoded stream from RTP packets. It’s the opposite of rtph264pay, which is part of the encoding pipeline 🙂
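To make the pay/depay symmetry concrete, here’s a sketch of a matching sender/receiver pipeline pair (untested on your setup, and the x264enc sender side is just an example): the sender payloads the encoded stream into RTP packets with rtph264pay, and the receiver extracts it again with rtph264depay.

```python
port = 5600

# sender: generate test video, encode to h264, payload into RTP packets, send over UDP
send_pipeline = ('videotestsrc ! videoconvert ! x264enc tune=zerolatency ! '
                 'rtph264pay ! udpsink host=127.0.0.1 port={}').format(port)

# receiver: take RTP packets from UDP, de-payload back to h264, decode for the app
recv_pipeline = ('udpsrc port={} ! application/x-rtp, payload=96 ! '
                 'rtph264depay ! decodebin ! videoconvert ! appsink').format(port)
```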


oh no

Hi
I attempted to run your script but came up a bit short
~/opencv-python $ pip wheel . --verbose came up with the error “pip subprocess to install build dependencies did not run successfully” with exit code 1 (pyproject.toml)
note: This error originates from a subprocess, and is likely not a problem with pip.
So I carried on to the next step

~/opencv-python $ pip install opencv_python*.whl

The error I received is:
Defaulting to user installation because normal site-packages is not writeable
WARNING: Requirement 'opencv-python.whl' looks like a filename, but the file does not exist
Also the dist directory does not exist either.

I cannot proceed any further. Any ideas?