# Suffering from latency and low resolution with Camera

Hello, I connected my camera to a Raspberry Pi 4 with BlueOS in the latest beta version, camera model: e-CAM130_CURB.
This camera outputs raw UYVY video, so I used this pipeline to stream it:

```shell
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY,width=1920,height=1080 ! videoconvert ! x264enc ! queue ! rtph264pay ! udpsink host=192.168.2.1 port=5600
```


I had around 2 seconds of latency. Then I tried this:

```shell
gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=UYVY,width=1920,height=1080 ! videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast ! queue ! rtph264pay ! udpsink host=192.168.2.1 port=5600
```


The latency was solved but I still have very bad resolution.

Maybe try using the omx encoder.

I tried to install the omx package in different ways, but I always get `WARNING: erroneous pipeline: no element "omxh264enc"`.
When I run `gst-inspect-1.0 | grep omx` I get:

```
libav:  avenc_h264_omx: libav OpenMAX IL H.264 video encoder
libav:  avenc_mpeg4_omx: libav OpenMAX IL MPEG-4 video encoder
```

But no omxh264enc.

1. Latency is a broad term, so how is latency being measured here?
2. Latency, depending on what we are measuring (especially if it's glass-to-glass latency), heavily depends on the receiver, so could you specify the receiver's GStreamer pipeline?

### Latencies

If you are willing to investigate the pipeline latency, I suggest using GStreamer's tracer, which can measure the latency of each element in a given pipeline. Use this information to find the bottlenecks, then read each encoder's/decoder's documentation to see what you can adjust. Hints:

• queues create thread boundaries, which allows things to run in parallel. The cost is how memory is exchanged across those boundaries. Our base recipe has been to always use queues with no buffer limit, e.g.: `queue max-size-buffers=0`
• on the receiver side, I usually use `avdec_h264 discard-corrupted-frames=true`
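Putting those hints together, a receiver-side pipeline might look something like this (a sketch: it assumes the stream arrives on UDP port 5600 as in your sender pipeline, and `autovideosink` stands in for whatever display sink your setup uses):

```shell
# Hypothetical receiver: depayload RTP, decode H.264, display.
# discard-corrupted-frames drops broken frames instead of showing artifacts.
gst-launch-1.0 udpsrc port=5600 \
  ! application/x-rtp,encoding-name=H264,payload=96 \
  ! rtph264depay \
  ! queue max-size-buffers=0 \
  ! avdec_h264 discard-corrupted-frames=true \
  ! videoconvert \
  ! autovideosink sync=false
```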

Here is the GStreamer documentation about its tracing tooling, it has an example printing latencies for each element.
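For reference, a tracer invocation might look like the sketch below (using `videotestsrc` so it doesn't depend on your camera; per-element latencies show up in the `GST_TRACER` debug output):

```shell
# Sketch: enable the latency tracer on a hardware-independent test pipeline.
# flags=pipeline+element+reported logs per-element and whole-pipeline latency.
GST_TRACERS="latency(flags=pipeline+element+reported)" \
GST_DEBUG=GST_TRACER:7 \
gst-launch-1.0 videotestsrc num-buffers=100 ! x264enc ! fakesink 2> traces.log
```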

### Quality

Ultimately it depends on the encoder and its settings, and on limited hardware we will always have to find a balance between achieving lower latency and achieving higher quality. Given your second pipeline, I would blame the speed-preset=ultrafast.
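If the ultrafast preset is what is hurting quality, it may be worth trading back a little CPU for it. A sketch (the preset name and bitrate value here are illustrative, not tested on your hardware):

```shell
# Sketch: keep tune=zerolatency for low delay, but use a slightly slower
# preset and an explicit bitrate (kbit/s) to trade CPU time for quality.
gst-launch-1.0 v4l2src device=/dev/video0 \
  ! video/x-raw,format=UYVY,width=1920,height=1080 \
  ! videoconvert \
  ! x264enc tune=zerolatency speed-preset=superfast bitrate=4000 \
  ! queue max-size-buffers=0 \
  ! rtph264pay \
  ! udpsink host=192.168.2.1 port=5600
```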

The current BlueOS beta ships GStreamer 1.20.2 compiled with omx support, so no special steps should be needed to use omxh264enc.

Could you try clearing GStreamer's cache and rerunning the inspection tool, so it searches for the plugins again?

```shell
rm -rf ~/.cache/gstreamer-1.0
gst-inspect-1.0
```

If omxh264enc still doesn't show up, `gst-inspect-1.0 --print-blacklist` could give us some clues.

Thanks


Hello, thanks for the response.
I was “measuring” latency simply by passing my hand in front of the camera and waiting until it shows on the screen. (It’s an appreciable amount of time.)
The camera is connected via FPC cable to a Raspberry Pi 4, which is connected via Ethernet cable to my computer.
I do not have a receiver pipeline because I’m receiving the stream with QGroundControl. Maybe there is a configuration in QGroundControl for changing the way it receives video? Or maybe I should try receiving the video with a custom GStreamer pipeline?

I tried the following command to get a proper measurement of the latency:

```shell
GST_TRACERS="latency(flags=pipeline+element+reported)" GST_DEBUG=GST_TRACER:7 \
gst-launch-1.0 v4l2src device=/dev/video0 ! queue max-size-buffers=0 \
  ! video/x-raw,format=UYVY,width=1920,height=1080 ! queue max-size-buffers=0 \
  ! videoconvert ! queue max-size-buffers=0 \
  ! x264enc ! queue max-size-buffers=0 ! rtph264pay \
  ! udpsink host=192.168.2.1 port=5600
```



I will attach a .txt file with the latency traces I got, but the main numbers were:

• videoconvert: time=50000000
• x264enc: around 4000000000
• v4l2src: around 4000000000
• rtph264pay: around 1000000

I’m not sure what the time units are, but x264enc appears to be the most problematic.
latency.txt (56.7 KB)

Regarding the omx package, I tried to clear GStreamer’s cache with no results. When I ran `gst-inspect-1.0 --print-blacklist`, though, I got the following output:
`Blacklisted files: libgstomx.so  Total count: 1 blacklisted file`
Also, I’m working with BlueOS beta.14 and I had to manually install GStreamer and its packages.

Edit: I was working with the command red-pill, without entirely knowing what it does. Now, from root@blueos, the omx package is recognized. But when I try to stream there’s an error:

• The latencies are given in nanoseconds: 4000000000 is 4 seconds.
• Both v4l2src and x264enc are very high. For v4l2src, try to specify a more direct io-mode, like 4 or 5.
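As a sketch, the io-mode property goes directly on v4l2src (4 = dmabuf, 5 = dmabuf-import; whether your camera driver supports them is an assumption you'd need to verify):

```shell
# Sketch: ask v4l2src for dmabuf buffers (io-mode=4) to avoid extra
# memory copies between the capture driver and the rest of the pipeline.
gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 \
  ! video/x-raw,format=UYVY,width=1920,height=1080 \
  ! videoconvert \
  ! x264enc tune=zerolatency \
  ! queue max-size-buffers=0 \
  ! rtph264pay \
  ! udpsink host=192.168.2.1 port=5600
```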

Sorry for not stating it clearly, but yes, our GStreamer with omx is only available inside the container; when you ran red-pill, you went out of the container. Note that any changes in the container are temporary, so if you have a custom script to set up this pipeline, you should store it outside the container and place it in some path accessible from within it (see `echo $PATH`).

I imagine omxh264enc is not working because your pipeline might not be correctly filtering the needed capabilities: try specifying the framerate, and run gst-launch-1.0 with -v. Also, it may be a good idea to prepend the command with the debug flag GST_DEBUG=3 so we can see what’s going on. You might need to add some filter after it, like `'video/x-h264,level=(string)4.2'` (quotes included).
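Putting those suggestions together, a debug run might look like this sketch (the framerate and level values are guesses for your camera, adjust as needed):

```shell
# Sketch: explicit caps (including framerate), a level filter after the
# encoder, verbose element output (-v), and GST_DEBUG=3 for warnings.
GST_DEBUG=3 gst-launch-1.0 -v v4l2src device=/dev/video0 \
  ! 'video/x-raw,format=UYVY,width=1920,height=1080,framerate=30/1' \
  ! videoconvert \
  ! omxh264enc \
  ! 'video/x-h264,level=(string)4.2' \
  ! queue max-size-buffers=0 \
  ! rtph264pay \
  ! udpsink host=192.168.2.1 port=5600
```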

Thanks

I got the omx working, and it drastically improved the latency. Now v4l2src and omxh264enc have 80,000,000 and 40,000,000 ns of latency, an improvement of well over an order of magnitude.
The io-mode=4 didn’t make a difference, and io-mode=5 gives me an error.
With the -v I have this information:

With GST_DEBUG=3 I have this warning all the time:

Maybe I can change some parameters of omxh264enc to reduce the latency even more?

You can try changing some of the parameters (they’re listed in Willian’s first comment). In general, reducing quality and removing filters will reduce the amount of processing required, which reduces the latency, but that can also make it a less clear/smooth stream - it’s a tradeoff.

Hmm, now that I’ve read more about it, 5 seems to be for when there is another element in the pipeline that allocates the buffer and you want v4l2src to use it instead of allocating its own; our case would be 4. Check your CPU usage (using htop) when comparing them.

I’d try playing with control-rate and target-bitrate before trying anything like quant/frame settings. You can also add a filter to specify the h264 profile after the encoder, like ... omxh264enc ! video/x-h264,profile=baseline ! ..., which might also help you. I think the only options are constrained-baseline, baseline, main, and high.
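A sketch of that combination (the control-rate value and target bitrate here are illustrative; check `gst-inspect-1.0 omxh264enc` for the exact property names and enum values on your build):

```shell
# Sketch: constant bitrate mode with an explicit target (bits per second),
# plus a baseline-profile caps filter after the encoder.
gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 \
  ! 'video/x-raw,format=UYVY,width=1920,height=1080,framerate=30/1' \
  ! videoconvert \
  ! omxh264enc control-rate=constant target-bitrate=4000000 \
  ! 'video/x-h264,profile=baseline' \
  ! queue max-size-buffers=0 \
  ! rtph264pay \
  ! udpsink host=192.168.2.1 port=5600
```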
