Live Camera Processing

@EliotBR , you provided some sample Python code in this post for improving images in poor visibility. Is this something that can be done live on the BR2's camera feed? Thanks!

Hi @GavXYZ,

I’ve moved your comment to its own post since it’s on its own topic :slight_smile:

In short, it could be, but performance may not be great. Getting the result to display in QGroundControl would require re-encoding the stream, which adds extra processing requirements and latency.

NOTE: For some context, there's some information on camera features (including processing types) under the 'camera' toggle in this post :slight_smile:

A camera stream generally involves:

raw stream -> encoding -> transport -> decoding -> display

For best results, processing should be applied as "pre-processing" to the raw stream (and on the camera itself, if possible), at which point it's working with the best available data. That's not possible with the BR camera, because it encodes on the camera and doesn't support a raw output or custom pre-processing (although on-camera processing is how the built-in brightness and contrast control adjustments are handled).
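
As a rough illustration, those built-in adjustments are exposed as standard UVC/V4L2 controls, so they can be changed from the onboard computer without touching the encoded stream (the device path and values here are assumptions):

```python
import subprocess

# The camera's built-in adjustments are standard UVC controls, applied on
# the camera before encoding. Device path and values are assumptions -
# list what's actually available with `v4l2-ctl -d /dev/video0 --list-ctrls`.
subprocess.run(
    ["v4l2-ctl", "-d", "/dev/video0",
     "--set-ctrl", "brightness=10",
     "--set-ctrl", "contrast=20"],
    check=True,
)
```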

As some examples:

  • An OAK Camera is made to do custom pre-processing on board, prior to encoding (a rough sketch of this follows the diagram below), although the pre-assembled options have quite small physical pixels, so may not have amazing low-light performance
  • The DWE exploreHD uses a similar sensor to the BR camera, but has a processing chip that applies visibility and colour adjustments before the stream gets encoded

The steps then are:

raw stream -> pre-processing -> encoding -> transport -> decoding -> display
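
For the OAK case specifically, here's a minimal sketch of such an on-device pipeline using the depthai library (API details vary between depthai versions, and the custom processing step is only indicated in comments, so treat this as illustrative rather than definitive):

```python
import depthai as dai

# Minimal on-device OAK pipeline sketch: frames go from the camera through
# on-board processing into the encoder, so only H264 ever leaves the device.
# Custom processing nodes (e.g. ImageManip, or a NeuralNetwork running a
# visibility-enhancement model) would be linked between camera and encoder.
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
enc = pipeline.create(dai.node.VideoEncoder)
enc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.H264_MAIN)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("h264")

cam.video.link(enc.input)       # raw frames stay on the device
enc.bitstream.link(xout.input)  # only the encoded bitstream goes to the host

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("h264", maxSize=30, blocking=True)
    with open("stream.h264", "wb") as f:  # playable with e.g. ffplay
        while True:
            q.get().getData().tofile(f)
```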

Next best (from a latency perspective) would be processing the stream just before it gets displayed, since by then it's already been decoded into an array of pixel values. The Python code could do this (see the sketch below), but then it would also need to handle the displaying. A better approach would be to have the software that's already decoding and displaying the video also do some processing (e.g. QGroundControl could potentially be modified to apply at least similar processing using Qt shader effects).
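
As a rough sketch of that "process right before display" option in Python, assuming an OpenCV build with GStreamer support, the default UDP H264 stream on port 5600, and CLAHE as a stand-in for whatever visibility processing is actually wanted:

```python
import cv2

# Receive and decode the vehicle's H264 stream (port 5600 is the usual
# default). Note this script takes the stream over from QGC, so it has
# to do its own displaying.
PIPELINE = (
    "udpsrc port=5600 ! application/x-rtp,payload=96 ! rtph264depay "
    "! avdec_h264 ! videoconvert ! video/x-raw,format=BGR "
    "! appsink drop=true sync=false"
)
cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)

# CLAHE here is just a placeholder for the desired processing.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Equalise contrast on the lightness channel only, to avoid colour shifts.
    light, a, b = cv2.split(cv2.cvtColor(frame, cv2.COLOR_BGR2Lab))
    frame = cv2.cvtColor(cv2.merge((clahe.apply(light), a, b)), cv2.COLOR_Lab2BGR)
    cv2.imshow("processed", frame)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```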

Processing in QGC may cause issues with recording, since generally the incoming stream can be recorded directly, separately from and before any decoding that's done for displaying.

raw stream -> encoding -> transport -> decoding -> processing -> display

It is also technically possible to use Python to decode, process, then re-encode and send to QGroundControl, but that adds extra latency and processing load while also losing quality through the additional encoding step, so it's far from ideal.

raw stream -> encoding -> transport -> decoding -> processing -> encoding -> transport -> decoding -> display
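
A rough sketch of that full round trip, again assuming a GStreamer-enabled OpenCV build (the output host and port are arbitrary assumptions that QGroundControl would need to be manually configured to listen to, and the frame size must match the incoming stream):

```python
import cv2

WIDTH, HEIGHT, FPS = 1920, 1080, 30  # must match the incoming stream

# Receive and decode the vehicle's H264 stream (5600 is the usual default).
recv = cv2.VideoCapture(
    "udpsrc port=5600 ! application/x-rtp,payload=96 ! rtph264depay "
    "! avdec_h264 ! videoconvert ! video/x-raw,format=BGR "
    "! appsink drop=true sync=false",
    cv2.CAP_GSTREAMER,
)

# Re-encode and send to a port QGC has been configured to listen on
# (5601 is an arbitrary choice here, not a default).
send = cv2.VideoWriter(
    "appsrc ! videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast "
    "! rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=5601",
    cv2.CAP_GSTREAMER, 0, FPS, (WIDTH, HEIGHT),
)

while recv.isOpened():
    ok, frame = recv.read()
    if not ok:
        break
    # ... visibility processing on `frame` goes here ...
    send.write(frame)  # second, lossy encode - this is where quality is lost

recv.release()
send.release()
```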

It would be slightly better to get a YUY2 stream from the camera and run the processing and H264 encoding on the onboard computer, but that may cause latency and overheating issues, still involves some extra encoding, and uses extra USB port bandwidth.

raw stream -> (light) encoding -> processing -> (full) encoding -> transport -> decoding -> display
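
As a sketch of that arrangement, running on the onboard computer (the device index and resolution are assumptions, and 192.168.2.1 is the surface computer's address in a standard BlueROV2 network setup):

```python
import cv2

WIDTH, HEIGHT, FPS = 1280, 720, 30  # YUY2 modes are USB-bandwidth-limited,
                                    # so lower than the camera's H264 modes

# Capture YUY2 (the "light" encoding) directly from the camera over USB.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)  # device index is an assumption
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUYV"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, WIDTH)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, HEIGHT)

# Do the "full" H264 encode on the onboard computer and send it topside.
# x264enc is software encoding - this is the latency/overheating concern.
# A hardware encoder element (e.g. v4l2h264enc on a Raspberry Pi) would
# reduce the load, where available.
send = cv2.VideoWriter(
    "appsrc ! videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast "
    "! rtph264pay config-interval=1 ! udpsink host=192.168.2.1 port=5600",
    cv2.CAP_GSTREAMER, 0, FPS, (WIDTH, HEIGHT),
)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... visibility processing on `frame` goes here ...
    send.write(frame)

cap.release()
send.release()
```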

Thanks, Eliot. Great reply, and cheers for the dedicated thread.