Using Blue Robotics Low Light USB cameras in an ROV system

@jwalser I will be using your cameras in an existing system running on either a Pi3 or Jetson TX2. Can you supply drivers for the camera (I reckon it is not plug and play) without referring to the pre-loaded image?
Also, do we have the possibility to trigger the cameras externally for synchronising multiple cameras?

The cameras are compliant with the UVC standard, so they work with the standard v4l2 Linux drivers. You will have to write your own program to handle the camera synchronization, but yes, it is possible.
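As a starting point (the device numbers vary between systems, so treat these as examples), the `v4l2-ctl` tool from the `v4l-utils` package will show which /dev/video* nodes the cameras expose and which formats each node supports:

```shell
# List all V4L2 devices and the /dev/video* nodes they expose
v4l2-ctl --list-devices

# Show the pixel formats (e.g. YUYV, MJPG, H264), resolutions and
# frame rates supported by one node -- /dev/video1 is just an example
v4l2-ctl -d /dev/video1 --list-formats-ext
```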

Can you show some commands to do it? When I use more than two cameras I get quite a bit of corruption in the videos (pixelation), even with the cameras at 15fps.

gst-launch-1.0 -v v4l2src device=$DEVICE do-timestamp=true ! video/x-h264, width=$WIDTH, height=$HEIGHT, framerate=$FRAMERATE/1 ! h264parse ! queue ! rtph264pay config-interval=10 pt=96 ! udpsink host=192.168.2.1 port=5600

Change the device, width, height, framerate, and client IP/port as necessary for multiple streams. You should do a bandwidth test, because other factors like the power supply and cable/connector quality or disrepair can affect video quality.
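On the receiving computer, a matching playback pipeline might look like this (a sketch assuming default GStreamer plugins on the topside machine; the port must match the one used by udpsink above):

```shell
# Receive the RTP/H.264 stream sent by udpsink and display it
gst-launch-1.0 -v udpsrc port=5600 \
    caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' \
  ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false
```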

Thank you for the snippet. By changing the sink to a filesink I am able to record video to a file in an MKV container. However, when I try to play the files in VLC, there seems to be nothing there even though the file size is large. I also added the "-e" flag so that recording ends correctly on Ctrl+C.
Any thoughts here?

If you want to record the video, you’ll need to change the pipeline.

This pipeline is used to transmit video using udp and h264.

Now, to save the video as a file and play it in VLC, you will need to encode the video to JPEG and mux it into an AVI file.
E.g.: gst-launch-1.0 -v v4l2src device=$DEVICE ! jpegenc ! avimux ! filesink location=video.avi

Wow ok, thanks Patrick!
Since the H.264 compression is done on board the camera, I thought one could simply grab the stream and write it straight to a file. Won't this increase the file size enormously?

@jchr89 You are correct. It should go something like this:

gst-launch-1.0 -ev v4l2src device=/dev/video1 do-timestamp=true ! video/x-h264, width=1920, height=1080, framerate=30/1 ! h264parse ! mp4mux ! queue ! filesink location=test.mp4

Without the mux you are recording a raw stream with no information about duration or timestamps. Some players will play the raw stream just fine, but seeking won't work. While @patrickelectric's example won't increase the file size per se, it will introduce a lot of computational overhead by re-encoding the video if you are using the H.264 stream (note the camera is capable of both MJPEG and H.264 output).
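For completeness, a sketch of recording the MJPEG stream without any re-encoding (the device node and caps here are assumptions; check which node offers MJPEG on your system):

```shell
# The MJPEG node delivers frames that are already JPEG-compressed,
# so they can be muxed straight into an AVI file with no jpegenc step
gst-launch-1.0 -e v4l2src device=/dev/video0 \
  ! image/jpeg, width=1920, height=1080, framerate=15/1 \
  ! avimux ! filesink location=video.avi
```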

I see, thanks guys.
Since I will be using a single Raspberry Pi 3 to collect data from 2 of these cameras simultaneously, I think I need to grab the H.264 stream (video1, right?). Not sure the bandwidth is big enough for 2 HD streams otherwise. Not even sure it is big enough even when using the H.264 stream.
I will try this out at work tomorrow - thanks again!

Just to get back to you and others who might be thinking of doing this: I can confirm that it works with a Raspberry Pi 3 and 2 cameras recording simultaneously in HD.
I made a bash script for starting and stopping the recording. See below:

#!/bin/bash

WIDTH=1920
HEIGHT=1080
FRAMERATE=15
DATE_STAMP=$(date +%Y%m%d)
TIME_STAMP=$(date +%H%M%S)
DATA_PATH="data/$DATE_STAMP"
FILENAME_BOW=$TIME_STAMP"_bow.mp4"
FILENAME_PS=$TIME_STAMP"_ps.mp4"

mkdir -p $DATA_PATH;
cd $DATA_PATH

echo "starting bow video stream"
gst-launch-1.0 -e v4l2src device=/dev/video1 do-timestamp=true ! video/x-h264, width=$WIDTH, height=$HEIGHT, framerate=$FRAMERATE/1 ! h264parse ! mp4mux ! queue ! filesink location=$FILENAME_BOW &
echo "starting port side video stream"
gst-launch-1.0 -e v4l2src device=/dev/video3 do-timestamp=true ! video/x-h264, width=$WIDTH, height=$HEIGHT, framerate=$FRAMERATE/1 ! h264parse ! mp4mux ! queue ! filesink location=$FILENAME_PS &

and for stopping the recording (the "-e" flag above makes gst-launch-1.0 send EOS on SIGINT, so the MP4 files are finalized properly):

#!/bin/bash
echo "stopping video recording"
killall -SIGINT gst-launch-1.0

If I would like to stream the video both to a file, as I am doing now, and simultaneously show it on a display - is this possible by adding more commands to gst-launch?


It is possible, there are examples to do this in this forum and online. The exact implementation will depend on where (locally or remotely) you want to save and display video.

Hi Jacob,

When capturing video and saving it to a file as shown above, I get these laggy effects every 15 seconds or so (see uploaded file).
Other than messing with the pixels, it also lags quite a bit. I have replayed the files in VLC on both Ubuntu and macOS. Any thoughts on this?

You are recording this to the Pi SD card? Is the script exactly the same as in the above post?

Yes, it is dumped directly onto the Raspberry Pi 3 SD card. The script is exactly the same as above, yes.

What kind of SD card are you using?

Some Kingston 64 GB Class 10 Ultra. Can't remember more details about it - I'll update you tomorrow when I'm at work, if you need them.

Where did you purchase it? Counterfeits abound.
I have yet to try your script myself, but a bottleneck in filesystem I/O is my first guess at the problem.
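A rough way to check whether the card itself is the bottleneck (a sketch; `dd` only measures sequential writes, and the test path is just an example):

```shell
# Write 100 MB to the card; conv=fdatasync forces the data onto the
# card before dd reports, so the throughput figure is not just cache speed
dd if=/dev/zero of=/home/pi/ddtest.bin bs=1M count=100 conv=fdatasync
rm /home/pi/ddtest.bin
```

Two 1080p H.264 streams are only a few MB/s combined, but a slow or counterfeit card can stall well below its rated speed during sustained writes.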

We bought it through our industrial supplier in Denmark, so we should be good 🙂
Yeah, I am running two concurrent instances (2 cameras to the same Pi) in HD. But “only” at 15 FPS.
I haven’t tried lowering the resolution though…

Jacob, I forgot that I made some extensions to the above script.
In addition to saving the H.264 stream to a file, I also grab the raw stream and show it on the display. This is done as shown below:

#!/bin/bash

WIDTH=1920
HEIGHT=1080
FRAMERATE=15
DATE_STAMP=$(date +%Y%m%d)
TIME_STAMP=$(date +%H%M%S)
DATA_PATH="data/$DATE_STAMP"
FILENAME_BOW=$TIME_STAMP"_bow.mp4"
FILENAME_PS=$TIME_STAMP"_ps.mp4"

mkdir -p $DATA_PATH;
cd $DATA_PATH

echo "starting bow video stream"
gst-launch-1.0 -e v4l2src device=/dev/video1 do-timestamp=true ! video/x-h264, width=$WIDTH, height=$HEIGHT, framerate=$FRAMERATE/1 ! h264parse ! mp4mux ! queue ! filesink location=$FILENAME_BOW &
gst-launch-1.0 -e v4l2src device=/dev/video2 ! video/x-raw, format=YUY2, width=640, height=480, framerate=15/1 ! xvimagesink sync=false &
echo "starting port side video stream"
gst-launch-1.0 -e v4l2src device=/dev/video3 do-timestamp=true ! video/x-h264, width=$WIDTH, height=$HEIGHT, framerate=$FRAMERATE/1 ! h264parse ! mp4mux ! queue ! filesink location=$FILENAME_PS &
gst-launch-1.0 -e v4l2src device=/dev/video0 ! video/x-raw, format=YUY2, width=640, height=480, framerate=15/1 ! xvimagesink sync=false &

And this is what seems to cause the I/O problems. I am only doing this because I can't seem to get the H.264 stream shown on the display and simultaneously written to a file.
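One possible way around that (a sketch, untested on the Pi, where software H.264 decoding with avdec_h264 may be too heavy; the device node is an assumption) is GStreamer's tee element, which splits the single H.264 capture into a file branch and a decode-for-display branch:

```shell
# One capture, two branches: mux to MP4 on one side,
# decode and display on the other
gst-launch-1.0 -e v4l2src device=/dev/video1 do-timestamp=true \
  ! video/x-h264, width=1920, height=1080, framerate=15/1 \
  ! h264parse ! tee name=t \
    t. ! queue ! mp4mux ! filesink location=test.mp4 \
    t. ! queue ! avdec_h264 ! videoconvert ! xvimagesink sync=false
```

The queue after each branch is important: it decouples the branches so a slow display cannot stall the file writer.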

I’m trying to use gstreamer to stream from the low-light USB webcam to QGroundControl with the following command:

gst-launch-1.0 -v v4l2src device=$DEVICE do-timestamp=true ! video/x-h264, width=$WIDTH, height=$HEIGHT, framerate=$FRAMERATE/1 ! h264parse ! queue ! rtph264pay config-interval=10 pt=96 ! udpsink host=192.168.1.222 port=5600

(already have WIDTH=1920, HEIGHT=1080, FRAMERATE=15, DEVICE=/dev/video1, with video1 verified to be the H.264 source)

It fails quickly with the following error:

ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Video device did not suggest any buffer size.
Additional debug info:
gstv4l2object.c(3417): gst_v4l2_object_decide_allocation (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0
Execution ended after 0:00:00.073182406

Using Raspbian Jessie on a Raspberry Pi 3 Model B.

(The older version of Raspbian is in play because I hope to install ROS Kinetic)

Any ideas on how to fix this?