Using GStreamer with Jetson Nano to replace Raspberry Pi

I am trying to send live footage from a BlueRobotics low-light USB camera on a Jetson Nano to a surface computer using Python’s socket library. I am using UDP as my transport protocol for better timing, but a single datagram can only carry about 64 KB, so I have to split each frame into pieces and send them one by one. This causes the video to become jittery and lag behind.
I think the Companion software sends image data using GStreamer, but I am using a Jetson Nano instead of a Raspberry Pi. Is there a way I can use the Companion software’s GStreamer settings on the Jetson Nano? The ArduSub site already provides examples of receiving GStreamer data, but it doesn’t have examples of sending it.
Essentially, I am trying to replace the Raspberry Pi with a Jetson Nano to get better performance during autonomous missions.
Can you show me where the GStreamer code is located in the Companion software?

Thank you for your answers!

Server Code

import socket
import cv2
import numpy

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
address = ("192.168.1.41", 8080)
server.bind(address)

buffer = 65502              # max bytes to read per datagram
segment_amount = 0          # number of segments expected for the current frame
received_segments = [None]  # per-segment buffers for the current frame
rawFrame = b""              # reassembled frame as bytes

while True:
    data, sender = server.recvfrom(buffer)  # byte 0: message type, byte 1: segment count or index
    print("Type:", data[0], "|| Extra:", data[1])

    if data[0] == 1:  # header packet: announces how many segments the next frame uses
        segment_amount = data[1]
        received_segments = [None] * segment_amount
        print("New segment amount:", segment_amount)

    elif data[0] == 0:  # data packet: byte 1 is the segment index, the rest is image data
        # ignore segments that arrive before their header or with an out-of-range index
        if data[1] < len(received_segments) and received_segments[data[1]] is None:
            print(f"Received[{data[1]}]")
            received_segments[data[1]] = data[2:]
            print(f"New segment, len({len(data[2:])})")

    if all(received_segments):  # every segment for this frame has arrived
        print("Assembling Frame")
        rawFrame = b"".join(received_segments)
        frame = numpy.reshape(numpy.frombuffer(rawFrame, numpy.uint8), (480, 640, 3))
        cv2.imshow("Output", frame)

        if cv2.waitKey(1) & 0xff == ord("q"):  # check if q is pressed, as the wait time gets bigger the video plays slower
            break

        received_segments = [None] * segment_amount

Client Code

import socket
import cv2
import numpy

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_address = ("192.168.1.41", 8080)

camera = cv2.VideoCapture(0)
buffer = 65502              # max UDP payload used per datagram
dataBuffer = buffer - 2     # image bytes per segment (2 bytes reserved for the type/index header)

while camera.isOpened():
    got, frame = camera.read()
    if not got:  # stop if the camera fails to deliver a frame
        break

    frameBin = frame.tobytes()  # raw binary data of frame
    binLen = len(frameBin)
    print(binLen)

    segment_amount = binLen // dataBuffer + int(bool(binLen % dataBuffer))
    print("Segment amount:", segment_amount)
    client.sendto(b"\x01" + segment_amount.to_bytes(1, "big"), server_address)  # send SA ID[1] and Segment Amount(SA)

    # divide the binary data into segments and send each with its index;
    # slicing with range(segment_amount) also covers the final, possibly shorter, segment
    for i in range(segment_amount):
        segment = frameBin[i * dataBuffer:(i + 1) * dataBuffer]
        client.sendto(b"\x00" + i.to_bytes(1, "big") + segment, server_address)  # send IMG ID[0], Segment Index(SI) and Data
        print(f"Sent[{i}], len({len(segment)})")

    cv2.imshow("Camera Input", frame)
    if cv2.waitKey(1) & 0xff == ord("q"):  # check if q is pressed, as the wait time gets bigger the video plays slower
        break

Hi @MemeOverload, welcome to the forum :slight_smile:

It’s definitely better to use GStreamer, ffmpeg, or something similar here; reading in the data with Python (and especially decoding + re-encoding it with OpenCV) will certainly slow things down unnecessarily. The only reason you might want Python + OpenCV involved is if you want to do some processing on the video; otherwise I’d recommend one of the other alternatives for receiving a stream.
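
If you do end up wanting OpenCV in the loop for processing, a minimal receiving sketch looks something like the following. It assumes OpenCV was built with GStreamer support and that the vehicle is sending an H264/RTP stream to UDP port 5600 (the default Companion video port); both are assumptions to adjust for your setup.

import cv2

# Sketch: receive the H264/RTP stream straight into OpenCV for processing.
# Port 5600 and the H264 payload format are assumptions based on the default Companion stream.
pipeline = (
    "udpsrc port=5600 "
    "! application/x-rtp,payload=96 "
    "! rtph264depay ! h264parse ! avdec_h264 "
    "! videoconvert ! video/x-raw,format=BGR "
    "! appsink drop=true sync=false"
)

receiver = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while receiver.isOpened():
    ok, frame = receiver.read()
    if not ok:
        break
    # ... do your processing on `frame` here ...
    cv2.imshow("Stream", frame)
    if cv2.waitKey(1) & 0xff == ord("q"):
        break

The pipeline string does the RTP depacketising and H264 decoding inside GStreamer, so Python only ever touches already-decoded frames.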

On our stable Companion software

  1. .companion.rc starts the video service
  2. streamer.py is in charge of restarting the video stream if it fails/closes
  3. start_video.sh confirms the camera is valid and working, and starts the GStreamer pipeline (a rough sketch of an equivalent send pipeline for the Jetson is below)
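
To do something equivalent on a Jetson Nano you don’t need Companion itself; you can launch a similar pipeline directly. The following is only a sketch under assumptions, not the exact Companion pipeline: it assumes the low-light camera exposes an H264 stream on /dev/video1, and that the topside computer is at 192.168.2.1 listening on port 5600. Adjust all three to match your setup.

import shlex
import subprocess

# Sketch of a send pipeline in the spirit of start_video.sh, launched from Python.
# /dev/video1, 192.168.2.1 and port 5600 are placeholders, not values taken from Companion.
pipeline = (
    "gst-launch-1.0 -v v4l2src device=/dev/video1 do-timestamp=true "
    "! video/x-h264,width=1920,height=1080,framerate=30/1 "
    "! h264parse ! queue "
    "! rtph264pay config-interval=10 pt=96 "
    "! udpsink host=192.168.2.1 port=5600"
)
subprocess.run(shlex.split(pipeline), check=True)

You can of course also paste the same gst-launch-1.0 command straight into a terminal on the Jetson; wrapping it in Python is just convenient if you want to supervise and restart it the way streamer.py does.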

In our new Companion Beta software

  1. core runs the mavlink camera manager tool
  2. the camera manager handles video streams and presents them via MAVLink to the ground control station so you can switch between them (this requires the QGC master, and requires the user to specify which cameras they want to stream via the web interface)