GStreamer in Win10

@jwalser: You do have the capability to change the port to 5700.
It just doesn’t work…
For the record, I use the latest Debian and the 3.1.2 AppImage of QGC…
(Now that I think of it, maybe there is a hidden OK button I should press when I change the port? I cannot see any such button…)

There is no button, but you have to restart QGC for the changes to the desired UDP port to take effect. It works for me using 3.1.2 in Ubuntu…

@jwalser It changes to 5700, but it does not show the video.

On the Linux system, the gst-launch command opens the desktop window, but it shows a green screen.
When only QGC runs, the video shows OK.

In a nutshell:

  1. In Linux:

1a. QGC works perfectly standalone at UDP port 5600
1b. gst-launch standalone (again port 5600) displays a green window
1c. gst-launch (tee command) and QGC at port 5700 not working (however, I just discovered that if I change the UDP port in QGC to 5600, the QGC video display works OK!)

  2. In Windows 10:

2a. QGC works perfectly standalone at UDP port 5600
2b. gst-launch standalone (again port 5600) works perfectly
2c. When trying the gst tee command, strange things happen… QGC does not show video on either port 5600 or 5700… the gst tee command sometimes (not always) freezes or crashes… sometimes it shows and records video normally…

With the previous (daily QGC) versions and before rpi-update, all the above were successful…

@jwalser
Hi Jacob. I need your help with gstreamer and bluerov2 (with the latest firmware):

I used to issue this command on my win10 PC to redirect the stream to port 5700 AND save it to a file.

gst-launch-1.0 -e -v udpsrc port=5600 ! tee name=STREAMOUT ! queue ! "application/x-rtp, encoding-name=H264, payload=96" ! rtph264depay ! h264parse ! mp4mux ! filesink location=%FNAME% STREAMOUT. ! queue ! udpsink host=127.0.0.1 port=5700

(variable %FNAME% is defined - this is not the problem)
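For readability, here is the same pipeline split per tee branch (an equivalent sketch, assuming it is run from a .bat file; ^ is the cmd line-continuation character):

gst-launch-1.0 -e -v udpsrc port=5600 ! tee name=STREAMOUT ^
! queue ! "application/x-rtp, encoding-name=H264, payload=96" ! rtph264depay ! h264parse ! mp4mux ! filesink location=%FNAME% ^
STREAMOUT. ! queue ! udpsink host=127.0.0.1 port=5700

(Note that the caps filter has to reach gst-launch with plain ASCII quotes; smart quotes pasted from the browser will break caps parsing.)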

FYI, both:
gst-launch-1.0 -e -v udpsrc port=5600 ! udpsink host=127.0.0.1 port=5700

and:
start gst-launch-1.0 -e -v udpsrc port=5600 ! "application/x-rtp, encoding-name=H264, payload=96" ! rtph264depay ! h264parse ! mp4mux ! filesink location=%FNAME%

work perfectly…

Any ideas? (It's driving me nuts…)

(Also this works perfectly:
gst-launch-1.0 -e -v udpsrc port=5600 ! tee name=STREAMOUT ! queue ! "application/x-rtp, encoding-name=H264, payload=96" ! rtph264depay ! h264parse ! mp4mux ! filesink location=%FNAME% STREAMOUT. ! queue ! "application/x-rtp, payload=127" ! rtph264depay ! avdec_h264 ! autovideosink sync=false)

Thanks in advance,
Vagelis

This command line is working perfectly here with the latest companion version.
What is your problem with it?

It does nothing…
It creates the file (0 bytes in it) and does not redirect to port 5700.
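
One way to narrow this down (a diagnostic sketch, not from the thread): replace both tee branches with fakesink to check whether buffers flow through the tee at all; with -v the negotiated caps should be printed.

gst-launch-1.0 -v udpsrc port=5600 ! tee name=t ! queue ! fakesink t. ! queue ! fakesink

If caps negotiate here but the real pipeline still writes 0 bytes, the problem is in the caps filter or the muxer branch rather than in tee itself. (Also note that mp4mux only finalizes the file on EOS, which is what -e is for, so a 0-byte file suggests no buffers reached filesink at all.)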

(I use the latest companion version)

Are you sure the video stream is coming? Does it show in QGC @ 5600 without the script?

Yes! I can see the video with both QGC and VLC using this SDP file:

    v=0
    m=video 5600 RTP/AVP 96
    c=IN IP4 192.168.2.2
    a=rtpmap:96 H264/90000
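
(Side note: the same SDP file can also be opened with GStreamer directly. This is an untested sketch that assumes the sdpdemux element from gst-plugins-bad is installed, with /path/to/stream.sdp standing in for wherever the file was saved:
gst-launch-1.0 playbin uri=file:///path/to/stream.sdp)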

Only one application can see the video at one time. Does it work when you close the other apps?
Does this pipeline work?
gst-launch-1.0 -ev udpsrc port=5600 ! application/x-rtp, encoding-name=H264, payload=96 ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink

Yes…
That's why I need the GStreamer pipeline…
We want the "tee" to save video to a file AND redirect it to port 5700 so that we can process it in real time for navigation…

This works:
gst-launch-1.0 -e -v udpsrc port=5600 ! "application/x-rtp, payload=127" ! rtph264depay ! avdec_h264 ! autovideosink sync=false

and this:
gst-launch-1.0 -e -v udpsrc port=5600 ! tee name=STREAMOUT ! queue ! "application/x-rtp, encoding-name=H264, payload=96" ! rtph264depay ! h264parse ! mp4mux ! filesink location=%FNAME% STREAMOUT. ! queue ! "application/x-rtp, payload=127" ! rtph264depay ! avdec_h264 ! autovideosink sync=false
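
Since the goal is to record AND forward to 5700 (and optionally watch at the same time), the branches above can also be combined into one three-branch tee. An untested sketch built from the same elements as the working commands above:

gst-launch-1.0 -e -v udpsrc port=5600 ! tee name=STREAMOUT ! queue ! "application/x-rtp, encoding-name=H264, payload=96" ! rtph264depay ! h264parse ! mp4mux ! filesink location=%FNAME% STREAMOUT. ! queue ! udpsink host=127.0.0.1 port=5700 STREAMOUT. ! queue ! "application/x-rtp, payload=127" ! rtph264depay ! avdec_h264 ! autovideosink sync=false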

I believe I'm a bit late to the party, but I'm trying to get the video in a CPP application using GStreamer in Windows 10. This is a long shot, but I'm crossing my fingers that someone is able to help.

I'm using the following pipeline string to get the video:

    "udpsrc port=5600 "
    "! application/x-rtp, payload=96 ! rtph264depay ! h264parse ! avdec_h264 "
    "! decodebin ! videoconvert ! video/x-raw,format=(string)BGR ! videoconvert "
    "! appsink name=sink emit-signals=true sync=false max-buffers=1 drop=true"

This works; however, I get occasional dropouts, parts of the image are sometimes blurry for around a second, and frames may "lag behind" and then suddenly catch up to the current frame.

I also tried this command, and it runs really smoothly (but in a separate window):

    "udpsrc port=5600 "
    "! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! autovideosink"

What I want to do is receive the image through the GStreamer pipeline and publish it as a ROS Image, which I am currently able to do with the first command, although with the issues described above. I'm wondering if there is a better command to use, considering that QGroundControl is able to obtain the image smoothly while the command above cannot. Perhaps a better pipeline has been found and included in later releases of QGC?
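
One suggestion (mine, not something confirmed in this thread): the blur-then-catch-up behaviour is typical of lost or reordered RTP packets being fed straight into the decoder. rtpjitterbuffer is the standard GStreamer element for reordering and smoothing RTP before depayloading, and the decodebin after avdec_h264 is redundant, since avdec_h264 already outputs raw video. A sketch of an adjusted pipeline string, with the caps taken from the SDP above (H264/90000) and the latency value (in milliseconds) as a tunable assumption:

    "udpsrc port=5600 "
    "! application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96 "
    "! rtpjitterbuffer latency=200 ! rtph264depay ! h264parse ! avdec_h264 "
    "! videoconvert ! video/x-raw,format=BGR "
    "! appsink name=sink emit-signals=true sync=false max-buffers=1 drop=true"

rtpjitterbuffer needs the clock-rate in the caps, which is why the fuller caps string matters here.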

The CPP code is as follows:

// Include standard libraries
#include <atomic>
#include <iostream>

// Include GStreamer library
#include <gst/gst.h>
#include <gst/app/app.h>

// Include ROS, cv_bridge, and OpenCV libraries
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/opencv.hpp>
#include <image_transport/image_transport.h>


using namespace std;
 
// Share frame between main loop and gstreamer callback
std::atomic<cv::Mat*> atomicFrame;
 
/**
 * @brief Check preroll to get a new frame using callback
 *  https://gstreamer.freedesktop.org/documentation/design/preroll.html
 * @return GstFlowReturn
 */
GstFlowReturn new_preroll(GstAppSink* /*appsink*/, gpointer /*data*/)
{
    return GST_FLOW_OK;
}
 
/**
 * @brief Callback that gets a new frame whenever a new sample is available
 *
 * @param appsink
 * @return GstFlowReturn
 */
GstFlowReturn new_sample(GstAppSink *appsink, gpointer /*data*/)
{
    static int framecount = 0;
 
    // Get caps and frame
    GstSample *sample = gst_app_sink_pull_sample(appsink);
    GstCaps *caps = gst_sample_get_caps(sample);
    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstStructure *structure = gst_caps_get_structure(caps, 0);
    const int width = g_value_get_int(gst_structure_get_value(structure, "width"));
    const int height = g_value_get_int(gst_structure_get_value(structure, "height"));
 
    // Print a dot every 25 frames as a liveness indicator
    if(!(framecount%25)) {
        g_print(".");
    }
 
    // Show caps on first frame (gst_caps_to_string returns an allocated
    // string that the caller must free)
    if(!framecount) {
        gchar *capsStr = gst_caps_to_string(caps);
        g_print("caps: %s\n", capsStr);
        g_free(capsStr);
    }
    framecount++;
 
    // Get frame data
    GstMapInfo map;
    gst_buffer_map(buffer, &map, GST_MAP_READ);
 
    // Convert GStreamer data to an OpenCV Mat.
    // Note: this Mat constructor does not copy the mapped data, so clone()
    // the frame before unmapping the buffer below; otherwise the stored Mat
    // would point at memory that is no longer valid.
    cv::Mat* prevFrame;
    prevFrame = atomicFrame.exchange(new cv::Mat(cv::Mat(cv::Size(width, height), CV_8UC3, (char*)map.data, cv::Mat::AUTO_STEP).clone()));
    if(prevFrame) {
        delete prevFrame;
    }
 
    gst_buffer_unmap(buffer, &map);
    gst_sample_unref(sample);
 
    return GST_FLOW_OK;
}
 
/**
 * @brief Bus callback
 *  Print important messages
 *
 * @param bus
 * @param message
 * @param data
 * @return gboolean
 */
static gboolean my_bus_callback(GstBus *bus, GstMessage *message, gpointer data)
{
    // Debug message
    //g_print("Got %s message\n", GST_MESSAGE_TYPE_NAME(message));
    switch(GST_MESSAGE_TYPE(message)) {
        case GST_MESSAGE_ERROR: {
            GError *err;
            gchar *debug;
 
            gst_message_parse_error(message, &err, &debug);
            g_print("Error: %s\n", err->message);
            g_error_free(err);
            g_free(debug);
            break;
        }
        case GST_MESSAGE_EOS:
            /* end-of-stream */
            break;
        default:
            /* unhandled message */
            break;
    }
    /* We want to be notified again the next time there is a message on the
     * bus, so we return TRUE (FALSE would remove the watch and this callback
     * would not be called again). */
    return TRUE;
}
 
int main(int argc, char *argv[]) {

    ros::init(argc, argv, "video_publisher");
    ros::NodeHandle nh;
    ros::Publisher image_pub = nh.advertise<sensor_msgs::Image>("image", 1);

    cout << "ROS node initiated.";

    ros::Rate loop_rate(30); // Adjust the loop rate as needed

    cout << "Initiating GST.";

    // Initialize GStreamer (a plain C API: it does not throw C++
    // exceptions, so wrapping it in try/catch has no effect)
    gst_init(&argc, &argv);

    cout << "GST initiated." << endl;

    
    gchar *descr = g_strdup(
        "udpsrc port=5600 "
        "! application/x-rtp, payload=96 ! rtph264depay ! h264parse ! avdec_h264 "
        "! decodebin ! videoconvert ! video/x-raw,format=(string)BGR ! videoconvert "
        "! appsink name=sink emit-signals=true sync=false max-buffers=1 drop=true"
    );
    

    /*
    gchar *descr = g_strdup(
        "udpsrc port=5600 "
        "! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! autovideosink"
    );
    */

 
    // Check pipeline
    GError *error = nullptr;
    GstElement *pipeline = gst_parse_launch(descr, &error);
 
    if(error) {
        g_print("could not construct pipeline: %s\n", error->message);
        g_error_free(error);
        exit(-1);
    }
    else {
        g_print("Pipeline created!\n");
    }
 
    // Get sink
    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
 
    /**
     * @brief Get sink signals and check for a preroll
     *  If preroll exists, we do have a new frame
     */
    gst_app_sink_set_emit_signals(GST_APP_SINK(sink), true);
    gst_app_sink_set_drop(GST_APP_SINK(sink), true);
    gst_app_sink_set_max_buffers(GST_APP_SINK(sink), 1);
    GstAppSinkCallbacks callbacks = { nullptr, new_preroll, new_sample };
    gst_app_sink_set_callbacks(GST_APP_SINK(sink), &callbacks, nullptr, nullptr);
    

    // Declare bus
    GstBus *bus;
    bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    gst_bus_add_watch(bus, my_bus_callback, nullptr);
    gst_object_unref(bus);
 
    gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_PLAYING);

    // Main loop
    while(ros::ok()) {
        // Iterate the default GLib main context (g_main_iteration is deprecated)
        g_main_context_iteration(nullptr, false);
 
        cv::Mat* frame = atomicFrame.load();
        if(frame) {
            // Publish the latest frame as a ROS image message
            sensor_msgs::ImagePtr msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", *frame).toImageMsg();
            image_pub.publish(msg);
        }

        else {
            g_print("No frame...\n");
        }

        ros::spinOnce();
        loop_rate.sleep();

    }
 
    gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_NULL);
    gst_object_unref(GST_OBJECT(pipeline));
    return 0;
}

The main part of the CPP code was found here; I have just added the standard includes for ROS and cv_bridge and the functionality for publishing the image.

While I believe the error lies here, it may also be related to how I receive the video. I can provide a code example for that as well, but long story short: I receive the image in a callback, where I convert the image message to a cv2 (cv_bridge) image that I display with the cv2.imshow() function. The time spent reading and displaying the image is 0.01-0.03 seconds. In MAVProxy I have set the camera frame rate to 15 fps, so I believe this should not be an issue (and it would not cause the blur I see in the images).

Any ideas?