IP CAMERA 4K / 60FPS / Plug and play for BR2

Hi @Itaru ,

In reality, QGroundControl is 0.01 seconds faster than the web browser; however, in my opinion, viewing through the web browser is better and smoother.

Precisely because of this, my next step (in addition to finishing the firmware and software) is to run tests with Cockpit and MAVLink Camera Manager. My ROVs currently have Companion and a Pixhawk, so I haven’t tried it yet, but I have ordered a new Raspberry Pi to install BlueOS and do the tests with Cockpit.

Regards

Hi @Jhans ,

Sorry for the delay in my response.

First of all, I should mention that the intention is not to record through GStreamer but to the SD card. In fact, this is already solved; I am waiting to receive a new SD module that fixes the problem.

Anyway, I hadn’t answered you yet because I was waiting to run some tests with Cockpit and find the best method for overlaying telemetry.

I want to do these tests for two reasons:

  1. On the one hand, to see how the camera behaves with Cockpit and MAVLink Camera Manager, since, as I told @Itaru , the image is better and smoother through the web browser.

  2. On the other hand, I can’t get QGroundControl to record video when it is configured with “Force NVIDIA”, and this camera, due to its power and quality, needs “Force NVIDIA” to look smooth in QGroundControl, at least on my PCs. It is likely that with better integrated graphics it would also play smoothly with “Force DirectX 11”, which does allow video recording, but I have not been able to test that.

Since I cannot record video with QGroundControl in “Force NVIDIA”, I cannot capture the .ass file with the telemetry parameters. I think this is possible through Cockpit, which is why I am very interested in doing these tests.

At the moment, recording with QGroundControl and capturing the .ass file is only possible if I configure the camera to FHD / 60 fps / 16000 kbps in H265+, H265, or H264. The quality is still very good, but I want it to record at its highest quality and still capture the .ass file.

Maybe some of the Blue Robotics folks like @tony-white , @EliotBR , @joaoantoniocardoso or @rafael.lehmkuhl can tell us why it is not possible to record in QGroundControl with “Force NVIDIA”?

Maybe it’s a simple graphics card setting, or something else that escapes me.

Another option is to record the camera’s secondary stream in QGroundControl while recording the main stream to the SD card or via GStreamer, then cut the main video so that its timestamps coincide, and finally open the main-stream video with the .ass file from the secondary stream.
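As a rough sketch of that workaround (the file names, offset, and duration below are placeholders; the real values would come from comparing the timestamps of the two recordings), the cut can be done without re-encoding by calling FFmpeg from Python:

```python
import subprocess

# Placeholders: the main-stream file recorded on the SD card, the output file,
# and the offset/duration worked out from the QGroundControl recording timestamps
MAIN_VIDEO = "sd_main_stream.mp4"
OUTPUT = "main_stream_synced.mp4"
OFFSET = "00:00:07.500"    # how far into the SD recording QGroundControl started
DURATION = "00:12:34.000"  # length of the QGroundControl recording

# "-c copy" trims by stream copy, without re-encoding, so the original quality is kept
subprocess.run(
    ["ffmpeg", "-ss", OFFSET, "-i", MAIN_VIDEO, "-t", DURATION, "-c", "copy", OUTPUT],
    check=True,
)
```

The resulting file could then be opened in a player such as VLC together with the .ass subtitle file saved by QGroundControl for the secondary stream.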

That said, I think there must be a better way, so I’m going to investigate Cockpit and report my progress. In the meantime, any help from the Blue Robotics team to find out why QGroundControl does not record with “Force NVIDIA” will be welcome.

Best regards

Hi @Andres

I did some tests today myself, and I was able to record and get the .ass file with “Force NVIDIA” activated.


That’s great!

What version of QGroundControl are you using, and what graphics card and configuration do you have on your computer?

Did you record via RTSP or UDP?

Do you have BlueOS or Companion?

If it is BlueOS, do you send the video through MAVLink Camera Manager (UDP or RTSP)?

I was also doing tests today, and I couldn’t get it to work with RTSP, QGroundControl, and Companion.

However, I have found an important improvement with a very simple change. I raised the screen refresh rate to its maximum of 60 Hz (I had it at 30 Hz) on the computer with the worst graphics, and thanks to this change I have managed to view and record video and the .ass file in QGroundControl with “Force DirectX 11”.

The live view in QGroundControl improved a lot with this change; it still has some stuttering with fast movements, but the recording is quite good. (I’ll send you a video later.)

On the other hand, after going up to 60 Hz and telling Google Chrome to use the GPU, the change in the web browser was amazing: the fluidity and quality were outstanding, even on the least powerful PC I have.

I await your comments to find out why I can’t record with “Force NVIDIA”. But since you have achieved it, it is evident that it can be done, and therefore there should be no problem with the telemetry overlay.

I have little left to finish; I hope to be able to offer the camera as soon as possible.

Best regards

I have QGroundControl version 4.3.0.

The graphics card is an NVIDIA RTX 3060.

I have BlueOS and send the video through UDP.

I had some issues recording video at all in QGroundControl, so I installed another QGroundControl version, updated BlueOS to another version, and went back and forth until I found the combination of versions I have now, which works.

I can’t remember which BlueOS version I have now, but I can come back to you on that.

Cheers

Thanks for the input, @Jhans !

I also have QGroundControl 4.3.0; my graphics cards are an NVIDIA RTX 3050, a GTX 1050i, and Intel UHD, but I communicate with QGroundControl directly through RTSP without going through any intermediate system such as MAVLink Camera Manager.

Presumably MAVLink Camera Manager generates a UDP endpoint suitable for recording in QGroundControl, or perhaps QGroundControl simply does not record RTSP with “Force NVIDIA”.

It could also be your graphics configuration; can you send me some screenshots of your NVIDIA settings?

In what format do you record: .mkv, .mp4, or .mov?

I am going to install BlueOS shortly (I think the Raspberry Pi will arrive today), and I will run tests with BlueOS, MAVLink Camera Manager, and Cockpit. When you can, I would appreciate it if you could tell me which BlueOS version you have.

Very grateful, regards!

I record in mp4.
BlueOS version is 1.1.1

How do you go directly to QGroundControl without using MAVLink Camera Manager? Is it just by changing the video stream to RTSP instead of UDP?


Thank you for sending the details of your graphics configuration.

Regarding accessing QGroundControl via RTSP: simply change the source in the video section from UDP to RTSP, enter the RTSP URL, and that’s it. Keep in mind that the IP camera is networked through an Ethernet switch on the ROV, so the surface PC, and therefore QGroundControl, has direct access to the camera.
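For anyone following along, a quick way to confirm that the RTSP stream is reachable from the surface PC before pointing QGroundControl at it is a small Python check (the URL below is only a placeholder; the real address, credentials, and path depend on the camera):

```python
import cv2  # pip install opencv-python

# Placeholder URL: substitute the camera's real address, port, and stream path
RTSP_URL = "rtsp://admin:password@192.168.2.10:554/stream0"

cap = cv2.VideoCapture(RTSP_URL)
ok, frame = cap.read()
if ok:
    print(f"Stream reachable, first frame is {frame.shape[1]}x{frame.shape[0]}")
else:
    print("Could not read from the RTSP stream - check the URL and the network")
cap.release()
```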

I attach a photo showing how to access it:

I will continue to report my progress.

Regards


Hi @Jhans ,

I finally found the reason why I couldn’t record RTSP in QGroundControl with “Force NVIDIA”: the problem was the stream codec. “Force NVIDIA” only records if the stream is H264 (I was streaming H265/H265+). It has nothing to do with the NVIDIA graphics configuration (beyond the quality improvements its settings offer). In fact, I have tested with both Intel and NVIDIA graphics, and both record video.
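In case it helps anyone debugging a similar setup, a quick way to confirm which codec the camera is actually sending is to probe the RTSP URL (this assumes FFmpeg/ffprobe is installed; the URL is a placeholder):

```python
import subprocess

# Placeholder URL: replace with the camera's real RTSP address
RTSP_URL = "rtsp://192.168.2.10:554/stream0"

# ffprobe prints just the video codec name, e.g. "h264" or "hevc"
result = subprocess.run(
    [
        "ffprobe", "-v", "error", "-rtsp_transport", "tcp",
        "-select_streams", "v:0",
        "-show_entries", "stream=codec_name",
        "-of", "default=noprint_wrappers=1:nokey=1",
        RTSP_URL,
    ],
    capture_output=True, text=True,
)
print("Video codec:", result.stdout.strip())
```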

I really don’t understand why, because you can still view the H265+/H265 stream with “Force NVIDIA”.

As I already told you, you can also record H265+/H265 with “Force DirectX 11”, but the live video is somewhat worse, at least on my PC and graphics.

Note: all of these recordings save the .ass file.

This week I will make a video with the different encoding formats to show the quality of video recorded in QGroundControl at 4K / 60 fps / 16000 kbps with the telemetry overlay, so the quality can be compared.

Regards

Hi @Itaru

I have also tested with BlueOS and Cockpit. First of all, congratulations to the Blue Robotics team on Cockpit and their excellent work. I know it is a beta version, but it will be a great option for everyone; the web interface is great.

I have installed BlueOS on a Raspberry Pi 3B. I know it is much less capable than the Pi 4, but I wanted to see how video streaming behaved on the less powerful option. Unfortunately, the result has not been good; let me explain:

  1. On the one hand, through Cockpit the transmission had quite poor quality: a lot of delay, constant image freezing, and frame loss. Another thing I didn’t like is that you can’t use H265+/H265 through Cockpit either, due to WebRTC, which is a shame because the quality increases and the bandwidth drops. (I think it is due to licensing of the H265/HEVC codec, but the reality is that all the IP cameras I know of today use H265.)

To test, I configured the RTSP channel as a “Camera Redirect” through MAVLink Camera Manager, as recommended by @joao in this post, and I made sure to be connected over Ethernet and not Wi-Fi:

I assume that the problem stems from the lack of power of the Raspberry Pi 3, and that MAVLink Camera Manager is not capable of handling the video on this hardware. But maybe there is something I am doing wrong. The question I have is whether I am correctly forcing Cockpit to use Ethernet and not Wi-Fi; what I did was disable Wi-Fi in BlueOS. Is that the correct way to do it?

@joaoantoniocardoso , could you please tell me something about this? I know it’s not a problem with my PC, because it is relatively powerful and has good graphics. I can view this camera without problems in QGroundControl, a web browser, VLC, etc., and without going through MAVLink Camera Manager it works very well.

I also know that the pipeline is fine, because I have also used “Camera Redirect” through MAVLink Camera Manager (BlueOS Pirate Mode) to redirect it to QGroundControl, and in that case I receive the signal well.

Note that I have been able to record video with an .ass file through Cockpit using this channel; in fact, the recorded video is much better than the live image, which I also don’t quite understand.

Therefore, I am inclined to think that it is a Raspberry Pi 3B problem, or that I have incorrectly disabled the Wi-Fi connection for Cockpit. In any case, I will try with a Raspberry Pi 4, since I have seen posts reporting good results.

  2. Finally, I would like to point out that although I receive the video in QGroundControl using the MAVLink Camera Manager / BlueOS redirection, the image quality and fluidity are much worse than if I enter the RTSP URL directly in QGroundControl without using MAVLink Camera Manager. Perhaps this is also a problem with the lack of power of the Raspberry Pi 3B.

My a priori conclusion is that I see no reason to use MAVLink Camera Manager to stream RTSP to QGroundControl. In my opinion, it is better to do it directly, at least with the Raspberry Pi 3B, and although I am not entirely sure, I don’t think it will be better with the Raspberry Pi 4.

Let me explain. If I have not misunderstood, by going through MAVLink Camera Manager and then to QGroundControl, we are processing the image twice, or at least forwarding the video through one piece of software to another that is responsible for processing it. Moreover, MAVLink Camera Manager, which is in charge of the processing/redirect, runs on a Raspberry Pi 4, whereas when you connect an RTSP URL directly to QGroundControl without going through MAVLink Camera Manager, the image is processed by the surface PC, which is much more powerful than a Raspberry Pi. Therefore, I have serious doubts that it will work better.

The only reason I can think of for transmitting RTSP through MAVLink Camera Manager is to use Cockpit, but my tests with the Raspberry Pi 3B have not been satisfactory, so I hope that with the Raspberry Pi 4 it will be worth it.

My question is: is it really necessary to process an RTSP stream through MAVLink Camera Manager on a small Raspberry Pi, just to view it in a web browser (Chrome, Edge, etc.) through Cockpit, when the RTSP signal is already available on the surface computer’s network thanks to the ROV’s Ethernet switch, without any processing?

I appreciate any feedback from the Cockpit software development team.

Best regards

Hello @Andres!

I’ve been silently following this thread, as I am always interested in supporting different cameras and setups in MAVLink Camera Manager, and this is a very cool camera project!

> I have also tested with BlueOS and Cockpit. First of all, congratulations to the Blue Robotics team on Cockpit and their excellent work. I know it is a beta version, but it will be a great option for everyone; the web interface is great.

Thank you so much, we all very much appreciate the feedback in this post! As it is in active development, feel free to go to Cockpit’s issues page if you want to give feedback directly to the developers.

> On the one hand, through Cockpit the transmission had quite poor quality: a lot of delay, constant image freezing, and frame loss. Another thing I didn’t like is that you can’t use H265+/H265 through Cockpit either, due to WebRTC, which is a shame because the quality increases and the bandwidth drops. (I think it is due to licensing of the H265/HEVC codec, but the reality is that all the IP cameras I know of today use H265.)

Unfortunately, H265 is not among the codecs supported by WebRTC, which is why Cockpit can’t receive it.

> I assume that the problem stems from the lack of power of the Raspberry Pi 3, and that MAVLink Camera Manager is not capable of handling the video on this hardware. But maybe there is something I am doing wrong. The question I have is whether I am correctly forcing Cockpit to use Ethernet and not Wi-Fi; what I did was disable Wi-Fi in BlueOS. Is that the correct way to do it?

Cockpit has introduced some useful tools for dealing with exactly that:

  1. The ability to choose the stream route (the IP/interface through which the stream will be transmitted; I recommend only adding the tethered address)
  2. The ability to choose the stream protocol (UDP vs TCP; some networks do traffic shaping that benefits TCP over UDP, for example)
  3. The ability to add a delay to compensate for network jitter

Those options are not yet in the docs, but they can be found on the video configuration page, under the hamburger menu, in the latest beta release.

> @joaoantoniocardoso , could you please tell me something about this? I know it’s not a problem with my PC, because it is relatively powerful and has good graphics. I can view this camera without problems in QGroundControl, a web browser, VLC, etc., and without going through MAVLink Camera Manager it works very well.

So, I think the easiest way to help here is to explain how the stream redirect works. There are two main cases:

  1. To a MAVLink GCS, it creates a MAVLink camera for the external RTSP source (e.g. an IP camera), publishing and answering all the messages necessary for the GCS to receive the original stream.
  2. To Cockpit, it does receive the RTSP stream, redoes the RTP payloading, and sends the stream via WebRTC. There is no video processing here, but all the data passes through the MCM.

The image below shows the stream data path for each case:

[Image: diagram of the stream data path for each case]

When testing, I have also sometimes seen differences between Redirect and directly giving QGC the RTSP address, but that must have been bad luck when measuring, as the data path is the same (directly from the camera) in both cases.

> Note that I have been able to record video with an .ass file through Cockpit using this channel; in fact, the recorded video is much better than the live image, which I also don’t quite understand.

If this is happening, it indicates that the browser’s codec, the graphics card, or something like the power management of the operating system or the hardware is limiting the capabilities of your graphics card. As general guidance, be sure to:

  1. have the battery fully charged and the computer plugged into the wall
  2. set the power settings to performance
  3. enable your best graphics card
  4. allow your browser to use your graphics card in your operating system
  5. enable GPU acceleration for video decoding in your browser (for Chrome-based browsers, check chrome://gpu)
  6. close all background services and other software
  7. allow only the IP address of your tethered connection in Cockpit
  8. have enough bandwidth available for the video streams (12 Mbps for 1080p@30fps, or 30 Mbps for 4K@30fps)

All that said, to be completely transparent, there is a case of video stuttering and additional latency when using WebRTC and USB cameras that I am still investigating, and it seems to be happening inside MCM, but it is not related to processing power.

Feel free to follow up with more questions, and have a great week!

Hi @joaoantoniocardoso

Sorry for the delay in my response; I have been running every test I could with Cockpit.

First of all, thank you for your extensive and sincere response! I’m also glad you like my camera project.

Regarding the problems I was experiencing with Cockpit, I am glad to say I have managed to resolve them.
I already knew about the solutions you mentioned for using the GPU in Chrome and had tried them without success before writing to you, but after searching the internet I finally found a solution: I had to activate the following parameters in chrome://flags (photo attached).

I also have to say that, since my computer has dual graphics, I cannot make Chrome use the more powerful NVIDIA GeForce RTX 3050; Chrome does not seem to like it and forces me to use the Intel UHD. I have seen this problem in many internet forums. In my case it is a problem with my computer’s BIOS version; the newer model of my computer, with the RTX 3060 graphics card, allows using only the dedicated graphics card, which makes Chrome accept it.

In any case, the adjustments I made in chrome://flags let me use the Intel UHD graphics and solved the problem: the video in Cockpit now literally flies, even with the Raspberry Pi 3B, whose hardware, as you suggested, was not the problem. This is great because it tells me the camera I’m designing will be usable on the older BR2s. So take this as useful data, and congratulations on your work! I find it really impressive, especially the configuration of all the parameters and settings of the Cockpit display. Very well done!

I have also installed the desktop .exe version to compare which one works better. With the desktop version, my computer can use both graphics cards, but honestly I don’t notice any difference; both versions have given very good results. Below I attach videos with the telemetry overlay and photos of GPU usage and latency, from both Cockpit versions: the Chrome web browser and the desktop app. (Download the videos to watch them at full quality.)

Cockpit Chrome Web Browser:

Cockpit Desktop:

I also attach a video of the latency in motion with Cockpit Desktop (the web browser version has the same latency):

The only “problem”, in my opinion, is still not being able to use H265+. The truth is that the camera looks noticeably better with this codec, especially in shadowed areas, in low light, and with rapid camera movements, where some mild pixelation due to the compression is noticeable. With H265+ this does not happen; the image is clear at all times. That is why I ask you the following:

Would it be possible to add a video configuration option to the Cockpit desktop version where a custom GStreamer pipeline could be specified?

If that is not possible, do you know of any way to convert an RTSP H265 stream into an RTSP H264 stream with GStreamer or FFmpeg? I have managed to go from RTSP to RTMP with FFmpeg, but I can’t find a way to achieve the former. The idea is to somehow “trick” WebRTC.
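In case anyone wants to experiment with this, one possible approach (a rough sketch only, not something I have working with MCM; the camera URL, mount point, and element choices are assumptions) is to use GStreamer’s RTSP server bindings in Python to re-serve the camera’s H265 stream transcoded to H264:

```python
# Requires GStreamer with gst-rtsp-server and its GObject introspection bindings
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtspServer", "1.0")
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

# Placeholder camera URL: replace with the camera's real H265 RTSP address
CAMERA_URL = "rtsp://192.168.2.10:554/stream0"

# Pull the H265 stream, decode it, and re-encode to H264 with low latency
launch = (
    f"( rtspsrc location={CAMERA_URL} latency=0 "
    "! rtph265depay ! h265parse ! avdec_h265 "
    "! videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast bitrate=16000 "
    "! rtph264pay name=pay0 pt=96 )"
)

server = GstRtspServer.RTSPServer()
factory = GstRtspServer.RTSPMediaFactory()
factory.set_launch(launch)
factory.set_shared(True)  # let several clients share one pipeline
server.get_mount_points().add_factory("/h264", factory)
server.attach(None)

print("Re-encoded stream available at rtsp://<this-host>:8554/h264")
GLib.MainLoop().run()
```

Note that software re-encoding of a 4K / 60 fps stream with x264enc is very CPU-heavy, so in practice a hardware encoder element would probably be needed.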

Another option would be to access the camera’s http:// URL through Cockpit, since through the web browser (Chrome or Edge) it can be viewed in H265+ with very low latency and very good quality.

I should mention that I have installed the latest beta versions of Cockpit and saw the delay setting and the UDP/TCP selection. In my case it was not necessary to use them; it works well with the default configuration, with around 0.2 seconds of latency. Again, very good job!

I also read in the next post that @rafael.lehmkuhl commented it would be possible to make HTTP requests through Cockpit without having to use Node-RED (I have tried Node-RED and still have a lot to read to make it work, but it seems great). I mention this because it would be super useful for controlling cameras with motorized zoom/focus, in addition to controlling camera parameters (DWDR, Anti-Fog, BLC, HLC, WB, etc.).

The software I have written in Python to control the camera parameters and the motorized lens with the joystick is based on HTTP requests. Being able to assign requests to joystick buttons from Cockpit would be great; no other software would need to run in parallel, saving PC resources.
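To give an idea of what those controls look like, here is a minimal sketch of the kind of request my software sends (the endpoint, parameter names, and credentials are made up for illustration; each camera firmware exposes its own API):

```python
import requests

# Hypothetical camera control endpoint and parameters - purely illustrative;
# the real URL and parameter names depend on the camera's firmware API
CAMERA_HOST = "http://192.168.2.10"

def set_zoom(speed: int) -> None:
    """Start a zoom movement; positive = zoom in, negative = zoom out, 0 = stop."""
    response = requests.get(
        f"{CAMERA_HOST}/cgi-bin/ptz.cgi",
        params={"action": "zoom", "speed": speed},
        auth=("admin", "password"),  # placeholder credentials
        timeout=2,
    )
    response.raise_for_status()

# Example: a joystick button press could trigger zoom in, and the release could stop it
set_zoom(1)
set_zoom(0)
```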

Thank you very much for your kind answer

Best regards


Hi @joaoantoniocardoso

I am writing again because, given the good results with Cockpit, I decided to test the camera on the worst computer I have: an HP Notebook 820 G3 laptop with an i5-4310U CPU @ 2.0 GHz / 2.6 GHz, 8 GB of RAM, and Intel HD Graphics.

My surprise has been enormous: after making the same changes in chrome://flags described in the previous post, Cockpit has left me impressed. The camera flies, just like on my most powerful PC, which tells me that the PC resources used by Cockpit (GPU, CPU, RAM) are very low! It is incredible that a 4K / 60 fps / H264 / 16000 kbps stream can be displayed with such fluidity and quality.

I attach photos of the resources consumed on the PC with the camera in rapid movement, which is when it consumes the most, and also a video of the latency. Sincere congratulations on your work!

Also note that the tests I had previously carried out were done at night under fluorescent light, and the H264 image quality in the shadow areas showed some soft pixelation. However, testing today in daylight, the quality improved a lot; it is not H265+ quality, but it is not far off. So despite not using that codec, I think very good results can be achieved with halfway decent lighting. Congratulations on this too!

Best regards


Hi, very interesting results you are getting!

Which camera are you using?

Cheers
Pob747

Hi @pob747

Thanks for your comment!

You can find the camera’s characteristics in the first post of this thread. It is my own project that I am developing, and it is almost there. As soon as it is ready, I will offer the camera to interested people.

Best regards


Hi @Andres !

Exciting results! I’m happy to see that it’s even working at 4K 60 fps via H264!

This is something we are studying, but we haven’t put much effort into it so far.

It seems possible to write a custom codec that receives the H265 video packets over a WebRTC data channel, to overcome the browser’s WebRTC H265 limitation. This would require effort on both the MCM and Cockpit ends.

This is possible. Cockpit allows custom iframes … I imagine the hard part is figuring out the URL to get the frames from. If you need more than a direct iframe from Cockpit to a URL on the camera, you can create a BlueOS extension that produces the consumable iframe for Cockpit :slight_smile:

Rafael has just informed me that we plan to add support for HTTP requests from joystick buttons next quarter. Meanwhile, complementing the BlueOS extension running your software as a service, the extension can also have a URL that serves a page to be used in Cockpit’s iframe widget. This is not a perfect solution, though; we have plans to support proper Cockpit extensions.
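Just to illustrate the idea (a rough sketch, not the actual BlueOS extension API): such a service could serve a simple control page for Cockpit’s iframe widget and forward button presses to the camera as HTTP requests. The camera endpoint and parameters below are placeholders:

```python
# Rough sketch of a tiny control service (here using Flask) whose page can be
# embedded in Cockpit's iframe widget; it forwards button presses to the camera.
import requests
from flask import Flask

app = Flask(__name__)
CAMERA_HOST = "http://192.168.2.10"  # placeholder camera address

PAGE = """<!doctype html>
<html><body>
  <button onclick="fetch('/zoom/in')">Zoom in</button>
  <button onclick="fetch('/zoom/stop')">Stop</button>
  <button onclick="fetch('/zoom/out')">Zoom out</button>
</body></html>"""

@app.route("/")
def page():
    return PAGE

@app.route("/zoom/<action>")
def zoom(action: str):
    # Hypothetical camera API call - adjust to the camera's real control interface
    requests.get(f"{CAMERA_HOST}/cgi-bin/ptz.cgi",
                 params={"action": "zoom", "mode": action}, timeout=2)
    return "", 204

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```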

Also relevant: MCM will add support for ONVIF, so the current camera controls API will be able to use all available ONVIF controls, and MCM will look for cameras on reachable networks, providing better integration with IP cameras.

So it’s possible that in the near future your custom Python software for controlling the camera won’t need to be maintained, if all the camera controls are available via ONVIF :slight_smile:
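For reference, a rough sketch of what ONVIF-based control can look like from Python today, using the third-party onvif-zeep package (the address, credentials, and zoom speed are placeholders, and the camera needs to expose an ONVIF PTZ service for this to work):

```python
# pip install onvif-zeep
from onvif import ONVIFCamera

# Placeholder address and credentials
cam = ONVIFCamera("192.168.2.10", 80, "admin", "password")

media = cam.create_media_service()
profile = media.GetProfiles()[0]

ptz = cam.create_ptz_service()
request = ptz.create_type("ContinuousMove")
request.ProfileToken = profile.token
request.Velocity = {"PanTilt": {"x": 0.0, "y": 0.0}, "Zoom": {"x": 0.5}}  # zoom in at half speed
ptz.ContinuousMove(request)

# ... later, stop the movement
ptz.Stop({"ProfileToken": profile.token})
```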


Hi @joaoantoniocardoso ,

Thanks for your answer!

I tested connecting the camera through an iframe in Cockpit, as you recommended, and I have to say that it worked wonderfully.

At the moment I have tested it only with the desktop Cockpit, and I can confirm that the camera works perfectly, so I can now view it at 4K / H265+ / 60 fps, that is, at its maximum performance and quality, which is fantastic!!!

Additionally, thanks to the Cockpit mini widgets, all the ROV’s telemetry parameters can be overlaid on the camera image, which is also fantastic.

On the other hand, using the iframe, you can access the configuration of all the camera parameters without leaving Cockpit, which is also wonderful.

I will try it shortly with the Cockpit web browser version. I have not tested it that way yet because the ROV where the camera is installed runs Companion. I will remove the camera from the ROV and install it on a Raspberry Pi with BlueOS and a Pixhawk that I have for testing. It will probably be even faster, since the Cockpit web browser version uses fewer PC resources.

The only thing to note is that not all IP cameras can be viewed through an iframe. This is not the case with my camera, but I have tried others that I own, and if the camera sends the HTTP header “X-Frame-Options: SAMEORIGIN”, its web interface will not load in an iframe. Apparently it is a security measure on some cameras to avoid being embedded in other websites.
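A quick way to check whether a given camera will allow this, before fighting with the iframe in Cockpit, is to look at the headers its web interface returns (the URL is a placeholder):

```python
import requests

# Placeholder address of the camera's web interface
response = requests.get("http://192.168.2.10/", timeout=5)

# Content-Security-Policy (frame-ancestors) can also block embedding, so it is worth printing too
xfo = response.headers.get("X-Frame-Options")
csp = response.headers.get("Content-Security-Policy")
if xfo:
    print(f"X-Frame-Options: {xfo} - the page will likely refuse to load inside an iframe")
else:
    print("No X-Frame-Options header - the page should be embeddable in an iframe")
if csp:
    print("Content-Security-Policy:", csp)
```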

Anyway, congratulations, it’s simply fantastic :clap::clap::clap::clap:!!!

I take this opportunity to ask: is it possible to record the HTML view displayed through the Cockpit iframe together with the telemetry data?

On the other hand, do you know if it is possible to record only the telemetry data with Cockpit?

Regarding HTTP requests via joystick, it is fantastic that @rafael.lehmkuhl is working on it and that soon there will be no need for support software to control camera parameters (zoom, focus, WDR, Anti-Fog, BLC, HLC, etc.). In any case, for now I will provide the small control application together with the camera.

Next week I will try the new recording module that just arrived, to be able to record to SD directly on the ROV and avoid recording video on the surface after it has been transmitted via Ethernet, saving PC resources and bandwidth as well as obtaining the video at its highest quality.

So if everything goes well with the SD recording tests, the camera will be ready for interested people!!

Very grateful for your kind response
