Research on ROV visual autonomous obstacle avoidance navigation based on deep learning


If you want to integrate with Cockpit then your main options would be to either

  1. serve a web page that talks back to your BlueOS Extension to receive the annotation information, and then draws it
    • this could be displayed in Cockpit using an iframe widget, as I mentioned in my initial comment
  2. provide a web-socket from your Extension that talks to a custom Cockpit widget
    • this could be tested using a DIY widget, with code to draw annotations on an HTML canvas
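For option 2, the data going over the web-socket can be kept very simple. As an illustrative sketch (the message format, field names, and the `make_annotation_message` helper below are my own invention for this example, not an existing Cockpit or BlueOS API), the Extension could send each set of detections as a JSON message, with bounding boxes in normalised coordinates so the widget can scale them to whatever size its canvas happens to be:

```python
import json


def make_annotation_message(frame_id, detections):
    """Package detections as a JSON string for sending over the web-socket.

    Each detection is (label, confidence, (x, y, w, h)), with the bounding
    box in normalised [0, 1] image coordinates so the receiving widget can
    scale it to its own canvas size before drawing.
    """
    return json.dumps({
        "frame_id": frame_id,
        "annotations": [
            {
                "label": label,
                "confidence": round(confidence, 3),
                "bbox": {"x": x, "y": y, "w": w, "h": h},
            }
            for label, confidence, (x, y, w, h) in detections
        ],
    })


# Example: one detected obstacle in the upper-left region of the frame
message = make_annotation_message(42, [("obstacle", 0.87, (0.1, 0.2, 0.3, 0.25))])
```

The widget side would then parse each message and draw the scaled boxes onto its canvas as they arrive.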

The main components of that could look something like this:
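Sketching that out (assuming the detection network runs on the OAK-D itself, so the Pi mostly just forwards results rather than processing frames):

```
OAK-D camera ──► on-device detection network
   │                      │
   │ H.264 video          │ detections (labels + boxes)
   ▼                      ▼
BlueOS Extension (Raspberry Pi)
   │                      │
   │ video stream         │ web-socket (JSON annotations)
   ▼                      ▼
Cockpit video widget   custom / DIY annotation widget
                       (draws boxes on an HTML canvas
                        overlaid on the video)
```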

and you could use an existing OAK-D focused Extension like this one or this other one as a basis for several of the relevant steps.


Note also that processing frames on the Raspberry Pi requires a lot of data bandwidth to get the frames to it, and quite a lot of processing capacity that may be better directed to other services and tasks. It also typically adds latency to the stream, especially if all you’re doing on the Pi is overwriting image data with annotations and then re-encoding the stream to send it elsewhere.

If you want to install BlueOS as secondary functionality on an existing operating system image then you likely want to use the install script rather than trying to manually install a BlueOS docker image or something. That said, we’d generally recommend installing a BlueOS image and then running other things in parallel via the BlueOS Extension system, so that they don’t interfere with the core BlueOS services, and so the setup can be more easily reproduced and shared to other BlueOS devices.

The Raspberry Pi operating system images that get flashed onto an SD card are not the same as Docker images (which require a base operating system to be already installed, as well as Docker), so this question doesn’t make much sense.