Smart camera for station keeping and dynamic positioning

Hi everyone,

We are researchers from the I3S Laboratory, part of the CNRS (French National Center for Scientific Research).

Motivated by the autonomous inspection of underwater infrastructure, we have been developing since 2016 an extremely low-cost and compact vision-based positioning solution for ROVs, well suited to the Blue Robotics community. Here are some YouTube video demos that give a good overview:

Our solution works not only for fully actuated ROVs but also for the more challenging case of underactuated ROVs without lateral thrusters, as shown below:

We believe that stable and precise positioning capability is very useful to an operator performing inspection or manipulation tasks, even in challenging conditions like low visibility, highly turbid water, strong currents, etc.

We strongly believe that cost-effectiveness is an appealing factor for the Blue Robotics community. Keeping this in mind, we developed a vision-based stabilization solution that requires only a low-cost monocular camera, a low-cost IMU, and a low-cost SBC (a Jetson Nano, for example), with the entire hardware system costing barely 300 USD.

We intend to share this disruptive technology with the Blue Robotics community by developing a “smart camera system” that incorporates all the hardware and software features in a compact unit that can be easily mounted on any BlueROV, at a price affordable to everyone.

We are not at that final stage yet, however! We would be really grateful to anyone willing to contribute to this post by sharing their vision, expertise, and advice, or even questions and debates, all of which will undoubtedly help us advance in the right direction.

Here are some questions regarding which we would like to get your valuable feedback:

  1. What do you think about the vision-based stabilization/positioning functionality in general in terms of usefulness and demand from the perspective of inspection and exploration activities?

  2. Our C++ vision and control libraries run on the NVIDIA Jetson Nano. To make our hardware and software compatible with the BlueROV2, BlueOS, and ArduSub, do you have any suggestions in terms of connection and communication?

  3. What kind of other vision-based functionalities (useful for inspection tasks) can be added to our “smart camera” in addition to the vision-based stabilization?

  4. Do you have any other remarks on the depth rating, power consumption, dimensions, shape, … of our smart camera system?


Is it really necessary to use a Jetson Nano? Any chance of running on a lower cost microcontroller?

Hi @eyeNavRobotics, welcome to the forum, and thanks for sharing! :slight_smile:

Your project looks quite interesting, although from the videos and descriptions it’s a bit unclear how general it is, and what the requirements would be for running it.

To clarify:

  1. The videos all show positioning relative to an initial position - is the system general enough that it can continue to do positioning once the entire initial video frame is out of view?
    • e.g. would it be able to scan down the length of a dock piling, or along the side of a ship?
  2. Are the camera and IMU reasonably arbitrary (within performance constraints), or would they need to be specific ones as part of a packaged system?
    • The BlueROV2 already includes a low cost camera (which people may wish to swap out with their own alternatives), and an IMU in its flight controller, so perhaps they could be used without additional expense?
  3. Does your controller operate on high level motion axis controls (e.g. forward, vertical, yaw, etc), or does it require control of each thruster individually?

Specifically responding to your questions:

As concepts they can definitely be useful, especially for capturing stable footage and imagery as part of an inspection. In general, though, vision-based positioning requires visual features to operate off, so it can be poorly suited to open water and/or poor visibility situations. That limits the applicable use-cases and operating conditions, and means that most operations would require some period of “other” control where the vision system is not available.

A USB connection would likely be the most accessible, but it may be difficult to provide both MAVLink commands and an encoded video stream that way. Connection may be simpler on vehicles with an Ethernet Switch, if the camera system connects to a MAVLink endpoint for control while also presenting an encoded video stream which can be forwarded to and received from the topside.
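As a small illustration of the endpoint idea, a camera computer could announce itself as a MAVLink component over UDP using the MAVLink C headers. This is only a sketch; the address, port, and component ID here are assumptions for a typical BlueOS-style setup, not tested values:

```cpp
// Sketch: announce the camera computer as a MAVLink component over UDP.
// The header path depends on how the MAVLink C library was generated.
#include "common/mavlink.h"
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(14550);                  // assumed endpoint port
    inet_pton(AF_INET, "192.168.2.2", &dest.sin_addr);

    mavlink_message_t msg;
    uint8_t buf[MAVLINK_MAX_PACKET_LEN];
    // System ID 1 (the vehicle), component ID 191 (onboard computer).
    mavlink_msg_heartbeat_pack(1, 191, &msg,
                               MAV_TYPE_ONBOARD_CONTROLLER,
                               MAV_AUTOPILOT_INVALID, 0, 0, MAV_STATE_ACTIVE);
    const int len = mavlink_msg_to_send_buffer(buf, &msg);

    for (;;) {
        sendto(sock, buf, len, 0,
               reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
        sleep(1);                                    // 1 Hz heartbeat
    }
}
```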

There are many possible ways that a camera stream can be processed and augmented, including things like

  • visibility enhancement (making details easier to see)
  • colour correction (reducing/removing the effects of water on the colours)
  • object / region of interest detection (e.g. animal/coral types and counts, infrastructure defects, etc)
  • localisation and mapping (e.g. extend from local relative positioning to full SLAM)

Power consumption should ideally be as low as possible (as with any electronics), and preferably would operate on the voltage ranges supported by existing components in the BlueROV2 (e.g. 7-26 V, or 5 V). If it becomes a significant impact on battery life then it would need to be backed up by sufficiently useful features as to justify having a larger and/or secondary battery (which would also reduce the number of systems it can run on).

Depth rating and dimensions are somewhat related. If the camera system is to operate within the existing BlueROV2 enclosure then it would need to fit in the enclosure (and the camera would need to be positionable in the center of the dome), but then it wouldn't need any kind of depth rating of its own. If the idea is to have the system in an external enclosure then the size is less restricted, but small is still good (to reduce drag), and you would then need to design to a depth rating relevant to the expected use-cases (e.g. a BlueROV2 with an acrylic electronics enclosure is rated to 100 m, the next step up (default but with an aluminium enclosure) is rated to 300 m, with further upgrade limits at 500 m and 950 m).


Hi @ljlukis , thanks for your questions!

It is not necessary to use a Jetson Nano. The most computationally expensive part of the station-keeping task is the vision. We initially started off using a low-cost single board computer (an Odroid XU4 in our case). With this board we were able to run the vision part at 15 Hz, and consequently the control loop at the same frequency, which gave good performance. However, to get even better performance we switched to an NVIDIA Jetson GPU (the Jetson Nano), where we are able to run in real time at 20-25 Hz.
Also, we have both CPU and GPU versions of the vision library.

So in short, you should be able to run everything on a low-cost microprocessor board with performance comparable to the Odroid XU4 (like a Raspberry Pi 4), but I don't think you can run everything on a low-cost microcontroller, due to its limited computational resources.

Hi @EliotBR, thanks for your comments and your questions!

We hope the following answers help you better understand our vision-based positioning solution.

Actually, the positioning is done relative to a “reference image”. Imagine that while performing an inspection task, the operator wants to carefully observe an area of an offshore structure. The operator manually positions the vehicle in front of this area of interest, where the camera captures an image of it. The image taken at this instant is considered the “reference image”. Once this image is captured, the operator then orders the ROV to perform station keeping (completely autonomously).

If for any reason the initial image (a.k.a. the reference image) goes outside the field of view of the camera, a new reference image is chosen automatically right after the current one is lost, and positioning then continues relative to the new reference image.
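To make this concrete, here is a much-simplified sketch of the idea (not our actual library code), where the fraction of matched ORB features decides when to re-anchor on a new reference:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Fraction of reference keypoints still matched in the current frame --
// a crude proxy for "is the reference still in view?".
static double matchedFraction(const cv::Mat& ref, const cv::Mat& cur) {
    auto orb = cv::ORB::create(500);
    std::vector<cv::KeyPoint> kpRef, kpCur;
    cv::Mat descRef, descCur;
    orb->detectAndCompute(ref, cv::noArray(), kpRef, descRef);
    orb->detectAndCompute(cur, cv::noArray(), kpCur, descCur);
    if (descRef.empty() || descCur.empty()) return 0.0;
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descRef, descCur, matches);
    return static_cast<double>(matches.size()) / kpRef.size();
}

// Re-anchor on the current frame whenever the reference is (nearly) lost.
void onNewFrame(const cv::Mat& frame, cv::Mat& reference) {
    if (reference.empty() || matchedFraction(reference, frame) < 0.3) {
        reference = frame.clone();   // capture a new reference image
        return;
    }
    // ...estimate relative pose w.r.t. 'reference' and feed the controller...
}
```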

So to scan down the length of a dock piling, or along the side of a ship, etc., the operator can use the joystick to define a new setpoint (up/down, left/right, forward/backward), and that process is repeated so that the ROV performs a full scan. In this case, the operation is in a semi-autonomous mode: the setpoints are chosen manually, while the station keeping is carried out autonomously.

We are currently working on the autonomous inspection. As soon as we have some initial experimental results we will post the videos on the forum.

So far, we have tested our solution with several different cameras and IMUs:

  • Basler acA1300-200uc with a 3.6 mm focal length lens (C mount)
  • oCam with a 3.6 mm focal length lens (M12)
  • Blue Robotics Low-Light HD USB Camera with a 2.97 mm focal length lens (M12)
  • withRobot myAHRS+
  • Pixhawk Flight Controller

In underwater infrastructure inspection, the focal length of the lens is important because it affects the field of view, especially when the camera is close to the observed target. For this reason, lenses with focal lengths of 2.97 mm and 3.6 mm were chosen. Image acquisition at 30 Hz is preferred; however, our solution still works at extremely low frequencies, as in the case of the GreenExplorer ROV (300 kg dry mass), where images were received at 9 Hz. We do not need a global shutter camera. The IMU should preferably run at 50 Hz or more.
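For context, the horizontal field of view follows from the sensor width w and the focal length f as

```latex
\theta_h = 2 \arctan\!\left(\frac{w}{2f}\right)
```

so with f = 2.97 mm and an assumed sensor width of about 5.4 mm (typical of a 1/2.9″ sensor; check your camera's datasheet), this gives roughly 85°, which is why shorter focal lengths are preferred for close-range inspection.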

So we can confirm that the choice of camera and IMU is reasonably arbitrary (within performance constraints).

The Blue Robotics Low-Light HD USB Camera and the Pixhawk IMU are a perfect fit for our solution. In fact, the videos you have seen in our post, showing the positioning of our in-house ROV (modified from a BlueROV1), were made using this sensor suite.

The outputs of our controller are control forces and torques, each in the form of a vector with three components along (or about) the longitudinal (a.k.a. forward), lateral, and vertical directions.

Then, by employing a control allocation matrix (which takes into account the geometric distribution of the thrusters), the thrust of each thruster can be calculated. Each thruster's PWM value is then determined using a look-up table. For the moment, we use the open-source PX4 firmware, modified to carry out that control allocation.
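To illustrate the allocation step, here is a much-simplified sketch using Eigen; the allocation matrix below is a made-up example geometry, not our actual configuration:

```cpp
// Minimal control allocation sketch (hypothetical 6-thruster geometry).
// tau = B * u  ->  u = pinv(B) * tau, then PWM via a look-up table.
#include <Eigen/Dense>
#include <iostream>

int main() {
    // Allocation matrix B: rows = controlled axes (surge, sway, heave, yaw),
    // columns = thrusters. Entries are made-up direction/moment-arm terms.
    Eigen::Matrix<double, 4, 6> B;
    B <<  0.707,  0.707, -0.707, -0.707, 0.0, 0.0,   // surge
         -0.707,  0.707, -0.707,  0.707, 0.0, 0.0,   // sway
          0.0,    0.0,    0.0,    0.0,   1.0, 1.0,   // heave
          0.2,   -0.2,   -0.2,    0.2,   0.0, 0.0;   // yaw

    // Desired force/torque vector from the vision-based controller (N, N·m).
    Eigen::Vector4d tau(10.0, 0.0, -5.0, 0.5);

    // Least-squares allocation via the Moore-Penrose pseudo-inverse.
    Eigen::Matrix<double, 6, 1> u =
        B.completeOrthogonalDecomposition().pseudoInverse() * tau;

    std::cout << "Per-thruster thrust (N):\n" << u << "\n";
}
```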

To answer your question: we confirm that our controller operates on high-level motion axis controls and does not require control of each thruster individually.

We assume that the inspection task is carried out when the operator can “observe” the underwater structures with their own eyes. In poor visibility situations, if no visual features can be detected, our solution is logically not applicable. However, the solution is quite robust in turbid water, as you can see in our videos (starting from 2:30).

Thanks for your suggestions! We will try the Ethernet connection and the MAVLink endpoint.

In our vision library we do use image processing techniques like histogram equalization to improve contrast, which helps us a lot when the water is extremely turbid.
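As a rough illustration (a much-simplified sketch, not our production code), equalizing only the lightness channel improves contrast without shifting the colours:

```cpp
// Contrast enhancement sketch: histogram equalization on the L channel only.
// Assumes an 8-bit BGR input frame.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat enhanceContrast(const cv::Mat& bgr) {
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);    // separate lightness from colour
    std::vector<cv::Mat> channels;
    cv::split(lab, channels);
    cv::equalizeHist(channels[0], channels[0]);   // equalize lightness only
    cv::merge(channels, lab);
    cv::Mat out;
    cv::cvtColor(lab, out, cv::COLOR_Lab2BGR);
    return out;
}
```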

  • colour correction (reducing/removing the effects of water on the colours)

We recently purchased a DeepWater Exploration camera. We will test it and share the results on the Blue Robotics forum.

  • object / region of interest detection (e.g. animal/coral types and counts, infrastructure defects, etc)
  • localisation and mapping (e.g. extend from local relative positioning to full SLAM)

These are two points we are currently working on. As soon as we have some initial experimental results, we will post the videos on the forum.

In addition, we have developed a vision-based pipeline-following controller, shown in the video below.

The voltage ranges supported by the BlueROV2 suit our camera system, since our C++ vision and control libraries run on an SBC (NVIDIA Jetson Nano, NX, …). We will carry out tests to check the power consumption and share the results on the Blue Robotics forum.

Thanks for the information. So far, we are considering several options:

  1. Using the existing camera + Pixhawk of the BlueROV2.
  2. Using the Pixhawk of the BlueROV2 and an external camera with its own enclosure (e.g. the DeepWater Exploration camera).
  3. A separate box (external enclosure) containing a camera, an IMU, and an SBC.

For the 1st and 2nd options, we need space for the SBC inside the main enclosure.

For the 2nd option, the additional camera can be mounted at a relatively arbitrary position/orientation.

The 3rd option seems more suitable for ROVs with larger dimensions.
