Let's talk about underwater imaging and video

Underwater video and imaging is a topic of its own: there is natural light attenuation with depth, the type and position of the light relative to the object and camera, and so on. I won't go into that for now. Here, I want to focus on the ideal camera position for capturing underwater images and see how to adapt it to an ROV.

After some searching I found and concluded the following, and would like some input / corrections from the community if something is not right.

First, it seems rather clear that dome ports are better than flat ports in every respect. A dome offers less distortion and little to no chromatic aberration. This video covers the topic: Dome port theory

Second, the bigger the dome, the better. This is due to the edge region of the dome, which creates distortion in the image. If you increase the dome size, a bigger portion of the image (if not all of it) comes through the central part of the dome, which minimizes the edge distortion effect.

Third, the camera lens needs to be positioned at the no-parallax point (NPP, also called the nodal point); in practice this means placing the lens entrance pupil at the dome's center of curvature. This point is located inside the dome, at the center of the sphere the dome is a slice of. Most domes on the market are not full hemispheres but only a smaller portion of a sphere, so the NPP has to be estimated by calculation from the sphere parameters. The actual BR dome is close to a half sphere. I played with the 3D model, and if this theory holds, the NPP is at the center of the dome, exactly aligned with the mid-plane of the dome flange.

Fourth, the camera needs to be able to focus on the virtual image of the object. This virtual image is located on the outside of the dome, at a distance that depends on the real object's distance from the dome. This online tool helps to visualize it: Oceanity dome port estimator
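If you want to play with the numbers outside of the online tool, here is a minimal Python sketch of the thin-dome approximation (single refracting surface, glass thickness ignored, water index ≈ 1.33). The 5 cm dome radius is just an example value, not a measured BR spec:

```python
N_WATER = 1.33  # approximate refractive index of water

def virtual_image_distance(object_dist_m, dome_radius_m=0.05):
    """Distance (m) of the virtual image in front of the dome surface,
    for an object object_dist_m in front of the dome surface."""
    # Single-surface refraction (water -> air): n_air/s' - n_water/s = (n_air - n_water)/R,
    # with s = -object distance and R = +dome radius (center of curvature on the camera side).
    inv_s_prime = -N_WATER / object_dist_m + (1.0 - N_WATER) / dome_radius_m
    return -1.0 / inv_s_prime  # positive result = virtual image in front of the dome

for d in (0.45, 1.0, float("inf")):
    print(f"object at {d} m -> virtual image ~{100 * virtual_image_distance(d):.0f} cm in front of the dome")
```

With those assumptions, an object at 1 m shows up as a virtual image roughly 13 cm in front of the glass, and even an object at infinity lands at about 3 dome radii, which is why the lens has to be able to focus much closer than it would in air.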

Let's compare this with the actual BR2 setup. The camera lens is on a gimbal and moves. In an ideal world the gimbal would pivot the lens exactly about the NPP, so images would stay free of parallax distortion. That is a challenge to achieve in such a small enclosure, and the actual BR gimbal does not do it: at best the lens moves at a constant distance from the dome face, but it does not pivot about the NPP. This probably affects image quality and also increases the edge distortion effect when the camera is pointed toward the side of the dome. We could recommend shooting video with the gimbal centered at all times and pivoting the ROV instead.
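To get a rough feel for how big that parallax error might be, here is a simple geometric sketch. It assumes the entrance pupil sits exactly at the dome center at 0° tilt and that the tilt axis is about 25 mm behind the pupil; both numbers are illustrative guesses, not measurements of the BR gimbal:

```python
import math

PUPIL_TO_PIVOT_MM = 25.0  # assumed distance from tilt axis to lens entrance pupil

def pupil_drift_mm(tilt_deg, pupil_to_pivot_mm=PUPIL_TO_PIVOT_MM):
    """How far (mm) the entrance pupil ends up from the dome center at a given tilt,
    if it coincides with the dome center at 0 degrees."""
    # The pupil traces an arc of radius pupil_to_pivot_mm around the pivot;
    # its displacement from the 0-degree position is the chord 2*r*sin(theta/2).
    return 2.0 * pupil_to_pivot_mm * math.sin(math.radians(tilt_deg) / 2.0)

for tilt in (0, 15, 30, 45):
    print(f"tilt {tilt:>2} deg -> pupil ~{pupil_drift_mm(tilt):.0f} mm off the dome center")
```

Under those assumptions, a 45° tilt already pushes the pupil almost 2 cm away from the dome's center of curvature, which supports the idea of keeping the gimbal centered and steering the ROV instead when image quality matters.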

The low-light camera also doesn't automatically focus on the virtual image. We need to rotate the lens to improve this, but based on point four above, the virtual image is known to move, and we can only rotate the lens to focus on one point at a fixed distance from the ROV. We have to choose: will it be at 1 m, or at 45 cm? And then position the object, lens and ROV at that distance.

In an ideal world, one solution that comes to mind and addresses all of this would be a fixed camera in a dome. The camera would sit at the NPP and would allow the focus to be adjusted on the go, to follow the variable distance of the virtual image (and of the object of interest). The gimbal would not be designed to move the camera, but rather to point the whole dome + camera assembly toward the object of interest! This would keep the lens at the exact NPP at all times and take full advantage of always looking through the center of the dome.

Obviously this is a challenge! A gimbal outside of the dome! Are you out of your mind? Probably… I have no solution to suggest; I am just looking to feed the discussion and idea sharing about this. Maybe these improvements would only marginally impact image quality and don't require modifying everything? What do you guys think?

cheers


Hi @Charles,

The ideas you’re discussing here seem to be generally correct, at least from my understanding :slight_smile:

I would note that there are various gimbal designs that are better suited to maintaining a fixed sensor location, and several of them would be simpler and cheaper to implement than rotating the entire dome+camera setup (excluding just rotating the whole vehicle, which is not always practical). The primary challenges are around space management and force distribution - such designs typically involve mechanical linkages, or at least offsets, which can make it harder to fit the relevant equipment into a given space, especially if it needs to be rigid enough to avoid excessive wobbling/shaking when the camera (or vehicle) moves.

It’s also valuable to recognise that engineering design is about tradeoffs, and relatively obvious improvements to one variable may have limited impact when compared to changing other variables in the system. There is discussion of some of the other relevant variables in this forum post if you’re interested :slight_smile:

I expect gimbal sophistication (including image stabilisation) will be one of the weakest links once more advanced cameras become commonplace, at which point this will likely become more of a design priority (at least for high end systems - I imagine some hobbyist-focused / low cost vehicles could go the other way, and try to strip out components to fit in a smaller enclosure).

Zoom and focus control are not available in our current camera products, but they are relatively independent of other camera system considerations, so they can be a great functionality to invest in. That's particularly relevant if you're often operating in conditions clear enough that you frequently need to change focus between near and far objects.

That is something we’re factoring in to our next generation(s) of camera designs, while also acknowledging the additional interface/input requirements needed for controlling extra functionalities. Automation can help to some extent, but robust auto-focusing is something of an open problem (e.g. when / how often, and how quickly, should changes be applied, and where does the user actually want to focus in any given moment?), which certainly keeps research and development interesting!