Underwater visibility improvement

What if you had a way to see clearly and easily look around underwater? Our middle school robotics team has been working on an innovation to do exactly that, and we need your advice!

Our idea is to combine:

  • A camera on an ROV that can pan and tilt inside a clear dome
  • Color correction processing to reverse the blue shift that happens when water blocks longer wavelengths (a rough sketch of what we mean is below this list)
  • Video feed displayed on a VR headset worn by the ROV pilot
  • Motion sensors on the headset detect the pilot’s head movements, and the angles are sent back down the tether to servos on the ROV that move the camera, so the camera points wherever the pilot wants to look
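
Just to show the kind of processing we mean by “color correction”, here is a minimal sketch of the simplest approach (gray-world white balance) using OpenCV/NumPy. It only rebalances channel averages rather than physically reversing the attenuation, and the camera index is just a placeholder for testing, so treat it as an illustration rather than our final design:

```python
# Minimal gray-world white balance sketch: rebalance the B/G/R channel
# averages to push back against the blue/green cast. Assumes an
# OpenCV-style BGR uint8 frame; a real pipeline would be more involved.
import cv2
import numpy as np

def gray_world_balance(frame_bgr: np.ndarray) -> np.ndarray:
    f = frame_bgr.astype(np.float32)
    means = f.reshape(-1, 3).mean(axis=0)           # per-channel means [B, G, R]
    gains = means.mean() / np.maximum(means, 1e-6)  # scale channels to a common mean
    balanced = f * gains                            # broadcast gains over all pixels
    return np.clip(balanced, 0, 255).astype(np.uint8)

cap = cv2.VideoCapture(0)   # 0 = default camera, placeholder for testing
ok, frame = cap.read()
if ok:
    cv2.imwrite("corrected.png", gray_world_balance(frame))
cap.release()
```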

For our project we need input from an actual ROV pilot. What do you think of this system, and what else might we consider to improve it?

Hi @LowBat, welcome to the forum! :slight_smile:

Note that fitting these components in the available space while keeping the camera at the dome’s focal point can be challenging, and moving the camera closer to the dome wall results in additional distortion that may need to be corrected for.

Reversing the blue shift accurately is a somewhat complex physical problem, so be aware that it may require a camera with scene depth (distance) measurement capabilities, and an estimate of how much light is provided externally (e.g. from the sun) vs reflected from lights on the vehicle.
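
For reference, physically-motivated corrections of this kind are usually built on a simple exponential attenuation plus backscatter model. Here’s a rough sketch that assumes you already have a per-pixel distance map; the per-channel coefficients are made-up illustrative numbers (real values depend on the water, depth, and your lights), not anything calibrated:

```python
# Invert a simplified underwater image formation model, per channel:
#   I_obs = I_true * exp(-beta * d) + B * (1 - exp(-beta * d))
# img_bgr: uint8 image (H, W, 3); depth_m: distance map (H, W) in metres.
# beta_bgr / backscatter_bgr below are ILLUSTRATIVE guesses only.
import numpy as np

def reverse_attenuation(img_bgr, depth_m,
                        beta_bgr=(0.05, 0.10, 0.40),
                        backscatter_bgr=(40.0, 30.0, 5.0)):
    f = img_bgr.astype(np.float32)
    out = np.empty_like(f)
    for c in range(3):
        t = np.exp(-beta_bgr[c] * depth_m)             # transmission per pixel
        direct = f[..., c] - backscatter_bgr[c] * (1.0 - t)  # remove backscatter
        out[..., c] = direct / np.maximum(t, 1e-3)     # undo the attenuation
    return np.clip(out, 0, 255).astype(np.uint8)
```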

There are alternative approaches to visibility improvement though, and many of them don’t focus on true physical correction (even if they roughly attempt to approximate it).

As I understand it, latency in a self-controlled video feed is one of the primary sources of nausea when wearing a headset. Headsets can be great for blocking out ambient light, but it’s important to minimise glass (camera) to glass (display) latency for a pleasant/usable operating experience, especially if you’re running a computationally heavy real-time processing algorithm (plus the decoding and/or encoding of the stream needed to do so) as part of the video pipeline.

Control latency may well be worse than video latency, especially factoring in the speed of the motors involved (heads can move quite quickly). Vision that lags movement can make you feel inebriated and unbalanced/nauseous, which is something to look out for.
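
To give a feel for how quickly those delays stack up, here’s a toy glass-to-glass budget. Every number is a placeholder to be replaced with measurements from your own hardware, and the ~20 ms figure is a commonly quoted VR comfort rule of thumb rather than a hard limit:

```python
# Toy glass-to-glass latency budget; all values are illustrative placeholders.
stages_ms = {
    "camera exposure + readout": 17,   # roughly one frame at 60 fps
    "encode (e.g. H.264)":       10,
    "tether transmit":            2,
    "decode":                     8,
    "colour correction":          5,
    "headset render + display":  11,
}
total = sum(stages_ms.values())
for name, ms in stages_ms.items():
    print(f"  {name:28s} {ms:3d} ms")
print(f"estimated glass-to-glass latency: {total} ms")
print("comfortable VR motion-to-photon is often quoted around 20 ms,")
print("so every processing step in the loop matters.")
```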

If possible, it may be worth considering a 360 camera and cropping/rotating the field of view in software to where the pilot is looking.
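
As a rough illustration of that software-only approach, here’s a sketch that crops a rectilinear view out of an equirectangular 360 frame given the headset’s yaw and pitch. The output size, field of view, and sign conventions are my own assumptions rather than anything standard:

```python
# Crop a rectilinear "virtual camera" view out of an equirectangular frame.
import cv2
import numpy as np

def equirect_to_view(equi, yaw_deg, pitch_deg, fov_deg=90, out_w=960, out_h=720):
    H, W = equi.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # pinhole focal length

    # Rays through each output pixel in the virtual camera frame
    # (x right, y down, z forward).
    u, v = np.meshgrid(np.arange(out_w) - out_w / 2,
                       np.arange(out_h) - out_h / 2)
    dirs = np.stack([u, v, np.full_like(u, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Rotate rays by pitch (about x) then yaw (about y) to follow the head pose.
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])
    Ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [ 0,           1, 0          ],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    dirs = dirs @ (Ry @ Rx).T

    # Direction -> longitude/latitude -> equirectangular pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # -pi .. pi
    lat = np.arcsin(np.clip(-dirs[..., 1], -1, 1))      # -pi/2 .. pi/2, +ve up
    map_x = ((lon / (2 * np.pi) + 0.5) * W).astype(np.float32)
    map_y = ((0.5 - lat / np.pi) * H).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)
```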

There is also the issue of the ROV being more readily manoeuvrable than the pilot, so consider what happens when the pilot turns the ROV around without turning their head, and the effect that may have on their vision, sense of space, and how they feel.

This may be worth a read, for some approaches others have taken :slight_smile:

Hi @LowBat -
Check out this design for both pan and tilt control, shared here originally by @Atsuhiro.