Hi @eyeNavRobotics, welcome to the forum, and thanks for sharing!
Your project looks quite interesting, although from the videos and descriptions it’s a bit unclear how general it is, and what the requirements would be for running it.
To clarify:
- The videos all show positioning relative to an initial position - is the system general enough that it can continue to do positioning once the entire initial video frame is out of view?
- e.g. would it be able to scan down the length of a dock piling, or along the side of a ship?
- Are the camera and IMU reasonably arbitrary (within performance constraints), or would they need to be specific ones as part of a packaged system?
- The BlueROV2 already includes a low cost camera (which people may wish to swap out with their own alternatives), and an IMU in its flight controller, so perhaps they could be used without additional expense?
- Does your controller operate on high level motion axis controls (e.g. forward, vertical, yaw, etc), or does it require control of each thruster individually?
- ArduSub can accept motion axis commands via MAVLink, but is currently poorly suited to individual thruster overrides
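To illustrate what "motion axis commands via MAVLink" could look like, here's a minimal sketch of mapping normalised high-level axis commands onto the value ranges used by the MAVLink `MANUAL_CONTROL` message. The exact conventions (x/y/r spanning -1000..1000, z spanning 0..1000 with 500 as neutral throttle) follow common ArduSub usage, but check against your firmware version - this is a sketch, not a definitive reference:

```python
def to_manual_control(forward, lateral, throttle, yaw):
    """Map normalised axis commands in [-1, 1] to MANUAL_CONTROL units.

    Assumed conventions (verify for your ArduSub version):
    - x (forward), y (lateral), r (yaw): -1000..1000
    - z (throttle/vertical): 0..1000, with 500 as neutral
    """
    clamp = lambda v: max(-1.0, min(1.0, v))
    x = int(clamp(forward) * 1000)
    y = int(clamp(lateral) * 1000)
    z = int(500 + clamp(throttle) * 500)  # remap [-1, 1] onto [0, 1000]
    r = int(clamp(yaw) * 1000)
    return x, y, z, r

# e.g. full forward, neutral throttle, half yaw right:
x, y, z, r = to_manual_control(1.0, 0.0, 0.0, 0.5)
```

With pymavlink the resulting values could then be sent with something like `master.mav.manual_control_send(master.target_system, x, y, z, r, 0)` (the final argument is the button bitmask).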
Specifically responding to your questions:
As concepts they can definitely be useful, especially for capturing stable footage and imagery as part of an inspection. That said, vision-based positioning generally requires visual features to operate off, so it can be poorly suited to open water and/or poor visibility situations. That limits the applicable use-cases and operating conditions, and means most operations would require some period of “other” control while the vision system is not available.
A USB connection would likely be the most accessible, but it may be difficult to provide both MAVLink commands and an encoded video stream over a single USB link. Connection may be simpler on vehicles with an Ethernet Switch, where the camera system could connect to a MAVLink endpoint for control while also presenting an encoded video stream that can be forwarded to (and received by) the topside.
There are many possible ways that a camera stream can be processed and augmented, including things like:
- visibility enhancement (making details easier to see)
- colour correction (reducing/removing the effects of water on the colours)
- object / region of interest detection (e.g. animal/coral types and counts, infrastructure defects, etc)
- localisation and mapping (e.g. extend from local relative positioning to full SLAM)
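As a toy example of the colour correction item, here's a pure-Python sketch of gray-world white balancing, one of the simplest approaches to reducing the colour cast water introduces (it assumes the scene should average out to neutral gray, which is only a rough approximation underwater). A real implementation would operate on image arrays with OpenCV or NumPy; this works on a plain list of RGB tuples just to show the idea:

```python
def gray_world_correct(pixels):
    """Gray-world white balance: scale each channel so its mean
    matches the overall gray level. pixels is a list of (r, g, b)
    tuples with values in 0-255."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]  # per-channel gain
    return [
        tuple(min(255, int(round(p[c] * gains[c]))) for c in range(3))
        for p in pixels
    ]

# A blue-green cast (typical underwater) gets pulled back toward neutral:
corrected = gray_world_correct([(100, 150, 200)])
```

More capable methods (e.g. depth-dependent attenuation models) exist, but they follow the same pattern of estimating and inverting the water's per-channel effect.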
Power consumption should ideally be as low as possible (as with any electronics), and preferably the system would operate on the voltage ranges supported by existing components in the BlueROV2 (e.g. 7-26 V, or 5 V). If it has a significant impact on battery life then it would need to provide sufficiently useful features to justify a larger and/or secondary battery (which would also reduce the number of systems it can run on).
Depth rating and dimensions are somewhat related. If the camera system is intended to operate within the existing BlueROV2 enclosure then it would need to fit inside (with the camera positionable in the centre of the dome), but it wouldn't need any kind of depth rating of its own. If the idea is to have the system in an external enclosure then the size is less restricted, although small is still good (to reduce drag), and you would then need to design to a depth rating relevant to the expected use-cases (e.g. a BlueROV2 with an acrylic electronics enclosure is rated to 100 m; the next step up (default but with an aluminium enclosure) is rated to 300 m, with further upgrade options at 500 m and 950 m).