BlueROV + Zed Stereo Camera Calibration Tutorial

Recently had a fun time with @tony-white and a crew from the Sexton Corporation in Kailua-Kona, Hawaii, assisting them with their cameras. The calibration they had wasn't working, so I built some software to help them monitor and record the BlueROV while it was underwater and calibrate the camera. Really fun underwater computer vision project. I definitely think it's the start of a larger set of projects I'll work on in the future.

The full tutorial is here: stereographic_depth_estimation_opencv

Would be happy to give more detail or answer questions. Tony invited me to join and post this in the forums. So this is my first post! Mahalo T.


Thanks @jackmead515 !
We’re currently evaluating the ZedHead for the BlueRobotics 3rd party product “Reef” - if this is something you’d be interested in, please share your application and any questions you may have!


So good. Thanks for posting a great tutorial!


I have always been aware of the direct application of stereoscopic cameras to underwater photogrammetry, although it can also be done with conventional 4K cameras. It would be interesting to know whether they offer advantages for this purpose.


Hi @juanjepalomeke -
The primary advantages are twofold - a reduction in processing time when generating the 3D photogrammetry model, and that model being generated to scale. When done with a single camera, the size of the objects captured is unknown!

From what you say, that is already more than enough to make them interesting to me. Is it possible to use that code with any stereoscopic camera?

The code shown is generalized, yes, but each camera may have a different calibration matrix and raw video format / resolution, as well as a different baseline (camera separation).

Yes, to back Tony up: the intrinsic and extrinsic properties of each stereo camera are highly unique. Even slight changes to resolution, baseline, or recording medium (air, fresh water, salt water, etc.) will greatly affect the quality of the calibration.
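To make the baseline dependence concrete, here's the standard pinhole stereo depth relation, Z = f * B / d, as a minimal sketch. The numbers are purely illustrative, not from any particular camera:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: Z = f * B / d.

    focal_px     - focal length in pixels (from the calibration matrix)
    baseline_m   - distance between the two camera centers, in meters
    disparity_px - horizontal pixel offset of a feature between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline,
# 10 px measured disparity.
z = depth_from_disparity(700.0, 0.12, 10.0)  # -> 8.4 meters
```

This is also why the same code can't just be dropped onto a different camera: swap in the wrong baseline or focal length and every depth estimate is scaled incorrectly.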

In addition, the calibration images may have to be collected several times over, as getting a quality set of images is more of an art form than a purely mechanical process. The estimation model needs a well-distributed set of points across the image plane, captured at many different angles and depths from the camera. Lighting conditions matter too, since they influence the template matching algorithm and how it approximates the positions of the checkerboard corners. And even the checkerboard geometry itself has to be configured correctly (9x6, 11x8, etc.).
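As an illustration of why the board geometry must match the configuration: OpenCV-style calibration pairs each detected corner with a known 3D reference point, and that reference grid is built from the inner-corner counts and the physical square size. A sketch, assuming a 9x6 board with 2.5 cm squares:

```python
import numpy as np

def checkerboard_object_points(cols=9, rows=6, square_size=0.025):
    """Build the 3D reference points for one checkerboard view.

    cols, rows  - INNER corner counts (9x6, 11x8, etc.); these must
                  match the pattern size passed to the corner detector,
                  or calibration will fail or silently mis-associate.
    square_size - edge length of one square, in meters, so the
                  calibration (and any resulting depth) is to scale.
    """
    objp = np.zeros((rows * cols, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2)
    return objp * square_size

pts = checkerboard_object_points()
# 54 points on the Z=0 plane, spaced 2.5 cm apart.
```

The board is assumed planar (Z = 0 for every point), which is exactly why any warping of the physical plane corrupts the result.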

In a controlled environment this can be easy! But as you can see in the tutorial, some of the checkerboard points in the video are distorted due to noise in the water and the lighting conditions. There are various denoising and equalization filters that could be applied to minimize this, but they just introduce more tunable options.
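For reference, a minimal NumPy sketch of one such filter, global histogram equalization, which spreads the image's intensities over the full 0-255 range so the checkerboard squares separate better from the water column. In practice you'd more likely reach for OpenCV's cv2.equalizeHist or cv2.createCLAHE (CLAHE equalizes locally and limits noise amplification, usually a better fit underwater):

```python
import numpy as np

def equalize_histogram(gray):
    """Global histogram equalization for a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    if cdf[-1] == cdf_min:
        return gray.copy()  # flat image: nothing to equalize
    # Classic equalization mapping, rescaled to 0..255.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[gray]
```

As the post says, though, every filter like this adds tunable knobs (clip limits, tile sizes) that themselves affect corner detection.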

The checkerboard template we used was manufactured to high precision, and the plane is very smooth. Any slight distortion or warping of the checkerboard plane (for instance, from printing a pattern on a home printer) will also affect the calibration accuracy.

In my case, I'll play with my old GoPro 3D to test it.

Thanks