A heavy aluminum frame makes it more robust. It’s quite slow but very stable underwater, which is exactly what we were aiming for.
Two batteries in separate enclosures are connected with the ON/OFF switch in the middle. The Ping sonar for altitude hold is located at the lowest point, and the attachment for the Waterlinked U1 Locator is on the upper front.
And most importantly, an additional 4K camera for photogrammetric documentation. It is basically a GoPro with an additional power source. It has a cable connection to the Rec button and the indicator light. Yes, it is possible not only to control the buttons but also to read the indicator light through the 150 m Fathom Tether, as shown here:
The Newton Gripper was initially planned for the camera tilt mechanism. We're not sure if we will use it that way, but it's good to have a gripper installed anyway.
The photogrammetry camera is our ROV's main tool. It lets us collect high-quality footage from which we create detailed 3D models of underwater cultural heritage. Waterlinked navigation makes it a great search-and-identify vehicle. With its ability to hold depth, it should also be a good platform for the magnetometer we plan to add in the future.
I only recently created a profile here, but I have been reading the forum content for a long time. I would like to use this opportunity to thank you all for sharing your knowledge and ideas here. If you have any questions about our build that may help with your own designs, feel free to ask.
Also, big thanks to BlueRobotics for making it possible for a small company like ours to have its own professional ROV!
Thanks for sharing the features of your ROV design, and a bit about how you use it. It’s always great to see the various ways people make use of our equipment, and your use-case is quite fascinating.
This was really interesting! The model was quite detailed, but it was also super cool to see the ROV itself included, with lighting, along with water and a few fish for some extra perspective. Really helps to set the scene
I agree - the community here is great, and it’s something we try hard to foster. Thanks for joining in the sharing; no doubt your post will spur some interesting discussions here and elsewhere, and may serve as inspiration for other people’s designs
My main questions are on the imaging side of things:
The output model seems to have quite natural colours, such that it’s not obvious it’s from underwater images. Is that achieved with some kind of post-processing colour correction, or is the vehicle driven close enough to the target, with strong enough lighting that the colours look relatively normal?
Photogrammetry is notorious for being somewhat finicky. Is there a particular software you recommend, and are you able to provide any more details on the process you follow to get good results? (e.g. successive image overlap, vehicle speed, processing hardware+time required to get from photos to model, etc)
You’re most welcome! Our mission is to enable the future of ocean exploration, and we love it when people like you, and companies like yours, are able to use our equipment/components to solve problems and do awesome work/projects
P.S. I’ve edited your post a bit to make the videos embedded, so that it’s possible to view them without needing to leave the forum. You can learn how to do that and other formatting in the Formatting a Post/Comment section of the How to Use the Blue Robotics Forums post
Thanks for your response, Eliot, and also for fixing the videos.
Post-processing with underwater images is a must. Generally, photogrammetry software does not like the pictures being altered too much: you cannot crop them or go crazy with contrast. With colour, however, you can do almost anything, and that's where the magic happens. If you have a look at one of our old models on Sketchfab, you will notice a scale bar lying on the seabed with a grey rectangle on it. That's a neutral grey sample. It makes setting a natural-looking white balance much easier.
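For anyone curious how the grey-card trick works numerically, here is a minimal sketch (not the exact workflow described above, and `white_balance_from_grey` is just an illustrative helper name): each channel is scaled so that the sampled grey patch reads as neutral, which removes the blue-green cast that water absorption introduces.

```python
import numpy as np

def white_balance_from_grey(img, patch_rgb, target=0.5):
    """Scale each channel so the sampled grey-card pixel becomes neutral.

    img: float RGB array in [0, 1], shape (H, W, 3)
    patch_rgb: mean RGB of the grey card as it appears in the image
    """
    patch = np.asarray(patch_rgb, dtype=np.float64)
    gains = target / np.maximum(patch, 1e-6)   # per-channel correction factors
    return np.clip(img * gains, 0.0, 1.0)

# Synthetic example: a blue-green colour cast typical of underwater footage.
rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, size=(4, 4, 3))  # "true" colours
cast = np.array([0.4, 0.8, 0.9])               # water absorbs red first
shot = np.clip(scene * cast, 0.0, 1.0)         # what the camera records
grey_sample = 0.5 * cast                       # how a 50% grey card photographs
corrected = white_balance_from_grey(shot, grey_sample)
```

In this toy case the correction recovers the original colours exactly; on real footage the grey sample is averaged over many pixels and the result is only approximate, but it gives a consistent starting point across a whole survey.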
Well, that's a topic for a whole article! To answer your questions briefly:
We use Agisoft Metashape Professional Edition, as it lets us work with georeferenced data and has other powerful features. For processing a large number of pictures at high settings, you need quite a powerful workstation, or alternatively a top-shelf gaming computer.

When it comes to overlap, it's generally good to have every feature you want to document in at least three pictures from one angle (another angle, another three). Vehicle speed… well, the goal is to have sharp footage. Sharpness and brightness are the main goals, so you need to adjust the shutter speed to your vehicle speed (or, usually, the other way around). For ROVs and less experienced divers, I definitely recommend shooting high-resolution video instead of still pictures. Extracting video frames in post-processing gives you the advantage of setting the proper overlap after the "fieldwork" is done. Frames are never as good as still pictures, but that does not mean you cannot create detailed models from them.
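The overlap-versus-speed trade-off above comes down to simple geometry. As a rough sketch (assuming a downward-looking camera over a flat seabed; `frame_interval` is a hypothetical helper, and real footprints depend on terrain and camera tilt):

```python
import math

def frame_interval(altitude_m, fov_deg, speed_mps, overlap=0.8):
    """Seconds between extracted frames for a target forward overlap.

    Assumes a flat seabed and a nadir-looking camera, so the
    along-track footprint is 2 * altitude * tan(FOV / 2).
    """
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    advance = footprint * (1 - overlap)   # ground distance per new frame
    return advance / speed_mps

# e.g. 1.5 m altitude, 60° along-track field of view, 0.3 m/s survey speed
dt = frame_interval(1.5, 60.0, 0.3)       # ≈ 1.15 s between frames
```

Flying lower or faster shrinks the interval quickly, which is part of why slow, stable vehicles like this one suit photogrammetry so well.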
That is really cool. I would be interested in seeing a lot more video clips of the things that you find, and also in detailed guides on how to set up and use the ROV for photogrammetry. I played with the free version of the Agisoft software once and was impressed with the technology. I would love to make 3D models of my dive sites, but I think visibility would be a huge problem, since mine is less than 10 feet.
Taking screenshots of videos was the way I did it the one time I tried. I set up VLC to automatically take a screenshot every second, then went through the folder of screenshots, manually checked them, and deleted the bad images.
This kind of thing is where having access to extra information can definitely provide some benefits. I've previously extracted video frames by choosing the sharpest frame within a short time window, at regular intervals (so there's a lower chance of bad photos, because you're choosing the best one around each point in time). That could definitely be improved by aligning the telemetry with the recorded video and taking into account the integrated distance from the accelerometers, as well as things like vibration and speed, to determine the best frames to extract.
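The sharpest-frame-per-window idea can be sketched in a few lines. This is a minimal illustration, not the actual tooling described above; it uses variance of a Laplacian response as the sharpness score, which is a common heuristic but only one of several possible metrics:

```python
import numpy as np

def sharpness(gray):
    """Variance of a 3x3 Laplacian response -- higher means sharper."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def pick_sharpest(frames, window=25):
    """Return the index of the sharpest frame within each window of frames."""
    picks = []
    for start in range(0, len(frames), window):
        chunk = frames[start:start + window]
        best = max(range(len(chunk)), key=lambda i: sharpness(chunk[i]))
        picks.append(start + best)
    return picks

# Toy demo: a sharp checkerboard vs. a box-blurred (flattened) copy of it.
sharp = np.indices((32, 32)).sum(0) % 2 * 1.0
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(np.roll(sharp, 1, 0), 1, 1)) / 4   # averages to flat grey
picks = pick_sharpest([blurred, sharp, blurred], window=3)
```

In practice you would decode frames with something like OpenCV or ffmpeg and pick, say, the best frame per second of video; weighting the score by telemetry (vibration, speed) as suggested above would be a natural extension.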
Come to think of it, that could make for a decent introductory (ish) programmatic log file analysis tutorial or something. I’ll add it to my list
For photogrammetry purposes it’s of course preferable to use raw images if possible, rather than ones that have already been compressed for video, but that’s generally only possible with some more specialised hardware. I’ll have to see if I can work something like that into StaROV, since I’m already planning to use an OAK camera in it. Hopefully I can set it up to send raw frames on command based on telemetry and timing, in a separate queue to the normal encoded video stream.
First I used the BlueRobotics Switch to control a relay powered by a 9 V battery. It did work; however, I came up with an idea to make it a bit simpler. In the BR Switch you push a little button by screwing the knob in. I got rid of the small button, extended the inside end of the knob, and made it push… the contact of the power relay directly. So there is no power going through the relay coil and no magnetic field to activate the armature; the contact is simply pushed physically by the Switch knob.
Is it elegant? No. Does it work? Yes.
It took some time to properly 3D print an internal insert for the enclosure to hold it all tightly in place. It works every time and does not need an additional power source.