New research using a customized BlueROV2 to conduct kelp forest surveys

Hello Blue Robotics community,

I’m excited to share two emerging Seattle Aquarium conservation research projects with you all, centered on using a customized BlueROV2 to conduct standardized benthic surveys in the nearshore subtidal along the Olympic Peninsula and within Puget Sound, Washington, USA. Given the logistical limitations of scientific SCUBA surveys (there’s only so much “ground” we as divers can cover carrying life support equipment on our backs!), our overarching objective is to expand the spatial extent across which we survey species abundance and distribution for kelp, invertebrates, and fishes, particularly within kelp forests.

A major challenge is operating a tethered vehicle within a canopy-forming kelp forest. However, thanks to the small size, high maneuverability, and responsive handling of the BlueROV2, we’ve developed methods to (1) survey 100m straight into a kelp forest, (2) turn the ROV around and locate the tether, then (3) carefully follow the tether back out of the kelp forest, which requires maneuvering left or right of, and over or under, bull kelp stipes (see here for examples of us getting the vehicle out of bull kelp forests). Through careful coordination between the tether handler and ROV pilot, we’ve even developed methods of putting tension on the tether and then exerting directional force via the ROV, which “creates an opening” when the tether is lodged in the thick of a tangle of bull kelp stipes (see this short video).

We have one GoPro 10 facing forward and one GoPro 10 facing downward mounted on a Payload Skid (neither is wired into the ROV, i.e., we turn them on at the beginning and let them run). We have a Ping Sonar Altimeter to maintain imagery at a consistent scale (we keep the ROV 1m above the benthos). We’re also using the WaterLinked Underwater GPS with the U1 locator to provide a geospatial record of our surveys. We have four lights facing downward (our primary illumination) and two facing forward. See here, here, & here for examples of our benthic transects via the downward-facing GoPro, and here, here, & here for examples of our benthic transects via the forward-facing GoPro.

On the analytical side, we’re using open-source AI programs to extract metrics of abundance and percent-coverage from our imagery. Specifically, we’re using VIAME (see here) for object detection, and CoralNet (see here) for metrics of percent-coverage. See the image attached for an example of using the ROV imagery within the respective GUIs, including an example of mapping a ROV survey atop open-source bathymetry. In short, our workflow involves extracting stills from the 4K GoPro video, annotating those images in VIAME and CoralNet, then appending those community data back to the original ROV telemetry file (containing the Ping data and GPS coordinates), such that we have a spatially-explicit record of species abundances/percent-coverage along with all ROV metadata.
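For anyone curious what that looks like in practice, here’s a rough sketch of the still-extraction and merging steps in Python (our actual pipeline is written in R and lives in the GitHub repositories below; the file names, column names, and frame interval here are just placeholders, and the annotation step in VIAME/CoralNet happens in between):

```python
# Rough sketch (not our production code): extract every nth frame from a GoPro
# video, then join image annotations back onto the ROV telemetry by timestamp.
# File names, column names, and the frame interval are placeholders.
import cv2
import pandas as pd

VIDEO = "downward_gopro.mp4"   # placeholder path
EVERY_N = 120                  # e.g., keep one still every 120 frames

cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS)
frame_idx, stills = 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % EVERY_N == 0:
        name = f"still_{frame_idx:06d}.png"
        cv2.imwrite(name, frame)
        stills.append({"image": name, "video_s": frame_idx / fps})
    frame_idx += 1
cap.release()

# Annotations exported from VIAME/CoralNet (placeholder column names),
# keyed by image name, merged onto the list of extracted stills...
annotations = pd.read_csv("annotations.csv")            # placeholder
stills_df = pd.DataFrame(stills).merge(annotations, on="image", how="left")

# ...then matched to the nearest telemetry record (Ping altitude, GPS, etc.)
telemetry = pd.read_csv("rov_telemetry.csv")             # placeholder
merged = pd.merge_asof(
    stills_df.sort_values("video_s"),
    telemetry.sort_values("elapsed_s"),
    left_on="video_s", right_on="elapsed_s",
    direction="nearest",
)
merged.to_csv("survey_spatially_explicit.csv", index=False)
```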

Having developed methods to conduct standardized benthic surveys within kelp forests, we hope these methods will provide researchers and those engaged in conservation with an additional tool in their long-term benthic monitoring toolkit. As we are big proponents of open-access research, we’ve created two GitHub repositories where we’re maintaining information:

By no means will I claim our analytical approach is THE way; rather, it is simply A way . . . keep in mind we’re field biologists (not computer engineers!), so we’re doing the best we can to adopt and implement the technology and software you all have worked so hard to make available.

There are a variety of features and customizations we want to develop further, some of which I’ve seen posts about, and some of which I haven’t come across any mention of before. I think I’ll save all that for another day, however. Thank you, Blue Robotics, for making your technology modular and accessible . . . I’m really excited to develop these survey methodologies further, as I genuinely believe this type of technology and the parallel advances in open-source AI have the potential to change the way we survey and understand ecological health and resilience. Please let me know if you have any questions, comments, or ideas.

Take good care,

Zach



Hi @zhrandell, thanks for sharing! :smiley:

This looks to be a really cool project, and I’m glad our existing technology and software stack has been useful so far. Hopefully the marine robotics community can continue to advance together, through sharing our successes, and the lessons from our failures.

It’s always exciting to see new and interesting applications that our products have helped to enable, and I have personal interests in both computer vision and data analysis, so there’s lots for me to enjoy here. Hopefully I’m able to finish up my current major documentation projects soon, and get a bit more time to work on my StaROV and data-alignment projects :slight_smile:

I had a quick look at the codebase you linked to. I’m not particularly familiar with R, so I kept my attention relatively shallow and high-level. A few suggestions:

  • the current ‘nth frame extraction’ process is indiscriminate, which means occasional blurry or poor quality frames could be captured even if there are other ‘better’ frames nearby
    • if you’re not overly short on processing time, it may be worth getting the sharpest of ‘k’ frames around each time of interest (similar-ish to this code of mine, which extracts the sharpest of every block of k frames; a rough sketch of the idea is below this list)
  • the issues you’re having with the ping data may be somewhat preventable by turning off automatic mode and setting a manual scan range, assuming you know rough limits for how far away the bottom will be
    • for the already collected data, the correction approach discussed here may help avoid needing to interpolate data that ‘overflows’ when the sonar gets too close to the bottom
  • it may be worth checking out Log Viewer for initial log previewing, analysis, and csv output of selected data
    • Log Viewer is also built into BlueOS, although it’s currently only fully integrated for Navigator-based vehicles
  • you might want to look into the hdf5 data format, particularly as you start to build up collections of data segments from longer dives
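To illustrate the ‘sharpest of k frames’ idea from the first point above (this isn’t the exact code I linked, just a sketch of the same principle, using variance of the Laplacian as a simple sharpness proxy):

```python
# Illustrative sketch of "sharpest of every block of k frames" - keep the frame
# with the highest Laplacian variance (a common, simple sharpness proxy).
import cv2

def sharpness(frame) -> float:
    """Variance of the Laplacian - higher generally means sharper."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def extract_sharpest(video_path: str, k: int = 120, out_prefix: str = "sharp"):
    cap = cv2.VideoCapture(video_path)
    best, best_score, idx = None, -1.0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        score = sharpness(frame)
        if score > best_score:
            best, best_score = frame, score
        idx += 1
        if idx % k == 0:          # end of a block of k frames - save the best one
            cv2.imwrite(f"{out_prefix}_{idx:06d}.png", best)
            best, best_score = None, -1.0
    cap.release()

extract_sharpest("downward_gopro.mp4", k=120)  # placeholder path and block size
```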

Also, please feel free to share any ideas you have about what kinds of hardware or software would make this kind of work easier (at any part of the process). We can’t guarantee we’ll be able to develop everything, but knowing the valuable and desired features across different use-cases helps us to prioritise what we focus our R&D and example creation resources on :slight_smile:


Thank you very much, @EliotBR, for your response! Your StaROV project is VERY cool, and the functionality section more or less reads as a wish list for where we’d like to take this work eventually (e.g., integrating additional cameras, incorporating terrain-following via the Ping 1D).

Good call re: the “sharpest ‘k’ frames” . . . we could definitely use that and we will incorporate that code – thank you! And likewise for your suggestions regarding modifying how we both use and analyze the Ping data – all good ideas, thank you!

I do have one question about an additional software feature that is likely fairly niche, though it would be really helpful for those of us conducting discrete surveys that subsequently require analysis of the .bin and .csv ROV telemetry logs (and this is something I haven’t seen any mention of on the forums):

Would it be possible to manually trigger the creation of the .bin & .csv ROV telemetry logs without removing the battery? That is to say, if I’m about to initiate a 100m ROV survey, I get the ROV into position, and then (via a button on the controller, something in QGroundControl, or even a function in an API) execute a command that closes out the up-to-this-point .bin and .csv log files. Then, since the battery is still plugged in, new .bin and .csv log files immediately begin recording. At the conclusion of the 100m survey, we’d trigger this feature once more, and the resulting .bin and .csv files would then be a complete record of the survey. (For context, right now we’re lining up time stamps from videos, or using ROV behavior, such as flashing the forward lights, which gets recorded in the ROV telemetry log, to differentiate the start and end positions of our surveys.) I was hoping that arming and disarming the ROV would be associated with log start/stop, but that’s not the case.

What do you think? Would something like that be possible? Lastly, I should note that I’m hoping such a feature would “play nice” with WaterLinked’s system, such that the feed of GPS coordinates to QGroundControl wouldn’t be interrupted.


No worries :slight_smile:

Creating a new DataFlash (telemetry .bin) log on arming can be enforced by enabling the LOG_FILE_DSRMROT parameter; otherwise I think it continues the same log when re-arming, either within the same session or possibly within some timeout period. Note that DataFlash logs are higher frequency than what’s sent to QGC, so it may be worth making your .csvs from those (using mavlogdump).
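If you’d rather do that .bin-to-.csv conversion from within a script than via the mavlogdump command line, something along these lines should work with pymavlink (the message type and file names are just examples; check your own logs for what you actually want to export):

```python
# Rough sketch: pull selected messages out of a DataFlash .bin log and write
# them to a CSV with pymavlink. The message type and file names are examples.
import csv
from pymavlink import mavutil

def dataflash_to_csv(bin_path: str, csv_path: str, msg_type: str = "GPS"):
    mlog = mavutil.mavlink_connection(bin_path)   # opens DataFlash .bin logs too
    with open(csv_path, "w", newline="") as f:
        writer = None
        while True:
            msg = mlog.recv_match(type=msg_type, blocking=False)
            if msg is None:       # end of log
                break
            row = msg.to_dict()
            if writer is None:    # write the header from the first message
                writer = csv.DictWriter(f, fieldnames=row.keys())
                writer.writeheader()
            writer.writerow(row)

dataflash_to_csv("00000012.BIN", "gps.csv")  # placeholder file names
```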

QGC should also create new telemetry logs (.tlog files, and direct .csv ones if you’ve got that turned on) when arming if you’ve got the “Save log after each flight” option selected in the Application Settings, but it may also have some kind of timeout that joins nearby ones together. I don’t know that there’s a way around that, but it should be possible to automatically segment those (as well as video recordings and Ping .bin sensor logs) by using the timestamps from dataflash logs, or automated detection of something like a pattern in the lights, a movement manoeuvre of the vehicle, or some combination thereof.
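As a trivial example of that segmenting idea, once you’ve determined the start/end times of a transect (from the dataflash log timestamps, a light flash, or whatever marker you settle on), cutting the corresponding rows out of a telemetry .csv could be as simple as the following (column name and times are placeholders):

```python
# Minimal example: slice one transect's worth of rows out of a telemetry csv,
# given start/end timestamps determined elsewhere. Column name is a placeholder.
import pandas as pd

telemetry = pd.read_csv("telemetry.csv", parse_dates=["timestamp"])
start = pd.Timestamp("2022-08-01 10:15:00")
end = pd.Timestamp("2022-08-01 10:32:00")

transect = telemetry[telemetry["timestamp"].between(start, end)]
transect.to_csv("transect_01.csv", index=False)
```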


Very interesting work, thank you for sharing!


Purely for fun . . . check out the attached pictures.

The Seattle Aquarium recently put SCUBA divers in the water with our ROV for the first time, and I’ve attached a few screen grabs from one of the divers’ GoPro cameras. This concurrent ROV-diver day in the water was the first in a series of tests that will lead up to a methodological comparison between the two survey platforms. Our objective is to better understand how the data collected by scientific SCUBA divers and our ROV differ, in order to make the best use of each platform (i.e., have the ROV do the things the ROV does best, and have divers do the tasks divers do best!). We will conduct this study in Elliott Bay, Seattle, by having the ROV film directly above the same meter tapes that the scientific SCUBA divers use to conduct their benthic surveys.

Cheers, everyone!

Zach







These look great - thanks for sharing! :smiley:

Cool! Hopefully a fun study for everyone involved, and a win for the divers and ROV team to get to focus more on what they’re most effective at afterwards :slight_smile: