Ping360 Bathymetry mapping options

Hi there.

My company is interested in acquiring ROV equipment for dam and lake inspection. The Ping360 seems like a great option for navigating in low-visibility conditions; however, we’d like to know if it has any bathymetry capabilities. One of our objectives is to combine sonar imaging with UAV photogrammetry data to determine water levels and volumes at sites before and after any construction.

Thank you all in advance.

Hi @cgandarillas, welcome to the forum! :slight_smile:

That depends a bit on what you mean. The Ping360 does not do any data processing or object recognition / distance estimation, so bathymetry is not built-in functionality.

That said, Ping Viewer logs the data, so if you have a positioning sensor/estimate and are willing to align and process the data yourself, it’s potentially possible to get spatial data from that, although there’d be some development required to do so. This thread may be a helpful start, if that’s a path you wish to go down :slight_smile:

Cool project! :smiley:

If you’re doing the photogrammetry yourselves then it may be possible to use a SLAM approach to combine the vehicle’s telemetry with the photogrammetry camera pose estimates to estimate the position over time, without necessarily requiring a positioning sensor (assuming you’re not already using one).

EDIT: I read UAV as AUV - it’s possible you’re not already doing photogrammetry with the underwater component, in which case some form of positioning sensor would be required.

Eliot,

Thank you for the response.

Yes, after some research, that is what I understood: the Ping360 is meant for navigation, and the Ping Sonar would allow us to calculate depth. As far as I understand, the sonar, in combination with Ping Viewer’s logging ability, would help us register depth over a specific area.

SLAM may be helpful. We currently use drones (UAV) to map terrain topography (mainly RTK) through PIX4D mapping software. I’ve seen the same software used to map sub-sea vents from ROV video, which is why I’m convinced these methods may work. However, the regional lakes are known for very low-visibility water, so video is not an option.

Please correct me if I am mistaken. I understand that for our current project, we would need the BlueROV2 with the Ping360 and Ping Sonar, and a USBL (Cerulean ROV Locator) to log GPS positioning. With this method, we would have to manually translate the depth data into our 3D models.

I know there is no easy way to do this; however, the BlueROV seems to offer the best options for this type of work.

We are a Bolivian company working in Bolivia, and this would be a first here, so any advice is highly appreciated. We still do not have an ROV unit.

The Ping360 is “meant for” whatever it’s useful for. The raw data it outputs is mostly helpful for navigation with a visual display, but it could be processed for additional functionality (potentially use cases like mapping, object identification, avoidance, and analysis, depending on how and when the data is processed and how the device is mounted).

The Ping Sonar provides sonar profiles, including an estimate of the distance to a sonar-reflective surface or object, if it can see one. Facing downwards on an ROV or boat, the distance estimate is a measure of the altitude from the vehicle to the water bottom (assuming the vehicle is upright), and can be combined with a depth estimate (from a pressure sensor, or ~0 for a boat) to determine the height of the water column at that point.
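
As a (trivial) sketch of that combination, assuming a downward-facing sonar on an upright vehicle (the function and variable names here are made up for illustration):

```python
# Rough illustration (not a Blue Robotics API): the vehicle's depth below the
# surface plus the sonar's altitude above the bottom gives the height of the
# water column at the vehicle's location.
def water_column_height(vehicle_depth_m: float, sonar_altitude_m: float) -> float:
    """vehicle_depth_m: depth below the surface (pressure sensor; ~0 for a boat).
    sonar_altitude_m: distance from the downward-facing Ping Sonar to the bottom."""
    return vehicle_depth_m + sonar_altitude_m

# e.g. an ROV at 3.2 m depth reading 11.5 m of altitude -> ~14.7 m water column
print(water_column_height(3.2, 11.5))  # 14.7
```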

If you have estimates or measurements of the vehicle position at the time of each altitude estimate, then you can combine those to form a bathymetric map of the area where the measurements were taken. The “naive” approach, which just outputs a single point per measurement, should be reasonably straightforward (this comment shows an example result from a user with a kayak and an Arduino).
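
As a hedged sketch of what that “naive” approach could look like (this isn’t existing Blue Robotics code, and it assumes you’ve already matched each GPS fix to a vehicle depth and sonar altitude, e.g. by timestamp):

```python
import math

def measurements_to_points(records, lat0, lon0):
    """Convert (lat, lon, vehicle_depth_m, altitude_m) records into local
    east/north/down points (metres) relative to a reference lat0/lon0, using a
    simple equirectangular approximation (fine for small survey areas)."""
    R = 6_371_000.0  # mean Earth radius, metres
    points = []
    for lat, lon, vehicle_depth, altitude in records:
        east = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))
        north = math.radians(lat - lat0) * R
        down = vehicle_depth + altitude  # water column height at this point
        points.append((east, north, down))
    return points
```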

A more complex approach that considers the overlap of the ping beam from closely spaced measurements may be able to get additional resolution and accuracy (especially for bumpy surfaces), but would require some software development and an understanding of beam shapes and sonar theory to implement, and likely a decent amount of validation/verification. That’s something I’ve been thinking about, but haven’t had time to work on yet. A plus side of that kind of approach is that it would be reasonably device-agnostic, as long as the appropriate beam characteristics, and the vehicle and device orientation, can be captured by the processing.

Note that if you don’t want/need to process the profile data to make your own distance estimates, you can turn on the BlueOS MAVLink integration for the Ping Sonar (available in recent BlueOS 1.1.0-beta versions). That sends the sonar’s distance estimates directly into the autopilot telemetry logs, which also include the depth and position data, so you don’t need Ping Viewer or its sensor logs at all (although you may still wish to view the profile data in Ping Viewer while operating, which is fine and doesn’t prevent the MAVLink integration from working).
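
If you go the MAVLink route, something like the following pymavlink sketch could pull the relevant messages back out of a telemetry log for post-processing. Treat it as a starting point rather than a recipe, since the exact message names, units, and sign conventions can vary with firmware and log type.

```python
from pymavlink import mavutil

# Assumed example filename; similar handling applies to other MAVLink log formats.
mlog = mavutil.mavlink_connection('flight.tlog')
distances, positions = [], []

while True:
    msg = mlog.recv_match(type=['DISTANCE_SENSOR', 'GLOBAL_POSITION_INT'])
    if msg is None:
        break  # end of log
    t = getattr(msg, '_timestamp', None)  # set by pymavlink when reading logs
    if msg.get_type() == 'DISTANCE_SENSOR':
        distances.append((t, msg.current_distance / 100.0))  # cm -> m
    else:  # GLOBAL_POSITION_INT
        positions.append((t, msg.lat / 1e7, msg.lon / 1e7,
                          -msg.relative_alt / 1000.0))  # mm -> m; negative altitude as depth
```

The two lists can then be matched or interpolated by timestamp to pair each distance estimate with a position and depth.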

The sensor requirements depend on your operating conditions and data requirements. As in the example I linked to, a minimal setup could involve just a small boat, a battery, a Ping Sonar, a GPS sensor, an Arduino for capturing the data, and a thruster setup (or a human) for propulsion and turning.

Regardless of your sensor setup, the data compilation and processing you do will most likely output a point cloud of water-bottom position measurements. I’m unsure what you want to do with your data, or what your 3D models involve, so it may be that the model(s) are the data, or that the data can be imported and aligned with the model(s), or that both the model(s) and the data are imported into and aligned in a separate application.
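
As one example of the “separate application” route, the compiled points could be written out as a plain-text XYZ point cloud, which point-cloud/GIS tools like CloudCompare or QGIS can import and align with other data (again, just a sketch, with made-up names):

```python
def write_xyz(points, path='bathymetry.xyz'):
    """points: iterable of (east_m, north_m, water_column_height_m) tuples."""
    with open(path, 'w') as f:
        for east, north, down in points:
            # Negate the water column height so the bottom plots below z = 0
            # (the water surface).
            f.write(f'{east:.3f} {north:.3f} {-down:.3f}\n')
```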

