Sonar profiling and data rendering

Hello everyone
1. Has anyone used the 360 sonar mounted perpendicular to the ROV's direction of travel to outline caves or tunnels?
2. How can the image or sonar data be processed to make a render?

Hi @adurix,

If you want to create post-processing software, you can check our binary structure documentation for the log files: Binary file structure - Ping Viewer
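As a rough sketch of what a post-processor could start from (assuming the `ping-python` package and its `PingParser`, and simply letting the parser skip past the log's header and timestamp fields, which are described in the linked documentation), something like this can pull the Ping360 `device_data` messages out of a Ping Viewer `.bin` log (the file name below is just a placeholder):

```python
# Sketch only: scan a Ping Viewer log for Ping360 device_data messages (id 2300)
# by feeding the raw bytes through ping-python's PingParser. The log also
# contains a header and per-message timestamps (see the binary structure docs);
# this naive approach relies on the parser resyncing past them.
from brping import PingParser

PING360_DEVICE_DATA = 2300

def extract_scans(log_path):
    parser = PingParser()
    scans = []
    with open(log_path, "rb") as log:
        for byte in log.read():
            if parser.parse_byte(byte) == PingParser.NEW_MESSAGE:
                msg = parser.rx_msg
                if msg.message_id == PING360_DEVICE_DATA:
                    # angle is in gradians (400 per revolution); data holds the
                    # echo intensity profile for that angle
                    scans.append((msg.angle, bytearray(msg.data)))
    return scans

if __name__ == "__main__":
    for angle, intensities in extract_scans("ping360_log.bin")[:5]:
        print(angle, len(intensities), max(intensities))
```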

Hi alain,

Context

I haven’t tried this, but it should be possible. To get all the walls and floor you would need to make sure the ping360 is close enough to the vertical center of the cave/tunnel that it isn’t too close to any of the sides to see them.

Problem

This is quite a difficult problem to solve. Fundamentally it requires knowing where the ping360 is relative to its starting point for every scan, and determining the distance to the walls from the sonar readings.

Possible approaches + sources of uncertainty

Dealing with beamwidth

The large vertical beamwidth (25°) means you either get points with significant uncertainty, or you need some smart processing that combines the position data over time with the scan data to extract higher resolution information from the scans. That's a very challenging task, and may not be possible without significantly more precise and accurate position data.
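To get a feel for the scale of that uncertainty, the vertical extent of the ensonified strip grows roughly linearly with range. A small illustrative calculation, assuming the nominal 25° vertical beamwidth and treating the beam as a simple cone:

```python
import math

def vertical_footprint(range_m, beamwidth_deg=25.0):
    """Approximate vertical extent of the ensonified strip at a given range."""
    return 2.0 * range_m * math.tan(math.radians(beamwidth_deg / 2.0))

# A wall echo at 5 m range could have come from anywhere in a ~2.2 m tall strip
print(round(vertical_footprint(5.0), 2))  # -> 2.22
```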

Position estimation

If you’re using an ROV, determining position can be based off the IMU and depth sensor data, but be aware that the IMU may have significant drift over time, which could lead to significantly incorrect scan results. If you’re able to maintain a consistent forward velocity and minimise lateral and vertical motion then that likely helps considerably with generating reasonable results.
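For illustration only (this is not the vehicle's actual estimator), a heavily simplified dead-reckoning sketch that integrates an assumed forward speed and heading, and takes depth straight from the pressure sensor, could look like the following; in practice IMU drift means this degrades over time:

```python
import math

def dead_reckon(samples):
    """Integrate (dt_s, forward_speed_m_s, heading_rad, depth_m) tuples into a
    crude x/y/z track. Speed and heading are assumed to come from the vehicle's
    IMU/estimator; depth comes from the pressure sensor."""
    x = y = 0.0
    track = []
    for dt, speed, heading, depth in samples:
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
        track.append((x, y, -depth))  # z up, so depth is stored negative (a convention)
    return track

# e.g. 10 s at 0.5 m/s on a constant heading, slowly descending
print(dead_reckon([(1.0, 0.5, 0.0, 0.1 * i) for i in range(10)])[-1])
```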

Distance measurements

Determining distance measurements from the scans requires some kind of peak-finding algorithm, and it may be helpful to estimate a confidence (similar to what the Ping1D does). In confined spaces it’s also important to handle echoes. Note that the ping360 generally has quite strong signal noise in the first 0.5 m or so of each scan, so that data likely needs to be removed/zeroed out.
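A minimal sketch of that idea, assuming numpy, the documented 25 ns units for the Ping360's sample_period field, and a simple threshold as the "peak" criterion (a real implementation would want a proper peak and confidence estimate):

```python
import numpy as np

SAMPLE_PERIOD_TICK_S = 25e-9  # Ping360 sample_period is specified in 25 ns ticks

def first_return_distance(intensities, sample_period, speed_of_sound=1500.0,
                          blank_m=0.5, threshold=150):
    """Crude 'first strong return' range estimate for one Ping360 scan line.

    intensities: per-sample echo strengths (0-255) for one angle
    sample_period: value of the device_data sample_period field
    Returns the range (m) of the first sample at or above `threshold`, after
    zeroing the first `blank_m` metres of near-field noise, or None.
    """
    samples = np.asarray(intensities, dtype=float)
    metres_per_sample = sample_period * SAMPLE_PERIOD_TICK_S * speed_of_sound / 2.0
    samples[: int(blank_m / metres_per_sample)] = 0.0  # remove near-field noise
    candidates = np.flatnonzero(samples >= threshold)
    return None if candidates.size == 0 else candidates[0] * metres_per_sample
```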

Avoiding peak detection

If you don’t require precise automated measurements of exactly where a wall is, and you trust that your position measurements are correct, you could plot the conical wedge of each ping reading in 3D space around the sonar position, and set its opacity by the sonar response strength. That gives you effectively a density map, where high density means a high likelihood of a wall. It avoids the need for peak detection, but still requires handling the close-to-sensor noise and echoes. It’s also quite graphics intensive to display, so it may be preferable to treat each ping response location as a point and create a point cloud with fixed-size points, still with opacity based on response strength (at the cost of no longer having nicely increasing density estimates where the conical slice regions overlap).
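A minimal sketch of the point-cloud variant (fixed-size points, echo strength kept per point), assuming the scanning plane is perpendicular to the direction of travel and each scan has already been tagged with an estimated sonar position; echo strength is mapped to colour here rather than per-point opacity, just to keep the matplotlib example simple:

```python
import numpy as np
import matplotlib.pyplot as plt

def scan_to_points(position, angle_grad, intensities, metres_per_sample,
                   blank_m=0.5, min_intensity=30):
    """Convert one Ping360 scan line into world-frame points with intensities.

    position: estimated (x, y, z) of the sonar head for this scan
    angle_grad: Ping360 angle in gradians (400 per revolution), assumed to sweep
                the plane perpendicular to the direction of travel (y-z here)
    """
    x0, y0, z0 = position
    theta = angle_grad * 2.0 * np.pi / 400.0
    samples = np.asarray(intensities, dtype=float)
    ranges = np.arange(samples.size) * metres_per_sample
    keep = (ranges > blank_m) & (samples >= min_intensity)  # drop near-field noise and weak returns
    return np.column_stack((
        np.full(keep.sum(), x0),            # along-tunnel axis: position only
        y0 + ranges[keep] * np.cos(theta),  # across-tunnel
        z0 + ranges[keep] * np.sin(theta),  # vertical
        samples[keep] / 255.0,              # normalised echo strength
    ))

def plot_cloud(points):
    ax = plt.figure().add_subplot(projection="3d")
    ax.scatter(points[:, 0], points[:, 1], points[:, 2],
               s=2, c=points[:, 3], cmap="viridis", alpha=0.3)
    plt.show()
```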

Avoiding position estimation

A naive lining up of the ping slices can be achieved by assuming the same forward distance has been moved at each subsequent scan (constant velocity), which allows ignoring forward IMU data. Assuming no rotation and reasonably constant lateral position holding allows ignoring IMU data entirely, and instead aligning each scan by depth reading alone. This is likely to have several accuracy issues, but may be helpful as a ‘first pass’ to get a general sense of at least the expansion and contraction of the tunnel/cave along its length. This approach is badly suited to turns and forks.
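A sketch of that naive alignment, assuming one full sweep per 'slice', a fixed forward step between slices, and the logged depth at the time of each sweep; the resulting positions could then be fed to a conversion like the scan_to_points sketch above:

```python
def naive_slice_positions(num_slices, step_m, depths):
    """Assign each full 360 degree sweep a sonar position using only an assumed
    constant forward step and the depth reading at the time of that sweep.
    Heading and lateral motion are ignored, so turns and forks will smear."""
    return [(i * step_m, 0.0, -depths[i]) for i in range(num_slices)]

# e.g. 5 sweeps, 1 m apart, slowly increasing depth
print(naive_slice_positions(5, 1.0, [2.0, 2.1, 2.2, 2.2, 2.3]))
```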

Hi Eliot, thanks for your help.

To get the position we have two options:
1: use the Water Linked, which sometimes works well and sometimes doesn't
2: use a tether meter and index those measurements with an overlay on the ROV video, which has been quite reliable for me.

Dealing with beamwidth

Do you know if it is possible to narrow the beam angle from 25 to 10 degrees or less to improve precision?

Position estimation

The initial idea is to advance one meter, stop, wait for the sonar to complete a 360 degree sweep, then advance another meter, and so on.
To keep the distance we could use the Ping1D,
or the same Ping360 with a function similar to the Ping1D to maintain height, but equidistant in 360 degrees.
The idea is to use two Ping360s: one mounted vertically, to be able to advance through the pipeline, and the other to profile the pipeline.

I’m unsure how the Water Linked would work in a cave/tunnel without multiple receivers placed throughout, with known positions.

The tether meter seems reasonable for getting an estimate of how far into the cave/tunnel the ROV is, but it doesn’t help with sideways position, and any corners/turns would have to be marked out by comparison with the video and timestamps.

Narrowing the beam isn’t something the ping360 allows changing through software. You could potentially add some sound-absorbing/reflecting material around the ping360 to only allow a desired angle through, but that would likely be a difficult modification. I wouldn’t recommend modifying the internals to try to do this, due to the oil-filled section where the transducer is stored, and I’m not sure there’d be space in there anyway to add something that could block more of the sound without negative effects on the rest of the device’s performance.

Using two Ping360s at once may have some issues. If you’re running both at the same time then you’ll likely run into signal interference - two devices sending and receiving the same sound frequency means noise unless they’re never operating at the same time. I believe it would also require using at least one of the pings via the Raspberry Pi (the default option) and modifying the companion software to change the port it sends that Ping360’s data to, because otherwise they’d be competing to send information to the same port/Ping Viewer instance.

This is still a difficult problem in underwater robotics, as Eliot mentioned. However, there are a few works out there along this line of research.

Confined environments such as natural caves or artificial tunnels pose different challenges, but you may be able to use some heuristics to aid your navigation depending on the specifics of the application environment.

My PhD work was around this problem back in 2014, and here you can see some results. Although it was an AUV with different sonars, the same concept can be applied with a BR2 ROV and the Ping. However, it is true that if more than one sonar is used, it will require different frequencies or precise timing to avoid interference.

It may help to clarify some basic concepts by having a look at this paper.
Also, if you want to have a look at this kind of data, we have published some here.
