Hey everyone,
we are using your ‘Ping Sonar Altimeter and Echosounder’ to detect the
underground while measuring from a moving boat.
Back from the measurement campaign, we are having trouble interpreting
the recorded data. Our intention was to manually analyse the echo data
of the underground wherever the confidence wasn’t 100%.
To achieve this, we successfully read distance, confidence, scan_length,
scan_start and profile_data, and created a plot for each ping of the
profile_data. After that, we calculated back the index where the
underground was detected, using this formula:

scale_factor = (scan_length - scan_start)/199
index_underground = distance / scale_factor
(we converted each length from mm to m for easier handling)
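In Python, the calculation described above looks roughly like this (the function and variable names are ours, and the numbers in the example are the ones we measured, converted from mm to m):

```python
def index_underground(distance_m, scan_start_m, scan_length_m, n_samples=200):
    """Map a reported distance back to a sample index in profile_data."""
    # metres covered by one step between adjacent samples
    # (199 gaps between 200 sample points)
    scale_factor = (scan_length_m - scan_start_m) / (n_samples - 1)
    return distance_m / scale_factor

# Example with our measured values (scan_start = 0):
print(round(index_underground(5.241, 0.0, 7.822)))  # -> 133
```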

After that, the obtained index is added as a red line to the plot.
Additionally, we used a color map to validate the plotted data against
the given confidence: in our case ‘red’ means 0% confidence and ‘dark
blue’ 100%. Please see the attached file.
Looking at a row of e.g. 100 pings, we do not see any adjustment of
index_underground, even though the intensity data clearly does not
match it. The confidence is always 100%.

Did we do something wrong during the calculation of the
index_underground? Could you please help us to interpret the echo data
correctly?
We are very grateful for any help.

I’m afraid I’m not really sure what you’re describing. The image you attached seems to be a stack of several graphs, and they don’t have axis labels or a legend.

Is this supposed to indicate that you’re using 200 sample points? If I’m understanding what you’re trying to do here correctly then I don’t think you should be taking off scan_start from scan_length, you should be taking it off from distance, e.g.

measurement_index = (distance - scan_start) / scan_length * 199

Are you saying the position of the vertical red line doesn’t move throughout the graph stack? The bottom half shows multiple slightly different positions for it.

Assuming distance is what’s returned by the Ping Sonar, note that the distance estimate is from a target tracking algorithm - it’s calculated from a combination of peak finding and temporal averaging, which means fast changes in actual distance will take a few pings before they show up as a stable and full-confidence result.
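As a rough illustration of that lag (the actual tracking algorithm is closed source, so this exponential moving average is only a generic stand-in for “temporal averaging”, not the real implementation):

```python
# Generic temporal-averaging example (NOT the Ping's actual algorithm):
# an exponential moving average blends each new ping with the history,
# so a sudden jump in true distance takes several pings to show up.

def smooth(distances, alpha=0.3):
    estimate = distances[0]
    smoothed = []
    for d in distances:
        estimate = alpha * d + (1 - alpha) * estimate
        smoothed.append(estimate)
    return smoothed

# A step from 5 m to 8 m only approaches the new value gradually:
readings = [5.0] * 3 + [8.0] * 5
print([round(x, 2) for x in smooth(readings)])
```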

thank you @EliotBR
Yes, I’m sorry, the graph is hard to interpret without a legend.
The x-axis shows the sample indices of profile_data (from 0 to 200). The values of each ping are plotted as a curve in the range 0 to 255 (0 at the bottom, 255 at the top). The y-axis shows only the ping number, in our case starting with 2700 at the top and descending towards the bottom.
To make this clear: every single plot shows sample indices of profile_data vs. sample values of profile_data. The graph shows the plots of 100 pings.

You are right, we used 200 sample points per ping. In our calculation we used 199 because there are only 199 gaps between the 200 sample points.

Now we want to calculate the specific sample, where the underground was detected.
By using our formula we get:
scale_factor = 7822 / 199 ≈ 39.3
index_underground = 5241 / 39.3 ≈ 133
The result of index_underground is within the range of the sample indices in profile_data.

Trying your formula:
measurement_index = (5241 / 200) * 7822 ≈ 204975
This is far outside the range of sample indices in profile_data.
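For reference, a quick numeric check of the variants in Python (all values in mm, as read from the device; variable names are ours):

```python
distance, scan_start, scan_length, n = 5241, 0, 7822, 200

# Two-step version from our original post:
scale_factor = (scan_length - scan_start) / (n - 1)  # ~39.3 mm per sample step
index_two_step = distance / scale_factor             # ~133

# The unflipped fraction we tried above:
index_unflipped = (distance / n) * scan_length       # ~204975, far out of range

# One-step equivalent of the two-step form:
index_one_step = (distance - scan_start) / (scan_length - scan_start) * (n - 1)

print(round(index_two_step), round(index_unflipped), round(index_one_step))
```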

In this sketch you can see the measured data. Based on the target tracking algorithm, we assumed that the first really high peak would be chosen as the depth of the underground, but our calculation gives the result shown in the sketch.

If you’re using zero-based indexing (which it seems you are) then 199 is the correct number to use in your calculation: when the scale_factor is 1 the maximum index should be the last sample, and if you’re counting from 0 then 199 is the 200th sample. The “gaps” reasoning doesn’t really make sense to me, but the number itself is fine.

My bad - I forgot to flip the fraction apparently. I’ve updated the equation in my comment, but it’s the same as yours anyway (just in one step instead of two).

Peak finding finds maximal values, not steep slopes (i.e. the peak is on the signal, not its derivative). Consider that a cliff at the bottom of a mountain may be the steepest portion, but it’s not the peak. Presumably that approach helps avoid large variability when trying to detect soft riverbeds/sea floors that may have a slow and smooth density gradient.
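A toy example of the difference (made-up intensity values) - the maximum of the signal sits well past its steepest rise:

```python
# Toy profile: a steep rise (large slope) followed by a gentle summit.
profile = [0, 1, 2, 40, 80, 120, 160, 180, 190, 195, 198, 200, 199, 197]

# Peak finding: index of the maximal sample value.
peak_index = max(range(len(profile)), key=lambda i: profile[i])

# Steepest slope: index of the largest sample-to-sample difference.
slopes = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]
steepest_index = max(range(len(slopes)), key=lambda i: slopes[i])

print(peak_index, steepest_index)  # -> 11 3
```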

I’m unsure what the algorithm does when faced with a saturated signal (a peak with multiple full intensity values in a row, giving a flat top) - it may choose the first top value, or the middle one, or may use the derivatives to try to estimate where the actual top is most likely to be, or something else. Saturated signals are very difficult to calculate meaningful results from, because there’s a lot of missing data (it’s like looking at a photo of a mountain, but the top is cropped off - how can you tell where the peak is?).
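For example, with a clipped profile there are several equally plausible “peak” candidates (made-up values):

```python
# A saturated profile: several samples clipped at the 255 ceiling give a
# flat top, so the true peak position inside the plateau is unrecoverable.
profile = [10, 60, 140, 255, 255, 255, 255, 255, 180, 90, 30]

plateau = [i for i, v in enumerate(profile) if v == 255]
first = plateau[0]                    # first clipped sample
middle = plateau[len(plateau) // 2]   # middle of the plateau

print(first, middle)  # -> 3 5 (both are plausible answers)
```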

If you’re getting a lot of saturation (which looks to be the case in your plots) then it would be helpful to reduce the gain, and/or reduce the transmit duration. It may be worth turning on auto_mode, but if that’s already on then it seems to be performing somewhat poorly for your use-case, in which case you should try switching to manual mode (disable auto_mode), so you can set a more appropriate gain manually. Note that in manual mode you also need to control the scan range yourself.
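As a sketch of what that switch could look like with the ping-python (brping) library - untested here, and the serial port, gain index and range values are placeholder assumptions you’d need to adjust for your setup:

```python
# Hedged sketch using the brping library; port, baudrate, gain index and
# scan range below are placeholders, not recommended values.
from brping import Ping1D

ping = Ping1D()
ping.connect_serial("/dev/ttyUSB0", 115200)  # placeholder port/baudrate
if not ping.initialize():
    raise RuntimeError("failed to initialize Ping device")

ping.set_mode_auto(0)       # disable auto mode
ping.set_gain_setting(2)    # lower gain index to reduce saturation (example)
ping.set_range(0, 10000)    # manual scan range: start 0 mm, length 10000 mm
```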

As a note, if you’ve found and tested an alternative algorithm that gives more accurate distance estimates, you can of course apply it to the profiles in post-processing and use those results instead - it just won’t necessarily agree with the estimates provided by the sonar. You’re also welcome to share and discuss algorithms here if you’d like to.
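As a minimal example of such post-processing (our own naive heuristic, not the sonar’s algorithm - the threshold is an arbitrary assumption, and the profile below is fabricated):

```python
# Naive post-processing heuristic: take the first sample exceeding a
# threshold as the bottom echo, then convert its index back to a distance.

def first_echo_distance(profile, scan_start, scan_length, threshold=200):
    n = len(profile)
    for i, v in enumerate(profile):
        if v >= threshold:
            return scan_start + i * (scan_length - scan_start) / (n - 1)
    return None  # no sample above threshold

# Fabricated 200-sample profile with one strong echo at index 130:
profile = [0] * 130 + [240] + [120] * 69
print(first_echo_distance(profile, 0, 7822))  # ~5110 mm
```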

Thank you @EliotBR for your interesting answer. That explains our trouble very well…

As far as we can tell now, auto_mode was turned on. We will test the manual mode as soon as possible and then I’ll share our results.

Do I understand correctly that our data is not really reliable for exact underground detection?
Would it be possible to change the confidence value in such a case? Given your explanation, 100% confidence is hard to believe in our case.

The firmware doesn’t have any configuration options for the confidence, so that’s at least not immediately possible.

The control algorithm is closed source, and I don’t have access to it so unfortunately don’t know how it calculates its confidence estimate. You make a valid point that 100% confidence doesn’t seem very logical when a peak is effectively impossible to find in a region of saturated data, so I’ve raised this internally to see if that can be changed. Until there’s a new firmware available with a change like that though, that won’t be possible for your device except in post-processing with your own algorithm.
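In post-processing you could, for example, derive your own saturation-aware confidence (a naive sketch; the 5% cutoff is an arbitrary assumption, not anything the firmware does):

```python
def saturation_fraction(profile, ceiling=255):
    """Fraction of samples clipped at the intensity ceiling."""
    return sum(1 for v in profile if v >= ceiling) / len(profile)

def adjusted_confidence(confidence, profile, max_saturation=0.05):
    # Discount the reported confidence when much of the profile is saturated.
    return 0 if saturation_fraction(profile) > max_saturation else confidence

profile = [255] * 40 + [100] * 160  # 20% of samples saturated
print(adjusted_confidence(100, profile))  # -> 0
```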

Following up on this: it’s something that can be fixed, but because it’s not a particularly common issue I’ve been told it’s not a high priority for us at the moment.

I’ve raised an issue for it here, so you can track any progress on it.