I’m using my BlueROV2 equipped with a Ping Sonar Altimeter & Echosounder. I also attached a GoPro camera to film the bottom.
The Ping Sonar works really well for “real-time” information, which is really useful. But I would love to retrieve the bathymetry data and use it for analysis. Ideally, I want to have a value for each image of the bottom I analyse.
Has somebody done this yet?
I have found the documentation for ping-python, but I cannot seem to convert the log file (in binary format) into something I can use (like a .csv or something). Also, I’m really not a coder.
We have a sensor log decoding example in the Ping Viewer repository (since Ping Viewer creates those logs, and it defines the format).
If you’re specifically after just the distance measurements you could do something like the following (requires downloading “decode_sensor_binary_log.py”, and running this in your own python file from the same directory):
```python
import csv
from datetime import datetime, timedelta, time
from pathlib import Path

from decode_sensor_binary_log import PingViewerLogReader

# TODO: put in the path to the log file you want to process
# e.g. "~/My Documents/PingViewer/Sensor_Log/something.bin"
logfile = Path("/path/to/file.bin")

def to_timedelta(time_str: str) -> timedelta:
    ''' Returns a time delta from an iso-format time string. '''
    delta = time.fromisoformat(time_str)
    return timedelta(hours=delta.hour, minutes=delta.minute,
                     seconds=delta.second, microseconds=delta.microsecond)

log = PingViewerLogReader(logfile)
outfile = Path(logfile.stem).with_suffix(".csv")

# ideally it would be good to localise this to your timezone,
# but in this case it shouldn't cause issues unless you were
# recording the sonar data at the time of a daylight savings
# changeover or something
start_time = datetime.strptime(logfile.stem, "%Y%m%d-%H%M%S%f")

with outfile.open("w") as out:
    csv_writer = csv.writer(out)
    csv_writer.writerow(("timestamp", "distance [mm]", "confidence [%]"))
    for timestamp, message in log.parser():
        # convert the 'duration since scanning started' into a local-time timestamp
        # the .replace here ensures cross-compatibility between operating systems
        timestamp = start_time + to_timedelta(timestamp.replace('\x00', ''))
        csv_writer.writerow((timestamp, message.distance, message.confidence))
```
If you’re interested in more involved analysis (or just extraction) of the profiles themselves then the code will need to be more involved, and a csv format may not be suitable. This thread may be of interest.
I’m planning to make a detailed post-processing example including synching Ping Viewer log data with other things like telemetry logs and video, but haven’t had the time to do so yet unfortunately.
Out of interest, how are your videos/frames saved? You’ll need some way to align the video with the sonar scan data, and depending on what you actually want to do with it you may need to

- use a program to extract video frames, and either
  - include the corresponding distance+confidence in each filename, or
  - write the distance+confidence as text on each frame (which could be saved individually or as a new video)
- write the distance+confidence information into a subtitle file, which could then be played together with the video (without permanently modifying the frames)
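As a sketch of that last option, SRT subtitles are just numbered blocks of text with start/end times, so they can be generated with plain Python. This assumes you’ve already converted each reading into seconds from the start of the video; the `records` structure and file name are placeholders, not part of any existing tool:

```python
from datetime import timedelta

def srt_time(seconds: float) -> str:
    '''Format seconds in the HH:MM:SS,mmm style that SRT expects.'''
    total_ms = int(timedelta(seconds=seconds).total_seconds() * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(records, path="distances.srt"):
    '''records: list of (seconds_from_video_start, distance_mm, confidence_pct).'''
    with open(path, "w") as f:
        for i, (start, distance, confidence) in enumerate(records, 1):
            # show each reading until the next one arrives (or 1s for the last)
            end = records[i][0] if i < len(records) else start + 1.0
            f.write(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n"
                    f"{distance} mm ({confidence}%)\n\n")
```

Most desktop players (e.g. VLC) will auto-load a `.srt` with the same base name as the video file.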
Note also that

- the video and sonar likely won’t have the same update rate, so you’ll need some way to get the nearest value for each frame
- it may be relevant to correct for tide levels in the distance values, especially for inspections that are happening during a changing tide
- if your video is from a reasonably steady boat you may be able to stitch together the frames into one large image of the full inspected area, and potentially plot the bathymetry results on top of that (that would likely be somewhat involved, but is definitely an interesting application)
- alternatively it may be possible to use photogrammetry to create a 3D model, which could be cross-referenced with the sonar data
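For the tide-correction point, one simple approach is to interpolate tide-gauge readings to each ping’s timestamp and subtract the tide height, so all distances are referenced to a common datum. The gauge values below are made-up example numbers, and the sign convention assumes tide height is measured above the datum:

```python
import numpy as np

# tide gauge readings: (seconds since survey start, tide height above datum in mm)
tide_times = np.array([0.0, 3600.0, 7200.0])
tide_heights = np.array([200.0, 450.0, 600.0])

def corrected_depth(sonar_time_s: float, distance_mm: float,
                    vehicle_depth_mm: float) -> float:
    '''Depth of the seabed below the datum at the time of this ping.'''
    tide = np.interp(sonar_time_s, tide_times, tide_heights)
    # seabed below surface = vehicle depth + sonar range;
    # subtracting the tide offset references it to the datum instead
    return vehicle_depth_mm + distance_mm - tide
```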
I tried your code, but the “datetime.strptime…” line doesn’t work at all for me. But I used your code and mixed it with mine, and I ended up with something I can use. Thank you!!
I’m actually doing this now. I’m trying to gather all the data from the BlueROV sensors and synching everything for analysis.
Actually, I’m extracting images from the video, and I plan to annotate each image (for analysing the biodiversity of the bottom). In the end, I aim to have a dataframe with the names of the images and all their associated sensor data, and use this as a metadata file for my biodiversity annotations. So with the following column names: imagename, time, altitude, distance_to_bottom, yaw, lat, long, temperature, etc…
Yes, that is one of the problems I encountered, but it’s not impossible to resolve. I have more telemetry and ping data than images, so each image has a value! I don’t really know how to explain it, but I made it work for me.
That is exactly what i’m planning to do next! Actually, my goal is to create small “maps” of the biodiversity of the sea bottom. I’m using Agisoft Metashape to do this and I have good results with my image alignment. I have not tried yet to add the sonar data, but I’m getting there!
That’s confusing to me, unless you’ve renamed your log files? I’ve used that in a previous project, and also confirmed it was working before I posted it…
I’ve got working code, but need to refactor it to strip out some stuff that I can’t share. Given you’re actively working on this, I might see if I can prioritise that and get the example out (although it’s very early Saturday morning now, so won’t be for a day or few).
Curious what kind of approach you’re taking for this - is it manual? Every nth frame? I’ve got some code that extracts the sharpest frame from each predefined interval (which I’ve previously used for stitching), but that may not apply to your use-case if you’re selecting frames at more random intervals by what you know to be of interest/relevance to the use case. I’ve also previously considered augmenting the ‘sharpest’ selection approach with something that factors in vibration/acceleration data from the telemetry.
Sounds like a good candidate for a database, although a simple datastore like a csv file could also potentially be sufficient, depending on your storage and processing/querying requirements.
Feel free to raise issues for anything you think would be useful in an example like this, and/or just things that would be useful for your particular use-case (worst that can happen is I decide something is out of scope - best that can happen is it gets implemented).
Ended up having to sort out some other things, so not as far in as I wanted to be, but I’ve transferred over the ‘telemetry log(s) → csv → dataframe’ part and the initial Python and library requirements. I’m hoping to do the Ping Sonar component tomorrow, and possibly some initial video stuff as well (e.g. extract the sharpest frame at rough time intervals, and ideally also extract frames at specified travel distances).
At this stage I’m working with the equipment and data I’ve got on hand at my desk, but if you happen to have a telemetry log (.tlog), some sonar data (.bin), and some video that you’re happy to share then that could be useful, especially for proper testing of things like positioning data and your video format (assuming it’s not just the timestamp-in-filename that QGC uses).
If you are able to share some data your timezone would also be useful for alignment, since telemetry is saved in UTC while sonar data and QGC video are saved with local time stamps.
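To illustrate that UTC-vs-local mismatch, pandas can make both timestamp columns timezone-aware so they land on a common axis before alignment. The timezone name below is a placeholder, not anything inferred from your data:

```python
import pandas as pd

LOCAL_TZ = "Europe/Paris"  # assumption: replace with your survey's timezone

# telemetry timestamps are saved in UTC...
telemetry_times = pd.to_datetime(["2023-06-01 10:00:00"]).tz_localize("UTC")

# ...while sonar logs and QGC video use local time stamps
sonar_times = pd.to_datetime(["2023-06-01 12:00:00"]).tz_localize(LOCAL_TZ)

# converting everything to UTC puts them on the same axis
sonar_times_utc = sonar_times.tz_convert("UTC")
```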
I’ve been struggling to make time to fit this in, but basic ping sonar distance extraction is now available, including timezone localisation and csv output.
I’m of course planning to make one or more examples of doing something meaningful with data alignment, but in the interim it’s possible to align telemetry and the sonar distance csvs by their timestamps using pandas.
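As one possible approach to that interim alignment, `pandas.merge_asof` picks, for each sonar row, the nearest telemetry row within a tolerance. The tiny in-memory frames below stand in for the two csvs described above (which you would load with `pd.read_csv(..., parse_dates=["timestamp"])`):

```python
import pandas as pd

# tiny stand-ins for the sonar-distance and telemetry csvs
sonar = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-06-01 10:00:00.10",
                                 "2023-06-01 10:00:01.20"]),
    "distance [mm]": [1500, 1480],
})
telemetry = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-06-01 10:00:00",
                                 "2023-06-01 10:00:01"]),
    "yaw": [12.5, 13.0],
})

# for each sonar row, take the nearest telemetry row within 1 second
aligned = pd.merge_asof(
    sonar.sort_values("timestamp"),
    telemetry.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("1s"),
)
```

Rows with no telemetry within the tolerance get NaN in the telemetry columns, which makes gaps easy to spot.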