Hi all,
I have a pair of Omniscan 450 SS. I am able to use SonarView to record and view sonar data on our custom vehicles. However, I am wondering if there is a way to live stream/transfer sonar data in real time for processing/analysis?
If you are still using SonarView to control and monitor the sonars, you can “listen in” on the data stream by opening a web socket to SonarLink (a component of the SonarView application). This is a new capability and we’d have to get you a beta SonarView. What you do with the data thus captured is up to you to parse and process.
You could also write your own code to connect to the Omniscans to both control them and capture their output.
If interested, let us know some more specifics (platform, network topology etc).
The API documentation is here: Programming API | Cerulean Sonar Docs
Hi Larry,
Thank you for your response. I’m currently using a Raspberry Pi 5 running the SonarView docker image. SonarView was configured to operate in dual scanner mode. I was able to collect data successfully with this setup. In the next step, I want to obtain live data to perform some image recognition with it. The approach would be to stream data of a certain time interval to a buffer to assemble a 2D image and then process that image with OpenCV.
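The buffering step described above can be sketched in plain Python (all names here are illustrative, not part of any Cerulean API): each incoming scan line becomes one row of a rolling waterfall image, which could then be handed to OpenCV via `numpy.array(...)`.

```python
from collections import deque

class ScanlineBuffer:
    """Rolling buffer that stacks 1-D sonar scan lines into a 2-D waterfall image."""

    def __init__(self, max_lines, samples_per_line):
        self.samples_per_line = samples_per_line
        self.lines = deque(maxlen=max_lines)  # oldest lines drop off automatically

    def push(self, scanline):
        # Pad or truncate so every row has the same width.
        row = list(scanline[:self.samples_per_line])
        row += [0] * (self.samples_per_line - len(row))
        self.lines.append(row)

    def as_image(self):
        # Returns a list of rows; wrap with numpy.array(...) to hand to OpenCV.
        return [list(row) for row in self.lines]

buf = ScanlineBuffer(max_lines=4, samples_per_line=6)
for i in range(6):          # simulate six incoming pings
    buf.push([i] * 6)
image = buf.as_image()      # only the 4 most recent lines remain
```

The fixed `maxlen` deque gives you the "certain time interval" window for free: once it is full, each new ping displaces the oldest row.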
Regarding the Programming API that you shared: does it describe the format of the data streamed from the Omniscan circuit board to the host computer's Ethernet port?
I assume you understand that with your current setup you can record svlog files and post-process them offline to work on your image processing.
Yes, the Programming API, and in particular the os_mono_profile packet, describes the data you will see in the svlog file or in a live connection if you go that way. In addition, you will need position and heading information, which will typically come from MAVLINK or NMEA messages. Both have packet wrappers defined here: General Packet Definitions | Cerulean Sonar Docs
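Before decoding individual payloads such as os_mono_profile, the outer framing has to be parsed. Below is a minimal sketch assuming a Ping-protocol-style frame (start bytes 'B','R', little-endian u16 payload length and packet ID, source/destination device IDs, payload, then a u16 sum-of-bytes checksum); treat the field layout as an assumption and check the Universal Packet Format docs for the authoritative definition.

```python
import struct

def parse_packets(stream: bytes):
    """Yield (packet_id, payload) tuples found in a raw byte stream.

    Assumed frame layout (verify against the Universal Packet Format docs):
    'B','R', u16 payload length, u16 packet ID, src id, dst id,
    payload bytes, u16 checksum = sum of all preceding bytes (little-endian).
    """
    i = 0
    while i + 10 <= len(stream):            # 8-byte header + 2-byte checksum minimum
        if stream[i:i + 2] != b"BR":        # hunt for the start-of-frame marker
            i += 1
            continue
        length, packet_id = struct.unpack_from("<HH", stream, i + 2)
        end = i + 8 + length + 2            # header + payload + checksum
        if end > len(stream):
            break                           # incomplete frame; wait for more data
        expected, = struct.unpack_from("<H", stream, i + 8 + length)
        if sum(stream[i:i + 8 + length]) & 0xFFFF == expected:
            yield packet_id, stream[i + 8:i + 8 + length]
            i = end
        else:
            i += 2                          # bad checksum; resync on next marker

# Demo with a hand-built frame (ID 116 and payload b"abc" are dummies):
demo = b"BR" + struct.pack("<HH", 3, 116) + bytes([0, 0]) + b"abc"
demo += struct.pack("<H", sum(demo) & 0xFFFF)
packets = list(parse_packets(demo))
```

The same routine works whether the bytes come from an svlog file read back in or from a live connection, which keeps offline and online processing paths identical.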
If you do want to set up a live session you can do that by opening a web socket connection to SonarLink (on your RPi) from your program (on the RPi or elsewhere on the network). If you want to do that please submit a support request at https://ceruleansonarhelp.zendesk.com/hc/en-us/requests/new and we will get you going on that. It’s a new feature so we’ve not rolled it out in the docs yet.
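On the listening side, a minimal Python sketch using the third-party `websockets` package might look like the following. The SonarLink endpoint (host, port, path) is deliberately a placeholder, since that detail comes through the support channel; the per-frame bookkeeping is split into a pure function so it can be exercised without a live socket.

```python
import asyncio

def handle_frame(data: bytes, stats: dict) -> dict:
    """Pure bookkeeping step: tally frames and bytes. In a real client this
    is where you would parse the packet and push it into your image buffer."""
    stats["frames"] = stats.get("frames", 0) + 1
    stats["bytes"] = stats.get("bytes", 0) + len(data)
    return stats

async def listen(uri: str):
    import websockets  # third-party: pip install websockets
    stats = {}
    async with websockets.connect(uri) as ws:
        async for message in ws:
            if isinstance(message, (bytes, bytearray)):
                handle_frame(message, stats)

# Usage (the endpoint below is a placeholder -- the real SonarLink
# host/port/path comes from Cerulean support, as the feature is undocumented):
#   asyncio.run(listen("ws://<sonarlink-host>:<port>"))
```

Keeping the parsing logic out of the socket loop also means the same code can replay recorded svlog bytes for testing.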
Good luck!
Larry
Yes, with the current setup I was able to collect svlog data and export it to XTF for post-processing. However, my current plan is to use the sidescan data to aid the vehicle's path planning, so I need to process live data over a certain time interval online.
Regarding your suggestion above, please correct me if I'm wrong: after setting up the socket with the scanner circuit board's IP address, can I just send binary data packets to it and receive return packets? "packet_ID" is the unique ID for the corresponding command, e.g. command 116 to set the speed of sound, and 2197 to start pinging with configuration parameters, using the structure described in the Universal Packet Format. Will reading and unpacking the data be just generic UDP streaming?
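To make that question concrete, here is how such a command frame might be assembled in Python, assuming a Ping-protocol-style Universal Packet Format layout (start bytes 'B','R', u16 payload length, u16 packet ID, source/destination IDs, sum-of-bytes checksum). The payload for command 116 shown here is a placeholder guess, not the documented field layout; the authoritative definitions are in the Programming API docs.

```python
import struct

def build_packet(packet_id: int, payload: bytes = b"",
                 src: int = 0, dst: int = 0) -> bytes:
    """Frame a command, assuming a Ping-style layout: 'B','R', u16 payload
    length, u16 packet ID, src id, dst id, payload, then a u16 checksum
    that is the 16-bit sum of all preceding bytes (little-endian)."""
    body = b"BR" + struct.pack("<HHBB", len(payload), packet_id, src, dst) + payload
    return body + struct.pack("<H", sum(body) & 0xFFFF)

# Example: command 116 (set speed of sound). The u32 payload here is a
# placeholder guess at the field layout -- consult the API docs for the
# real parameter types and units before sending anything.
pkt = build_packet(116, struct.pack("<I", 1500000))
```

Whether the framed bytes then travel over UDP, TCP, or the SonarLink web socket is a transport question the vendor would need to confirm; the framing itself is independent of that choice.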
I have the vehicle position and heading, but they are not yet in MAVLINK format. If I convert my heading data to MAVLINK format, can I then wrap it in this MAVLINK WRAPPER?
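If it helps to see the MAVLink side, here is a rough sketch of hand-packing a MAVLink v1 ATTITUDE (#30) frame carrying heading as yaw (radians), using the MCRF4XX checksum MAVLink specifies (39 is ATTITUDE's published CRC_EXTRA byte). In practice the pymavlink library does this packing for you; the Cerulean MAVLINK WRAPPER from the General Packet Definitions doc would then go around the resulting frame.

```python
import struct

def x25_crc(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/MCRF4XX accumulator, as used by MAVLink."""
    for byte in data:
        tmp = (byte ^ (crc & 0xFF)) & 0xFF
        tmp = (tmp ^ (tmp << 4)) & 0xFF
        crc = ((crc >> 8) ^ (tmp << 8) ^ (tmp << 3) ^ (tmp >> 4)) & 0xFFFF
    return crc

def mavlink1_attitude(yaw_rad: float, time_boot_ms: int = 0,
                      seq: int = 0, sysid: int = 1, compid: int = 1) -> bytes:
    """Pack a MAVLink v1 ATTITUDE (#30) frame; heading goes in the yaw
    field (radians, -pi..pi). Roll/pitch and rates are zeroed here."""
    payload = struct.pack("<Iffffff", time_boot_ms, 0.0, 0.0, yaw_rad,
                          0.0, 0.0, 0.0)
    header = struct.pack("<BBBBBB", 0xFE, len(payload), seq, sysid, compid, 30)
    # MAVLink checksums len..payload plus the message's CRC_EXTRA byte.
    crc = x25_crc(header[1:] + payload + bytes([39]))
    return header + payload + struct.pack("<H", crc)
```

For production use, pymavlink (or any MAVLink library) is the safer route, since it handles sequence numbers, dialects, and MAVLink v2 for you.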
I will submit a support request shortly.
Are you planning to run SonarView during this live data collection, or not? These are two different scenarios, each requiring a different approach.
It would be great if I could store the data while also accessing it live. However, if that's not possible, I can prioritize live data access. Could you explain the difference in implementation between the two scenarios?
Sorry that I was not clear in my previous comments. I meant to use SonarView to start recording while listening on the socket for my own data parsing and other live processing. My example with command 116 to set the speed of sound and 2197 to start pinging was just to validate my understanding of the packet format. I've just submitted a request via the link you shared.
Indeed all of the above are possible approaches. We’ve got to get some documentation together on the web socket approach. To be continued on the zendesk ticket that you opened. Thanks.