Hi there. I'm trying to map our small water pool in ROS using point cloud data published from a Ping360, but I'm running into some trouble.
First, I downloaded the code from ping360_sonar/src/ping360_sonar at develop · CentraleNantesRobotics/ping360_sonar · GitHub and ran it in ROS as a driver for the Ping360. It works well, and I can get the angle and distance data from the topic in every frame.
Then I transformed the angle and distance data into point cloud data in a ROS node, published it, and visualized it in RViz; the result is shown below.
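The conversion step itself is just a polar-to-Cartesian transform in the sonar's own frame. A minimal sketch of what I'm doing (the function name and the 2D-only treatment are my own simplifications, not from the driver):

```python
import math

def polar_to_point(angle_rad, distance_m):
    """Convert one sonar return (bearing in radians, range in metres)
    to a 2D point (x, y) in the sonar's body frame."""
    x = distance_m * math.cos(angle_rad)
    y = distance_m * math.sin(angle_rad)
    return (x, y)
```

Each published point is then stamped with the sonar frame and left to TF (or not, which may be my problem) to place in the world.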
I don’t have experience with the ROS driver, so I’m not sure how it’s handling the Ping360’s profile data. If it’s not accounting for changes in the vehicle’s orientation and position, then each output point will be relative to the Ping360’s pose at the time it was taken, so different points aren’t necessarily in the same coordinate system.
For reference, PingViewer corrects for Ping360 yaw rotation using the vehicle orientation data provided in the MAVLink telemetry stream.
The Ping360 outputs profile data that is relative to the device, and has no concept of

- where the device is
- how the device is oriented
- which response strengths correspond to object reflections
- the speed of sound in the transmit medium

all of which are relevant for generating a point cloud, and need to be determined from external data / provided by the user.
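For the last two items, here's a rough sketch of how ranges and echoes might be recovered from a raw intensity profile. Treat it as an illustration, not the driver's actual code: the 25 ns sample-period tick should be verified against the ping-protocol documentation, and the 1500 m/s speed of sound and the threshold value are placeholder assumptions the user needs to supply for their own water.

```python
def sample_range(sample_index, sample_period_ticks, speed_of_sound=1500.0):
    """Range (m) of one intensity sample.
    sample_period_ticks: device sample period, assumed to be in 25 ns ticks.
    speed_of_sound: m/s in the transmit medium (1500 is a typical seawater guess).
    The factor of 2 accounts for the two-way travel of the ping."""
    travel_time = sample_index * sample_period_ticks * 25e-9  # seconds
    return speed_of_sound * travel_time / 2.0

def extract_echoes(intensities, threshold=100):
    """Indices of samples strong enough to be treated as object reflections.
    The threshold is entirely user-chosen; the device doesn't classify returns."""
    return [i for i, v in enumerate(intensities) if v >= threshold]
```

Getting the speed of sound wrong scales every range linearly, so a pool measurement against a known distance is a quick sanity check.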
The device_data message the Ping360 uses to communicate its profiles uses gradians for the transducer head angle. There are 400 gradians in a full circle, so most likely your code should be using 400 for the maximum angle, unless it’s doing some internal conversion between degrees and gradians.
The 0 angle for the Ping360 transducer head is at the back, where the cable enters the device body.
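Putting those two facts together, angle handling might look something like the sketch below. The choice of "forward = 200 gradians from the cable" is an assumed mounting convention (cable pointing backwards), not something the device reports, so adjust the offset for your installation.

```python
import math

GRADIANS_PER_TURN = 400  # the device_data angle field is in gradians

def ping360_angle_to_radians(angle_gradians, forward_offset_gradians=200):
    """Convert a Ping360 head angle (gradians, 0 at the cable entry) to
    radians with 0 pointing 'forward', assuming the cable points backwards."""
    rel = (angle_gradians - forward_offset_gradians) % GRADIANS_PER_TURN
    return rel * 2.0 * math.pi / GRADIANS_PER_TURN
```

If your code is sweeping 0–360 instead of 0–400, every bearing past the first tenth of a turn will be wrong, which shows up as a wedge-shaped distortion in the cloud.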
Hi @Alex001, I'm also thinking of using the Ping360 for underwater mapping. Just wondering if you've managed to improve the accuracy of the point clouds from your previous post. Any help would be appreciated! Thank you.