Hello dear forumers,
We are working on an AUV that uses 8 thrusters, an NVIDIA Jetson and a Pixhawk 2.4.8. We are currently working on the autonomous operation of the vehicle. We have a Ping360 scanning sonar mounted on top of the vehicle and we’d like to use it for object detection, obstacle avoidance and mapping. The vehicle is currently intended to operate in Olympic-size swimming pools, so, for instance, is there a way to define the boundaries of the pool and set a route around it (the way robot vacuums do), or to detect a floating object?
Any help will be appreciated. Thanks for taking the time to read this.
NVIDIA Jetson is a full family of products. Are you using one of the developer kits, or a custom board designed around one of the Jetson modules?
Detecting the vehicle’s position within the pool should be possible, although unless the pool contains other objects at known locations, the initial location may need to be set manually: Olympic pools are generally rectangular, and that symmetry makes the position at any single point in time ambiguous. Actually doing that kind of detection would likely require something like:
- correct for vehicle rotation (using telemetry data) in sonar pings
- detect straight lines of intensity peaks in the scan profile data to determine the pool wall locations
- send the position to the autopilot
- repeat, but potentially make use of the known position to track expected wall locations
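The steps above can be sketched in code. This is only an illustrative outline, not a tested pipeline: the intensity threshold, the argmax-based peak picking, and the simple least-squares line fit are all assumptions standing in for whatever detection approach you end up tuning for your sonar and pool.

```python
import numpy as np

def wall_points(angles_deg, profiles, heading_deg, range_per_sample):
    """Convert one Ping360 sweep into world-frame (x, y) candidate wall points.

    angles_deg: beam angle for each ping, in the vehicle frame
    profiles: 2D array, one intensity profile (0-255) per ping
    heading_deg: vehicle heading from telemetry, used to de-rotate the scan
    range_per_sample: metres represented by one profile sample
    """
    points = []
    for angle, profile in zip(angles_deg, profiles):
        peak = int(np.argmax(profile))        # strongest return = likely wall
        if profile[peak] < 80:                # skip weak returns (threshold is a guess)
            continue
        r = peak * range_per_sample
        theta = np.deg2rad(angle + heading_deg)  # rotate into the world frame
        points.append((r * np.cos(theta), r * np.sin(theta)))
    return np.array(points)

def fit_wall(points):
    """Least-squares line fit (y = m*x + c) through candidate wall points."""
    x, y = points[:, 0], points[:, 1]
    A = np.vstack([x, np.ones_like(x)]).T
    m, c = np.linalg.lstsq(A, y, rcond=None)[0]
    return m, c
```

In practice you would cluster the points into (up to) four groups first, fit a line per wall, and intersect the fitted lines to recover the vehicle position before sending it to the autopilot.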
Detecting floating objects may be possible, but that depends on how far they protrude into the water, how far they are from the sonar, and what depth the vehicle is at.
It’s worth noting that robot vacuums likely have higher resolution LiDAR data than a scanning sonar provides, along with a shorter minimum range, faster scanning/refresh rate, less rotation/vibration issues (since the vehicle is on a flat surface, with wheels), and generally use contact-based push sensors for obstacle ‘avoidance’. Those differences aren’t insurmountable, but they exist and help to make the robot vacuum perception problem easier to solve and more robust. Vision may be able to help, although that also takes processing power, and adds its own complexities.
You’ll also need to be aware of sonar reflections (particularly when the sonar is close to a wall, in its ‘blind’ region).
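One simple mitigation for the blind region is to discard the first part of each intensity profile before doing any peak detection. A minimal sketch, where the 0.75 m minimum range is an assumed placeholder (check your Ping360’s actual specs and observed behaviour):

```python
import numpy as np

def mask_blind_region(profile, range_per_sample, min_range_m=0.75):
    """Zero out samples closer than the sonar's minimum usable range,
    where transducer ringing and close-wall reflections dominate.

    profile: 1D intensity array for one ping
    range_per_sample: metres represented by one profile sample
    min_range_m: assumed minimum usable range (tune for your setup)
    """
    cutoff = int(min_range_m / range_per_sample)
    cleaned = profile.copy()
    cleaned[:cutoff] = 0
    return cleaned
```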
With regards to “setting a route around the pool”, this post is likely worth a read.
While it’s likely quite a bit simpler and easier to work from a pre-made map that’s encoded into the control program, if necessary it’s possible to use a full SLAM algorithm to do both localisation and mapping, which may then detect and keep track of arbitrary objects within the pool water, and dynamically determine a path around them. What you decide to do will depend on your project requirements.
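To illustrate the pre-made-map approach: for a known rectangular pool, a coverage route can be generated as a fixed waypoint list in a pool-fixed frame, with no mapping required. This is a hypothetical sketch (the function name, parameters, and the back-and-forth “robot vacuum” lane pattern are my own, not from any existing library):

```python
def lawnmower_route(length, width, lane_spacing, margin=1.0):
    """Generate back-and-forth coverage waypoints for a rectangular pool.

    length, width: pool dimensions in metres
    lane_spacing: distance between parallel lanes (set from sonar/camera coverage)
    margin: safety standoff from the walls, in metres
    Returns a list of (x, y) waypoints with the origin at one pool corner.
    """
    waypoints = []
    x = margin
    going_up = True
    while x <= length - margin:
        if going_up:
            waypoints += [(x, margin), (x, width - margin)]
        else:
            waypoints += [(x, width - margin), (x, margin)]
        going_up = not going_up   # alternate lane direction
        x += lane_spacing
    return waypoints
```

Each waypoint would then be sent to the autopilot as a position target, with the localisation from the sonar keeping the pool-fixed frame meaningful.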
@toosat have you made any further progress on this task?
I am curious to know more about your setup.
I am working on a similar task with the bluerov2 + ROS, however I am going to use the robot in a pond with really low water visibility.
if necessary it’s possible to use a full SLAM algorithm to do both localisation and mapping
@EliotBR I agree that this is possible in theory, but would it work well in practice? I am having difficulty finding practical examples showing that this works out of the box with a simple setup, especially without additional sensors for localisation like a DVL.
I found this repo where this person did SLAM using octomap with a sonar in simulation: GitHub - Tim-HW/Tim-HW-BlueRov2_Sonar_based_SLAM-
I didn’t read the thesis, so I don’t know how well it works or whether it could be extrapolated to the real world.
@toosat if you plan to use ROS it might be worth checking this package for the Ping360: GitHub - CentraleNantesRobotics/ping360_sonar: ROS package for Blue Robotics Ping360 Sonar
I expect that’s largely dependent on the area being mapped, the vehicle speed, and the definitions of “work well” and “simple setup”. Most flight controllers include at least one Inertial Measurement Unit (IMU - combination of accelerometer(s), gyroscope(s), magnetometer(s)), and ArduSub uses an Extended Kalman Filter (EKF) for sensor fusion and handling uncertainty. I don’t expect an AUV with a Ping360 would be getting millimetre precision mapping, but a rough map may be possible if it and the environment aren’t moving too quickly.
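As a toy illustration of the fusion idea (not ArduSub’s actual EKF, which is multi-dimensional and considerably more involved), a single scalar Kalman update shows how a dead-reckoned position prediction gets combined with a noisy sonar-derived position measurement, weighted by their respective uncertainties:

```python
def kalman_update(x_pred, var_pred, z_meas, var_meas):
    """One scalar Kalman filter update step.

    x_pred, var_pred: predicted position and its variance (e.g. from IMU
                      dead reckoning)
    z_meas, var_meas: sonar-derived position measurement and its variance
    Returns the fused estimate and its (reduced) variance.
    """
    k = var_pred / (var_pred + var_meas)   # Kalman gain: trust in the measurement
    x_new = x_pred + k * (z_meas - x_pred)
    var_new = (1 - k) * var_pred           # fusing always shrinks uncertainty
    return x_new, var_new
```

With equally uncertain inputs the result lands halfway between them; as the sonar measurement noise grows, the estimate leans increasingly on the dead-reckoned prediction.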
In terms of the “simple setup”, hardware and software can compensate for each other to some degree, but fundamentally SLAM is a hard problem, so unless there are simplifications that can be made from knowns about the environment (e.g. a pool boundary with 4 straight walls, and perhaps known obstacle shapes and speed limits) then it would likely require either quite high end sensors, or very well written and tuned software, or both. The software side is reasonably well understood in literature, but that doesn’t mean it’s “simple”, especially since environmental simplifications may need to be developed for the specific use-case.
Thanks for your reply!
Yeah, I guess it is a matter of trying it, seeing what happens, and then thinking about how it can be improved. I will try to collect some data with the BlueROV and try this out in the following weeks; maybe I can share my findings after that.