Underwater Vehicle Centering and Altitude Control in Tunnels - Looking for Guidance

Hi everyone,

I’m working on a project that involves navigating and centering an underwater vehicle within a tunnel. My goal is to ensure that the vehicle stays centered as it moves through the tunnel. To achieve this, I’m considering using altitude control modes (e.g., depth or altitude hold) to maintain a consistent altitude. However, I’m unsure of the best approach to effectively use these modes for precise positioning in such environments.

Has anyone here worked on a similar task or have suggestions on the best methods, sensors, or configurations to maintain altitude and stay centered in tunnels? Any advice on effective use of altitude modes, sensor fusion (sonar, IMU, etc.), or fine-tuning PID loops for stability would be greatly appreciated!

Thanks in advance for your insights!
Ömer F.

Hi @omer_1

Very interesting application! Maybe you could use a Ping360 to stay centered?
Or two Ping2s, though those could potentially interfere with each other…

The Ping2 solution would be nicer, as you could just configure distance sensors in ArduPilot and use a Lua script to auto-center in the tunnel. Unfortunately we don’t yet have a driver for the Ping360 that communicates directly with ArduPilot.

If the tunnels are perfectly level, that should work; if they are not, you could use the Surftrak mode recently implemented by @clyde!

Hello @williangalvani,

Following your suggestion, I bought a Ping2 sonar and mounted it on the front of the vehicle. I have a pool, and I’m wondering if I can get the vehicle to avoid the walls. The first goal: when the vehicle gets close to a wall, it should turn right or left. I’ve already completed Python scripts for thruster control using the MAVLink protocol, but I’m not sure how to read the Ping2 sonar data and integrate it into my script. For example, when the vehicle gets within 30 cm of a wall, I want it to turn right. Could you help me?

Thanks

Hello @omer_1,

It seems we are working on similar projects! In my project, I’m using a setup that includes 4x Ping sonars at the front (top, bottom, left, right) and a profiling sensor at the rear. With this combination, the ROV is capable of performing both centering and automatic yaw control.

As @williangalvani mentioned, when working with multiple sensors there’s a risk of interference. To address this, I developed a WinForms program that parses the 4x Ping sonar data and operates the sensors sequentially, one at a time every 150 ms, rather than simultaneously. While this reduces the data update rate, it still works effectively for the centering algorithm. The 4x Ping distance data is sent via UDP to a custom version of QGC (QGroundControl), which I’ve modified to receive this information.
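For anyone who wants to try the same trick, the sequential-polling idea can be sketched in Python. This is a minimal sketch, not ryan354's actual program: the class name is made up, and the `sensors` list is assumed to hold zero-argument callables (e.g. small wrappers around your own Ping reads) that each trigger one ping and return a distance in mm.

```python
import time

class SequentialPoller:
    """Poll one acoustic sensor at a time to avoid cross-talk.

    sensors: list of zero-argument callables (hypothetical wrappers
    around e.g. brping Ping1D.get_distance()), each returning mm.
    interval_s: gap between pings; 150 ms matches the post above.
    """
    def __init__(self, sensors, interval_s=0.15):
        self.sensors = sensors
        self.interval_s = interval_s
        self.latest = [None] * len(sensors)   # most recent reading per sensor
        self._index = 0                       # which sensor fires next

    def poll_once(self):
        """Fire only the current sensor, store its reading, advance."""
        self.latest[self._index] = self.sensors[self._index]()
        self._index = (self._index + 1) % len(self.sensors)

    def run_cycle(self):
        """One full round over all sensors, spaced by interval_s."""
        for _ in self.sensors:
            self.poll_once()
            time.sleep(self.interval_s)
```

With four sensors at 150 ms each, every sensor refreshes every 600 ms, which is the update-rate trade-off mentioned above.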

Inside the custom QGC, I’ve implemented a centering algorithm and automatic yaw control using a Kalman filter. This is integrated with the distance data from the profiling sensor for more accurate control.
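For readers unfamiliar with Kalman filtering of range data, a minimal scalar version looks roughly like this. It is only a sketch: the process/measurement noise values `q` and `r` are illustrative, not tuned for any real sensor, and a full centering controller would run one filter per sonar and act on the left/right difference.

```python
class ScalarKalman:
    """Minimal 1-D Kalman filter (constant-position model) for
    smoothing one noisy range channel.

    q: process noise variance (how fast the true distance can drift)
    r: measurement noise variance (how noisy the sonar readings are)
    Both values here are placeholders and need tuning per sensor.
    """
    def __init__(self, q=0.01, r=4.0, x0=0.0, p0=1.0):
        self.q, self.r = q, r
        self.x, self.p = x0, p0   # state estimate and its variance

    def update(self, z):
        # Predict: state unchanged, uncertainty grows by q
        self.p += self.q
        # Update: blend prediction with measurement z via Kalman gain
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

The centering error can then be computed as `filter_left.x - filter_right.x` and fed to a lateral-thrust controller.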

These features don’t fully take over the pilot’s control. Instead, the centering and automatic yaw control function as assistance tools during operation. The pilot can still manually control the ROV while these features are active.


Hello @ryan354,

Thanks for replying to me.
I had previously developed an autonomous vehicle using Python scripts when I was in high school. However, this time I’m running into some difficulties because I’m trying to develop a semi-autonomous vehicle. For example, the QGC in the screenshot you sent is different from mine. I’ve seen customized control stations; is yours one of them, and how can I build one? My second question: do I need to make a WinForms application like you did, or should I transfer the sonar data to my Raspberry Pi via a microcontroller like an STM? I did a little research and saw there was a lot I didn’t know about WinForms. What would be the fastest solution for me? What do you recommend? By the way, I used a mini-LiDAR distance sensor in my previous project, but I don’t have detailed knowledge of sonars yet, so could you be a little more specific?

Thanks

Yes, the QGC I’m using is a customized version. You can certainly build your own QGC based on your specific requirements and sensors using Qt Creator; for a basic guide on how to build and customize QGC, you can easily find tutorials online. In my case, I used WinForms to create a simple GUI, but you can go with whatever suits your project needs.

You can use other programming languages like Python, which has a library available for parsing Ping sensor data (here’s the link: Ping Python Library). You can also use C++ or C# to build a parser based on the Ping Protocol (documentation is available here: Ping Protocol). Alternatively, you could use ROS (Robot Operating System) to retrieve the Ping sensor data if you’re familiar with it; you can find many ROS packages for the Ping sensor on GitHub.
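As a quick illustration of the Python route, here is a hedged sketch using the Ping Python Library (`pip install bluerobotics-ping`). The serial device path and baud rate are assumptions for a typical USB hookup, so adjust them for your wiring; the `distance_m` helper is my own convention, not part of the library.

```python
def distance_m(reading, min_confidence=50):
    """Convert a Ping1D get_distance() result (mm + confidence %)
    to metres, or None if the echo is too weak to trust.
    min_confidence is an arbitrary cutoff -- tune for your water."""
    if reading is None or reading["confidence"] < min_confidence:
        return None
    return reading["distance"] / 1000.0

def read_once(device="/dev/ttyUSB0", baud=115200):
    """Open the Ping2 over serial and take a single reading.
    Device path and baud rate are assumptions for a typical setup.
    brping is imported here so distance_m() stays testable offline."""
    from brping import Ping1D
    ping = Ping1D()
    ping.connect_serial(device, baud)
    if not ping.initialize():
        raise RuntimeError("failed to initialize Ping sonar")
    return distance_m(ping.get_distance())
```

Calling `read_once()` in a loop gives you the stream of distances your turn-right-under-30-cm logic needs.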

I suggest starting by focusing on the sensor and data-communication side of things. Consider how you will handle the data from the sensors and gather it at the topside controller, whether via UDP/TCP, RS485, etc. In my case, I’m using optical communication, so I don’t face issues related to network speed or latency. You can gather the serial data from the Ping sensor using a Raspberry Pi (via serial-to-Ethernet) and send the data topside through Ethernet.

To get started quickly, I recommend writing a simple program using Python and PyMavlink to receive the distance data, parse it, and perform basic movement avoidance. This might be a faster solution for you than fully customizing QGC at this stage.
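The quick-start loop described above might look something like this with PyMavlink. It is a sketch under assumptions: the connection string is the usual ArduSub companion UDP endpoint, `get_distance_m` is a placeholder for your own Ping2 read function, and the thrust/yaw magnitudes and the 30 cm threshold are illustrative, not tuned values.

```python
import time

NEUTRAL_Z = 500  # ArduSub MANUAL_CONTROL: z in [0, 1000], 500 = no vertical thrust

def avoidance_command(distance_m, threshold_m=0.30):
    """Return a MANUAL_CONTROL (x, y, z, r) tuple: cruise forward,
    but stop and yaw right when the forward sonar reads under
    threshold_m. x/y/r are in [-1000, 1000]; magnitudes are examples."""
    if distance_m is not None and distance_m < threshold_m:
        return (0, 0, NEUTRAL_Z, 400)   # stop forward thrust, yaw right
    return (300, 0, NEUTRAL_Z, 0)       # gentle forward thrust

def run(get_distance_m):
    """Main loop. get_distance_m is your own Ping2 read function
    (hypothetical here). Connection string assumes the standard
    ArduSub/BlueOS UDP endpoint on the topside computer."""
    from pymavlink import mavutil
    master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
    master.wait_heartbeat()
    while True:
        x, y, z, r = avoidance_command(get_distance_m())
        master.mav.manual_control_send(master.target_system, x, y, z, r, 0)
        time.sleep(0.1)  # ~10 Hz; MANUAL_CONTROL must be sent continuously
```

Keeping the decision logic in `avoidance_command()` separate from the MAVLink I/O means you can test the behaviour on the bench before putting the vehicle in the pool.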