We have been working on turning a BlueROV2 into an AUV that will hopefully carry out certain underwater tasks autonomously.
I understand that we can use Pymavlink on an RPi for commands, but I wonder how precise control via Pymavlink and the RPi can be. Would it be necessary to modify ArduSub?
So here is the problem: I have been learning ArduSub for quite a while, yet I am still confused about the system. I have learned that new modes can be added, and perhaps new control algorithms are possible as well? For now, though, modifying ArduSub seems too complex.
Thanks for your time!
Hi @Lili_Marleen, welcome to the forum
In general, Python is better for writing code/logic quickly and understandably, while C/C++ are better for high-performance, low-level control. Accordingly, Pymavlink is likely the easiest way to start learning how MAVLink vehicles/devices work, and to get something running that does the general actions you’re interested in. If you then find you need better performance than you’re able to achieve with Python + Pymavlink, you may want to switch to the C implementation of MAVLink, and if that still has too much communication overhead, or you need faster sensor updates, you would likely need to make a modified version of ArduSub.
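To give a sense of what getting started with Pymavlink looks like, here’s a minimal sketch that connects, arms, and nudges the forward channel. The connection string and channel mapping are assumptions for a typical BlueROV2 setup (a companion computer usually relays MAVLink on UDP port 14550, and RC channel 5 is “forward” on ArduSub), so adapt them to yours:

```python
def clamp_pwm(pwm_us, lo=1100, hi=1900):
    """Clamp an RC PWM value to the range ArduSub accepts."""
    return max(lo, min(hi, pwm_us))

def arm_and_nudge_forward(connection_string='udpin:0.0.0.0:14550'):
    """Connect, arm, and command gentle forward thrust.

    Run this on the companion computer with a vehicle attached.
    The address and channel numbers are assumptions for a typical
    BlueROV2 setup, not guaranteed defaults.
    """
    from pymavlink import mavutil  # pip install pymavlink

    master = mavutil.mavlink_connection(connection_string)
    master.wait_heartbeat()  # blocks until the autopilot is heard
    master.arducopter_arm()
    master.motors_armed_wait()

    # RC_CHANNELS_OVERRIDE: 65535 leaves a channel unchanged;
    # channel 5 is 'forward' on ArduSub, 1500 us is neutral.
    master.mav.rc_channels_override_send(
        master.target_system, master.target_component,
        65535, 65535, 65535, 65535,
        clamp_pwm(1600),  # channel 5: gentle forward
        65535, 65535, 65535)
```

Note that overrides need to be re-sent periodically, or the vehicle will fail back to its normal inputs when the override times out.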
Whether or not you need lower level/higher performance control than you can achieve with Pymavlink depends on your use-case and hardware. If you can describe more the “certain underwater tasks” you want to achieve then we may be able to provide clearer guidance on whether modifying ArduSub is likely to be necessary.
I would note that the developer documentation for modifying ArduSub currently isn’t very comprehensive. That’s something we’re planning to improve, but at this stage our documentation is mainly focused on describing existing features and how to use them, rather than where the relevant code is located or how to make changes to it. Much of the existing documentation for lower level control and understanding the codebase is in the more general ArduPilot docs, as discussed here.
Thanks for your cogent advice!
Specifically, we aim to build a resident underwater robot. It should be able to navigate along certain routes (in our case, guide belts in a distinctive colour might be an option, combined with a vision program). We would also like to enable it to return to a base station after a tour.
Generally, we hope the robot can follow visual or acoustic feedback and move accordingly, and we also need to adjust its position so that it can land on the base properly.
As for what we have so far, I am fairly confident about using OpenCV and other tools to implement the visual tasks. However, I am less familiar with the control side.
Ok. You can’t directly connect a camera to the autopilot board or run computer vision code on it, so the vision processing will need to happen on the companion computer, which can then guide the vehicle on where to move by sending relevant MAVLink messages (using Pymavlink or some other language implementation).
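As a concrete sketch of that loop (the function names and gains here are my own illustrations, not an existing API): the companion computer finds the guide belt’s centroid in each camera frame, converts the horizontal offset into a yaw command, and sends it via a MANUAL_CONTROL message. On ArduSub the MANUAL_CONTROL axes take roughly -1000..1000, with z (throttle) being 0..1000 and 500 as neutral — worth double-checking against your firmware version:

```python
def yaw_from_centroid(cx, frame_width, gain=1.0, limit=400):
    """Map the belt centroid's horizontal pixel offset to a
    MANUAL_CONTROL r (yaw) value. Positive means turn right."""
    offset = (cx - frame_width / 2) / (frame_width / 2)  # -1..1
    r = int(gain * offset * 1000)
    return max(-limit, min(limit, r))

def follow_step(master, cx, frame_width, forward=200):
    """Send one MANUAL_CONTROL message steering toward the belt.

    `master` is a pymavlink connection. Axis conventions assume
    ArduSub: x = forward, y = lateral, z = throttle (500 neutral),
    r = yaw. Call this once per processed camera frame.
    """
    master.mav.manual_control_send(
        master.target_system,
        forward,                             # x: gentle forward
        0,                                   # y: no lateral motion
        500,                                 # z: neutral throttle
        yaw_from_centroid(cx, frame_width),  # r: steer toward belt
        0)                                   # buttons: none pressed
```

The proportional mapping is deliberately simple; in practice you would likely add smoothing or a proper PID on the offset, and stop sending forward thrust when the belt leaves the frame.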
Depending on what you want from this, it may be possible to visually follow something back to the base station. If there’s nothing to follow then it will need some kind of positioning reference/sensor, which could be self-contained on the vehicle (e.g. a DVL), or could use one or more external acoustic beacons that are detected by receivers on the vehicle (possibly a direct acoustic positioning system, but that’s not essential, especially if you only have one location you want/need to get to).