Hello dear forumers,
[Before starting, we are using 8x T100 Thrusters as motors. We are using a Jetson Nano and a Pixhawk 2.4.8 to control our vehicle.]
We have been facing this problem since we started using
DEPTH_HOLD mode in our vehicle. For instance, when the AUV decides to go upwards by sending PWM to the channel 3 motors, it stabilizes itself and rises smoothly, but at that point an unexpected problem occurs: the vehicle glides forward, whether it is in
STABILIZE mode or
DEPTH_HOLD mode. I am not entirely sure what might be causing this problem, but I think it could be related to the calibrations. I’ll leave a few videos of the problem we encountered in pool tests. Any help will be appreciated, and I look forward to your ideas. Thanks for taking the time to read this, have a good day!
I watched the pool video, and it seems the ROV is too heavy at the bow, which could be the cause of the forward sliding.
Hi @Marco, thanks for sharing your thoughts on the situation.
Actually, the AUV’s stern is heavier. When we start the vehicle in ‘MANUAL’ mode and send PWM to channel 3 (which is “THROTTLE” and makes the vehicle go upwards), the AUV tends to lean backwards, so it is heavier at the stern.
However, even if it were heavier at the bow, the vehicle should be compensating for the weight difference via the motors. So my opinion is that the problem is not about the centre of mass.
It seems like the mass and buoyancy distributions of your vehicle are sufficiently unbalanced that the BlueROV2-Heavy frame configuration thrust contributions are quite incorrect, which means the control algorithms can’t function properly.
I’d recommend you read through this post to get a deeper understanding, and from there you can determine whether it’s best to adjust the vehicle design or create a frame configuration that better matches your actual vehicle’s distribution (or some combination of the two).
EDIT: It’s been pointed out to me that this proposed functionality does already exist to an extent, as Sensor Position Offset Compensation. It’s not full compensation (apparently doesn’t correct acceleration at this point), but it should at least work better than no compensation.
I was thinking about a way of automatically determining the thruster motion factors, and have realised that that’s likely possible (e.g. via a program that controls one thruster at a time and measures the resulting accelerations and rotations, before comparing and normalising them). I’ve also realised that ArduSub/ArduPilot would benefit from including a way of calibrating the offset between the flight controller position and the desired control point* of the vehicle, to improve rotation control.
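To make that first idea a bit more concrete, here’s a minimal sketch of the normalisation step such a program might perform, assuming each thruster has already been pulsed individually and its mean measured response logged. All the numbers (and the noise-floor value) are made up for illustration, not real BlueROV2 data:

```python
# Sketch of the normalisation step for automatic thruster motion-factor
# identification. Assumes each thruster has already been pulsed on its
# own and its mean body-frame response logged; all numbers here are
# made up for illustration, not real BlueROV2 data.

NOISE_FLOOR = 0.1  # responses below this are treated as sensor noise (assumed)

def normalise_factors(responses, noise_floor=NOISE_FLOOR):
    """Scale each axis so the largest magnitude across thrusters is 1.0,
    yielding per-thruster factors comparable to the static frame tables."""
    axes = len(next(iter(responses.values())))
    peaks = [max(abs(r[i]) for r in responses.values()) for i in range(axes)]
    return {t: [0.0 if peaks[i] < noise_floor else r[i] / peaks[i]
                for i in range(axes)]
            for t, r in responses.items()}

# Hypothetical measurements: thruster -> (surge, sway, heave, roll, pitch, yaw)
measured = {
    1: (0.98, 0.0, 0.0, 0.0, 0.05, 0.51),
    2: (1.02, 0.0, 0.0, 0.0, 0.05, -0.49),
}

factors = normalise_factors(measured)
print(factors[1])  # tiny pitch response dropped as noise; yaw normalised to 1.0
```

The noise floor matters: without it, an axis where every thruster produces only vibration-level readings would still get normalised up to ±1, which would badly distort the resulting frame configuration.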
*Note: by “control point” I mean rotation centre, which would be most efficient if set as the centre of mass, or perhaps halfway between the centre of mass and the centre of buoyancy, but could also be convenient somewhere else depending on operating aims/conditions.
As I understand it, vehicle control is currently based on the idea that the flight controller is located at the control point, in which case the measured rotations and accelerations of the flight controller are also correct for the vehicle as a whole. While that’s an intuitive assumption, it’s not necessarily practical in reality.
As an example, most users of the BlueROV2 Heavy likely expect/want to control it around the green circle (the visual centre of the vehicle, and roughly its centre of mass/buoyancy), or perhaps the yellow circle (at the front where the camera is, which is where the pilot ‘sees’ from in the video stream), but in reality the flight controller is offset, at the red circle:
From a control perspective, this likely has minimal effect on direct translation control (e.g. moving forwards will move all circles forwards equally), but a rotation about one circle is a rotation AND a translation for the other circles. That’s problematic because if the autopilot is actively controlling to have a rotation with no translation, about a control point that’s not where the sensor is, then it will be fighting against itself to stop the measured translation when it’s in fact correctly doing the commanded rotation.
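To put numbers on that: plain rigid-body kinematics gives v_imu = v_cp + ω × r, so a commanded pure rotation about the control point necessarily shows up as a translation at an offset sensor. A quick sketch (the offset values are made up, not actual BlueROV2 dimensions):

```python
# Illustration of why an offset IMU "sees" a translation during a pure
# rotation about the control point: v_imu = v_cp + omega x r, where r is
# the IMU position relative to the control point. Offsets below are
# made-up numbers, not BlueROV2 dimensions.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

omega = (0.0, 0.0, 0.5)    # rad/s yaw about the control point
r_imu = (0.10, 0.0, 0.02)  # m: IMU offset from the control point (assumed)
v_cp  = (0.0, 0.0, 0.0)    # pure rotation: no translation at the control point

v_imu = tuple(v + w for v, w in zip(v_cp, cross(omega, r_imu)))
print(v_imu)  # (0.0, 0.05, 0.0): sideways motion at the IMU despite pure yaw
```

A naive controller trying to null that 0.05 m/s “translation” would be fighting its own (correct) yaw command.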
Calibrating for an offset between the desired control centre and the IMU sensors requires an accurate method of specifying the offset, so that the required coordinate transform can take place from the sensor measurements to the actual vehicle’s motion.
Likely the simplest approach would be to physically measure the distances (in CAD, or just with a ruler/callipers), and input them manually.
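As a sketch of the transform involved, the standard rigid-body relation a_cp = a_imu − α × r − ω × (ω × r) recovers the control-point acceleration from the IMU measurement once the offset r is known. The values below are contrived so that the IMU’s apparent motion is exactly the rotational artefact:

```python
# Sketch of the coordinate transform an offset calibration enables:
# recover the control-point acceleration from the IMU measurement via
# a_cp = a_imu - alpha x r - omega x (omega x r). All values are
# illustrative; r would come from manual measurement (CAD or ruler).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

r     = (0.10, 0.0, 0.02)    # m: IMU offset from control point (measured)
omega = (0.0, 0.0, 0.5)      # rad/s: gyro reading
alpha = (0.0, 0.0, 0.2)      # rad/s^2: angular accel (differentiated gyro)
a_imu = (-0.025, 0.02, 0.0)  # m/s^2: accelerometer reading (contrived)

a_cp = sub(sub(a_imu, cross(alpha, r)), cross(omega, cross(omega, r)))
print(a_cp)  # approximately (0, 0, 0): the control point isn't accelerating
```

In this contrived case the entire accelerometer reading is explained by the rotation acting on the offset, so the corrected control-point acceleration is (approximately) zero — exactly the situation where an uncompensated controller would wrongly fight its own rotation.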
For most use-cases manual measurement would likely suffice, but it could at times also be useful to have a more automated/feedback-oriented approach (e.g. if a vehicle is designed to be configured into multiple setups without needing CAD, but measurements are difficult to take, or perhaps for advanced vehicles that are designed to self-identify their control characteristics).
One such approach would be to spin the vehicle about the desired rotation axes, which can be challenging to do, particularly for larger vehicles. For small enough vehicles, one potential accessible method would be to use something like a lazy susan or a desk chair, although it would still be challenging to ensure the vehicle is level and that the rotation axes are aligned correctly.
That could be helped by using computer vision to monitor the rotation, but that then requires a camera setup as well, and the development of the relevant detection and tracking algorithms, which is on the level of something like a thesis project.
An advancement of that idea would be to run the vehicle in water and have it controlled by an algorithm with computer vision feedback (from a camera looking at the vehicle, or the vehicle’s own camera looking in a mirror), that iteratively modifies the position offset and thruster factors until the vehicle is rotating/moving as desired. That could be quite useful for active vehicle designers, although in those cases it’s likely often simpler to determine the same values using CAD measurements and simulations.
Until an offset calibration like that is available, control error can be minimised by placing the flight controller as close as possible to the desired control centre point, and specifying the thruster factors relative to that point. Efficiency can be maximised by having the vehicle’s natural centre of rotation (determined by centre of mass, centre of buoyancy, and to some extent the flow/drag characteristics) be the same as the control centre point.
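As a sketch of what “thruster factors relative to that point” means geometrically: a thruster’s rotation contribution is the torque τ = r × F about the chosen control point, so moving the control point changes r and hence the factors. The geometry here is hypothetical, not the actual BlueROV2-Heavy layout:

```python
# Sketch of deriving a thruster's rotation factors relative to a chosen
# control point: torque contribution is tau = r x F, with r the thruster
# position relative to the control point and F its thrust direction.
# Geometry is made up, not the actual BlueROV2-Heavy layout.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Hypothetical horizontal thruster: 0.2 m forward and 0.15 m starboard of
# the control point, thrusting straight forward (body frame, unit thrust).
r_thruster = (0.2, 0.15, 0.0)
f_dir = (1.0, 0.0, 0.0)

tau = cross(r_thruster, f_dir)
print(tau)  # (0.0, 0.0, -0.15): a pure yaw moment about the control point
```

Shifting the control point aft by 0.2 m would zero this thruster’s yaw lever arm entirely, which is why the factors must be specified relative to wherever the control centre actually is.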
I’ve raised an issue about this in the ArduPilot GitHub repository, which can be followed for any updates that might occur.