Pymavlink SET_POSITION_TARGET with velocity

ArduSub can make use of arbitrary position control if it has sufficient positioning information - just like ArduCopter. If you try to run ArduCopter without a GPS or other form of positioning (e.g. visual odometry) you will not be able to do position control.
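
For context on the thread title, here's roughly what a velocity-only SET_POSITION_TARGET_LOCAL_NED request looks like with pymavlink. This is just a sketch - the connection endpoint, send rate, and velocity values are assumptions for illustration, and it only does something useful if the vehicle is armed, in GUIDED mode, and has the positioning discussed below:

```python
import time
from pymavlink import mavutil

# Connect to the vehicle (assumed UDP endpoint - adjust for your setup)
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()

# Ignore everything except the velocity components
type_mask = (
    mavutil.mavlink.POSITION_TARGET_TYPEMASK_X_IGNORE |
    mavutil.mavlink.POSITION_TARGET_TYPEMASK_Y_IGNORE |
    mavutil.mavlink.POSITION_TARGET_TYPEMASK_Z_IGNORE |
    mavutil.mavlink.POSITION_TARGET_TYPEMASK_AX_IGNORE |
    mavutil.mavlink.POSITION_TARGET_TYPEMASK_AY_IGNORE |
    mavutil.mavlink.POSITION_TARGET_TYPEMASK_AZ_IGNORE |
    mavutil.mavlink.POSITION_TARGET_TYPEMASK_YAW_IGNORE |
    mavutil.mavlink.POSITION_TARGET_TYPEMASK_YAW_RATE_IGNORE
)

boot_time = time.time()
# Request 0.5 m/s north for ~5 seconds, re-sending regularly because
# guided velocity targets time out if they aren't refreshed.
for _ in range(10):
    master.mav.set_position_target_local_ned_send(
        int((time.time() - boot_time) * 1000),  # time_boot_ms
        master.target_system, master.target_component,
        mavutil.mavlink.MAV_FRAME_LOCAL_NED,
        type_mask,
        0, 0, 0,     # x, y, z position (ignored)
        0.5, 0, 0,   # vx, vy, vz (m/s, NED frame)
        0, 0, 0,     # afx, afy, afz (ignored)
        0, 0,        # yaw, yaw_rate (ignored)
    )
    time.sleep(0.5)
```

Sending the message is the easy part - the catch is what the autopilot needs in order to actually act on it, which is what the rest of this post is about.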

Most ArduSub vehicles have an external pressure sensor, which is what allows vertical (depth-based) positioning to work, and a compass which allows maintaining heading (assuming you don’t go too close to large metallic structures). A three-axis accelerometer combined with a gyroscope can give a sense of the direction of gravity and the rotation rate, which allows maintaining pitch and roll.
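
That is also why a depth-only target can work without any horizontal positioning. As a sketch (loosely following the set-target-depth approach in the ArduSub pymavlink docs - the connection endpoint here is an assumption, and the vehicle needs to be in a depth-hold-capable mode like ALT_HOLD, so check the docs for the maintained example):

```python
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # assumed endpoint
master.wait_heartbeat()
boot_time = time.time()

def set_target_depth(depth_m):
    """Request a target depth (negative = below the surface).

    Only the altitude field is used - everything else is ignored, so this
    relies only on the pressure-based depth estimate, not on any
    horizontal positioning.
    """
    master.mav.set_position_target_global_int_send(
        int((time.time() - boot_time) * 1000),  # time_boot_ms
        master.target_system, master.target_component,
        mavutil.mavlink.MAV_FRAME_GLOBAL_INT,
        (   # ignore everything except the altitude component
            mavutil.mavlink.POSITION_TARGET_TYPEMASK_X_IGNORE |
            mavutil.mavlink.POSITION_TARGET_TYPEMASK_Y_IGNORE |
            mavutil.mavlink.POSITION_TARGET_TYPEMASK_VX_IGNORE |
            mavutil.mavlink.POSITION_TARGET_TYPEMASK_VY_IGNORE |
            mavutil.mavlink.POSITION_TARGET_TYPEMASK_VZ_IGNORE |
            mavutil.mavlink.POSITION_TARGET_TYPEMASK_AX_IGNORE |
            mavutil.mavlink.POSITION_TARGET_TYPEMASK_AY_IGNORE |
            mavutil.mavlink.POSITION_TARGET_TYPEMASK_AZ_IGNORE |
            mavutil.mavlink.POSITION_TARGET_TYPEMASK_YAW_IGNORE |
            mavutil.mavlink.POSITION_TARGET_TYPEMASK_YAW_RATE_IGNORE
        ),
        0, 0,        # lat_int, lon_int (ignored)
        depth_m,     # alt: target depth (m), negative below the surface
        0, 0, 0,     # vx, vy, vz (ignored)
        0, 0, 0,     # afx, afy, afz (ignored)
        0, 0,        # yaw, yaw_rate (ignored)
    )

set_target_depth(-2.0)  # e.g. hold 2 m below the surface
```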

At the moment ArduPilot’s position control code relies on an externally-referenced positioning sensor being connected (like those I linked to in my previous comment, or a visual odometry based approach with computer vision). The accelerometers and gyroscopes included in a flight controller’s IMU are only aware of how things are changing at a given point in time.

As an analogy, if you were blindfolded and swimming in a river and were asked to maintain position, or move 1m to the left, it would be impossible for you to know whether you were succeeding because there’s no external positioning feedback. You might have some idea which direction you’re moving in relative to your body (like an accelerometer), and feel whether you’re spinning (like a gyroscope), and the warmth of the sun on one side may give you a sense of orientation (like a compass/magnetometer), but that isn’t enough information to be able to stay still or move to fixed locations.

In the case of electronic sensors (unlike human feelings) it is technically possible to integrate derivative dynamics (like acceleration) into an estimate of velocity, and further integrate that into an estimate of position, but there's no error-correcting feedback on those estimates, and even discounting electrical noise there is inherently missing data because time passes between samples. Integrated error grows quickly, and is worse the larger the sampling period, the noisier the sensor, and the more integration steps involved (e.g. integrating twice from acceleration to position is worse than integrating just once, from an acceleration measurement to a velocity estimate or from a velocity measurement to a position estimate). Increasing the precision and sampling rate of inertial sensors can slow the error growth, but any error that does occur cannot be corrected without feedback from an external reference, so growth is inevitable.
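
To get a feel for how quickly that integrated error grows, here's a purely illustrative numerical sketch (the sample rate and noise level are made-up numbers, not from any particular IMU) that double-integrates zero-mean accelerometer noise for a vehicle that is actually holding perfectly still:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.01          # 100 Hz sampling (assumed)
duration = 60.0    # one minute of "holding still"
noise_std = 0.05   # accelerometer noise, m/s^2 (made-up, illustrative)

t = np.arange(0, duration, dt)
# True acceleration is zero (the vehicle is stationary); all we measure is noise.
accel_measured = rng.normal(0.0, noise_std, size=t.size)

# Integrate once for velocity, again for position (simple Euler integration).
velocity = np.cumsum(accel_measured) * dt
position = np.cumsum(velocity) * dt

for seconds in (1, 10, 60):
    i = int(seconds / dt) - 1
    print(f"after {seconds:>2}s: velocity error {velocity[i]:+.3f} m/s, "
          f"position error {position[i]:+.2f} m")
```

The exact numbers change every run, but with values like these the position estimate typically drifts by the order of a metre within a minute, and real sensors also have bias and vibration-induced noise that make things worse - without an external reference there is nothing to pull the estimate back.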

As is, ArduSub’s EKF (which fuses its sensor measurements and tries to predict its state) will do the integration already, so in theory you could get a very noisy position estimate from that and try to use it for position control. The current implementation does not permit inertial-based position control though, which is what I was discussing in my earlier comment, where I said:

Note also the disclaimers that

You can’t follow a path without a position estimate, and at the moment ArduSub will not give you a position estimate without an externally referenced sensor (like the ones I linked to in my previous comment):


As something of a summary, while we would love every vehicle to have cheap positioning capabilities,

  • ArduSub needs modification before inertial-only position or velocity control can even be attempted
  • We don’t know how hard that will be to do yet
    • We may do it if we determine an obvious/reasonably simple way to, because
      • in principle we’d prefer limitations to come from hardware, rather than from our software forcibly disallowing functionality
      • if new hardware comes out, substantially better state estimation / noise rejection algorithms are developed for the EKF, or someone tries some externally-connected inertial sensors, it’s possible performance could be better / somewhat usable for low-precision applications
    • but our software department is small and very low on time at the moment, so if there’s no obvious fix then this won’t get worked on by us
  • Even if it is possible to enable though, it’s very unlikely to work “well”, particularly given how it went when it was tried previously with just a Pixhawk
  • I wouldn’t recommend following this approach, unless you have sufficient time and budget to spend at least a month of development and testing, with a very high probability of failure / unusable results
    • If you do have that availability, feel free to work on it - it’d be cool to have as a possibility for people to try out, if only to definitively show how important it is to have externally referenced sensors for positioning (unfortunate though that requirement may be)
    • If you don’t have that availability, you’ll need to look into other positioning approaches (as discussed and linked to above), or work on projects that don’t require positioning / are operated manually (using a human with some less fully-integrated sensors (e.g. a camera and/or scanning / imaging sonar), or a line of sight, as the external reference).