Pymavlink SET_POSITION_TARGET with velocity

I am trying to control the BlueROV2 using position targeting rather than RC override or manual inputs. I used SET_POSITION_TARGET_GLOBAL_INT to implement depth hold at a target depth, like the example in the ArduSub Pymavlink documentation (Pymavlink · GitBook), and it worked perfectly.

import time
from pymavlink import mavutil

# Connection setup (this address/port is the usual BlueROV2 topside default -
# adjust to your setup)
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()
boot_time = time.time()
depth = -10  # target depth [m]; negative is below the surface (example value)

master.mav.set_position_target_global_int_send(
    int(1e3 * (time.time() - boot_time)),  # ms since boot
    master.target_system, master.target_component,
    coordinate_frame=mavutil.mavlink.MAV_FRAME_GLOBAL_INT,
    type_mask=(  # ignore everything except x, y, & z positions
        # DON'T mavutil.mavlink.POSITION_TARGET_TYPEMASK_X_IGNORE |
        # DON'T mavutil.mavlink.POSITION_TARGET_TYPEMASK_Y_IGNORE |
        # DON'T mavutil.mavlink.POSITION_TARGET_TYPEMASK_Z_IGNORE |
        mavutil.mavlink.POSITION_TARGET_TYPEMASK_VX_IGNORE |
        mavutil.mavlink.POSITION_TARGET_TYPEMASK_VY_IGNORE |
        mavutil.mavlink.POSITION_TARGET_TYPEMASK_VZ_IGNORE |
        mavutil.mavlink.POSITION_TARGET_TYPEMASK_AX_IGNORE |
        mavutil.mavlink.POSITION_TARGET_TYPEMASK_AY_IGNORE |
        mavutil.mavlink.POSITION_TARGET_TYPEMASK_AZ_IGNORE |
        # DON'T mavutil.mavlink.POSITION_TARGET_TYPEMASK_FORCE_SET |
        mavutil.mavlink.POSITION_TARGET_TYPEMASK_YAW_IGNORE |
        mavutil.mavlink.POSITION_TARGET_TYPEMASK_YAW_RATE_IGNORE
    ), lat_int=0, lon_int=0, alt=depth,  # (x, y WGS84 frame pos - not used), z [m]
    vx=0, vy=0, vz=0,  # velocities in NED frame [m/s] (not used)
    afx=0, afy=0, afz=0, yaw=0, yaw_rate=0
    # accelerations in NED frame [m/s^2], yaw [rad], yaw_rate [rad/s]
    #  (all not supported yet, ignored in GCS_Mavlink)
)

I’m trying to use the ignored parts of this same message to control other parameters of the position besides altitude. It looks like you need GPS to use the lat and long parameters, but according to the MAVLink site and the comments in the code above, velocity and acceleration use a North, East, Down (NED) reference frame, and shouldn’t require GPS input.

I tried using the same message above but with non-zero vx and vy to make the ROV move laterally. I tried it without changing any of the ignore lines, I tried commenting out the lines that ignore vx, vy, and vz, and I tried doing that plus also ignoring the x, y, and z positions. None of these attempts made the BlueROV2 move laterally at all.

I even tried the very similar message SET_POSITION_TARGET_LOCAL_NED, so I could try all of the above but with non-zero x and y parameters in the NED reference frame. That didn’t work either.

Any idea what’s wrong with what I’m doing?

Hi @autery, welcome to the forum :slight_smile:

I have some vague recollection that the velocity options of the position target are not implemented in ArduSub, but I’m not certain so I’ve asked internally and will follow up.

I also tried using SET_POSITION_TARGET_LOCAL_NED where I put in non-zero x and y parameters instead of vx and vy, and there was still no lateral motion.

We did a bit of testing and digging today, and found that using SET_POSITION_TARGET_* requires using Guided mode, which in its current implementation requires having both a position estimate and a position target. When setting just velocity targets, the initialisation process includes setting the position target to the vehicle’s current position.

In a simulated vehicle (which includes a GPS signal) we were able to use the velocity target mode, but the vehicle moved a short distance before slowing and stopping, presumably reaching an equilibrium between trying to maintain a constant non-zero velocity and trying to hold a constant position.

We were able to get accelerometer-only position estimates to at least occur internally (on a Navigator, not simulated) by setting EK3_SRC1_POSXY, EK3_SRC1_VELXY, and EK3_SRC1_VELZ from “GPS” to “None”, and setting a global origin. From what we could tell, though, the internal position estimates were not being output over MAVLink (via the AHRS2, GLOBAL_POSITION_INT, and GPS_RAW_INT messages that were being sent), so there’s no obvious way to feed back the current position as the vehicle moves and allow the position target to be updated.

We’ll try to do some more testing, but at this stage, while we agree in principle that it should at least be possible to try to maintain velocity without a position sensor, it seems there are some aspects of the current implementation that are stopping that from happening. It’s also very possible that even if we manage to enable it, the velocity estimates from integrated accelerometer data will be far too noisy for usable performance.

If we can find a reasonably simple way of enabling it then we’ll try to do so (it would be nice to at least have the option, in case “poor performance” is still “good enough” for some use-cases), but we’ll have to see how complicated and time consuming that is to do.


Thank you for doing all that. Position targeting is really what I’m going for here; I only tried using velocity targets because position targeting wasn’t working.

Similar reasoning applies for position control, but it’s even more likely to work poorly than velocity control. In general, positioning requires a positioning-focused sensor.

So do you think SET_POSITION_TARGET_* is not doing anything because of the way it was implemented in ArduSub (as opposed to ArduCopter), where basically only the depth control portion is used, or because integrating the raw IMU acceleration data is so noisy? I would think that if the implementation were fine, the BlueROV2 would still move, just very imprecisely if the position data was noisy. I’m only asking because I need to explain to my boss, who really wants me to use position control rather than RC override for path planning, why I can’t pull it off with the BlueROV2.

ArduSub can make use of arbitrary position control if it has sufficient positioning information - just like ArduCopter. If you try to run ArduCopter without a GPS or other form of positioning (e.g. visual odometry) you will not be able to do position control.

Most ArduSub vehicles have an external pressure sensor, which is what allows vertical (depth-based) positioning to work, and a compass which allows maintaining heading (assuming you don’t go too close to large metallic structures). A three-axis accelerometer combined with a gyroscope can give a sense of the direction of gravity and the rotation rate, which allows maintaining pitch and roll.

At the moment ArduPilot’s position control code relies on an externally-referenced positioning sensor being connected (like those I linked to in my previous comment, or a visual odometry based approach with computer vision). The accelerometers and gyroscopes included in a flight controller’s IMU are only aware of how things are changing at a given point in time.

As an analogy, if you were blindfolded and swimming in a river and were asked to maintain position, or move 1m to the left, it would be impossible for you to know whether you were succeeding because there’s no external positioning feedback. You might have some idea which direction you’re moving in relative to your body (like an accelerometer), and feel whether you’re spinning (like a gyroscope), and the warmth of the sun on one side may give you a sense of orientation (like a compass/magnetometer), but that isn’t enough information to be able to stay still or move to fixed locations.

In the case of electronic sensors (unlike human feelings) it is technically possible to integrate derivative dynamics (like acceleration) into an estimate of velocity, and further integrate that into an estimate of position, but there’s no error-correcting feedback on those estimates, and even discounting electrical noise there is inherent missing data from the fact that there is time between each sample.

Integrated error grows quickly, and is worse the larger the sampling period, the noisier the sensor, and the more integration steps are involved (e.g. integrating twice from acceleration to position is worse than just once, from an acceleration measurement to a velocity estimate or a velocity measurement to a position estimate). Increasing the precision and sampling rate of inertial sensors can slow error growth, but any error that occurs cannot be corrected without feedback from an external reference, so growth is inevitable.

As is, ArduSub’s EKF (which fuses its sensor measurements and tries to predict the vehicle’s state) will do the integration already, so in theory you could get a very noisy position estimate from that and try to use it for position control. The current implementation does not permit inertial-based position control though, which is what I was discussing in my earlier comment, where I said

    it seems there are some aspects of the current implementation that are stopping that from happening

Note also the disclaimer that

    even if we manage to enable it, the velocity estimates from integrated accelerometer data will be far too noisy for usable performance

You can’t follow a path without a position estimate, and at the moment ArduSub will not give you a position estimate without an externally referenced sensor (like the ones I linked to in my previous comment):

    from what we could tell, though, the internal position estimates were not being output over MAVLink

As something of a summary, while we would love every vehicle to have cheap positioning capabilities,

  • ArduSub needs modification before inertial-only position or velocity control can even be attempted
  • We don’t know how hard that will be to do yet
    • We may do it if we determine an obvious/reasonably simple way to, because
      • in principle we’d prefer for limitations to come from hardware rather than from our software forcibly disallowing functionality
      • if new hardware comes out, or substantially better state estimation / noise rejection algorithms are developed for the EKF, or someone tries some externally connected inertial sensors, it’s possible performance could be better / somewhat usable for low-precision applications
    • but our software department is small and very low on time at the moment, so if there’s no obvious fix then this won’t get worked on by us
  • Even if it is possible to enable though, it’s very unlikely to work “well”, particularly given the results when it was tried previously with just a Pixhawk
  • I wouldn’t recommend following this approach unless you have sufficient time and budget to spend at least a month on development and testing, with a very high probability of failure / unusable results
    • If you do have that availability, feel free to work on it - it’d be cool to have as a possibility for people to try out, if only to definitively show how important it is to have externally referenced sensors for positioning (unfortunate though that requirement may be)
    • If you don’t have that availability, you’ll need to look into other positioning approaches (as discussed and linked to above), or work on projects that don’t require positioning / are operated manually, using a human as the external reference with a line of sight or some less fully-integrated sensors (e.g. a camera and/or scanning / imaging sonar).

Thank you very much Eliot. This was a very good explanation. I appreciate you taking all this time to explain.


Just a quick note on top of Eliot’s valuable comments. You can generally do positioning with only IMUs, but the IMUs you’d need to be looking for cost at least 20 times as much as a Pixhawk and are much bigger in size. So it’s possible in theory, but not with the current ArduSub ecosystem. There are plenty of resources available if you want to search for them.


Indeed, valid point :slight_smile:

I did touch on that briefly with my comment that

    if new hardware comes out, or substantially better state estimation / noise rejection algorithms are developed for the EKF, or someone tries some externally connected inertial sensors, it’s possible performance could be better / somewhat usable for low-precision applications

but I agree it’s worth clarifying that it’s not impossible for IMU measurements to provide usable positioning performance (at least over limited time periods). Most likely, though, the IMUs readily usable in the ArduSub ecosystem (e.g. those built into flight controllers, or able to be integrated into a small ROV) do not have sufficient accuracy, noise protection, or sampling rates to be usable for it in practice.

Inertial measurements used for position estimation do have inherent issues with unbounded noise integration though, so long-term positioning will almost always involve some form of external feedback to maintain accuracy, if only because external feedback is generally cheaper to include than substantially improving the IMU’s measurement accuracy. Of course, “long term” here depends on the acceptable positioning error and the available IMU accuracy, so it varies by both application and sensor.


A post was split to a new topic: ArduSub Visual Odometry