DVL-A50 Accuracy

Hi Kristian / @wlkh ,
What are the differences between the standard and performance models? From my reading of the Reef page, are they physically the same, with the standard one made a little dumber in software to avoid export controls?

Could you explain what is meant by the accuracy (1% vs. 0.1%)? Can it be related to the accuracy of a position if the ROV travels (say) 100 m? Does a 1% accuracy mean that the final location will be within 1% of the distance travelled (1 m)? Would it then follow that the 0.1% accuracy performance model would be within 10 cm?

Hi @GavXYZ,

I’ve moved this here because I think it deserves its own topic.

Here are some insights from my understanding:

It’s the same hardware, and both firmware versions have the same features and communicate using the same protocol, so it’s not that one is dumber per se, just that it doesn’t maintain as high a level of accuracy.

Accuracy reduction could be achieved by adding random noise, or rounding/truncating every measurement to fewer significant figures.

My understanding is that it applies to the velocity data (which is the primary output of a DVL), and since Water Linked describe it as “long term accuracy” I assume it refers to the average velocity error for several measurements taken over an extended period of time.

As an example of that, if the DVL takes 5000 measurements during a dive then some of them could individually differ from the true velocity by more than the long term accuracy amount, but in aggregate they should average out to be within it.
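As a rough illustration of that averaging (a minimal sketch with made-up noise levels, not Water Linked’s actual processing):

```python
import random
import statistics

random.seed(0)

TRUE_SPEED = 1.0   # m/s, assumed constant for simplicity
LTA = 0.01         # 1% long term accuracy (standard model)
NOISE_STD = 0.05   # hypothetical per-ping noise, far larger than the LTA

# Simulate 5000 individual velocity measurements over a dive.
measurements = [TRUE_SPEED + random.gauss(0, NOISE_STD) for _ in range(5000)]

# Many individual pings miss the true velocity by more than the LTA...
worst = max(abs(m - TRUE_SPEED) for m in measurements)
print(f"worst single-ping error: {worst / TRUE_SPEED:.1%}")

# ...but the aggregate (mean) error still falls well within it.
mean_error = abs(statistics.mean(measurements) - TRUE_SPEED)
print(f"aggregate error: {mean_error / TRUE_SPEED:.2%}")
```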

I would expect that to also be reflected by the “figure of merit” (e.g. \frac{1}{n}\sum_{i=1}^{n}\frac{\mathrm{fom}_i}{|\vec{v}_i|}\leq \text{accuracy}_\text{stated}), although that’s not guaranteed, and it’s not guaranteed that measurements with a low figure of merit are correct; they’re just higher confidence, as expressed by the device.

Unfortunately that’s not a possible relation to make[1], because even with completely accurate instantaneous velocity measurements there is still time between measurements during which the velocity can change, as well as error in the IMU’s sensors. Both of those get integrated over, so the vehicle’s position estimate error is technically unbounded.
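To show why integration makes the position error unbounded, here’s a minimal dead-reckoning sketch with a hypothetical small velocity bias and noise (all numbers made up):

```python
import random

random.seed(1)

DT = 0.2          # s between velocity updates (assumed ping rate)
V_TRUE = 0.5      # m/s, constant true velocity for the sketch
BIAS = 0.002      # m/s, small uncorrected velocity bias
NOISE_STD = 0.01  # m/s, per-ping measurement noise

true_pos = est_pos = 0.0
for step in range(1, 3001):  # ten minutes of dead reckoning
    measured_v = V_TRUE + BIAS + random.gauss(0, NOISE_STD)
    true_pos += V_TRUE * DT
    est_pos += measured_v * DT  # integrate measurements -> position
    if step % 1000 == 0:
        print(f"t={step * DT:5.0f} s  position error: {abs(est_pos - true_pos):.2f} m")
```

Even though each individual measurement is off by only a couple of percent, the position error keeps growing with time because nothing ever corrects it.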

From a practical perspective, that error growth can be subdued to some extent by taking measurements at a higher frequency (when the DVL is close to the surface it’s measuring off of, the position estimate error grows less quickly), as well as by avoiding known sources of noise and uncertainty. For example, having a significant amount of inertia reduces the velocity change between measurements; going near large metallic structures can cause issues with a compass, which can throw off the heading estimates; and operating over a soft or very jagged bottom surface can make it harder for the DVL to get a strong lock and accurate measurements.

Fundamentally though, any estimate that is determined through integration over measurements will eventually grow a significant “drift” error[2] unless it has some external reference that can correct it, even if only occasionally. For positioning that would generally be a surface GPS or USBL/UGPS system of some sort, or manually specifying the vehicle position based on some external GPS reading (as is normally done at the start of a DVL dive, to set the starting location).
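Extending the earlier dead-reckoning idea, here’s a minimal sketch of how an occasional external fix bounds the drift. It assumes a perfect position fix applied periodically; a real system would fuse the fix (e.g. in a Kalman filter) rather than overwriting the estimate:

```python
import random

random.seed(2)

DT = 0.2         # s between velocity updates
V_TRUE = 0.5     # m/s, constant true velocity
BIAS = 0.002     # m/s, uncorrected velocity bias that drives the drift
FIX_EVERY = 300  # steps between hypothetical surface GPS/USBL fixes (60 s)

true_pos = est_pos = 0.0
max_err_with_fixes = 0.0
for step in range(1, 3001):
    true_pos += V_TRUE * DT
    est_pos += (V_TRUE + BIAS + random.gauss(0, 0.01)) * DT
    if step % FIX_EVERY == 0:
        est_pos = true_pos  # external reference corrects the drift
    max_err_with_fixes = max(max_err_with_fixes, abs(est_pos - true_pos))

print(f"max error with periodic fixes: {max_err_with_fixes:.2f} m")
```

With the fixes in place the error stays bounded by what accumulates between fixes, instead of growing for the whole dive.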

  1. An error bound for the position estimate could potentially be determined if you specify practical limits on the vehicle’s acceleration and other sensor readings, and a time bound on how long it takes to cover the distance in question, but those are not factored into the accuracy of the velocity measurements. ↩︎

  2. This is a significant reason why DVLs are useful to start with. Accelerometers are self-contained inertial sensors that sit inside the vehicle and have no impact on the surrounding environment (which is incredibly convenient), but because getting from acceleration measurements to a position estimate requires double integration the error grows at a much faster rate than from velocity measurements, which makes it generally unsuitable for practical applications unless the measurements are very precise and taken at a very high frequency (which is prohibitively expensive). ↩︎
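The double-integration point in footnote [2] can be made concrete with a tiny worked example. Taking a hypothetical constant sensor bias (made-up numbers), integrating it once gives linear error growth, while integrating it twice gives quadratic growth:

```python
DT = 0.1      # s, not used in the closed forms but sets the context
T = 600.0     # s, a ten-minute run
BIAS = 0.001  # same small constant bias assumed on each sensor

# Integrate a constant velocity-sensor bias once: error grows linearly with time.
vel_err = BIAS * T  # = 0.6 m after 600 s

# Integrate the same bias on an accelerometer twice: error grows quadratically.
acc_pos_err = 0.5 * BIAS * T ** 2  # = 180 m after 600 s

print(f"velocity-sensor bias -> {vel_err:.1f} m of drift")
print(f"accelerometer bias   -> {acc_pos_err:.1f} m of drift")
```

The same tiny bias that costs under a metre of drift when measuring velocity costs hundreds of metres when only acceleration is measured, which is why a DVL is so valuable.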


Hi all

While I can’t speak for every brand of DVL, there are a few key definitions when it comes to DVL specs. Long Term Accuracy (LTA) is one of those. LTA specifically relates to how closely you expect the DVL’s velocity measurements to match true velocities, expressed as a percentage.

You could visualise this by imagining a straight-line run at the surface, plotting the track over a given period of time. One plot would use a DVL and clock to work out distance travelled; the other would use a GNSS and clock to do the same. The difference in the length of these two lines can be considered the DVL’s absolute error when measuring velocity.
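Putting some hypothetical numbers on that comparison (the distances here are made up for illustration):

```python
# Hypothetical straight-line run: compare DVL-derived and GNSS-derived
# distances over the same time window to estimate long term accuracy.
dvl_distance = 498.7    # m, from integrating DVL velocities (made-up)
gnss_distance = 500.0   # m, from the GNSS track (made-up)

lta_estimate = abs(dvl_distance - gnss_distance) / gnss_distance
print(f"observed long-term velocity error: {lta_estimate:.2%}")  # 0.26%
```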

You’ll regularly come across 1% and 0.1% variants simply because 0.1% LTA is the level required for “survey grade” work, and as a result is export controlled!

While LTA is a major contributor to a navigation system’s overall accuracy, it is not a proxy for overall navigational accuracy, which requires consideration of a heading sensor’s contribution to the process of dead reckoning (among many other things). Navigational accuracy is often expressed as a % of distance travelled; in most INS spec sheets that allow DVL aiding, this is the spec you’ll come across.
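A short worked example (with assumed numbers) of why heading matters as much as LTA: over a long run, even a small heading bias produces a cross-track error that dwarfs the DVL’s along-track contribution.

```python
import math

distance = 1000.0      # m travelled (assumed)
lta = 0.001            # 0.1% long term accuracy (performance model)
heading_err_deg = 0.5  # hypothetical heading sensor bias, degrees

# Along-track error contributed by the DVL's velocity accuracy.
along_track_err = distance * lta  # 1.0 m

# Cross-track error contributed by the heading bias alone.
cross_track_err = distance * math.sin(math.radians(heading_err_deg))  # ~8.7 m

print(f"along-track error from DVL LTA: {along_track_err:.1f} m")
print(f"cross-track error from heading: {cross_track_err:.1f} m")
```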

If you’re interested, you can find a webinar here that helps explain LTA and other aspects of DVL performance:

Hope this helps! Happy to offer more info if needed (although I admit I probably find this topic a lot more interesting than most people :wink:)