Tether, Gripper Jaws, Camera - Product Improvement Poll

Hi everyone,

As you’re no doubt aware, Blue Robotics is focused on providing high-performance marine robotics components at low cost and at scale. Unfortunately our development resources aren’t infinite, so we’ve put together a short poll (3 questions) to help us understand which product improvements are most important to you, and which we should develop first.

Feel free to jump straight to the questions if you want, but if you’re interested I’ve provided some additional context in the click-toggles below, explaining why we’re asking these questions and some potential implications of each improvement.

tether pairs

Our Fathom Tether and Fathom Slim Tether have 4 and 1 twisted pairs respectively for communication. This question is mostly for us to understand how those are used, and whether it could be worth offering a middle option with 2 or 3 pairs, in case most of you aren’t getting any use out of the extra pairs.

Fewer pairs could allow for a slimmer and lighter tether (which means less tether drag while operating), and should also be a bit cheaper than the 4 pair option, but that’s only feasible if enough people would be able to make use of it that we could produce it at scale.

gripper jaws

Ever since our Newton Gripper was thought up, the plan was always for it to have multiple jaw options. While some of you have already come up with and produced your own awesome designs (see this post for a few), we’re now at a point where we can potentially start designing and manufacturing some options to be readily available to anyone who needs them, instead of just the people with the engineering expertise and manufacturing access to make custom ones.

The main options we’re planning to investigate are:

  • sediment sampler (for capturing a ball of sediment that can be brought to the surface and analysed)
  • 3-jaw (for better grabbing onto round/ball shaped objects)
  • cutter (for cutting through things like ropes, seaweed, and fishing line)
  • large jaws (for grabbing onto larger, more unwieldy items)

Note that alternative jaws won’t change the strength of the gripper, but will make it more adaptable to different scenarios depending on the operating requirements.


camera quality

We’re frequently asked things like “when will we have a 4k camera?”, which is a more complex question than it seems. While our Low-Light HD USB Camera can provide great results, we agree that higher image quality is important to have as an option. Quality is about much more than just resolution though, and for vision in particular quality improvements in one area generally cause detriments in others.

There’s a brief comparison table here of some camera improvements that have been posted about on the forum, but this question is about which aspects of quality are most important to you, and what we should focus on when choosing/developing new camera options.

The question discusses the following aspects:

  • higher resolution
    • assuming sufficient lighting and good enough optics, more pixels means clearer fine details, but
    • more pixels means more data to send, so requires more communication bandwidth and storage space, and reduces options for multiple additional cameras/sensors to be run at the same time
    • also generally means the physical sensor for each pixel is smaller, which reduces low-light performance
    • some high resolution cameras support moving the output frame when streaming at lower resolutions, which can allow for digital zooming, panning, and tilting (sometimes called ePTZ) without needing to move the camera itself - that’s most effective with a wide-angle lens
  • higher framerate
    • the “time” equivalent of higher resolution - instead of finer details within an image, higher framerate captures more moments in time, so is better suited for capturing short events, or following fast objects → results are generally perceived as “smoother”
    • less time between frames also reduces the maximum exposure time, which can reduce the amount of light that can be captured (so can reduce low-light performance)
    • more frames means more data, so uses additional communication bandwidth and storage space
  • improved low-light performance
    • primarily comes from a larger physical sensor, which can capture more photons
    • better low-light performance means your lights don’t need to be as strong, which reduces backscatter and bright reflections off bubbles and particles in the water
    • this comment shows the kind of difference it can make
  • wider angle lens
    • a wider viewing angle means you can see more of what’s around you without needing to turn the camera or vehicle
    • a wider view into the same set of pixels means each pixel covers more area, so there’s less fine detail that can be resolved
    • human vision has a natural field of view it’s used to, so very wide angle camera views can be a bit disorienting
    • a wider viewing angle means a larger portion of the viewing sphere is mapped onto the flat image plane, so the resulting image often looks quite distorted
  • more efficient encoding (H265)
    • H264 has been the standard high-efficiency stream encoding for many years, because it can be calculated quickly, has low bandwidth requirements, and produces videos that look similar to the captured input - it’s currently the only option that’s available by default in our vehicles
    • H265 is the next generation beyond H264 - it requires some extra computation to encode (which can potentially add some latency), but it’s a more efficient encoding → either less data can be sent to achieve the same quality (so potentially more cameras could be streamed), or the same amount of data could be sent but with a significant quality improvement
    • encoding is important for streaming, but efficient encodings work by removing data that humans aren’t very good at seeing → that’s great if only humans are looking at the output, but makes it harder/less effective to do video processing and computer vision on the results
  • image pre-processing
    • encoding removes large amounts of data that may be helpful for processing, so if processing is going to occur it generally has better results if it can be done in real time on the raw image frames from the camera sensor, before they get encoded
    • pre-processing on the camera makes the camera more complex and expensive, and the real-time requirement can place some limitations on what kinds of processing are feasible, and how much processing can be applied
    • if the stream uses the processed results, the time taken to process any single frame is added to the stream latency, and too much display latency makes the vehicle feel unresponsive and harder to control
    • if pre-processing is applied in the camera module to improve visibility/colour (as in the DWE exploreHD), the video receiver can display better results that are easier to analyse and interpret as they appear
    • some cameras support custom processing, which can be used for visibility improvements but also machine-learning detection of regions or objects of interest
    • custom processing has the benefit that you can choose which processing you want to apply, and people can share results and benefit from each other’s developments, but also means you need resources to develop processing that works well for your use-case, and/or access to others who have developed it already
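As a rough illustration of the bandwidth tradeoffs in the resolution and framerate points above, here’s a quick back-of-the-envelope sketch. The resolutions, framerates, and compression ratios are illustrative assumptions, not measurements of any particular camera:

```python
# Rough raw data-rate comparison for a few stream configurations.
# All figures here are illustrative assumptions, not camera measurements.

def raw_mbps(width, height, fps, bits_per_pixel=12):
    """Raw sensor data rate in megabits per second.
    12 bits/pixel approximates common YUV 4:2:0 sampling before encoding."""
    return width * height * fps * bits_per_pixel / 1e6

configs = {
    "1080p30": (1920, 1080, 30),
    "1080p60": (1920, 1080, 60),
    "4k30":    (3840, 2160, 30),
}

for name, (w, h, fps) in configs.items():
    raw = raw_mbps(w, h, fps)
    # Assuming H264 compresses roughly 50:1 and H265 roughly 100:1 -
    # ballpark figures; real ratios depend heavily on content and settings.
    print(f"{name}: raw ~{raw:.0f} Mbps, "
          f"H264 ~{raw / 50:.1f} Mbps, H265 ~{raw / 100:.1f} Mbps")
```

The takeaway is just the scaling: doubling the framerate or quadrupling the pixel count multiplies the data to send by the same factor, which is why higher resolution and framerate eat into the bandwidth available for additional cameras and sensors.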


For me, the low camera quality is by far the biggest weak link in the BlueROV2 system. I have aborted a lot of dives just because I can’t see anything, even though the GoPro Hero 4 Silver mounted on the front can see so much better with its very similar 1920x1440 resolution. Maybe it’s a bitrate problem?

Here’s a good video example in a low visibility condition. Notice how sharp the GoPro always looks compared to the ROV camera. It can’t always see a farther distance, but what it can see always gives me so much more information.

Also, a gripper improvement that would be nice (but probably isn’t practical) is to be able to extend it and retract it during the dive. Right now I have to choose before the dive, and it blocks some of the camera view when it’s out. It would be nice to have it stowed out of the way, and then be able to extend it if I run across a target that I want to grab.


Hi @btrue, thanks for the feedback! :slight_smile:

This could be caused by a number of things, including bitrate, camera sensor wavelength sensitivity, camera settings (exposure, contrast, sharpness, etc), quality of optics (lens, dome, etc), and likely others too.

It’s perhaps worth noting that the data the camera collects often has more value than what is perceivable in the images - post-processing can help with extracting/viewing that, although it’s understandably best if the frames you get contain and show as much information as possible already, which requires tuning the available settings and/or some of the quality improvements discussed in the poll :slight_smile:

Additional actuators add design and control complexity and expense, but they do exist (check out blueprintlab for example). That said, the Newton Gripper’s actuator is already linear, so it may be possible to make an end-effector that extends as it opens, and retracts as it closes, although that would likely reduce the possible gripping force, and the “retract while closing” aspect could cause issues during operation.

The simplest approach would likely be having a separate retraction device, which could be just a servo-controlled sliding mechanism that the gripper (or other things?) can be mounted onto, in which case it could be directly integrated into a servo output (e.g. on the Pixhawk), and set up to operate via a joystick button. The biggest issue with that being separate from the main gripper assembly would be needing to use an additional penetrator (or cable splice) for it instead of just having one extra wire in the existing gripper cable.
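As a sketch of what controlling a retraction mechanism like that would involve, here’s the kind of position-to-PWM mapping a servo output uses. The endpoint values are assumptions for illustration - in practice you’d configure the servo output on the autopilot and bind it to a joystick button:

```python
# Hypothetical mapping from a retraction position (0.0 = stowed,
# 1.0 = fully extended) to a servo PWM pulse width, as would be driven
# by an autopilot servo output (e.g. via a joystick button binding).

PWM_MIN = 1100  # microseconds - assumed servo endpoint for "stowed"
PWM_MAX = 1900  # microseconds - assumed servo endpoint for "extended"

def retraction_pwm(position):
    """Convert a 0-1 retraction position to a servo pulse width in microseconds."""
    position = max(0.0, min(1.0, position))  # clamp to the valid range
    return round(PWM_MIN + position * (PWM_MAX - PWM_MIN))

# stowed, half-way, fully extended
print(retraction_pwm(0.0), retraction_pwm(0.5), retraction_pwm(1.0))
# → 1100 1500 1900
```

A button press would then just toggle the commanded position between 0 and 1, with the servo travel translated into linear slide motion by the mounting mechanism.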

Interesting idea at least - no doubt something we’ll keep in mind :slight_smile:

It’s very exciting to hear about all these advancements for the tether, gripper, and the camera.

One stretch advancement I’ve thought of for the gripper is an extension capability. More specifically, this upgrade would enable the gripper to extend the arm and pincers out further on command and then retract back on command, providing more reach. It’s driven by the fact that I’ve found it harder to grab things with the gripper length being fixed - sometimes I don’t have enough reach, and maneuvering the BlueROV2 can feel unwieldy.


That sounds the same as what @btrue and I were discussing in the two comments above yours. Good to have some extra confirmation that such a feature would be useful :slight_smile:

Any plans for an external camera tilt mechanism which could mount other small subsea cameras?

I believe it’s mostly the bitrate difference - the GoPro normally records way more data by using a much higher bitrate. It would definitely help to improve this point in the default BlueROV2 USB camera, since on many occasions you can see the compression artefacts.
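To give a feel for how much the bitrate budget differs, here’s a rough bits-per-pixel comparison. The bitrates are assumed ballpark figures for illustration, not measurements of either camera:

```python
# Rough comparison of the encoding budget per pixel for a high-bitrate
# local recording versus a typical low-bitrate live stream.
# Bitrates below are assumed ballpark figures, not measurements.

def bits_per_pixel(bitrate_mbps, width, height, fps):
    """Average encoded bits available per pixel per frame."""
    return bitrate_mbps * 1e6 / (width * height * fps)

# Same 1080p30 video, very different encoding budgets:
recording = bits_per_pixel(45, 1920, 1080, 30)  # ~45 Mbps recording (assumed)
stream = bits_per_pixel(5, 1920, 1080, 30)      # ~5 Mbps live stream (assumed)

print(f"recording: {recording:.2f} bits/pixel, stream: {stream:.2f} bits/pixel")
```

Under those assumptions the recording gets roughly nine times the data per pixel to work with, which is consistent with the stream showing visible compression artefacts while the recording stays sharp.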

Sounds like lots of very useful upgrades are ahead, very cool!

That should be achievable with a waterproof servo and (for example) a 3D printed mount. A waterproof servo is something on our to-do list, but it’s behind a few other things so I don’t have any info in the way of timelines.


Higher depth rating for the grabber would be at the top of the list for me.

Noted. Thanks for the feedback :slight_smile: