We made an AUV that builds stuff with cinder blocks

Hi All,

As a part of our research over at the Dartmouth RLab on underwater construction robots, we made an AUV that can build little structures out of cinder blocks! One day robots like this could make coastal infrastructure cheaper and faster to deploy! Couldn’t have done it without the components from BlueRobotics.

The construction robot uses eight T200 thrusters and a compressed-air active ballast system to carry cinder blocks around and place them. For actuation, it uses three Blue Trail Engineering servos: two move valves that control the compressed air in the ballast chambers, and one drives the manipulator (on the bottom of the robot).

Here it is finishing off a little pyramid!

It builds with “error correcting” components (the yellow and orange ones) so it doesn’t have to have a super sophisticated understanding of how it is interacting with the structure to build things accurately.

We designed its manipulator to match the shape of the cement blocks it builds with, so picking up and placing blocks becomes an open-loop, docking-like process. This makes grasping objects simple and reliable.

To save battery power, it uses compressed air from a small SCUBA tank to offset the weight of the blocks.


Really cool project.

At my university’s robotics team, we’re trying to build something somewhat similar. Can you explain in more depth how the autonomous system works? Which parts (if any) of QGC/ArduSub did you use?

I’m having a hard time using ready-made AUV components, since they seem to be designed for long-distance, GPS-guided missions rather than for operating in just a pool.

Hey Hasan,

Early in this project (about three years ago now, so things may have changed) I started out trying to use the stack that ships with the BlueROV. I found it really hard to get the level of control needed for manipulation and accurate positioning with the QGC / ArduSub stack. As you say, it definitely seems designed around moving big distances in large areas. I ended up building out my own platform – all of the electronics in the main tube are custom for this robot.

I did some blogging about building the platform that discusses the details! In short, I put in a machine vision camera and an x86 main computer that runs ROS Melodic. The motors are controlled by a smaller ArduPilot FCU that runs a custom firmware which demotes the FCU to more of an I/O board. You can read more at these links!

Droplet (samlensgraf.com)
Droplet V2 (samlensgraf.com)

The electronics tube and software flow for this project are the same as described in those posts; the chassis is just bigger and stronger.

My code that runs the robot is available here: droplet_underwater_assembly/assembly_main.py. It is quite ugly research code, but it implements a finite state machine and PID controllers that run the robot through the construction process based on a compiled mission specification.
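To give a rough sense of the structure (this is just a sketch of the idea, not the code from that repo – all the names, gains, and numbers below are made up), each mission step has a target pose, and one PID controller per axis drives the error down before the state machine advances:

```python
# Minimal sketch of the state-machine + per-axis PID idea.
# Illustrative only: names, gains, and tolerances do not match assembly_main.py.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# A "compiled mission spec" reduced to a list of (action, target pose) steps.
MISSION = [
    ("move_to_pickup", (0.0, 0.0, -1.0)),
    ("close_gripper",  (0.0, 0.0, -1.5)),
    ("move_to_place",  (1.0, 0.5, -1.0)),
    ("open_gripper",   (1.0, 0.5, -1.4)),
]

AXIS_CONTROLLERS = [PID(2.0, 0.1, 0.5) for _ in range(3)]  # x, y, z


def control_tick(pose, target, dt=0.05):
    """One control-loop iteration: pose error -> body-frame effort command."""
    errors = [t - p for t, p in zip(target, pose)]
    command = [c.update(e, dt) for c, e in zip(AXIS_CONTROLLERS, errors)]
    settled = all(abs(e) < 0.02 for e in errors)  # ~2 cm per-axis tolerance
    return command, settled


if __name__ == "__main__":
    # On the real robot, pose comes from the fiducial localizer and the command
    # gets mixed onto the eight thrusters; here we just print a single tick.
    command, settled = control_tick(pose=(0.1, -0.05, -1.0), target=MISSION[0][1])
    print(command, settled)
```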

I hope this helps! Happy to answer any more questions. If you all use this, please do cite the Droplet paper: Droplet: Towards Autonomous Underwater Assembly of Modular Structures · Robotics: Science and Systems (roboticsconference.org).

Hi @slensgra,

It’s great to see another update to this project - thanks for sharing! :slight_smile:

The transition from the original custom blocks to these cinder blocks with the coloured connector pieces placed between them is cool, along with the updates you’ve made to your previous gripper design to better grab and hold them. I’m also fascinated by the addition of the dynamic buoyancy from the compressed air in the scuba tank - there’s still lots of exploration to be had in that space :slight_smile:

Both QGC and ArduSub are software - with sufficiently accurate positioning information from connected sensors they should be able to do quite fine positioning, although I’m not sure whether QGC has a fixed limit on the resolution of its input map tiles or whatnot.

Since that seems to have been a sticking point for you, do you have any suggestions for how an autopilot firmware and/or a control station software should behave, or what kinds of features they should support, to be well-suited to your kind of use-case? :slight_smile:

On the manipulation front, there are definitely limitations from the current “thrusters only” motion control approach. I’ve added your vehicle as an example of compressed gas dynamic buoyancy control to my open issue about advanced control options, but am curious whether you have anything in mind beyond what’s already been mentioned there.

Hey Eliot,

There were a few things that made the ArduSub / QGC flow hard to work with. First, it was difficult to tell how the autopilot was going to respond to a given message. The ArduPilot code is big (and really awesome at what it does well), and its generality makes it hard to follow when you want to know exactly what is going to happen in response to a given input.

For example, I started out trying to use a cascaded PID controller over the RCOverride interface with the AUV in DEPTH_HOLD mode. If I remember correctly, the lateral PWM values in the RCOverride message are interpreted as speed setpoints that are then fed into the model predictive controller, while the up/down channel just moves a depth setpoint around. Debugging little details of the AUV’s behavior was difficult because of the differences in how the different channels were interpreted.
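For reference, the RCOverride path I’m describing looks roughly like this with pymavlink (a sketch only – the channel ordering and the neutral/limit PWM values should be checked against your ArduSub version and parameters):

```python
# Rough sketch of driving ArduSub over RC override with pymavlink.
# Check channel ordering and PWM limits against your ArduSub version;
# 1500 us is neutral, 1100/1900 us are the usual full-scale endpoints.
from pymavlink import mavutil

master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
master.wait_heartbeat()

def send_rc(pitch=1500, roll=1500, throttle=1500, yaw=1500,
            forward=1500, lateral=1500):
    # In DEPTH_HOLD mode the throttle channel nudges the depth setpoint,
    # while forward/lateral are treated as speed requests rather than raw thrust.
    master.mav.rc_channels_override_send(
        master.target_system, master.target_component,
        pitch, roll, throttle, yaw, forward, lateral, 1500, 1500)

send_rc(forward=1600)  # gentle forward request
```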

The other thing I tried was controlling the AUV through RCOverride while ArduSub was in the mode where it interprets the PWM channels as stick positions and mixes them together to directly control the motor speeds (MANUAL, I think). In the code, there is a matrix that maps the PWM channels in that message to thrust values on the motors. Since there is a minimum speed the thrusters can spin at (maybe a BLHeli thing), the motion was jerky this way. We also ended up wanting to change the allocation of motors to PWM channels depending on the situation – I think in principle you could do this by modifying QGC settings, but it would be clunky.

All of this led to just writing a tiny firmware that runs on the board and passes PWM values straight through to BLHeli, with the mixing and related logic done in higher-level code. Probably not the best approach for a fault-tolerant system, but good for manipulation research!
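The mixing in higher-level code is basically just a thrust-allocation matrix; a toy version (made-up geometry, not our actual thruster layout or gains) looks something like:

```python
# Toy thrust-allocation ("mixing") example -- the geometry here is made up
# and does not match our actual thruster layout.
import numpy as np

# Each row is one thruster; each column is how much that thruster contributes
# to [surge, sway, heave, yaw] for a unit command.
ALLOCATION = np.array([
    [ 1.0,  1.0, 0.0,  1.0],   # front-right horizontal
    [ 1.0, -1.0, 0.0, -1.0],   # front-left horizontal
    [ 1.0, -1.0, 0.0,  1.0],   # rear-right horizontal
    [ 1.0,  1.0, 0.0, -1.0],   # rear-left horizontal
    [ 0.0,  0.0, 1.0,  0.0],   # vertical 1
    [ 0.0,  0.0, 1.0,  0.0],   # vertical 2
    [ 0.0,  0.0, 1.0,  0.0],   # vertical 3
    [ 0.0,  0.0, 1.0,  0.0],   # vertical 4
])

def mix(surge, sway, heave, yaw):
    """Map a body-frame effort request to eight per-thruster PWM values."""
    thrust = ALLOCATION @ np.array([surge, sway, heave, yaw])
    thrust = np.clip(thrust, -1.0, 1.0)
    return (1500 + 400 * thrust).astype(int)   # 1100-1900 us PWM range

print(mix(surge=0.2, sway=0.0, heave=-0.1, yaw=0.0))
```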

A few other little things:

  • The AUV localizes using visual fiducials (there’s a rough sketch of the idea after this list). The Raspberry Pi wasn’t fast enough to process the images in real time; this may be fixed now that it’s a Raspberry Pi 4.
  • The BlueROV camera seems to have a built-in latency of about 0.1 seconds (happy to share the testing I did on this). That is normally fine, but for fine positioning it is hard to work with.
  • A power switch was needed to make the robot safe to work with without a tether – relays that can handle the required current are pretty big, so things had to be moved around and replaced with more compact components.
  • ArduSub has (had?) a maximum rate at which it is willing to send sensor data over MAVLink. That makes it hard to design sensor fusion algorithms or respond to IMU data in real time in higher-level code.
  • As far as I could tell, you can’t get raw information out of ArduPilot (even if it is labelled as raw) – rather, it has gone through the EKF first. This again makes debugging little details challenging.
  • I tried feeding location info from the fiducials into the ArduPilot sensor fusion stack but never got it to work. Again, because of the complexity of the code, it was very hard to figure out what was going wrong. I was going off a years-old blog post, so maybe it should have been easy, but I just couldn’t tell what I was doing wrong.
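On the fiducial localization point above, the general idea is to estimate the camera pose relative to printed markers in the tank. A stripped-down example using OpenCV’s ArUco module (not necessarily the marker system or parameters we actually use, and the API names vary a bit between OpenCV versions) would be:

```python
# Stripped-down fiducial pose estimation with OpenCV's ArUco module.
# Illustrative only: camera intrinsics and marker size are placeholders,
# and newer OpenCV versions move these calls onto an ArucoDetector object.
import cv2
import numpy as np

camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)
marker_length = 0.15  # marker edge length in meters

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def localize(frame):
    """Return (rotation vector, translation vector) of the first detected marker."""
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    return rvecs[0], tvecs[0]  # camera-to-marker pose for the first marker
```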

Could you link to where things like this are mentioned in the docs? Or share any resources on autonomous systems for AUVs that are confined to smaller areas?

Thanks in advance.

There isn’t yet much ArduSub-specific documentation related to position-aware flight modes - this comment describes the relevant context.

This isn’t something I’ve gone looking for - my point was more that ArduPilot’s positioning and control systems are designed to handle a variety of scales: generally, if you have sufficiently fast and accurate positioning sensors, you should also be able to autonomously command comparably precise positions and manoeuvres.

Even MAVLink’s GLOBAL_POSITION_INT message supports spherical (horizontal) and vertical resolutions of ~1mm, and LOCAL_POSITION can conceivably be even higher resolution, so positioning accuracy issues are much more likely to be from time delays, sensor inaccuracies, and a lack of actuator fidelity than from a lack of firmware support for storing and handling precise positions.