Sorry for the looong reply…
For some time I have been working on a project (partly as a work project and partly as a hobby) where the main question has been:
Would it be possible to develop a cheap AUV/ROV hybrid that could perform inspections autonomously 24/7 and report back its findings over long periods of time (i.e. many months)?
Small size and smart construction methods are important to bring down the cost, e.g. 3D printing and using cheaper materials/methods.
Using easily available commodity products instead of one-off/low-volume products should also bring down costs.
We either have to find these products or create enough demand for them to eventually bring down the cost, while ensuring a healthy community of vendors and users.
One way we might more easily improve the autonomous capabilities of drones is to look at the drone (AUV/ROV) mainly as a carrier of compute power and sensors for achieving autonomy:
- Include enough low cost compute power to compensate for limited hardware capabilities
- Use computer vision and machine learning to automate tasks
- Improve capability and flexibility through regular software updates
- Start with low complexity automation tasks and gradually expand complexity over time
To achieve 24/7 operations/availability it is necessary to deploy multiple units: this makes it possible to recharge occasionally without downtime, expands capabilities, and removes the dependency on a single point of failure.
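As a rough illustration of what 24/7 coverage implies for fleet size, here is a small sketch. The endurance and recharge times are assumptions for the sake of the example, not measured figures:

```python
import math

# Illustrative fleet-sizing sanity check (numbers are assumptions, not specs).
MISSION_H = 6.0   # assumed on-task endurance per charge [hours]
CHARGE_H = 4.0    # assumed recharge time [hours]

cycle = MISSION_H + CHARGE_H                       # one full duty cycle
units_for_continuous_coverage = math.ceil(cycle / MISSION_H)
with_one_spare = units_for_continuous_coverage + 1  # avoid a single point of failure

print(units_for_continuous_coverage)  # 2
print(with_one_spare)                 # 3
```

With these assumed numbers, two units keep one drone on task at all times, and a third removes the single point of failure.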
The idea has been to deploy multiple drones that can create a dynamic acoustic mesh network for (critical) communication, but that would most of the time operate alone or show «emergent behaviour», where the units behave as a swarm without relying on constant inter-communication.
Possible tasks for these drones could be:
Positioning & navigation
- Map creation (visual SLAM)
- Establish ground truth
- Establish/locate landmarks/navigation points (man-made or natural)
- Map/track position of assets
- Object avoidance
Create 3D models of assets
- Point clouds
- Detect changes
- Verify valve positions
- Leakage detection
- Mapping and tracking corrosion & cracks
Detect and track
- Detect man-made objects
- Detect possible POI
- Track pipeline and other infrastructure
We would like to use/expand upon ArduSub, or possibly ROS, to control the drones autonomously (sensor fusion from several different sources would be needed) and perform smart missions, i.e. not relying on missions minutely planned/plotted by a human operator.
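As a toy illustration of the kind of sensor fusion involved, here is a minimal 1-D complementary filter that dead-reckons on DVL velocity and corrects with sparse acoustic position fixes. A real implementation would use a proper estimator (e.g. ArduPilot's EKF), and all numbers here are made up:

```python
# Minimal 1-D sketch: fuse dead-reckoned DVL velocity with occasional
# acoustic position fixes via a complementary filter. Illustrative only.
DT = 0.1      # assumed DVL update period [s]
ALPHA = 0.2   # assumed weight given to an acoustic fix when one arrives

def fuse(x_est, dvl_velocity, acoustic_fix=None):
    """Propagate position with DVL velocity; nudge toward any acoustic fix."""
    x_est += dvl_velocity * DT            # dead-reckoning step (drifts over time)
    if acoustic_fix is not None:          # sparse absolute correction
        x_est += ALPHA * (acoustic_fix - x_est)
    return x_est

x = 0.0
for _ in range(100):
    x = fuse(x, dvl_velocity=0.5)                # DVL-only: estimate drifts to ~5.0
x = fuse(x, dvl_velocity=0.5, acoustic_fix=4.0)  # one fix pulls the estimate back
```

The design point is simply that cheap, drift-prone dead reckoning and slow, sparse absolute positioning complement each other.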
A lot of the features that would be needed are not implemented yet, and we realize that this will not be easy. But if a group of people can come together and team up, it will be easier to pull off. The resulting features should then be made available to the community.
Some of the key hardware components we have looked into/are looking into are:
Charging
We are currently building prototypes and have a 400 W inductive charger up and running (currently artificially limited to 270 W), with wireless gigabit Ethernet capabilities.
The receiver unit is connected to a 1.7 kWh 10S10P 21700 battery with a smart BMS (we hope to eventually use a VESC-based BMS).
This charging station/solution should be available to many drone types to ensure proliferation.
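To sanity-check the numbers above, here is the arithmetic for the pack and the charge times. The 3.6 V nominal cell voltage is an assumption (typical for 21700 cells), and losses are ignored:

```python
# Sanity-check arithmetic for a 10S10P 21700 pack rated 1.7 kWh,
# charged at 270 W (limited) or 400 W (unrestricted). Losses ignored.
CELLS_S, CELLS_P = 10, 10
PACK_WH = 1700
CELL_V_NOM = 3.6                      # assumption: typical 21700 nominal voltage

pack_v = CELLS_S * CELL_V_NOM         # 36 V nominal pack voltage
pack_ah = PACK_WH / pack_v            # ~47 Ah pack capacity
cell_ah = pack_ah / CELLS_P           # ~4.7 Ah per cell, plausible for a 21700

hours_at_270w = PACK_WH / 270         # ~6.3 h for a full charge at the limited rate
hours_at_400w = PACK_WH / 400         # ~4.25 h at the unrestricted rate
```

So the stated pack figures are internally consistent, and a full charge at the current 270 W limit takes roughly six hours.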
Communication and positioning
We have multiple Nanomodem v3s (NM3). These are low-cost, low-power acoustic modems capable of approx. 640 bps data transfer and positioning at approx. 2 km range.
Software needs to be adapted/implemented for these units to enable mesh networking and positioning (at least three modems with known positions are needed to calculate the position of other units within range).
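For reference, the position calculation itself is straightforward once ranges to three anchors at known positions are available. A minimal 2-D trilateration sketch, ignoring depth, clock offsets and measurement noise:

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """2-D position from three anchor positions and measured ranges.
    Linearises the circle equations into a 2x2 system; the anchors
    must not be collinear (det would be zero)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Example: three anchors and exact ranges to a unit at (30, 40)
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
ranges = [math.dist(a, (30.0, 40.0)) for a in anchors]
x, y = trilaterate(anchors[0], ranges[0], anchors[1], ranges[1],
                   anchors[2], ranges[2])  # recovers (30.0, 40.0)
```

With real acoustic ranges the measurements are noisy, so a least-squares solve over more than three anchors (or a filter) would be used instead, but the geometry is the same.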
Some of the drones will also need a DVL to navigate outside the reach of the acoustic network.
Cameras
We plan to use multiple OAK-D (Lite) units for several vision-based tasks. These cameras are cheap, have depth capabilities, and embed a Myriad X AI chip.
The camera sensors used are very likely not light-sensitive enough for deep-water use, but Luxonis and Arducam have shown that they are capable of covering various use cases, and I would also love to see official cooperation between Blue Robotics and Luxonis.
Sonar
We want to be able to use multibeam and side-scan sonars, both to collect data for later processing and in real time as part of navigation and mission decision making. This requires the data streams to be available for processing, and the on-board compute power needs to be sufficient.
As of yet very little work has been done in this area, except for identifying possible multibeam or side-scan sonars that could be used.
Compute
We plan to use a CM4 with Coral TPU units (sadly the Coral is currently not supported on the CM4), but we have heard from Luxonis that it might be possible to use the Myriad X processors embedded in the OAK-D units.
Hailo-8 also seems like an interesting AI accelerator, but more information regarding availability, pricing, compatibility and readiness is needed.
Alternatively, a Jetson Xavier NX or similar could be used, or, as a last resort, a micro PC could be included.
How do we get the autonomy features that many of us likely want?
- We need to cooperate and build upon the great work of others.
- We need to be willing to share competence and knowledge.
- We need to be willing to give and not only take.
But it most likely has to be a coordinated effort.
We need to bring down costs and expand capabilities to create a community of users, while ensuring that a thriving community of vendors able to provide the solutions can exist.
Few, if any, companies or individuals have the capacity/resources needed to pull this off alone, so it is critical that this can be solved through open source – either utilizing existing solutions or creating new open source solutions – in both hardware and software.