We are currently working on a new subsea LIDAR system which we expect will be able to range to the outside walls of our confined operating environment (ballast tanks).
I am wondering if this new system could be used for non-GPS navigation with ROS and either Google Cartographer or HectorSLAM as described in the ArduPilot docs here:
It seems that this has been tested for copters and rovers using a TX2 or Pi 3B+ companion. Does anyone know if this facility exists in the ArduSub code and, if so, would it run on the Pi 3B?
We would be able to create a compatible output from our system to make it look like the RPLidarA2 on the serial port.
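If the RPLidarA2 emulation route turns out to be awkward, the fallback I have in mind is to publish our data directly into ROS as a sensor_msgs/LaserScan, which is what Cartographer and Hector SLAM consume anyway. A very rough sketch, with the topic name, frame id, scan rate and the read_revolution() helper all being placeholders for our device:

```python
#!/usr/bin/env python
# Minimal sketch: publish one full LIDAR revolution as a sensor_msgs/LaserScan.
# read_revolution() and the numbers below are placeholders, not our real driver.
import math
import rospy
from sensor_msgs.msg import LaserScan

def read_revolution():
    """Placeholder: return a list of ranges (metres) for one 360 degree sweep."""
    return [5.0] * 360  # dummy data, one reading per degree

def main():
    rospy.init_node("subsea_lidar")
    pub = rospy.Publisher("scan", LaserScan, queue_size=1)
    rate = rospy.Rate(10)  # assumed 10 revolutions per second
    while not rospy.is_shutdown():
        ranges = read_revolution()
        msg = LaserScan()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "laser"  # must match the static TF / URDF frame
        msg.angle_min = 0.0
        msg.angle_max = 2.0 * math.pi
        msg.angle_increment = 2.0 * math.pi / len(ranges)
        msg.range_min = 0.1            # guessed sensor limits
        msg.range_max = 20.0
        msg.ranges = ranges
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```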
I would also be keen to send out a couple of our early prototypes to any interested BR2 users if they would like to play with it. We are keen to gather feedback from different environments.
The first versions will be 360-degree point scanners and should be available in around 2-3 months. By the end of the year we should have the 3D scanning version, which adds a +/-25 degree vertical scan to the 360-degree coverage. Both systems operate at a 420 nm wavelength.
Hi Colin,
Sorry, I can't help with ArduPilot/ROS…
But I'm very interested in your LIDAR project!
With my scientific partners, we specialize in confined underwater spaces, and particularly in the exploration of karstic networks ( http://explore.lirmm.fr/?page_id=969 ).
We have a test site that we're planning to model with photogrammetric techniques (planned for the end of 2020). It could be very interesting to evaluate your LIDAR in such challenging conditions. You can get an idea of the site in these videos: a short one and a long one.
Feel free to contact me by private message if you like the idea!
Yeah, I would be delighted to see how it performs in that environment. Very different from the ballast tanks that we will be working in.
The device has mainly been designed for navigation at the moment, but of course it could be used for imaging as well.
Our design can pull in up to 72,000 points per second, although we would only use that bandwidth for the 3D version later this year.
We have not worked on a solution for bringing the data into point clouds, but it seems like the open-source PCL (Point Cloud Library) is the way to go. If you have any interest in helping with that, we would be interested in collaborating.
It seems to me that you would really need the 3D version which will be ready late this year, but the 2D system mounted on the front of your vehicle (using the vehicle motion as the 3rd axis) might be quite effective.
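For reference, this is roughly what I mean by using the vehicle motion as the third axis: project each 2D scan into the world frame using the pose at the time of the scan, then accumulate the points into a PCD file that PCL can load. This is only a sketch; the scan-plane geometry, the pose source and the helper names are assumptions rather than working code:

```python
# Sketch of building a point cloud from 2D scans plus vehicle pose.
# Assumes the scan plane is perpendicular to the direction of travel.
import numpy as np

def scan_to_points(angles, ranges, pose_xyz, yaw):
    """Project one 2D scan (angle/range pairs) into the world frame."""
    y_local = ranges * np.cos(angles)   # lateral offset in the scan plane
    z_local = ranges * np.sin(angles)   # vertical offset in the scan plane
    x_world = pose_xyz[0] - np.sin(yaw) * y_local
    y_world = pose_xyz[1] + np.cos(yaw) * y_local
    z_world = pose_xyz[2] + z_local
    return np.column_stack((x_world, y_world, z_world))

def write_pcd(filename, points):
    """Write an ASCII PCD file that PCL can load directly."""
    header = ("VERSION .7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\n"
              "COUNT 1 1 1\nWIDTH {n}\nHEIGHT 1\n"
              "VIEWPOINT 0 0 0 1 0 0 0\nPOINTS {n}\nDATA ascii\n").format(n=len(points))
    with open(filename, "w") as f:
        f.write(header)
        np.savetxt(f, points, fmt="%.4f")

# Example: one fake scan every 0.5 m along a straight northward track.
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
cloud = []
for i in range(100):
    ranges = np.full(360, 5.0)               # dummy 5 m readings
    pose = np.array([0.0, 0.5 * i, -2.0])    # assumed pose source (DVL/SLAM)
    cloud.append(scan_to_points(angles, ranges, pose, yaw=np.pi / 2))
write_pcd("cavity_scan.pcd", np.vstack(cloud))
```

The quality of the reconstruction would obviously stand or fall on how good the pose data is between scans.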
How 'clean' is your environment? Obviously with LIDAR we are affected by dirt in the water. Not an issue for our ballast tanks, as they are all clean fresh water. There are some tricks we can do with range gating and multiple returns that would help, but we are not at that stage yet.
We will be building a few test systems in 2-3 months and I would be happy to send one over to you. These will be fairly basic, 3D printed from PETG filament with an acrylic coating, good for maybe 50m depth. The optical windows will be AR-coated fused silica. The final version is likely to be titanium with a sapphire window.
The units will be smaller than any LIDAR on the market and have low power usage so they are highly suitable for use with BR2 type ROVs.
You're right, the 2D version will fit our needs. At this time we're using a scanning sonar to map the cavity as the vehicle progresses. A 2D LIDAR should be a great improvement in terms of speed and resolution (angle and distance).
Early this year we started thinking about making our own LIDAR, with a wavelength suitable for underwater use. Alas, at this time we don't have enough time to start such a design, hence my interest in your project!
Below are pictures of two of our test setups (ROV, and diver with an instrumented scooter; you can see the scanning sonar at the front) and some of the 3D models obtained. (Underwater pictures are from the cave diving team PlongéeSout.)
You can see that the environment is generally pretty clean (until you disturb the settled silt!). That's one of the challenges of navigating in such an environment: keeping the vehicle at a safe distance from the bottom and the walls!
That's really impressive. I would love to see some LIDAR data from that environment.
Our vehicle and LIDAR are commercial but we are keen to help with any research organisations. I cannot show you the vehicle renders here as we are under contract to an oil company. I can send you a link via PM if you are interested.
As I said, our interests are mainly on the nav side at the moment, but we would certainly be keen to work on the imaging with you. We might be able to emulate the output of your profiling sonar system, which would allow our LIDAR to work with your current setup.
I'd like to know more about your LIDAR system. Sounds like an excellent project. We're looking for a low-cost alternative to a scanning sonar to get positioning information within a fish farm using SLAM. This is for a university research project.
We’d be interested in collaborating with you on developing the SLAM based navigation system in ROS. One of my postdocs may be able to assist.
Incidentally, we have previously developed a coverage scanning algorithm for a ballast tank, but that was for a dry environment where a hexapod robot had to move around and inspect the tank. We used a LIDAR to perform SLAM while detecting variations in the tank, enabling the robot to change its mission and scan new structure.
The only issue I can see is the distance to the sides of the tanks in the fish farm. Our range is the largest unknown in the project at the moment, as we have not wet tested yet. This is a real trade-off, since larger lasers add bulk, take more power and possibly bring us into eye-safety issues.
We only have an estimate at the moment, but we are hoping that the original prototype will be Class 3R and capable of maximum 20m range. Would that be suitable for your application? If not, we may be able to push forward with a higher power version (Class 3B, 50m range) as it is on our roadmap anyway.
We will definitely be running ROS for the nav stuff, and we were hoping that this may be achievable with the existing code (if we build a compatibility mode for something like the RPLidar); if that doesn't work out, we would be hugely interested in collaborating. If you have a postdoc with skills in this area, then we may have more than one project where we could work together.
Your work on the surface tank sounds ideal, by the way; it is really just a change in the optical medium, and the photons don't care much.
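For completeness, my understanding of the ArduPilot non-GPS setup is that whatever pose SLAM produces on the companion computer ultimately has to reach the autopilot as a vision position estimate (the official docs route this through mavros). A minimal pymavlink sketch of that last hop, with the connection string and the get_slam_pose() helper being assumptions:

```python
# Sketch: forward a SLAM pose to ArduSub as VISION_POSITION_ESTIMATE.
# The connection string and get_slam_pose() are placeholders.
import time
from pymavlink import mavutil

def get_slam_pose():
    """Placeholder: return (x, y, z, roll, pitch, yaw) from the SLAM node."""
    return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)

master = mavutil.mavlink_connection("udpout:192.168.2.1:14550")  # assumed endpoint
master.wait_heartbeat()
while True:
    x, y, z, roll, pitch, yaw = get_slam_pose()
    usec = int(time.time() * 1e6)
    master.mav.vision_position_estimate_send(usec, x, y, z, roll, pitch, yaw)
    time.sleep(0.1)  # ~10 Hz is typically plenty for the EKF
```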
Is your current sonar a point scanner (profiler) or is the beam elongated on the vertical? We have 2 versions of the LIDAR planned - a simple 360 point scanner and (later) a version with +/- 25 degrees of tilt (using a MEMS mirror). I would be happy to send you either (or both).
We would love to open-source the whole project, but it is just not possible with our business model (commercial deep-water oil and gas, mainly). We are very keen to help with any academic or research work, however, and ultimately we hope to get the price of the shallow-depth versions (<300m is shallow to us) as low as possible so that we can help the BR and ArduSub community, who have given us so much for our overall vehicle design.
Colin, Hi. Sounds very interesting.
I'm building a fast-ROV, a 3-axis 'glider' towed at speeds of up to 10 kts, at distances as close as 0.5 m from the seabed. We're currently using 2 BR Pings and moving into vision processing to assist with collision avoidance and flightpath planning. It's a little challenging, but coming along nicely (some vids here: https://www.youtube.com/user/brettkettle)
We’ve contemplated lidar but thus far put it in the ‘later’ basket. Would love the opportunity to see how it performs in a slightly different use case.
Cheers
Brett Kettle
20 m is in excess of what we need for the fish farm cage, so your system would be ideal. The +/-25 degree version would be suitable for forward mapping.
We're currently using a Tritech Micron sonar. It's good, but not appropriate for close-up work where the target is also moving with water disturbances. A LIDAR would be more appropriate, depending on the sampling rate.
I can ask one of my postdocs to lend a hand with the code development.
I’d be happy to discuss further. I’ll send you a separate message with my work email address.
I finished my master's thesis last summer. It was about ship hull inspection, and I was looking for distance sensors and also a way to do 3D reconstruction of the hull. I even considered building my own sensor using the very same wavelength. I guess the very low absorption at 420 nm was the key reason for the choice.
I'm still working on this as a side project, so it's not finished yet. I would love to be kept posted on any updates to your project.
Moving at that speed, LIDAR would certainly be an advantage due to the high number of returns.
I have actually been speaking to a contact at a rental company in the last month who is interested in something similar.
We are also working on the vision processing side of things with a dual 720p greyscale global-shutter system from Luxonis. The hardware is excellent, but there is no code yet available to convert the stereo data into point clouds. Would that be something you are working towards?
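The stop-gap I had in mind is plain OpenCV on a rectified pair, something like the sketch below; the file names, matcher settings and the reprojection matrix Q are only illustrative, and Q would come from the stereo calibration:

```python
# Sketch: rectified grey-scale stereo pair -> disparity -> XYZ point cloud.
# File names, matcher parameters and Q.npy are illustrative placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0,
                                numDisparities=128,  # must be a multiple of 16
                                blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Q would normally come from cv2.stereoRectify() on the calibration data.
Q = np.load("Q.npy")
points = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 array of XYZ
mask = disparity > 0                           # keep valid disparities only
cloud = points[mask]                           # Nx3 point cloud, camera frame
```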
Colin, our longer term plan is to put the glider and the data into an open-source community, much like BR or FarmBot.
Our system is working fine in moderate terrain, but we’re anticipating challenges as we move into reefal environments.
With an open-source concept, we're trying to achieve what we can with modest costs, and at the moment we haven't reached the limit of what we can do with 2 Pings. We've discussed stereovision, and may go there, but optic flow from our 20 fps 5 MP camera feed (FLIR Blackfly) looks like it will help us cross the next couple of gaps. It also serves the machine learning well.
Others in our team are looking at vision-derived point clouds, so there’s possible grounds for sharing ideas.
What's the physical size of your LIDAR, and its current draw?
Cheers
Brett
That is very interesting. I believe that there is a market for hull inspections done by ROV close to shore for highlighting corrosion, delamination etc. Further out, I think that an AUV with these capabilities would be useful in the security sector (trafficking of drugs and weapons).
Our new vehicle is a hybrid ROV/AUV so I would be interested to know more about how far you have progressed with your project.
420nm is really the sweet spot for spectral absorption in water. There have been many flawed studies done on this that pointed to a longer wavelength, but the best data I have seen is from the Pope and Fry paper, which points to 418nm. Many systems have been built around the 532nm green wavelength, but this has an absorption 250% worse than the blue region. I believe that particular wavelength is a legacy from the days before solid-state lasers, when a frequency-doubled Nd:YAG (1064nm) was convenient…
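The reason the coefficient matters so much for range is that the absorption loss compounds exponentially over the two-way path, on top of geometric spreading. A toy Beer-Lambert calculation (the coefficient below is purely illustrative, not a value from the paper):

```python
# Illustrative only: two-way absorption loss vs. range for a given
# absorption coefficient a (1/m). Ignores scattering and spreading losses.
import math

def two_way_absorption_loss(a, range_m):
    """Fraction of light surviving the out-and-back path (Beer-Lambert)."""
    return math.exp(-2.0 * a * range_m)

a = 0.05  # placeholder absorption coefficient, 1/m
for r in (5, 10, 20, 50):
    print(r, "m ->", round(two_way_absorption_loss(a, r), 4))
```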
We have optical flow on our system using the OpenMV camera and the Kogger sonar (not yet tested). It's an interesting technology when coupled to an IMU, but of course it will drift due to frame-comparison errors. For me, the ultimate solution is a forward-scanning LIDAR doing SLAM on seafloor features, but I don't have the skill set in my dev team to make that a reality.
Physical size and current draw are hard to estimate at this stage but our targets are 7.5W max when scanning with a body size of 80mm x 40mm. The optical window should be 15mm high for the 2D version and 30mm high for the 3D. These are still educated guesses at this stage.
Definitely interested in talking more on the point cloud stuff.
Colin, I'll raise it in the morning with the guys in the lab next door, who are much more active in LIDAR/SLAM.
What is the angular coverage of the 2D version?
B
We would like the ability to create point clouds from both our upcoming LIDAR(s) and our Luxonis stereo camera system. This is mainly for size and rotational measurement of subsea structures. Our hybrid ROV/AUV has a dual-screen setup, with QGC running on one screen and the other used for the various instruments that we have on board.
We would love to have a point cloud being generated in real time on that second screen, either from the cameras or the LIDAR.
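As a rough idea of what that second screen could run, Open3D's non-blocking visualizer fed by whatever is producing the points would probably do as a first pass; get_latest_points() here is just a placeholder for the LIDAR or stereo pipeline:

```python
# Sketch: live point cloud display on a second screen using Open3D.
# get_latest_points() stands in for the real LIDAR/stereo data feed.
import numpy as np
import open3d as o3d

def get_latest_points():
    """Placeholder: return the newest Nx3 array from the sensor pipeline."""
    return np.random.rand(1000, 3)

vis = o3d.visualization.Visualizer()
vis.create_window("Live point cloud")
pcd = o3d.geometry.PointCloud()
vis.add_geometry(pcd)

while True:
    pcd.points = o3d.utility.Vector3dVector(get_latest_points())
    vis.update_geometry(pcd)
    if not vis.poll_events():   # returns False when the window is closed
        break
    vis.update_renderer()
vis.destroy_window()
```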