I have a high-frequency forward-looking sonar integrated onto the BlueROV2 frame (data is sent directly through the tether up to the topside computer). I am able to collect sonar “images” at about 5 Hz. My goal is to develop a control loop to hold station on a target detected in the sonar image. Is it feasible to try to “hack” the position hold flight mode in ArduSub by simulating a GPS input to the Pixhawk based on the location of the target in my sonar image?
Do you have access to the data that comprises the image, or do you just get a picture? I would suggest making a new flight mode to hold the distance to the forward object rather than trying to spoof the GPS.
Yes, I have access to the raw sonar data. I am able to determine the physical (x, y) position of a point of interest relative to the sonar head.
Where would be a good place to start in creating a new flight mode for this application? Can I utilize the basis of the position hold mode, or should I start from scratch?
There are a lot of ways to approach this, and spoofing GPS may be a good and easy way to do what you want, after all. The problem I see with this is that the navigation filter really doesn’t like discontinuities in the GPS position, so if the object is going in and out of sight, then you might have trouble dealing with that.
What I was initially thinking was this:
- You have a distance and angle to the closest object in front of you.
- Turn to face the object (get the angle to zero).
- Lock in the forward distance to the object by controlling the forward/backward outputs.
- The pilot can override yaw or forward at any point; when the pilot releases manual control, the current forward distance target is reset.
If you were in a current with no inputs, the ROV may do a sort of half circle around the target as it is pushed by the current, then hang out downstream from the pylon. You would need some extra information to control your angle relative to the pylon as well; I don’t know if we have enough information to do that, but it may be possible by combining the sonar data with the compass data from the ROV.
To do this, I would copy depth hold mode, then do something like the rangefinder code (bottom following) found in depth hold mode, or add a P or PI controller instead.
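Very roughly, the logic could look something like the sketch below. This is just an illustration in Python with hypothetical names, gains, and sign conventions, not actual ArduSub code (a real flight mode is C++ inside the autopilot and would feed the attitude/position controllers the way the depth hold rangefinder code does):

```python
# Hold-station sketch: turn to face the target, hold the forward distance.
# Assumed sonar measurement: range to target (m) and bearing (rad, 0 = dead ahead,
# positive to starboard). Gains and clamping are placeholders, not tuned values.

YAW_KP = 0.8          # proportional gain on bearing error
FWD_KP = 0.5          # proportional gain on range error
FWD_KD = 0.2          # derivative term to damp surge oscillation

class StationKeeper:
    def __init__(self):
        self.range_target = None   # locked-in forward distance (m)
        self.prev_range_err = 0.0

    def update(self, sonar_range, sonar_bearing, dt, pilot_active):
        """Return (yaw_cmd, forward_cmd) in the range -1..1."""
        if pilot_active or sonar_range is None:
            # Pilot override or target lost: drop the lock so the next
            # good detection becomes the new hold distance.
            self.range_target = None
            return 0.0, 0.0

        if self.range_target is None:
            self.range_target = sonar_range

        # Turn to face the object: drive the bearing to zero.
        yaw_cmd = max(-1.0, min(1.0, YAW_KP * sonar_bearing))

        # Hold the forward distance with a simple PD term.
        range_err = sonar_range - self.range_target
        d_err = (range_err - self.prev_range_err) / dt if dt > 0 else 0.0
        self.prev_range_err = range_err
        forward_cmd = max(-1.0, min(1.0, FWD_KP * range_err + FWD_KD * d_err))

        return yaw_cmd, forward_cmd
```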
I am trying to develop a similar system with a floating GPS sonar ball. My plan is to tether it to the ROV with a retracting reel. The sonar sends data to a phone, but I am trying to emulate that on a laptop. The sonar can see to 80 m, so the ROV is visible to it constantly. I would like to use the sonar/GPS data to guide the ROV, sending GPS data compensated for its position. If I could triangulate the ROV position from the sonar, which sees the ROV and its depth as well as the bottom depth, then there is potential for autonomous missions as well as variable depth control for following the bottom contour. My dream of a positioning system under $1000 is way beyond my programming skills, but every day I learn more.
Thanks for the guidance and useful information, Jacob.
Hey Sbmech. Are you getting the (x, y) coordinates out of the sonar autonomously? If so, it would seem relatively easy to set up a local coordinate system; just set the object as (0, 0).
The hard part of this is getting the range and bearing data out of the sonar and processing it in real time. How are you accomplishing this? And if you can do that, why use the Pixhawk at all? Just tack the controls onto the computer that's giving you that data.
Thank you for the suggestions, Jacob. I was thinking of implementing a PD controller initially. I did consider the object disappearing and reappearing in view, and I think developing a controller and then testing it in that type of scenario will be the best way to determine what can be done to handle it.
The next sensor I hope to add to the ROV is a USBL. Should the sonar data alone not be sufficient for tracking objects, I’ll move on to tagging targets with a position from the USBL and feeding that as GPS data into the Pixhawk, if possible.
I’ll update with progress.
Hi Pierson,
I am processing my sonar data to find an object of a certain size/intensity, then grabbing the range/bearing of the center of the object. I can translate this to an (x, y) coordinate corresponding to a pixel in the 2-D sonar “image” that is generated for display. The sonar itself keeps track of the range/bearing for each index in a given ping, and I can pull this data directly in my software. I believe this can be done quickly enough to provide smooth data for ROV motion. Where I am stuck is integrating this function into the current setup for ROV control. I am using QGroundControl to control the vehicle and would like to add the target tracking functionality to a button on the gamepad so that it integrates much like depth hold, for on-demand use. Is there an alternate way to integrate a vehicle control loop directly in QGroundControl?
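For reference, the conversion I'm doing is just polar to rectangular, something like this (axis convention assumed: x forward, y to starboard, bearing measured from the sonar boresight):

```python
import math

def sonar_to_xy(range_m, bearing_rad):
    """Convert a sonar detection (range, bearing from boresight) to a
    vehicle-frame offset, x forward and y to starboard."""
    x = range_m * math.cos(bearing_rad)
    y = range_m * math.sin(bearing_rad)
    return x, y
```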
No. You will need to put this into ArduSub on the Pixhawk. You could also implement it outside of the autopilot and modify the joystick inputs programmatically, like @Pierson suggested. I suggest you use MAVProxy if you go the second route; it’s also a really good tool for testing during ArduSub development in general.
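If you go the external route, a minimal pymavlink sketch of programmatic joystick/RC override might look like this (the connection string and the ArduSub channel mapping are assumptions to verify against your setup):

```python
from pymavlink import mavutil

# Connect to the vehicle (assumed UDP endpoint; adjust for your tether/companion setup).
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()

def send_override(forward_pwm, yaw_pwm):
    """Override forward and yaw only. Assumed ArduSub input mapping:
    channel 4 = yaw, channel 5 = forward. Per the MAVLink
    RC_CHANNELS_OVERRIDE spec, a value of 0 releases that channel."""
    master.mav.rc_channels_override_send(
        master.target_system,
        master.target_component,
        0, 0, 0,            # channels 1-3: released
        yaw_pwm,            # channel 4
        forward_pwm,        # channel 5
        0, 0, 0)            # channels 6-8: released

# Example: gentle forward thrust, no yaw (PWM microseconds, 1500 = neutral).
send_override(1550, 1500)
```

Your station-keeping loop would then just call something like `send_override()` at a few Hz with the commands computed from the sonar detection.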
This is a really cool idea I had never considered. Does the sonar ball give you an x,y image/plot? How wide is the beam? Will it need to be directly over the ROV? Can you link the product?
The sonar is dual channel: 90 kHz and 290 kHz. The 290 kHz channel has a 15-degree beam and the 90 kHz channel has a 55-degree beam. The sonar ball is set up as a wireless hotspot and transmits up to 100 m. It is made to be cast out on a fishing rod and map the bay while being quietly wound back in. It sends the sonar image back to my Android phone. An awesome, affordable floating sonar for the BR2, I think.
I have tried the BlueStacks Android emulator on Windows to run the app and connect, but have had no luck. BlueStacks seems sandboxed but has an IP address. So far the app will load in Windows, but connection has eluded me. I’m not sure if it is an Android/Windows issue or just my inability to link them.
The ultimate would be to find someone who could port their app directly to Windows and have it showing in QGC as a movable tab.
I did email the developers a few months ago, but they don’t wish to program for computers, just phones.
Kaos & Jacob,
The sonar ball looks really hard to implement. That sonar looks like a sidescan, so it is going to have a very narrow beam. You could get around that by spinning the ball. Can you consistently see the vehicle in the sonar image?
In my experience, sonar images are so noisy that it is very difficult for a computer to pull out this kind of location data in real time. It could be useful for correlating data afterwards, though.
I have been using a fixed tether in 10 m of water, and the ROV is visible in the sonar. I want to use a constant-torque reel to let it out as the ROV dives. The reel would be mounted on a float at the surface, so when the ROV stops, the reel drags the sonar back over the top. Constant torque should keep the sonar close.
Let me know how that goes; it seems like a great way to cross-correlate the data after the fact. I suspect the tension is not going to be high enough to be practical. Even in a pool with 20 lb of tension in the line, there is a lot of drift; I can only imagine the same thing in ocean currents and at typical operating depths.
After spending some time learning how the flight modes in ArduSub work, I have an idea of how to implement a control loop for station keeping on the sonar target. What’s the best way to observe the response of the vehicle? (I’m looking for the gyro/IMU data and the joystick/RC inputs to the motors over time.) I have tried looking into the log files from MAVProxy but haven’t found the documentation on the ArduPilot site to be very clear. Is the data I am looking for logged automatically when I run the ROV using QGC? How can I access this data for processing after running a trial? Thanks.
There are two types of logs, telemetry logs and dataflash logs.
Everything that shows up in the ‘MAVLink Inspector’ in QGC is saved to a telemetry log on the receiving computer.
The dataflash logs are saved to the Pixhawk SD card.
Both types of log can be analyzed using MAVExplorer.py.
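For example, a quick look at a dataflash log might go something like this (the log filename is just a placeholder; fields use the message.field syntax):

```
MAVExplorer.py 00000012.BIN
MAV> graph ATT.Yaw
MAV> graph RCOU.C5 RCOU.C6
```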
If you need to do some real-time analysis, there are some MAVProxy features for that, or you can use the Widgets->Analyze feature in QGroundControl.
Let me know where you want to start looking and I can get more specific.
-Jacob
Ok great, this is what I was looking for. In the “Analyze” feature, I see that I can “start logging” selected parameters to a CSV/txt file; is this the easiest way to grab raw data for a particular parameter from the telemetry log for post-processing in Excel or MATLAB? What about from the dataflash log? For example, I may want to collect the raw IMU, attitude, and RC channel data from the logs.
Interesting, I had not noticed that feature in the Analyze widget before. It appears to work, and yes it is probably the easiest way to get a .csv.
The biggest difference between dataflash logs and telemetry logs is that dataflash logs have samples at a much higher frequency than telemetry logs, because there is no communication overhead. There are also some fields in dataflash logs that do not appear in telemetry logs, but the data you are interested in is in the telemetry log. If you want to use dataflash logs, you need to use the Python tools, as QGC does not support them.
mavlogdump.py can extract/export the data you are interested in to a .csv file, filtering on things like timestamp range, the flight mode the vehicle was in, and message type. It is similar to the logging function of the Analyze widget, but it is a bit more powerful, and the filtering can all be done in post.
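For example, to dump the attitude data from a dataflash log to a .csv (log filename hypothetical; run it once per message type you want as a separate file):

```
mavlogdump.py --format csv --types ATT 00000012.BIN > attitude.csv
```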
-Jacob
The telemetry log data is coming in at about 4 Hz, which is a little slow for my purposes. The dataflash log is recording at around 25 Hz, so I’m working with that. I have been using mavlogdump.py to save the data to .csv files. One problem I am having is with downloading the dataflash log through QGC: when I go to download some of the larger files, the download will “time out” and the file seems to be incomplete. I copied the data over from the Pixhawk SD card successfully instead. Any ideas?
Using MAVExplorer.py, I’m comparing the RCOU (servo output) data to the altitude reading (BAR2.Alt) during a ~10 s dive and to the yaw reading (ATT.Yaw) during a ~10 s turning maneuver. It looks like the RCOU.C5 and RCOU.C6 parameters correspond to the vertical thrusters (5 and 6) on the BlueROV2 frame, so I assume that RCOU.C1 through RCOU.C4 correspond to the remaining four thrusters in the same way. Can you confirm this?