Scanning Sonar Thoughts

I have been experimenting with an old depth sounder and have a few questions:
I assume that a transom transducer sealed in a tube will not work at depth because the water pressure on the face will freeze the transducer, much like pushing on the cone of a speaker?
If it were allowed to move, with oil behind it and a small amount of air, would that work?

It seems the Blue Robotics products will save a lot of work. However:
I am using a 19" TV and video goggles to display my COMPOSITE VIDEO sent 300ft on a twisted pair. The Ping360 requires Windows for the display which means adding a computer. Questions:
Can I use Linux EASILY so a RaspberryPie that will fit in my control console can be used? Any other suggestions?
Has anyone got a WORKING system similar to this and how do you view the camera image on the same screen?
I actually have two cameras to make things more complicated. I use a switch at present to select views and was planning on PIP but this is using a TV and composite video which is not compatible with Windows or Linux. Any suggestions?

If I switch to the Blue Robotics camera I have a real problem. I don’t have USB in my ROV and if I add it I still have a problem.
How is the H.264 video transmitted to the surface?
My system uses Composite Video in a twisted pair over 300ft and works. I would consider upgrading if there is an advantage.
Assuming I get the video to the surface and install a RaspberryPi to receive the video how do I add the display from the Ping360? Does the Ping software do this? Does it do text overlay and Picture In Picture? Is it done in the ROV prior to sending?
I am asking whether what I want to do is possible if I start buying the Blue Robotics sonar and camera, and just generally how it is done. Do I need to switch to the Blue Robotics ROV computer? That would be a real problem for me, as I have 4 PICs running my system in my ROV.
How many wires are in the Blue Robotics tether and what are they assigned to? What drivers are needed? I use CAT5 stranded and tinned wire and it has worked well.

To answer some of my questions:
Blue Robotics uses Ethernet and their Fathom-X Tether Interface Board Set to convert 4 wire Ethernet to two wire.
I would need to convert my entire communications protocol to their system with an Ethernet compatible computer at either end of my tether. I would then have to feed my ROV system to the ROV computer and my Control Console to the surface computer.
It sounds like a lot of work to use the Ping360. The Ping Altimeter would be easier but I could not use the display software without switching to Ethernet.
Is this correct?

I found a compromise.
Leave my ROV circuits as they are. They work.
Leave my control protocol as it is. It works.
Leave my control console alone. It works.
My camera is simple composite video, connected on a separate twisted pair of wires to a TV on the surface, and is totally independent of any of the stuff above. I have plans to add a driver in the ROV and a receiver at the surface (in the control console) to improve the video and get rid of ringing from the 300ft cable.

If I buy the pair of Ethernet drivers (https://bluerobotics.com/store/comm-control-power/tether-interface/fathom-x-r1/) and add a Raspberry Pi in the ROV and in the Console, then I can plug a USB camera (https://bluerobotics.com/store/sensors-sonars-cameras/cameras/cam-usb-low-light-r1/) into the RPi in the ROV and plug an HDMI monitor into the RPi on the Console.
This is an instant upgrade to HiDef video without modifying my ROV, which already works.
Once that is working I can add the Sonar (https://bluerobotics.com/store/sensors-sonars-cameras/sonar/ping-sonar-r2-rp/) into the system and use the open-source software.
The next step can be adding the Scanning Sonar (https://bluerobotics.com/store/sensors-sonars-cameras/sonar/ping360-sonar-r1-rp/) in much the same way.
The challenge is now reduced to getting either of the Sonar screens as a Picture-in-picture overlaid onto the camera video. I assume this is where Python could provide the solution.
Another advantage is that text such as battery state and compass heading can be sent from my PIC in the ROV to the RasPi to be integrated into, and overlaid on, the video - something else I was wanting to do.
I have dusted off my Raspberry Pi 3 and will get back to learning Python. I now have a HUGE incentive to learn it.
I am disappointed with the lack of suggestions on this forum but hope this might help someone else struggling for simple solutions.
If anyone has done Picture-in-picture on a RaspberryPi with Python, or any of the things I’m attempting to do, I would really appreciate some help.
Peter

Hi Peter :slight_smile:

I don’t have experience with transom transducers so can’t comment on this.

Not sure what you mean by this - Ping Viewer is cross-platform and works on Mac and Linux too, not just Windows.

If you have an RPi with access to your camera stream(s) and ping connection then you can run it in desktop mode and display PingViewer and the camera stream(s) in separate windows. The ‘normal’ way of operating is to view one video stream in QGroundControl (which is also being used to control the ROV with mavlink messages, and shows telemetry data received from mavlink messages), with additional video streams in OBS/VLC, and PingViewer doing its own thing, and you can organise each application window as desired. Picture-in-picture functionality is achievable but requires an application that can do it. I believe I’ve seen at least one of the distributors offering a product that can do that, but unfortunately can’t remember who.

The Blue Robotics Low-light camera operates via USB, so requires something to plug into in the ROV. The companion computer software (which is currently only supported on Raspberry Pi) is set up to detect a connected Blue Robotics camera and stream its H264 video over ethernet through a Fathom-X board pair, as you found. The Ping360 by default is set up to connect via USB, and will also get auto-detected and forwarded over ethernet if connected to the companion computer. Alternatively, the Ping360 can be reconfigured to use an ethernet interface instead, in which case you could connect it directly to a Fathom-X board and avoid needing the companion computer in the ROV (for the Ping360 specifically; the camera still needs it).

The Fathom tether has 4 twisted pairs, like a standard CAT5 cable, but only uses one of the pairs for ROV communication and video. The remaining three pairs are ‘extra’, and can be used for things like the Water Linked GPS or other ethernet-communicating devices. The Fathom slim tether has just 1 twisted pair, so doesn’t support extra communication interfaces/devices.

While you could use Python to create your own equivalent of PingViewer, and display that in a GUI window with the camera stream as the background to achieve your picture-in-picture effect, this would likely be quite slow due to Python needing to do the parsing and rendering of ping messages. As far as I understand it’s not possible to use Python to just take the expected visualisation of the PingViewer application and use it within your own separate display - you generally don’t have access to what other applications are doing. As mentioned earlier however, you can display your video and telemetry in one window (via QGroundControl, or Python+OpenCV+pymavlink, or something else) and then just put the PingViewer window in front of that when you’re operating and the effect will be almost the same.

Your disappointment seems a little unfair given you were posting over a weekend, and with a relatively unique setup. Including your steps and thoughts to potentially help others who come along afterwards is always appreciated, so thanks! :slight_smile:


Thanks EliotInsight for your detailed reply;

I should have written that I am getting really frustrated with ALL forums in trying to get someone interested in adding Sonar to a ROV, any ROV. Sometimes things don’t come out right and I apologize.

You said “The Blue Robotics Low-light camera operates via USB, so requires something to plug into in the ROV. The companion computer software (which is currently only supported on Raspberry Pi) is set up to detect a connected Blue Robotics camera and stream its H264 stream over ethernet through a Fathom-X board pair”. Does this mean that if I buy the camera, install two RasPis connected by a pair of Fathom-Xs, and install the supplied software, I will have a working camera?

The camera includes software compatible with a RasPi. Simple? But I don’t have a picture on my monitor yet.

Does the Fathom-X pair have RasPi compatible software for the Ethernet connection?

Is it this software that detects the camera?

Does this software also connect to the surface RasPi?

The surface RasPi connects to an HDMI monitor to display the camera image. Simple, and if the rest is true I have upgraded my camera to HiDef.

The Ping sonars include Python libraries for Ping Protocol.

You said they autodetect and connect. So I don’t need to write additional software?

Just plug into the RasPi via the BLUART USB adapter? It sounds too simple.

How do I switch between camera and Sonar displays or are they on split screen or two separate screens? I can live without Picture-in-picture.

Maybe I’m overthinking it all and am confused by your comment “The ‘normal’ way of operating is to view one video stream in QGroundControl (which is also being used to control the ROV with mavlink messages, and shows telemetry data received from mavlink messages), with additional video streams in OBS/VLC, and PingViewer doing its own thing, and you can organise each application window as desired”. I don’t understand any of this - do I need to? I have no idea what or how many displays are being used here. Is the Pixhawk a prerequisite?

Some additional information.
My ROV is entirely home made. The software is written in CCS C. I use distributed processing with a Master PIC controlling Slave PICs via I2C. There are two PICs in my Console and four PICs in my ROV. There are 3 additional Slave PICs in my Robotic Arm which works except I cannot use it until I modify it to achieve neutral buoyancy. The Master PICs do the fast stuff and the Slave PICs handle the chores. My joystick is software coupled to Rudder, Elevator, two side thrusters, and the side thruster rotate motors. For example a left turn moves the rudder, speeds up the right thruster, and slows or reverses the left thruster. Diving lowers the elevator to lift the stern and rotates both side thrusters up to drop the nose. You can see why I don’t want to change what works without good reason.
I can automatically maintain a dive angle up to vertical, and for that reason need Sonar to avoid running into the bottom. My ROV will maintain a heading, and I would like to be able to automatically maintain a distance from the bottom so I can concentrate on visual and Sonar images. The sewer pipe is a negative keel and is adjusted for horizontal balance; trim weights on the blue ‘feet’ allow for fine adjustment.

If you install the companion software on the RPi* in the ROV then that setup will send an H264 stream to the top. On the top computer/RPi you can use QGroundControl to receive the video stream, or if you only want the video (no mavlink telemetry and whatnot) you can use a more lightweight streaming application like gstreamer/ffmpeg (CLI), or VLC (GUI).

*Note: companion software is currently only officially supported for RPi3. If you want to use RPi4 in the ROV you can try here, which I’ve managed to get working with a video stream and Ping360 as per my comments at the bottom of the issue, but there isn’t official support for it. The top computer can be whatever you want, and if you’re using a Raspberry Pi for it then I’d strongly suggest using a 4 so you have a bit more processing power to handle the data streams and display.

Think of the Fathom-X like a range extender - it takes in an ethernet signal, raises the voltage to reduce losses over the tether, and then the corresponding Fathom-X on the other end drops the voltage back down and gives back the output ethernet signal (this is a simplification - it actually uses the HomePlug AV standard to convert to high-frequency AC and then converts back to an ethernet signal at the other end). You can do testing of the communications with just an ethernet cable between the companion RPi and the top computer/RPi, and then add in the Fathom-Xs and tether once you have the system working.

Raspberry Pis can send and receive ethernet signals. The Blue Robotics camera plugs into the companion RPi, and provides a V4L2 (Video4Linux 2) interface that allows access to a H264-encoded stream at a few different sizes and framerates, as well as a few other encodings (that aren’t as efficient for streaming). The companion software uses a command-line application called gstreamer to connect to the 1080p 30fps H264 stream, and converts it to a UDP stream which gets sent over ethernet. I recently covered this process in more detail in the “Existing Pipeline” section of this comment if you’re interested.

The ping sonars communicate via Ping Protocol. The companion software is made to automatically detect one Ping Altimeter, and/or one Ping360, and basically acts as a communication link to the top computer. The top computer can then run the PingViewer, which will automatically detect the connected Ping device(s), and start scanning once you select the device you want to view. If you’re connecting to both an altimeter and a Ping360 you’ll need to open one PingViewer per device.

The Python and C++ libraries for interacting with Ping Protocol are for when you want to manually control when the pings are transmitted, or do real-time analysis on the incoming signal. If you’re just wanting to view the ping scans and change the device settings then you don’t need to use a library - just use the Ping Viewer application :slight_smile:

Ping Viewer is an application, and whatever you choose to view your video in will be a separate application, so they’re in separate windows. If you’re using a RPi you’ll need to have it in desktop mode (instead of command-line/ssh mode), but from there it’s like any other computer so you can plug in a mouse and move around each window as desired. My only concern with using a Raspberry Pi as the top computer is performance, but in the case that it’s a bit too slow at decoding frames or something you can decrease the frame size or frame-rate using the web-interface that the companion computer provides to the top computer.

Hopefully this is clearer now from my comments above. Since the top computer is operating as a ‘normal’ computer, there are multiple windows that you can organise on the one display. You can also have multiple monitors/displays connected if your computer supports it, but that’s not super relevant.

A Pixhawk flight controller is a prerequisite if you’re controlling your ROV using the mavlink protocol, which is what QGroundControl uses. That doesn’t seem to be the case for you, so it shouldn’t be an issue. You just need to make sure your required telemetry and ROV control communication gets through where it needs to. That could be via a separate twisted pair to the one the companion computer sends on (which may have noise issues), or by somehow hooking the relevant components up to the companion computer in the ROV and getting it to send them over ethernet to the top computer, which you’d then need to connect to your controls and whatnot (which may be difficult depending on your setup).

Cool - that’s no mean feat! :smiley:

This is much closer to the metal than most users of the Blue Robotics system work. ArduSub runs on the Pixhawk to handle the ‘fast stuff’ and the ‘chores’ (it takes in mavlink messages such as ‘hold position’, or ‘move forward’, or, with a connected Water Linked GPS, ‘move to location’). The companion software runs on the Raspberry Pi to link a video stream and mavlink and ping-protocol messages to the top, while also providing a web interface for easy changing of camera parameters, checking network speeds, seeing connected devices, and some other niceties. QGroundControl (QGC) is normally used to view the video stream and telemetry data, and allows the user to select their ROV frame (thruster locations and orientations) and connect a joystick or controller to control the ROV.

More advanced users make custom builds of ArduSub or QGroundControl, or add scripts to the companion software to connect to additional sensors, cameras, or devices that aren’t already supported by the companion computer or ArduSub, on an as-needs basis. ArduSub+Pixhawk provides a PWM output interface that allows plugging in compatible lights, grippers, and similar to one of the available output ports, which can then be controlled through QGC after changing a couple of settings to map the newly connected output to the desired controller button.

Indeed, you’ve built up quite an impressive system, with very custom firmware, so major changes are a big deal and possibly a lot of work if they can’t work independently from the existing system.

Thanks very much for these details. I will get started and see if I can get the camera working.
I have been studying Ethernet connections between computers to learn new skills but there is nothing like hands-on to actually learn. I think I can handle it…

No worries - hope it works out! Let us know if any issues arise that you can’t get through, or that don’t seem to make sense :slight_smile:

Ethernet covers layer 1 (physical) and part of layer 2 (data link) of the OSI model. It’s fundamental enough that most users basically just assume that it’s there and works (e.g. the device has a suitable port and network card), and work at a higher level of abstraction, commonly the transport layer (4) (e.g. TCP/UDP).

For reference, the companion computer uses gstreamer to convert a H264-encoded video stream into UDP packets. The pymavlink library uses UDP as well, and I believe most if not all of the communication done between a ground control station (operator/top computer) and device (ROV, AUV, drone, etc) in the ArduPilot project (which ArduSub is a part of) uses UDP.

I once had UDP described to me as the “Unreliable Data Protocol”, because it sends data to a specified location (IP address + port number) but makes no guarantees or checks as to whether it actually gets there (as compared to TCP, which checks that each message is received correctly). UDP works well for things like streaming video, where if one frame gets missed/doesn’t arrive then it doesn’t really matter, or for other data streams that are continually updated (e.g. a leak warning would be expected to be sent several times, probably repeatedly while the leak is detected, so if one message is missed then that’s not hugely important in the scheme of things). The major benefit of avoiding checks is lower latency - each side can fire off messages with no need to wait for a response, and as long as there’s no reliance on the other end having received any particular packet then it tends to work fine.

Bit of a ramble, but hopefully somewhat interesting and/or useful :slight_smile:


Thanks Eliot;
I installed QGroundControl and the RasPi software with a few problems.
The download has to be from Blue Robotics.
I had to change my Ethernet IP to 192.168.2.1 or it would not work.
There is no indication that my Win10 computer was talking to the RasPi companion computer on the Ethernet cable because I had no camera or Pixhawk connected.
Pinging the RasPi worked to prove it was connected.
I used the Companion Web Interface on http://192.168.2.2:2770/ and when I connected a webcam to the Pi it showed as connected even though there was no video (my camera is not compatible).
I’m starting to order parts for video and sonar. I am impressed with the YouTube videos I have watched. It would take years for me to design a system like those sonars.
Question: Which Raspberry Pi 4 should I order for the surface computer? 2, 4, or 8GB?
Question: There is an attitude instrument on the QGC screen. It only shows +/- 10 deg. My ROV can dive vertically so, assuming I can interface my PIC to the Pi, can that be changed easily?
I’m starting to understand the software. Hardware I understand.
Peter

My ROV build is a lot bigger so I have room for more stuff, but you may be able to fit it in. I built my original ROV the size of a chest freezer after watching a lot of jerky videos where the tether and current were yanking people’s tiny lightweight ROVs back and forth.

I bought a pair of eKL ethernet-via-coax adapters off Amazon for $46. https://www.amazon.com/eKL-Extender-Converter-Ethernet-Security/dp/B07L6X94XS One topside, one in the ROV. Cheaper than Fathom-X boards. Then plug in a 4-port hub, plug the ROV’s Raspberry Pi into that, and also plug in my 8-camera security-system DVR ethernet. (I removed the case, so I’m just putting the circuit board and SSD into my ROV’s WTC.) This gives me 8 camera feeds in realtime (not relying on the QGroundControl setup), using cheap $30 FPV micro-cameras potted in clear epoxy, which should be enough for anyone’s uses.

I’m using a used camcorder as my main camera, a 78x optical zoom Panasonic T55 (eBay, $50-ish). Why have a 4K camera when most places have so much particulate matter and algae in the water? If I need to zoom in, a servo pushing the zoom rocker switch works. Mounted on a pan/tilt gimbal behind a custom-blown 8" clear hemisphere.

Hope this helps you.

I meant to respond to this a couple of days ago but must have closed the tab - my apologies.

BlueROV users will generally follow the BlueROV2 software setup documentation, which provides the relevant download link for QGroundControl, and specifies the IP requirement. Given you’re not coming from that direction, it’s understandable that was a bit harder to find for yourself - if I’d thought of that I would have linked to those docs in one of my previous comments.

These make sense.

The web interface is really useful for things like that - it displays any named screen sessions running on the RPi, as well as any detected devices :slight_smile:

Cool! And yes, the Blue Robotics R&D and software teams have worked really hard to make the sonars low cost but still high performance and adaptable to different use cases :slight_smile:

More RAM lets you store more temporary data at any given point in time. For your use-case of streaming 1080p video and telemetry data I’d expect you can get away with 2GB, but I’m not sure how RAM-intensive Raspbian or QGroundControl are and unfortunately can’t test them right now (a quick test of PingViewer on my laptop had it using 150MB, so that’s not too big of an issue). Personally I’d err on the side of more “just in case”/for future-proofing.

Assuming you’re talking about the pitch indication, I believe the numbers change as you tilt further. I’ll ask someone internally to confirm since I’m currently unable to test.

The attitude measurement will be sent up from the companion computer as a mavlink message, so assuming you can get the value from the PIC to the RPi then you should be able to set up a script using pymavlink to send the value up to the top. The mavlink part you can test without the sensor actually being connected - you can make a script that just cycles through values and sends them up as pitch messages, and see what happens on QGroundControl to the attitude display. This post should be helpful in getting a working mavlink connection between the companion computer and QGC. You’ll need to swap out the named_value_float message with an attitude message.

Confirming this. I’ve been told the numbers go to at least -80 before going weird, but the weirdness is likely due to gimbal lock in ArduSub, so I’m expecting it should be fine for you since you’ll be sending custom messages.

Yes, Darrell, everything helps. I had a hard time navigating the Blue Robotics site and have yet to figure out GitHub to find the source software. I ordered the camera and Fathom-X boards as well as the Ping Sonar. I am in unfamiliar territory with the software, and it is simpler for me to start with something that works. Blue Robotics really has nice stuff at affordable prices and good support.
Here is a question for anyone (perhaps if I knew GitHub I could find a more appropriate group using a torpedo-shaped ROV similar to mine).
Where should my Sonar be pointing? Ideally I would like a Sonar seeing the bottom and a scanning sonar to sector scan ahead.
For the simple sonar I just designed (on paper) a rotator so that when the ROV is level the sonar points down at -90 deg relative to the ROV. As I dive at -30 deg the sonar would rotate to -60 deg, and diving straight down at -90 deg the sonar would be at 0 deg relative to the ROV, or pointing straight ahead (at the approaching bottom).
However, if diving at modest angles (-30 deg) I will know how far it is to the bottom, but I won’t see the vertical rock ahead in the murk! The wide beam may catch it, but the software may be too smart and only show the bottom.
What about climbing? Climbing vertically the sonar needs to be straight ahead, and at any other angle it should probably be the same to avoid hitting anything including the bottom of my own boat.
I will start with a manual control until I get used to it. I thought automatically setting it sounded like a great idea until I thought about it.
Because of my ROV shape it can dive vertically at considerable speed hence the need for Sonar.
Peter
Edit: Thanks for the detailed information Eliot.

Blue Robotics has a GitHub account, where all our public repositories can be found.
The main one of particular relevance to you is companion, which sets up the video stream and allows connecting to and controlling the ping sonar devices from the top computer. For actually sending mavlink commands you’ll likely want to use pymavlink, which is part of the ArduPilot project so its source is on their GitHub account. Note that pymavlink is installed automatically as part of the companion setup process, so you’ll just be able to import and use it within scripts you write on the companion Raspberry Pi.

Indeed, to maintain distance from the bottom you’ll need an altimeter (single direction sonar/echosounder) that points downwards when you’re moving, so this is a good approach.

For the ‘Sonar images’ I’m not sure if you’re wanting to image the sea floor or try to find vertical things around you. For the sea floor you’ll want the Ping360 at ~10-15 degrees down from horizontal (see this guide), and you only care about one direction, so the sonar can be under and against something without it being a problem. For around you it’s best to have the sonar horizontal (although some tilt is at least ok), and clear on all sides.

With respect to your angle concerns, I think you might be ok with just fixed positions. When going steeply downwards or upwards you can set the scanning sonar to a small scanning range so it sees the bottom/your boat, and you can effectively ignore the altimeter during such dives/ascents. At less steep angles you can use a wider scanning range, since your movement should be slow enough that the extra visibility is helpful.

The scanning sonar software doesn’t try to be smart - it just shows you how much response it received at each point in time for a ping, in the direction of that ping. It’s up to you to interpret/determine whether a strong response is from above or below or in the middle of the wide vertical range of the beam - the signal contains no data about that, just response over time.

Once you stabilise and return to horizontal at the bottom you can change to an altitude holding mode that uses the distance estimates from the altimeter - I believe you’d need to turn off the connected PingViewer in this mode so that a script could be controlling it instead and passing the values directly to your control electronics. The scanning sonar is then able to show either around you or the sea floor, depending on how you’ve set it up :slight_smile:

Thanks Eliot.
I just got my Raspberry Pi 4 and had it up and running on the internet in minutes.
I downloaded QGroundControl and ran into a brick wall. Apparently it needs Ubuntu, not Raspbian. I don’t like the messages about video streaming issues in Ubuntu. It seems like a really bad choice to use a Raspberry Pi for the topside computer.
Can you suggest an SBC that runs Windows and is powerful enough to install in my console? I don’t want a laptop, and prefer not to have a keyboard. Have I made a big mistake?
My order with the Ping Sonar and camera arrives on Wednesday and I would like to put everything together.
Peter

I believe to run both QGC and PingViewer on a raspberry pi you’ll need to build them from source (no convenient download unfortunately). I had a quick look around for you and it seems unlikely to be a pleasant process, but it does at least seem possible to achieve given some time and perseverance.

If you choose to do so, you’ll need to

  1. build Qt version 5.12.6, with gstreamer
  2. build QGC (Blue Robotics recommends 4.0.5)
  3. build PingViewer

There’s a guide for getting Qt+QGC on RPi4 here, which is probably a good starting point. It’s somewhat outdated, but may be helpful with the setup. It installs an old version of Qt that won’t work with PingViewer or recent versions of QGC, so you’ll need to swap that out. This tutorial covers building Qt 5.12.X on RPi, so can hopefully be swapped in. For the extras you’ll need gstreamer (video) and the VC4 driver (required for RPi4), and you may also need X11 (for application windows being a thing), and possibly Wayland. I’m unsure about the others. I’d also recommend you install the Ubuntu requirements from the normal QGC build guide (“Install Qt” section, point 3.) - they may fix the missing text-to-speech issue mentioned in the RPi guide, and possibly avoid some other potential bugs.

The build instructions for PingViewer are here - you’ll likely want/need to use the “Building with terminal” section.

Please note this is very much me trying to find the best and most relevant instructions I can, and I can’t make any guarantees about it working. I’m tempted to try the process myself out of curiosity, but don’t have the time or equipment to do so at the moment, and likely won’t for the next couple of weeks at least. This is also outside my working role (it’s the weekend for me, and building QGC and PingViewer on a Raspberry Pi aren’t options officially supported by Blue Robotics).

Unfortunately I don’t have much experience with SBCs outside of the Raspberry Pi, so don’t know enough of what I’m talking about to make such a recommendation, and don’t have the time to research it at the moment.

In saying that, as far as I’m aware it’s possible to run Windows or Ubuntu on a Raspberry Pi 4. I’m unsure what the implications are there with respect to library compatibility when trying to install QGC or PingViewer, but it might be worth a try.

Thanks Eliot; that is what I was starting to suspect. I’m not a software engineer and not prepared to spend the time on this approach. It appears I’m digging my hole in rock, not sand. Please don’t waste your time on this.

Win10 seems to be what people are using (I invite correction), and if this is the case I can build a PC into my console; I have built computers in the past, starting with a motherboard. I will continue with my i7 Win10 laptop to make sure the camera and PingViewer work.
Peter
