You need to arm the vehicle before it will allow you to control the motors. There are also various failsafes that you’ll need to either account for or disable for continuous control.
This example could be a useful reference, and includes some relevant notes at the bottom.
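For a sense of what that involves, arming over MAVLink can look roughly like the following (a minimal sketch using pymavlink; the connection string is an assumption, and failsafe handling is only hinted at in the comments - check the relevant ArduSub failsafe parameters for your own setup):

```python
from pymavlink import mavutil

# Connect to the vehicle's MAVLink stream (endpoint is an assumption -
# adjust to match how your companion computer exposes MAVLink).
master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
master.wait_heartbeat()

# Arm the vehicle - motor commands are ignored until this succeeds.
master.mav.command_long_send(
    master.target_system,
    master.target_component,
    mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM,
    0,      # confirmation
    1,      # 1 = arm, 0 = disarm
    0, 0, 0, 0, 0, 0,
)
master.motors_armed_wait()
print("Vehicle armed")

# Note: for continuous control you'll also need to keep sending commands /
# heartbeats regularly, or review the failsafe parameters, so the autopilot
# doesn't trigger a failsafe and disarm or stop the motors.
```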
Are you creating an H.264-encoded video stream that’s being served at port 4777? If not, you’ll need to either do that, or change which port the code is looking at, or which encoding it’s trying to parse the stream as.
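For reference, a receiving pipeline in OpenCV could look something like this (a sketch that assumes the stream is RTP-wrapped H.264 arriving on UDP port 4777 and that your OpenCV build has GStreamer support; adjust the caps if your stream is packaged differently):

```python
import cv2

# Receive RTP/H.264 on UDP port 4777, decode, and hand raw BGR frames to OpenCV.
pipeline = (
    "udpsrc port=4777 "
    "! application/x-rtp,media=video,clock-rate=90000,encoding-name=H264 "
    "! rtph264depay ! h264parse ! avdec_h264 "
    "! videoconvert ! video/x-raw,format=BGR "
    "! appsink drop=true sync=false"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) == ord("q"):
        break
```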
Hi @akang-211 -
You may have an easier time working with the video stream in OpenCV if you configure the stream to be RTSP, and not a simple UDP stream…
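Opening an RTSP stream from OpenCV is then fairly direct (the URL below is a placeholder assumption - use the RTSP address that BlueOS / the MAVLink Camera Manager reports for your configured stream):

```python
import cv2

# Placeholder RTSP URL - substitute the address shown in BlueOS's video page.
cap = cv2.VideoCapture("rtsp://192.168.2.2:8554/your_stream_name", cv2.CAP_FFMPEG)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... process the frame here ...
```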
If I flash this Docker image onto it, will it overwrite the existing system on my Raspberry Pi? (My idea is to use the Raspberry Pi image officially provided by OAK, and then put the BlueOS Docker image onto it, so that I may be able to keep using my original OAK code on the Raspberry Pi.)
Hi @akang-211 -
Can you provide more context, like a link to the “official” OAK software you’re referring to? If you flash the SD card it will overwrite any existing OS - you do that with the .img file; the .tar is used for offline updates of BlueOS.
Your original code should be executed within a docker container, as part of a BlueOS extension. Have you followed the documentation on this process linked previously?
You could also use an existing OAK-D focused Extension, like this one or this other one, as a basis for several of the relevant steps.
Note also that processing frames on the Raspberry Pi requires a lot of data bandwidth to get them there, and generally quite a lot of processing capacity that may be better directed to other services and tasks. It also typically adds latency to the stream, especially if all you’re doing on the Pi is overwriting image data with annotations and then re-encoding the stream to send it elsewhere.
If you want to install BlueOS as secondary functionality on an existing operating system image then you likely want to use the install script rather than trying to manually install a BlueOS docker image or something. That said, we’d generally recommend installing a BlueOS image and then running other things in parallel via the BlueOS Extension system, so they don’t interfere with the core BlueOS services, and so the setup can be more easily reproduced and shared to other BlueOS devices.
The Raspberry Pi operating system images that get flashed onto an SD card are not the same as Docker images (which require a base operating system to be already installed, as well as Docker), so this question doesn’t make much sense.
Now I can run the algorithm I mentioned before on the Raspberry Pi, but the inference speed is too low. Is there any way to improve it? The following is a video of me running inference on the Raspberry Pi (the inference speed is 2 frames/second, while the saved video is 30 frames/second).
The OAK series cameras are made for running ML algorithms, and have low-latency access to the raw frame data (before it loses data in the encoding process, and without needing to decode it). If there’s some way you can run your processing on the camera then that would definitely be preferable, and ideally you can completely avoid decoding or processing the frames on the Raspberry Pi at all, and just pass them through to the visualisation endpoint.
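As a rough illustration, running a detection network on the OAK device itself (via Luxonis’s depthai API) and only pulling detection results back to the Pi could look like this - a sketch that assumes you have a .blob model compiled for the camera:

```python
import depthai as dai

# Build a pipeline that runs the neural network on the OAK device itself,
# so the Pi only receives detection results rather than full frames.
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)          # must match the model's input size
cam.setInterleaved(False)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("model.blob")          # assumed: a MobileNet-SSD blob compiled for the device
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            # Normalised bounding boxes and class info, computed on-camera.
            print(det.label, det.confidence, det.xmin, det.ymin, det.xmax, det.ymax)
```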
Why did the mapping of this button change after I switched from a Raspberry Pi 3 to a Raspberry Pi 4? When I operate the joystick now, some motors do not rotate.
The joystick button functions are determined by your control station software, and possibly your flight controller board’s autopilot parameters, so are not directly related to which onboard computer (e.g. Raspberry Pi) you’re using.
That said, Cockpit does synchronise its settings to the onboard computer, so your existing joystick button mappings won’t automatically follow to a new one - they may need to be manually exported and imported using the buttons at the top right of the joystick configuration page:
If you’re using a Navigator as your flight controller board (instead of an independent flight controller like a Pixhawk) then the autopilot firmware and parameters are also stored on the onboard computer, so you’ll need to make sure you’ve copied the parameters over to your new system if you want them to be the same (e.g. by saving them in the old system, and loading them to the new one, via the BlueOS Autopilot Parameters page).
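If you’d prefer to script that backup instead of using the web interface, a rough parameter dump over MAVLink could look like this (a hedged sketch using pymavlink - the connection endpoint and output format are assumptions, and the BlueOS Autopilot Parameters page does the same job without any code):

```python
from pymavlink import mavutil

# Connect to the vehicle's MAVLink stream (endpoint is an assumption).
master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
master.wait_heartbeat()

# Request the full parameter list from the autopilot.
master.mav.param_request_list_send(master.target_system, master.target_component)

params = {}
while True:
    msg = master.recv_match(type="PARAM_VALUE", blocking=True, timeout=5)
    if msg is None:
        break  # no more parameters arriving
    params[msg.param_id] = msg.param_value

# Save to a simple text file so the values can be reviewed / re-loaded later.
with open("params_backup.txt", "w") as f:
    for name, value in sorted(params.items()):
        f.write(f"{name}\t{value}\n")
```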
I want to perform underwater instance segmentation and obstacle ranging, so I need to use a stereo (binocular) camera. My previous OAK camera has burned out, and now I want to buy an ordinary stereo camera (not too expensive). Which one do you recommend? Or will any ordinary stereo camera on the market be fine?
Hi @akang-211 -
I’ve used this stereo camera in the past, but you’ll need to point it through a flat end-cap, and calibrate it with a checkerboard (underwater!). It provides a single, very wide image that contains both the left and right views, so you don’t need to worry about syncing the streams - just cut each frame in half!
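Splitting that side-by-side frame is only a couple of lines in OpenCV (a sketch; the device index is an assumption, and the resolution depends on the camera mode you select):

```python
import cv2

cap = cv2.VideoCapture(0)  # assumed device index for the stereo camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    left, right = frame[:, : w // 2], frame[:, w // 2 :]
    # ... feed left/right into your calibration / stereo matching ...
```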
H.264 is preferable - it uses dramatically less bandwidth, and is supported natively by the MAVLink Camera Manager in BlueOS.
Unfortunately, most stereo cameras on the market now output in MJPG format (including the one you mentioned). How can an MJPG stream be streamed with lower latency?
When you used that camera in the past, did you connect it directly to your computer for calibration, or install it on the Raspberry Pi running the ArduSub system for calibration? I tried it today: I connected the MJPG-output camera to the Raspberry Pi, then used OpenCV on my computer to read the RTSP video stream, and it kept nearly disconnecting partway through.
Why is the delay so large when I use OpenCV to get the RTSP video stream (to the point that it almost disconnects), while the delay is so small when I view the video stream in QGC?
When I use ffmpeg to obtain the RTSP video stream encoded as MJPG, the video stream shows a timeout. I then checked the running process and the result is as shown below. Is there any good way to obtain an RTSP video stream in MJPG encoding format?