You need to arm the vehicle before it will allow you to control the motors. There are also various failsafes that you’ll need to either account for or disable for continuous control.
This example could be a useful reference, and includes some relevant notes at the bottom.
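For a very rough idea of what the arming step looks like with pymavlink (a minimal sketch - the connection string is a placeholder, and note that RC overrides generally need to be re-sent regularly so the pilot input failsafe doesn’t trigger):

```python
from pymavlink import mavutil

# Placeholder endpoint - use whatever UDP/serial endpoint your setup exposes.
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()

# Arm the vehicle (param1: 1 = arm, 0 = disarm)
master.mav.command_long_send(
    master.target_system, master.target_component,
    mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM,
    0,
    1, 0, 0, 0, 0, 0, 0)
master.motors_armed_wait()  # block until the autopilot reports it is armed
```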
Are you creating an H.264-encoded video stream that’s being served at port 4777? If not, you’ll need to either do that, or change which port the code is looking at or which encoding it’s trying to parse the stream as.
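As a hedged sketch of reading such a stream (it assumes the stream is RTP-wrapped H.264 arriving on UDP port 4777, and an OpenCV build with GStreamer support - both assumptions about this particular setup):

```python
import cv2

# Assumed pipeline - adjust the port/payload if your stream is produced differently.
pipeline = (
    'udpsrc port=4777 ! application/x-rtp,payload=96 ! '
    'rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink drop=true'
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('stream', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
```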
Hi @akang-211 -
You may have an easier time working with the video stream in OpenCV if you configure the stream to be RTSP, and not a simple UDP stream…
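Once the stream is served over RTSP, reading it in OpenCV can be as simple as the sketch below (the URL is a placeholder - substitute whatever address your RTSP server actually advertises):

```python
import cv2

# Placeholder RTSP URL - replace with the address your stream is served at.
cap = cv2.VideoCapture('rtsp://192.168.2.2:8554/video_stream_0', cv2.CAP_FFMPEG)
ok, frame = cap.read()
if ok:
    print('Got a frame of shape', frame.shape)
cap.release()
```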
If I burn this Docker image onto it, will it overwrite the Raspberry Pi system I already have? (My idea is to use the raspberry-oak system officially provided by OAK, and then put the BlueOS docker image onto it, so that I may be able to use my original OAK code on the Raspberry Pi.)
Hi @akang-211 -
Can you provide more context, like a link to the “official” OAK software you’re referring to? If you flash the SD card it will overwrite any existing OS - you do this with the .img file; the .tar is used for offline updates of BlueOS.
Your original code should be executed within a docker container, as part of a BlueOS extension. Have you followed the documentation on this process linked previously?
You could also use an existing OAK-D focused Extension like this one or this other one as a basis for several of the relevant steps.
Note also that processing frames on the Raspberry Pi requires a lot of data bandwidth into the Pi, and generally quite a lot of processing capacity that may be better directed to other services and tasks. It also typically adds latency to the stream, especially if all you’re doing on the Pi is overwriting image data with annotations and then re-encoding the stream to send it elsewhere.
If you want to install BlueOS as secondary functionality on an existing operating system image then you likely want to use the install script rather than trying to manually install a BlueOS docker image or something. That said, we’d generally recommend installing a BlueOS image and then running other things in parallel via the BlueOS Extension system, so that it doesn’t interfere with the core BlueOS services, and so the setup can be more easily reproduced and shared to other BlueOS devices.
The Raspberry Pi operating system images that get flashed onto an SD card are not the same as Docker images (which require a base operating system to be already installed, as well as Docker), so this question doesn’t make much sense.
Now I can use the algorithm I mentioned before on the Raspberry Pi, but the inference speed is too low. Is there any way to improve the inference speed? The following is a video of me using the Raspberry Pi for inference (the inference speed is 2 frames/second, and the saved video is 30 frames/second)
The OAK series cameras are made for running ML algorithms, and have low-latency access to the raw frame data (before it loses data in the encoding process, and without needing to decode it). If there’s some way you can run your processing on the camera then that would definitely be preferable, and ideally you can completely avoid decoding or processing the frames on the Raspberry Pi at all, and just pass them through to the visualisation endpoint.
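As a rough sketch of what running the processing on the camera can look like with the depthai Python API (assuming a depthai 2.x install and a MobileNet-style compiled .blob model - both assumptions, not details from this thread):

```python
import depthai as dai

# Build a pipeline that runs detection on the OAK itself, so the Raspberry Pi
# only receives small detection results instead of decoding full frames.
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)   # network input size (assumption)
cam.setInterleaved(False)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath('model.blob')   # placeholder path to a compiled model
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName('detections')
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue('detections', maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            print(det.label, det.confidence, det.xmin, det.ymin, det.xmax, det.ymax)
```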
I want to perform underwater instance segmentation and obstacle ranging, so I need to use a binocular (stereo) camera. My previous OAK camera has burned out, and now I want to buy an ordinary binocular camera (not too expensive). Which one do you recommend? Or will any ordinary binocular camera on the market be fine?
Hi @akang-211 -
I’ve used this stereocamera in the past, but you’ll need to point it through a flat end-cap, and calibrate it with a checkerboard (underwater!). It provides a single, very wide image that contains both the left and right views, so you don’t need to worry about syncing the streams - just cut it in half!
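Splitting that combined frame into left and right views is then straightforward (a sketch - the device index is a placeholder for however the camera shows up on your system):

```python
import cv2

cap = cv2.VideoCapture(0)  # placeholder device index for the stereo camera
ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    left = frame[:, :w // 2]    # left half of the side-by-side image
    right = frame[:, w // 2:]   # right half
cap.release()
```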
H.264 is preferable - it uses dramatically less bandwidth, and is supported natively by the MAVLink Camera Manager in BlueOS.
Unfortunately, most binocular cameras on the market now output in MJPG format (including the one you mentioned). How can an MJPG-format stream be pushed with lower latency?
When you used the camera you mentioned, did you connect it directly to the computer for calibration, or install it on the Raspberry Pi running the ArduSub system and calibrate it there? I tried it today: I connected the MJPG-output camera to the Raspberry Pi and then used OpenCV on the computer to obtain the video stream over RTSP, and it almost disconnected halfway through.
Why is the delay so large when I use OpenCV to get the RTSP video stream (so large that it almost disconnects), while the delay is so small when I use QGC to view the video stream?
When I use ffmpeg to obtain the RTSP video stream encoded as MJPG, the video stream shows a timeout. I then checked the process and the result is as shown below. Is there any good way to obtain an RTSP video stream in MJPG encoding format?
I use pymavlink to control my ROV, but the result is as shown below.
The set_pwm parameters used are (6, 1550); (5, 1450); (4, 1550); and (5, 1550). Normally, channel 5 is for forward direction and channel 4 is for yaw, but this doesn’t seem to be the case. Furthermore, the same situation occurs for all three of them. I tried the example you gave in a previous post, but the ROV didn’t move. Do you have any good examples of using pymavlink to control ROV motion?
Following is my code
"""
如何使用 RC_CHANNEL_OVERRIDE 消息强制 Ardupilot 中的输入通道的示例。这些有效地替换了输入通道(来自操纵杆或无线电),而不是通往推进器和伺服器的输出通道。
"""
# Import mavutil
from pymavlink import mavutil
import time
# Create the connection
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
# Wait a heartbeat before sending commands
master.wait_heartbeat()
# Create a function to send RC values
# More information about Joystick channels
# here: https://www.ardusub.com/operators-manual/rc-input-and-output.html#rc-inputs
def set_rc_channel_pwm(channel_id, pwm=1500):
    """ Set RC channel pwm value
    Args:
        channel_id (int): Channel ID
        pwm (int, optional): Channel pwm value 1100-1900
    """
    if channel_id < 1 or channel_id > 18:
        print("Channel does not exist.")
        return

    # Mavlink 2 supports up to 18 channels:
    # https://mavlink.io/en/messages/common.html#RC_CHANNELS_OVERRIDE
    rc_channel_values = [65535 for _ in range(18)]
    rc_channel_values[channel_id - 1] = pwm
    master.mav.rc_channels_override_send(
        master.target_system,        # target_system
        master.target_component,     # target_component
        *rc_channel_values)          # RC channel list, in microseconds.
# Arm the vehicle
master.mav.command_long_send(
    master.target_system,
    master.target_component,
    mavutil.mavlink.MAV_CMD_COMPONENT_ARM_DISARM,
    0,
    1, 0, 0, 0, 0, 0, 0)

# Pitch control (channel 1, >1500 pitches up, <1500 pitches down)
#set_rc_channel_pwm(1, 1600)
#time.sleep(0.1)

# Roll control (channel 2, >1500 rolls right, <1500 rolls left)
#set_rc_channel_pwm(2, 1600)

# Yaw control (channel 4, >1500 yaws right, <1500 yaws left)
#while True:
#    set_rc_channel_pwm(4, 1550)
#    time.sleep(0.2)

# Forward control (channel 5, >1500 moves forward, <1500 moves backward)
while True:
    set_rc_channel_pwm(5, 1550)
    time.sleep(0.2)

# Lateral control (channel 6, >1500 moves right, <1500 moves left)
#set_rc_channel_pwm(6, 1550)
I want to use the camera as the visual input, use depth estimation and instance segmentation to obtain the distance, position, and size of obstacles, and then use that as the basis for obstacle avoidance, controlling the ROV with pymavlink for autonomous obstacle avoidance - but right now I can’t even control the most basic movements.
Does the ROV operate correctly when driven manually with a game controller, in stabilize and depth hold modes? It looks to be going crazy in the way that vehicles with uncalibrated motion sensors or incorrect thruster direction mappings tend to behave in the modes that automatically hold vehicle pose.
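If it helps while debugging, the flight mode can also be set from pymavlink before sending overrides (a sketch - it assumes the same connection endpoint as the script above, and that MANUAL is the mode you want to test in):

```python
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')  # same placeholder endpoint as above
master.wait_heartbeat()

# Put the vehicle in MANUAL so no controller is trying to hold its pose,
# then confirm which mode the autopilot now reports.
master.set_mode('MANUAL')
hb = master.recv_match(type='HEARTBEAT', blocking=True)
print('Mode is now:', mavutil.mode_string_v10(hb))
```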