OpenCV AI Kit (OAK) - 4K+stereo ML camera options

EDIT: I’ve changed the title from its initial OAK-D-Lite Kickstarter focus, because this post has turned into a more general discussion of OAK camera integrations.

Hey everyone,

Just letting people know there’s a Kickstarter that’s just begun for a pretty awesome new 4K@30fps + stereo camera with ML capabilities and on-chip encoding (H264/H265/MJPEG). It’s from a partnership between OpenCV and Luxonis, with first units expected to ship in December :smiley:

It builds on the OpenCV AI Kit Depth (OAK-D) variant they kickstarted last year and released this year, but is lower cost, lower weight, and smaller, which makes it better suited to applications like robotics. The campaign page has heaps of application videos, along with specs (easiest to find in the FAQs):

If it’s relevant/of interest, they’ve provided a 3D model of it on their github.

A couple of disclaimers:

  • I have personally backed this campaign (it’s already fully funded, so any additional incentive is just to achieve stretch goals).
  • Kickstarter campaigns always come with some risk, particularly given the widespread logistics issues in the technology industry at the moment. For some context, the initial OpenCV AI Kit kickstarter last year had some significant delays in delivery (due pretty much entirely to those logistics issues), so while it’s very likely the devices in this campaign will get made and shipped, they may arrive later than estimated.
    From the Comments section it seems they started sourcing components for this at the start of the year, which is why they’re expecting the early backer rewards to be deliverable in December. Once the backer pledge numbers go past what they’ve pre-stocked for, there’ll be a new offer for units expected to arrive at some point in 2022 (not yet determined when).

This looks quite interesting. I imagine you could place this behind a 4" flat port and scale the results to account for refraction.

I’m curious if any projects have used other stereo cameras behind a flat port? How have the results been?


That’s what I’m planning to try in my own testing :slight_smile:

I haven’t yet looked into other systems like that, so not sure if they exist / how the results have been if they do. Interested if others have some experience with it :slight_smile:

At this point I’ve played around a bit with simulating the distortion to determine how best to adjust for it, but there are a few months yet before that investigation particularly needs to be completed. It’s also interesting to consider that the refraction differs slightly depending on how salty the water is, so depending on how significant that effect is, it might be necessary to calibrate for each dive to get the last decimals of accuracy. One positive, if salinity refraction is actually significant enough to be detectable, is that it would allow visually estimating salinity levels, which would be a super neat application.
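To get a feel for the size of the effect, here’s a rough back-of-envelope sketch. The linear salinity-to-index model and its coefficient are illustrative assumptions (real seawater refractive index also depends on temperature and wavelength), not a calibrated model:

```python
# Rough look at how salinity shifts the refractive index of water, and hence
# the apparent distance of objects seen through a flat port (small-angle
# approximation). The constants below are illustrative assumptions only.

N_FRESH = 1.333       # fresh water, roughly 20 C, visible light
DN_PER_PSU = 0.0002   # assumed index increase per practical salinity unit

def refractive_index(salinity_psu):
    """Assumed linear model of refractive index vs salinity."""
    return N_FRESH + DN_PER_PSU * salinity_psu

def apparent_distance(true_distance_m, salinity_psu):
    """Through a flat port, submerged objects appear closer by ~a factor of n."""
    return true_distance_m / refractive_index(salinity_psu)

for s in (0, 35):  # fresh water vs typical open-ocean seawater
    print(f"{s:>2} PSU: n = {refractive_index(s):.4f}, "
          f"a 2.0 m target appears at {apparent_distance(2.0, s):.4f} m")
```

With these assumed numbers, going from fresh water to 35 PSU shifts the apparent position of a 2 m target by under a centimetre, which lines up with the idea that salinity only matters for the last decimals of accuracy.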


Are there any updates on this effort? We were thinking about trying to do this since the OAK-D-Lite is now available for purchase.

Hi @lsiemann, welcome to the forum :slight_smile:

I’ve got a few different OAK cameras, but haven’t yet had much of a chance to test them / get them up and running. Longer term I’m thinking it would be really cool (and useful) to have a general OAK camera integration extension for BlueOS, but doing so would likely need to be on my own time, and I don’t currently have much of that to spare.

If you do start working on an integration you’re welcome to discuss that here - I’m happy to try to help out where possible / relevant :slight_smile:

We have had to postpone our activities related to using OAK-D (Lite) for underwater tasks for way longer than I had anticipated, but will hopefully be able to spend a good amount of time with the OAK-Ds in the autumn (a prioritized task).

The OAK-D (Lite) is now the standard camera on the newest version of the ROS-based TurtleBots, and has gotten a lot of traction lately, with a growing community and new software modules being released.

Initially we will just put the cameras in an enclosure to make them waterproof and pressure tolerant, but I also want to modify the cameras so that they can be used without an extra enclosure.

Luxonis have proven that they are able to create specialized versions of their cameras if a market is identified – and I believe the underwater market for OAK-D could be very appealing.

They are also actively working together with Arducam, ensuring that various sensor modules can be used.

We are really looking forward to the next generation of OAK-Ds, due to be released in the second half of this year. They will use the Keem Bay AI module and bring much more on-device capability to the cameras, including an ARM core for general compute.


Like most contributors to this thread I’ve had various Luxonis systems sitting on my desk for a while, but finding the time to actually do something with them has been the hard part.

I’ve eventually got round to putting together a first prototype drop-camera that incorporates the modular OAK-FFC-3P (RGB only), so thought I’d share a few images and a brief description of the components used:

  • Raspberry Pi Zero 2
  • Luxonis OAK-FFC-3P
  • Locking 2" by 150mm aluminium enclosure
  • Front end-cap and 4-hole end-cap
  • GoPro
  • Fathom Tether (4 UTP)
  • MJF printed mount and arm (also works as cable clamp / strain relief)

I’m using a 3mm borosilicate lens instead of the acrylic one, so I thought I’d design an MJF printed shroud to help protect it from any bumps and knocks it would get as a drop-cam system. This went together really well, so I also flipped the design round and spaced off a protection ring for the BAR30, the Celsius, and the ever excellent 6-Pin Hybrid connector from @damonblue.

The 2" enclosure was left over from when I worked at a government agency, so I thought I’d use it up for this project and look at getting a 3" or 4" in the future and add in the mono camera pairs then. Although in the end I spent many an hour in FreeCAD working out exactly how all the components would fit and work together in the 2" (it’s pretty cramped in there!).

The lack of air volume and the fact I had to cut down the heatsink fins on the Luxonis board does mean it gets pretty hot in there, and once the Movidius is running a model there is a lot of heat generated.

To help combat this I’ve used copper squares on the Luxonis, and brazed a ‘T’ shaped copper heatsink for the Raspberry Pi to act as a thermal mass and sit as close as possible to the aluminium enclosure, helping dissipate heat to the surrounding water. It seems to work well: with a model running, the temperature of both the Luxonis and the Raspberry Pi stays in the 70-75 °C range (I think the Pi starts to throttle its performance at 80 °C).

I have been limited in the amount of in-water testing I’ve been able to do, but pointing the camera at a picture on my laptop (left of image), the model I have loaded onto the unit gives reasonable classification results (right of image).

Much like @kjetilei I’m going to be interested to see where the Keem Bay (RVC3 - VPU) takes things and will probably hold fire on implementing a 3D depth system until it has been released.

I don’t own a BlueROV2, but the camera would sit nicely in the payload skid. As I’m using an SPI-Ethernet converter on the Zero 2, I presume @EliotBR that if it fed into the compact 5-port Ethernet switch, and then via the Fathom-X, the camera would be accessible from the topside computer?



Hi @ZeroBubble, welcome to the forum :slight_smile:

Thanks for sharing your current prototype - it’s always cool to see what others are working on.

Yes, it should be accessible assuming you have set up some kind of IP-based interface/stream for it on your RPi Zero2 :slight_smile:

As above, I’m hoping to eventually have an OAK extension for BlueOS, which would simplify setup for lots of applications, but unfortunately I haven’t yet had a chance to properly work on it.

At the moment I’m thinking the simplest approach will be making a service with an API that allows loading a model onto the OAK device, and creating streams (via pyvirtualcam?) that the BlueOS camera manager can detect and share / allow configuration of. I’m curious as to your current software setup :slight_smile:
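As a starting point, that pipeline could look something like the sketch below. This is only a sketch under assumptions (the function names are mine, and it needs an OAK device connected to actually run); it uses the depthai Python API to pull preview frames and pyvirtualcam to expose them as a virtual camera device:

```python
# Sketch: forward OAK colour-camera frames to a virtual camera device.
# Requires the `depthai` and `pyvirtualcam` packages plus a connected OAK
# device to actually run; imports are done lazily so the file parses without
# them. Function names here are illustrative, not an existing BlueOS API.

def build_rgb_pipeline(width=1280, height=720, fps=30):
    """Create a minimal DepthAI pipeline that outputs colour preview frames."""
    import depthai as dai
    pipeline = dai.Pipeline()
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(width, height)
    cam.setFps(fps)
    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("rgb")
    cam.preview.link(xout.input)
    return pipeline

def stream_to_virtual_camera(width=1280, height=720, fps=30):
    """Loop forever, sending OAK preview frames to a virtual camera."""
    import depthai as dai
    import pyvirtualcam
    with dai.Device(build_rgb_pipeline(width, height, fps)) as device, \
         pyvirtualcam.Camera(width=width, height=height, fps=fps) as cam:
        queue = device.getOutputQueue("rgb", maxSize=4, blocking=False)
        while True:
            frame = queue.get().getCvFrame()  # BGR, shape (height, width, 3)
            cam.send(frame[:, :, ::-1])       # pyvirtualcam expects RGB
            cam.sleep_until_next_frame()

# With hardware attached, stream_to_virtual_camera() would run the forwarding
# loop, and the virtual device should then be discoverable like a webcam.
```

A model-loading endpoint would slot in by adding a NeuralNetwork node to the pipeline before `dai.Device` is created, but I’ve left that out to keep the sketch minimal.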

Following up on this, I realised it may be possible to use USB over IP (extension discussed here), and it worked great! :smiley:

Note that USB over IP does not do any compression, so it’s still recommended to set up an efficiently encoded stream to avoid excessive bandwidth usage. It also doesn’t integrate with control station software that expects the stream to come from the vehicle, which may be a problem for some applications. For all other applications though, this is a super easy way to connect to an OAK device through BlueOS (or just through a Raspberry Pi more generally) :smiley:

It may also be worth checking out Roboflow - I haven’t looked into it but it could simplify some setup/configuration.


Hi @EliotBR, I went down the route of using the green and orange pairs in the tether for IP traffic, with the brown and blue pairs carrying DC power. My current WIP topside is a LAN connection (RPi Zero 2 → topside RPi), with that connection then bridged over WiFi. The picture below illustrates the connections; ideally I’m working towards a topside that’s sealed away in an IP67 case, which you connect to over WiFi with a smartphone or tablet.

I’ll have to look at creating streams / pyvirtualcam, as at the minute I VNC into the unit for control and to view the local video. Not ideal, but it seems to work reasonably well, and the SPI-Ethernet conversion provides the bottleneck in the pipe (I think it only gives 3-5 MB/s tops).
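A quick back-of-envelope check shows why an encoded stream matters at that link speed. The link figure below is an assumption based on the 3-5 MB/s mentioned above, and the H.264 bitrate is just a typical 720p30 target, not a measurement:

```python
# Back-of-envelope bandwidth check for an SPI-Ethernet bottleneck.
# Figures are assumptions for illustration, not measurements.

def raw_bitrate_mbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video bitrate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

def fits_link(bitrate_mbps, link_mbps):
    """True if the stream fits within the available link bandwidth."""
    return bitrate_mbps <= link_mbps

LINK_MBPS = 40  # ~5 MB/s, the upper end of the SPI-Ethernet figure above

raw = raw_bitrate_mbps(1280, 720, 30)  # uncompressed 720p30
h264 = 4                               # assumed typical 720p30 H.264 target

print(f"raw 720p30:   {raw:.0f} Mbps -> fits link: {fits_link(raw, LINK_MBPS)}")
print(f"H.264 720p30: {h264} Mbps -> fits link: {fits_link(h264, LINK_MBPS)}")
```

Uncompressed 720p30 works out to roughly 660 Mbps, orders of magnitude over the link, while an encoded stream fits comfortably, which is the same reasoning behind the bandwidth note on the USB-over-IP approach earlier in the thread.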

I’ve some limited experience with Roboflow, and have retrained their default Aquarium model so I can deploy it locally to my OAK (Roboflow provide instructions in their guide, How to Train and Deploy Custom Models to Your OAK). The video below shows the camera pointing at a video feed (left) and the VNC connection with the model running and inferencing reasonably well on jellyfish. A synthetic example for sure, but I’m hoping to get the camera into a marine / aquarium environment soon and gather some ‘real world’ data.

I’ve not really had time to look at it fully, but Roboflow have also just released semantic segmentation capability: Semantic Segmentation for Labeling, Training, Deployment. The Smart Polygon tool on the fish example looks like a great time saver!