Can you tell me how the use of a Docker engine has helped or not, versus simply using systemd-style service processes and managing the various components that way? Has the overhead of Docker containers - making and managing them - been worth it for the gains? Have you found the systems clients put together to be more reliable, flexible, etc.? I have only used containers for managing the development environments I work in, and I'm a bit skeptical it makes sense to embed a Docker engine into an embedded device. And if it does, are there any rules of thumb you apply for where a container is needed or not? Where are the boundaries in a typical architecture that say a container is needed for a subsystem? In my client's case it's an underwater rover, so we have 7 thrusters of 2 types, lights, and implements we need to operate with a closed-loop control system, plus sonar, a video feed, etc.
Hi @nnewell,
I’ve moved this part of your comment into this new post, because it was on a different topic.
Absolutely. A core reason we switched to a containerised approach was to enable users to install more than one adjustment/expansion at a time, in a replicable and shareable way. We’ve also seen a drastic decrease in the frequency of “something wasn’t working, and I don’t know for sure how to revert the changes I made, so I had to reflash my SD card to fix it”.
While there can be development challenges with getting a container set up as intended, the possibility of dependency isolation and the convenience features for managing the container (including clean, largely independent updates) are significant enablers for end users, particularly those who aren’t inclined to follow a list of terminal commands to start/stop a service or chase down error messages when there’s a conflict or something isn’t working.
It’s also not like Linux terminal programs are consistently less arcane than Docker configuration - some people just happen to be more familiar with them (while others treat them as an insurmountable barrier to entry).
I’d direct you to our development documentation if you’re interested in more details.
It’s definitely heavier than a fully memory- and performance-optimised system focused on an application-specific set of requirements could be, but we’re not catering to a well-defined set of use-cases, or a userbase with a consistent background/training, so flexibility and accessibility are critically important considerations.
Need is a strong word. I’d be inclined to recommend as few containers as you can reasonably get away with, without needlessly encumbering or overloading a meaningful portion of your userbase. One Extension that includes drivers for half of the globally available sensors and peripherals is very likely overkill and difficult to configure, but if there’s a family of sonars that need similar setup then it probably makes sense to avoid duplicating work and just have a single Extension for them.
From another perspective, if there are subsystems that are unrelated to each other, and likely desirable to update independently of each other, then it may be worth having them in separate containers, though that does depend on the nature of the software involved.
In a typical BlueOS setup this would be ~3 containers:
- Bootstrap to start everything, and handle system failures
- Core for running the autopilot firmware (which manages the control system and outputs), and the video pipeline
- If your autopilot is not running a supported firmware type, it may need to be added separately, in which case you would need another container
- An Extension of some kind for the sonar integration
Hi Eliot,
Thanks for answering so many of my questions. Looking at some of the extensions, it looks like data exchange between containers is mostly done through some sort of predefined socket - is that true, and what other methods are used? (Where would I find the best information on how these sockets should be defined?) Specifically, is there a database commonly used as an intermediate? For instance, a driver might collect data and update a database table field(s). This would allow an application layer to be decoupled from the driver and to process the data whenever it changes, for its own purposes. It would also allow multiple application clients to access data as needed without any interaction other than with the database, and allow the overall system to run asynchronously.
Thanks
Because BlueOS has a web interface, it’s common for services (Extensions included) to provide APIs via web technologies (e.g. HTTP requests, and/or websockets), which can then be used by the frontend display, as well as by other services communicating with them directly.
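As a minimal sketch of that pattern (the endpoint name and payload here are hypothetical, not part of any actual BlueOS or Extension API), one service can expose its state over HTTP and another service - or the frontend - can query it, using only the Python standard library:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical sensor service: serves its latest reading as JSON,
# the same general shape an Extension's small HTTP API might take.
class SensorHandler(BaseHTTPRequestHandler):
    latest = {"depth_m": 12.4, "temperature_c": 8.1}  # placeholder data

    def do_GET(self):
        if self.path == "/v1/reading":
            body = json.dumps(self.latest).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo's output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), SensorHandler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second service (or the web frontend) consuming the API:
with urlopen(f"http://127.0.0.1:{server.server_port}/v1/reading") as resp:
    reading = json.loads(resp.read())
print(reading["depth_m"])

server.shutdown()
```

In a containerised setup, the consumer would address the producer by its hostname or mapped port rather than `127.0.0.1`, but the request/response shape is the same.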
It would also be possible for some Extensions to bind to the same region of the file system and share specific files directly, though that comes with the normal complications of shared file access between programs.
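One common way to soften those complications (a generic sketch, not something a particular Extension does) is to write shared files atomically: write to a temporary file in the same directory, then rename it over the target, so a reader in another container never sees a half-written file:

```python
import json
import os
import tempfile

def write_shared_state(path: str, state: dict) -> None:
    """Atomically replace `path` with the JSON-serialised state.

    os.replace() is atomic on POSIX filesystems, so a concurrent reader
    sees either the old file or the new one, never a partial write.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
        os.replace(tmp_path, path)  # atomic swap into place
    except BaseException:
        os.unlink(tmp_path)
        raise

def read_shared_state(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

# Demo: a "producer" container writes, a "consumer" container reads.
# In practice both would bind-mount the same host directory.
shared = os.path.join(tempfile.gettempdir(), "rover_state.json")
write_shared_state(shared, {"lights_on": True, "thruster_count": 7})
state = read_shared_state(shared)
print(state["thruster_count"])
```

This only guards against torn reads; if multiple writers are involved you still need some coordination (e.g. a single designated writer, or file locking).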
I’m not sure on that one, I’m afraid. Our docs on that front are somewhat sparse at the moment, though many of the publicly available Extensions are open source, so they can be referred to as examples.
I’m not aware of any cases where that’s being done, but I also don’t see why it couldn’t be.
The closest I can think of is the BlueOS Bag of Holding service, which is used by both internal services and external programs like Cockpit, though it’s more intended for storing configuration details than user data.
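If you did want a database as the intermediary, a lightweight sketch of the pattern you describe - hypothetical, not something BlueOS currently ships - could use SQLite on a shared (e.g. bind-mounted) file, with the driver inserting rows and application clients querying independently:

```python
import sqlite3

# Hypothetical schema: a driver appends sensor readings, and any number
# of application clients query the table without touching the driver.
def init_db(conn: sqlite3.Connection) -> None:
    conn.execute(
        """CREATE TABLE IF NOT EXISTS readings (
               id INTEGER PRIMARY KEY AUTOINCREMENT,
               sensor TEXT NOT NULL,
               value REAL NOT NULL,
               ts REAL NOT NULL
           )"""
    )
    conn.commit()

def driver_write(conn: sqlite3.Connection, sensor: str, value: float, ts: float) -> None:
    conn.execute(
        "INSERT INTO readings (sensor, value, ts) VALUES (?, ?, ?)",
        (sensor, value, ts),
    )
    conn.commit()  # committing makes the row visible to other connections

def client_latest(conn: sqlite3.Connection, sensor: str):
    return conn.execute(
        "SELECT value, ts FROM readings WHERE sensor = ? "
        "ORDER BY ts DESC LIMIT 1",
        (sensor,),
    ).fetchone()

# Demo with an in-memory database; a real setup would point the driver
# and each client at the same database file on a shared mount.
conn = sqlite3.connect(":memory:")
init_db(conn)
driver_write(conn, "sonar_range_m", 3.2, ts=100.0)
driver_write(conn, "sonar_range_m", 2.9, ts=101.0)
latest = client_latest(conn, "sonar_range_m")
print(latest)
```

SQLite only allows one writer at a time, which suits the single-driver, many-readers arrangement you describe; a heavier client/server database would only be warranted if multiple containers needed to write concurrently.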
It’s perhaps worth noting that most Extensions at the moment are either fully independent, or communicate with BlueOS services but not often with other Extensions, so the space hasn’t been deeply explored yet. We did have many internal discussions when setting up the Extensions system about how to maintain inter-Extension compatibility, with versioning strategies and the like, but there hasn’t yet been a meaningful need to actually implement that.