Conversation
lulf
left a comment
Nice proposal! Left some comments inline as it feels like there are alternatives that could lead to less coupling.
> In Drogue IoT, we don't have an edge orchestration layer, as we intend to integrate with an existing solution for edge workload distribution and management.
>
> However, we do want to distribute edge workload, and manage this, possibly through Drogue IoT from the cloud side.
Why do we want to do both workload configuration and deployment through drogue cloud? What are the alternatives and why is it better than the alternatives? (Just to make sure we got this written down)
As an example, what if we instead created an API in drogue cloud to retrieve the workload configuration, and made agents for the different edge platforms (like kanto?) that connect to drogue cloud to retrieve the configuration and "deploy" the workload as they best see fit?
I had some thoughts on that in the later sections. So the actual deployment should be handled by some system we integrate with (e.g. OCM/ACM). If we focus on Kubernetes as a deployment model, that will make our life easier, I guess.
If we define "general purpose workload", we would create our own workload model, and would need to reconcile/map this to all the others we would want to support (Kubernetes, Kanto, ioFog, ...). I don't really want to invent the "Drogue IoT deployment" model. Which would probably need to be a subset of all the others.
Instead, I would reconcile this way: [OPC UA, BLE, XYZ] -> Kubernetes --(OCM)--> Edge.
If someone wants to use a different tool than OCM/ACM for edge connectivity, that would only require re-writing the "OCM" part.
If someone wants to use a different deployment model than Kubernetes, that would indeed mean re-writing all the reconcilers.
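To make the reconciliation chain above concrete, here is a hedged sketch of what the "Kubernetes --(OCM)--> Edge" leg might look like: a reconciler renders an OPC UA workload into a plain Kubernetes `Deployment`, wrapped in an OCM `ManifestWork` so the hub pushes it to an edge cluster. All names, labels, the namespace, and the container image are illustrative assumptions, not the RFC's actual schema.

```yaml
# Hypothetical sketch only: the Drogue-side reconciler would generate
# something like this; none of these names come from the RFC itself.
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: opcua-connector          # illustrative name
  namespace: edge-cluster-1      # OCM convention: namespace of the target managed cluster
spec:
  workload:
    manifests:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: opcua-connector
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: opcua-connector
          template:
            metadata:
              labels:
                app: opcua-connector
            spec:
              containers:
                - name: connector
                  # hypothetical image reference
                  image: ghcr.io/drogue-iot/opcua-connector:latest
```

Swapping OCM for another edge-connectivity tool would mean replacing only the `ManifestWork` wrapper; the inner `Deployment` (the Kubernetes deployment model) would stay the same.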
Although I understand the current design and agree with the approach, I still think the 'alternatives' at the end don't address the 'why do we want to distribute edge workloads using Kubernetes' question, which I think should be answered at the top.
Basically I'd like your reply to my comment to be in the RFC :)
I elaborated a bit on "why kubernetes" at the end of the document, and linked to that from the top section. Does that go in the right direction?
> The user defines some use case specific edge workload. Like with the OPC UA example:
>
> ```yaml
Nope, that should be part of the device. I will make that more clear.
Force-pushed from e542469 to f027144
Should this be merged, or should we think some more about how the flotta pattern fits?
Readable version: https://github.com/drogue-iot/rfcs/blob/feature/edge_workload_1/active/0017-edge-workload.md