# Industrial Edge Pattern
_Red Hat Validated Patterns are predefined deployment configurations designed for various use cases. They integrate Red Hat products and open-source technologies to accelerate architecture setup. Each pattern includes example application code, demonstrating its use with the necessary components. Users can customize these patterns to fit their specific applications._
**Use Case:** Boosting manufacturing efficiency and product quality with artificial intelligence/machine learning (AI/ML) out to the edge of the network.
**Background:** Microcontrollers and other simple computers have long been used in factories and processing plants to monitor and control machinery in modern manufacturing. The industry has consistently leveraged technology to drive innovation, optimize production, and improve operations. Traditionally, control systems operated on fixed rules, responding to pre-programmed triggers and heuristics. For instance, predictive maintenance was typically scheduled based on elapsed time or service hours.
Supervisory Control and Data Acquisition (SCADA) systems have historically functioned independently of a company’s IT infrastructure. However, businesses increasingly recognize the value of integrating operational technology (OT) with IT. This integration enhances factory system flexibility and enables the adoption of advanced technologies such as AI and machine learning. As a result, tasks like maintenance can be scheduled based on real-time data rather than rigid schedules, while computing power is brought closer to the source of data generation.
## Solution Overview
_Figure 1. Industrial edge solution overview._
Figure 1 provides an overview of the industrial edge solution. It is applicable across a number of verticals, including manufacturing.
This solution:
- Provides real-time insights from the edge to the core datacenter
- Secures GitOps and DevOps management across core and factory sites
- Provides AI/ML tools that can reduce maintenance costs
Different roles within an organization have different concerns and areas of focus when working with this distributed AI/ML architecture across two logical types of sites: the core datacenter and the factories (as shown in Figure 2).
- **The core datacenter**. This is where data scientists, developers, and operations personnel apply the changes to their models, application code, and configurations.
- **The factories**. This is where new applications, updates, and operational changes are deployed to improve quality and efficiency in the factory.

_Figure 3. Overall data flows of solution._
Figure 3 provides a different high-level view of the solution with a focus on the two major dataflow streams.
1. Transmitting sensor data and events from the operational edge to the core. The aim is to centralize processing where possible while decentralizing when necessary. Certain data, such as sensitive production metrics, may need to remain on-premises. For example, an industrial oven's temperature curve could be considered proprietary intellectual property of the customer. Additionally, the sheer volume of raw data (perhaps 10,000 events per second) may make transfer to a cloud datacenter impractical due to cost or bandwidth constraints.
   In the preceding diagram, data movement flows from left to right, while in other representations, the operational edge is typically shown at the bottom, with enterprise or cloud systems at the top. This directional flow is often referred to as northbound traffic.
2. Pushing code, configurations, master data, and machine learning models from the core (where development, testing, and training occur) to the edge and shop floors. With potentially hundreds of plants and thousands of production lines, automation and consistency are essential for effective deployment.
   In the diagram, data flows from right to left, and when viewed in a top-down orientation, this flow is referred to as southbound traffic.
## Logical Diagrams
_Figure 5: Industrial Edge solution showing messaging and ML components schematically._
As illustrated in Figure 5, sensor data is transmitted via MQTT (Message Queuing Telemetry Transport) to Red Hat AMQ, which routes it for two key purposes: model development in the core data center and live inference at the factory data centers. The data is then relayed on to Red Hat AMQ for further distribution within the factory and back to the core data center. MQTT is the most commonly used messaging protocol for Internet of Things (IoT) applications.
Apache Camel K, a lightweight integration framework based on Apache Camel and designed to run natively on Kubernetes, offers MQTT integration to normalize and route sensor data to other components.
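
To make this concrete, here is a minimal sketch of such a route in the Camel K YAML DSL. It is illustrative only: the broker address, topic name, and Kafka destination are assumptions, not values taken from the pattern itself.

```yaml
# sensor-normalize.yaml -- hypothetical Camel K integration sketch.
# Assumes an MQTT broker reachable at broker-amq-mqtt:1883 and a Kafka
# topic for normalized readings; adjust both to match your deployment.
- from:
    uri: "paho:factory/line1/temperature?brokerUrl=tcp://broker-amq-mqtt:1883"
    steps:
      - unmarshal:
          json: {}                  # parse the raw JSON sensor payload
      - log: "sensor reading: ${body}"
      - to: "kafka:sensor-temperature?brokers=kafka-bootstrap:9092"
```

A file like this could be deployed with `kamel run sensor-normalize.yaml`, letting the Camel K operator build and run it natively on the cluster.
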
115
+
116
+
The sensor data is mirrored into a data lake managed by Red Hat OpenShift Data Foundation. Data scientists use tools from the open-source Open Data Hub project to develop and train models, extracting and analyzing data from the lake in notebooks while applying machine learning (ML) frameworks.
Once the models are fine-tuned and production-ready, the artifacts are committed to Git, triggering an image build of the model using OpenShift Pipelines (based on the upstream Tekton), a serverless CI/CD system that runs pipelines with all necessary dependencies in isolated containers.
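
A minimal pipeline for such a build might look like the sketch below. The `git-clone` and `buildah` references are the ClusterTasks shipped with OpenShift Pipelines; the pipeline, parameter, and workspace names are hypothetical.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: model-image-build            # hypothetical name
spec:
  params:
    - name: git-url
      type: string                   # repository holding the model artifacts
    - name: image
      type: string                   # target image reference in the registry
  workspaces:
    - name: shared-workspace
  tasks:
    - name: fetch-model-artifacts
      taskRef:
        name: git-clone              # ClusterTask: clones the repo
        kind: ClusterTask
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared-workspace
    - name: build-and-push
      runAfter: ["fetch-model-artifacts"]
      taskRef:
        name: buildah                # ClusterTask: builds and pushes the image
        kind: ClusterTask
      params:
        - name: IMAGE
          value: $(params.image)
      workspaces:
        - name: source
          workspace: shared-workspace
```
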
The model image is pushed to OpenShift’s integrated registry in the core data center and then pushed back down to the factory data center for use in live inference.
[](/images/industrial-edge/edge-mfg-devops-network-sd.png)
As shown in Figure 6, to safeguard the factory and operations infrastructure from cyberattacks, the operations network must be segregated from the enterprise IT network and the public internet. Additionally, factory machinery, controllers, and devices should be further isolated from the factory data center and protected behind a firewall.
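
The segregation itself is enforced with network-level firewalls, but the same defense-in-depth idea can also be expressed inside a cluster. As an illustrative sketch (the namespace and policy names are hypothetical), a Kubernetes `NetworkPolicy` can restrict ingress to factory workloads to traffic from their own namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-factory-ingress     # hypothetical name
  namespace: manuela-factory         # hypothetical factory workload namespace
spec:
  podSelector: {}                    # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}            # allow traffic only from this namespace
```
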
### Edge manufacturing with GitOps
[](/images/industrial-edge/edge-mfg-gitops-sd.png)
_Figure 7: Industrial Edge solution showing a schematic view of the GitOps workflows._
GitOps is an operational framework that takes DevOps best practices used for application development, such as version control, collaboration, compliance, and CI/CD, and applies them to infrastructure automation. Figure 7 shows how, for these industrial edge manufacturing environments, GitOps provides a consistent, declarative approach to managing individual cluster changes and upgrades across the centralized and edge sites. Any changes to configuration and applications can be automatically pushed into operational systems at the factory.
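
For example, with an Argo CD-based tool such as OpenShift GitOps, each deployed component is described declaratively by an `Application` resource. The sketch below is illustrative: the repository URL, path, and namespaces are placeholder assumptions rather than values from the pattern's own charts.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: factory-line-dashboard                    # hypothetical application
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/industrial-edge-fork.git  # placeholder
    targetRevision: main
    path: charts/factory
  destination:
    server: https://kubernetes.default.svc        # the factory cluster
    namespace: manuela-factory                    # hypothetical namespace
  syncPolicy:
    automated:
      prune: true        # remove resources that were deleted from Git
      selfHeal: true     # revert manual drift back to the state in Git
```

With automated sync enabled, a change merged to the Git repository is rolled out to the factory cluster without manual intervention.
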
### Secrets exchange and management
Authentication is used to securely deploy and update components across multiple locations. The credentials are stored using a secrets management solution such as HashiCorp Vault on the hub. The external secrets component is used to integrate various secrets management tools (AWS Secrets Manager, Google Secrets Manager, Azure Key Vault). These secrets are then pulled from the hub's Vault onto the different factory clusters.
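
As a sketch of how such a pull might be declared with the External Secrets Operator (the secret names, store reference, and Vault path are assumptions for illustration):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: factory-registry-credentials   # hypothetical name
spec:
  refreshInterval: 1h                  # re-sync from Vault every hour
  secretStoreRef:
    name: vault-backend                # store pointing at the hub's Vault
    kind: ClusterSecretStore
  target:
    name: registry-credentials         # Secret created on the factory cluster
  data:
    - secretKey: password
      remoteRef:
        key: secret/hub/registry       # hypothetical Vault path
        property: password
```
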
## Demo Scenario
This scenario is derived from the [MANUela work](https://github.com/sa-mw-dach/manuela) done by Red Hat Middleware Solution Architects in Germany in 2019/20. The name MANUela stands for MANUfacturing Edge Lightweight Accelerator; you will see this acronym in many of the artifacts. It was developed on a platform called [stormshift](https://github.com/stormshift/documentation).

The demo has been updated with an advanced GitOps framework.

# Attach a managed cluster (factory) to the management hub
By default, Red Hat Advanced Cluster Management (RHACM) manages the `clusterGroup` applications that are deployed on all clusters.
Add a `managedClusterGroup` for each cluster or group of clusters that you want to manage by following this procedure.
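
As an illustration, a `managedClusterGroup` entry in the hub's values file might look like the following sketch. The keys follow the validated-patterns `clusterGroup` chart conventions but are indicative rather than a verbatim excerpt, so check them against your release:

```yaml
# Sketch of a values-hub.yaml fragment (keys indicative, not verbatim).
managedClusterGroups:
  factory:
    name: factory
    labels:
      - name: clusterGroup     # clusters imported with this label join the group
        value: factory
    helmOverrides:
      - name: clusterGroup.isHubCluster
        value: false
```
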
## Procedure
1. By default, the `factory` applications defined in the `values-factory.yaml` file are deployed on all clusters that are imported into RHACM and have the label `clusterGroup=factory`.
2. In the left navigation panel of the web console associated with your deployed hub cluster, click **local-cluster**. Select **All Clusters**. The RHACM web console is displayed.
3. In the **Managing clusters just got easier** window, click **Import an existing cluster**.
   - Enter the cluster name (you can get this from the login token string, for example: `https://api.<cluster-name>.<domain>:6443`).
   - You can leave the **Cluster set** blank.
   - In the **Additional labels** dialog box, enter the `key=value` as `clusterGroup=factory`.
   - Choose **KubeConfig** as the "Import mode".
   - In the **KubeConfig** window, paste your KubeConfig content. Click **Next**.
4. You can skip the **Automation** screen. Click **Next**.
5. Review the summary details and click **Import**.
6. Once the data center and the factory have been deployed, you will want to check out and test the Industrial Edge 2.0 demo code. You can find that [here](../application/). The Argo applications on the factory cluster appear as follows: