== About OpenShift cluster sizing for the {med-pattern}
To understand cluster sizing requirements for the {med-pattern}, consider the following components that the {med-pattern} deploys on the datacenter or the hub OpenShift cluster:

|===
| Name | Kind | Namespace | Description

| Medical Diagnosis Hub
| Application
| medical-diagnosis-hub
| Hub GitOps management

| {rh-gitops}
| Operator
| openshift-operators
| {rh-gitops-short}

| {rh-ocp-data-first}
| Operator
| openshift-storage
| Cloud-native storage solution

| {rh-amq-streams}
| Operator
| openshift-operators
| AMQ Streams provides Apache Kafka access

| {rh-serverless-first}
| Operator
| knative-serving (knative-eventing)
| Provides access to Knative Serving and Eventing functions
|===

//AI: Removed the following since we have CI status linked on the patterns page
//[id="tested-platforms-cluster-sizing"]
//== Tested Platforms

//: Removed the following in favor of the link to OCP docs
The minimum requirements for an {ocp} cluster depend on your installation platform. For instance, for AWS, see link:https://docs.openshift.com/container-platform/4.16/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS], and for bare-metal, see link:https://docs.openshift.com/container-platform/4.16/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal].

For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.16/installing/installing-preparing.html[{ocp} documentation].

//Module to be included
//:_content-type: CONCEPT
//:imagesdir: ../../images

[id="med-openshift-cluster-size"]
=== About {med-pattern} OpenShift cluster size

The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture.

For {med-pattern}, the OpenShift cluster must be somewhat larger than a default installation to support the compute and storage demands of OpenShift Data Foundation and the other Operators.
//AI:Removed a few lines from here since the content is updated to remove any ambiguity. We rather use direct links (OCP docs/ GCP/AWS/Azure)
[NOTE]
====
You might want to add resources when more developers are working on building their applications.
====

The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or more worker nodes.

[cols="^,^,^,^"]
|===
| Node type | Number of nodes | Cloud provider | Instance type

| Control plane | 3 | Amazon Web Services | {aws_node}

| Worker | 3 or more | Amazon Web Services | {aws_node}
|===
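
Before deploying, you can sanity-check an existing cluster against these node counts from the CLI. The helper below is a minimal sketch: it assumes a logged-in `oc` session and the default `oc get nodes` table output (NAME STATUS ROLES AGE VERSION).

[source,shell]
----
# count_ready_nodes: count nodes reporting "Ready" in `oc get nodes` output.
# Assumes the default table format: NAME STATUS ROLES AGE VERSION.
count_ready_nodes() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Usage against a live cluster:
#   oc get nodes | count_ready_nodes   # expect at least 6 (3 control plane + 3 workers)
----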
The Grafana dashboard offers a visual representation of the AI/ML workflow, including CPU and memory metrics for the pod running the risk assessment application. Additionally, it displays a graphical overview of the AI/ML workflow, illustrating the images being generated at the remote medical facility.

== Objectives

In this demo you will complete the following:

* Update the pattern repository with your cluster values
* Deploy the pattern
* Access the dashboard

[id="getting-started"]
== Getting Started

* Follow the link:../getting-started[Getting Started Guide] to ensure that you have met all of the prerequisites
* Review link:../getting-started/#preparing-for-deployment[Preparing for Deployment] for updating the pattern with your cluster values

[NOTE]
====
This demo begins after `./pattern.sh make install` has been executed.
====

[id="demo"]

== Demo

Now that we have deployed the pattern onto the cluster, let's explore what has changed and then move on to the dashboard.

[id="admin-view"]

=== Administrator View - Review Changes to the cluster

Log in to your cluster's console as the `kubeadmin` user.

Let's check which Operators were installed. In the accordion menu on the left, select *Operators* > *Installed Operators*.

If you started with a new cluster, no layered products or Operators were installed. With the Validated Patterns framework we describe, or declare, the cluster's desired state, and the GitOps engine does the rest. This includes creating the instance of each Operator and any additional configuration between other APIs to ensure everything works together.
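
You can confirm the declared state from the CLI as well. A sketch, assuming a logged-in `oc` session; the resource kind comes from OLM, which the framework drives through GitOps:

[source,shell]
----
# List the Operator subscriptions that the GitOps engine created,
# across all namespaces.
list_operator_subscriptions() {
  oc get subscriptions.operators.coreos.com -A
}

# Usage:
#   list_operator_subscriptions
----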

[id="dev-view"]
=== Developer View - Review Changes to the cluster

Let's switch to the developer context by clicking `Administrator` in the top left corner of the accordion menu and then clicking `Developer`.

Look at all of the resources that have been created for this demo application. What we see in this interface is the collection of components required for the AI/ML workflow to execute properly. Even more resources and configurations get deployed, but because we don't interact with them directly we won't worry about them here. The takeaway is that when you use the framework you can build in automation like this, which lets your developers focus on development.
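
The same inventory is visible from the CLI. A sketch, assuming a logged-in `oc` session and the `xraylab-1` namespace used throughout this demo:

[source,shell]
----
# Show the core resources deployed for the demo application in xraylab-1.
list_xraylab_resources() {
  oc get all -n xraylab-1
}

# Usage:
#   list_xraylab_resources
----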

[id="certificate-warn"]
=== Invalid Certificates

We are deploying this demo using self-signed certificates that are untrusted by the browser. Unless you have provisioned valid certificates for your OpenShift cluster, you must accept the invalid certificates for:

* s3-rgw | openshift-storage namespace
* grafana | xraylab-1 namespace

[source,shell]
----
S3RGW_ROUTE=https://$(oc get route -n openshift-storage s3-rgw -o jsonpath='{.spec.host}')
echo $S3RGW_ROUTE

GRAFANA_ROUTE=https://$(oc get route -n xraylab-1 grafana -o jsonpath='{.spec.host}')
echo $GRAFANA_ROUTE
----
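
To confirm that each route responds before opening it in the browser, you can probe it with `curl`. A small helper sketch; `-k` tolerates the self-signed certificate and `-s` silences progress output:

[source,shell]
----
# Print the HTTP status code returned by a route; -k skips certificate
# verification because the demo certificates are self-signed.
probe_route() {
  curl -sk -o /dev/null -w '%{http_code}\n' "$1"
}

# Usage (after setting the variables above):
#   probe_route "$S3RGW_ROUTE"
#   probe_route "$GRAFANA_ROUTE"
----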

[WARNING]
====
You must accept the security risks (self-signed certificates) before scaling the image-generator application.
====

While accepting certificates in the {ocp} web console under *Networking* > *Routes*, change the project to `xraylab-1` and click the URL for the `image-server`. Ensure that you do not see an access denied error message; you should see a `Hello world` message.

[id="scale-up"]
=== Scale up the deployment

As mentioned earlier, we don't have an x-ray machine available for this demo, so we emulate one by creating an S3 bucket and hosting the x-ray images within it. In the real world, an x-ray would be taken at an edge medical facility and then uploaded to an OpenShift Data Foundation (ODF) S3-compatible bucket in the core hospital, triggering the AI/ML workflow.

To emulate the edge medical facility, we use an application called `image-generator` which, when scaled up, downloads the x-rays from S3 and puts them in an ODF S3 bucket in the cluster, triggering the AI/ML workflow.

Turn on the image file flow. There are a couple of ways to go about this. Let's scale the `image-generator` DeploymentConfig up to start the pipeline.

[NOTE]
====
Make sure that you are in the `xraylab-1` project under the `Developer` context in the OpenShift Console.
====

In the Topology view under the Developer context in the OpenShift Console:

* Search for the `image-generator` application in the Topology view
* Right-click the `image-generator` icon and select *Edit Pod count*
* Increase the pod count to start the image flow
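
As an alternative to the console steps, the same scale-up can be done from the CLI. A sketch: the DeploymentConfig name and namespace are the ones used in this demo, and a single replica is assumed to be enough to start the flow.

[source,shell]
----
# Scale the image-generator DeploymentConfig; defaults to 1 replica,
# which starts feeding images into the pipeline.
scale_image_generator() {
  oc scale dc/image-generator -n xraylab-1 --replicas="${1:-1}"
}

# Usage:
#   scale_image_generator      # scale up to 1
#   scale_image_generator 0    # scale back down
----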

You did it! You have completed the deployment of the medical diagnosis pattern!

The medical diagnosis pattern is more than just the identification and detection of pneumonia in x-ray images. It is an object detection and classification model built on top of Red Hat OpenShift that can be adapted to many use cases within the object classification paradigm, such as detecting contraband items for the Postal Service or in luggage at an airport baggage scanner.

For more information about Validated Patterns, visit our link:https://validatedpatterns.io/[website].