TELCODOCS-2143 Updating Medical diags documentation rev2 #540

Merged: 6 commits, Feb 14, 2025
9 changes: 0 additions & 9 deletions content/patterns/medical-diagnosis/_index.adoc
@@ -94,19 +94,10 @@ The following diagram shows the components that are deployed with the data f

image::medical-edge/physical-dataflow.png[link="/images/medical-edge/physical-dataflow.png"]

== Recorded demo

link:/videos/xray-deployment.svg[image:/videos/xray-deployment.svg[Demo\]]

== Presentation

View presentation for the Medical Diagnosis Validated Pattern link:https://speakerdeck.com/rhvalidatedpatterns/md-speakerdeck[here]

[id="demo-script"]
== Demo Script

Use this demo script to successfully complete the Medical Diagnosis pattern demo link:demo-script/#demo-intro[here]

[id="next-steps_med-diag-index"]
== Next steps

103 changes: 6 additions & 97 deletions content/patterns/medical-diagnosis/cluster-sizing.adoc
@@ -1,106 +1,15 @@
---
title: Cluster Sizing
weight: 20
aliases: /medical-diagnosis/cluster-sizing/
weight: 30
aliases: /medical-diagnosis/medical-diagnosis-cluster-sizing/
---


:toc:
:imagesdir: /images
:_content-type: ASSEMBLY
include::modules/comm-attributes.adoc[]

:aws_node: xlarge


//Module to be included
//:_content-type: CONCEPT
//:imagesdir: ../../images
[id="about-openshift-cluster-sizing-med"]
== About OpenShift cluster sizing for the {med-pattern}
{aws_node}
To understand cluster sizing requirements for the {med-pattern}, consider the following components that the {med-pattern} deploys on the datacenter or the hub OpenShift cluster:

|===
| Name | Kind | Namespace | Description

| Medical Diagnosis Hub
| Application
| medical-diagnosis-hub
| Hub GitOps management

| {rh-gitops}
| Operator
| openshift-operators
| {rh-gitops-short}

| {rh-ocp-data-first}
| Operator
| openshift-storage
| Cloud Native storage solution

| {rh-amq-streams}
| Operator
| openshift-operators
| AMQ Streams provides Apache Kafka access

| {rh-serverless-first}
| Operator
| - knative-serving (knative-eventing)
| Provides access to Knative Serving and Eventing functions
|===
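
For reference, a quick way to confirm from the CLI that these components were created is to list the namespaces and Operator subscriptions. This is a minimal sketch; the exact namespace list depends on your deployment, and some namespaces exist only after the pattern is installed.

[source,shell]
----
# Check that the namespaces listed in the table above exist (some are created only after installation)
oc get namespace medical-diagnosis-hub openshift-storage knative-serving knative-eventing

# List the Operator subscriptions that the pattern creates, across all namespaces
oc get subscriptions.operators.coreos.com -A
----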

//AI: Removed the following since we have CI status linked on the patterns page
//[id="tested-platforms-cluster-sizing"]
//== Tested Platforms

//: Removed the following in favor of the link to OCP docs
//[id="general-openshift-minimum-requirements-cluster-sizing"]
//== General OpenShift Minimum Requirements
The minimum requirements for an {ocp} cluster depend on your installation platform. For instance, for AWS, see link:https://docs.openshift.com/container-platform/4.16/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS], and for bare-metal, see link:https://docs.openshift.com/container-platform/4.16/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal].

For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.16/installing/installing-preparing.html[{ocp} documentation].

//Module to be included
//:_content-type: CONCEPT
//:imagesdir: ../../images

[id="med-openshift-cluster-size"]
=== About {med-pattern} OpenShift cluster size

The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture.

For the {med-pattern}, the OpenShift cluster must be somewhat larger than the documented minimum to support the compute and storage demands of OpenShift Data Foundation and the other Operators.
//AI:Removed a few lines from here since the content is updated to remove any ambiguity. We rather use direct links (OCP docs/ GCP/AWS/Azure)
[NOTE]
====
You might want to add resources when more developers are working on building their applications.
====

The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or more worker nodes.

[cols="^,^,^,^"]
|===
| Node type | Number of nodes | Cloud provider | Instance type

| Control plane and worker
| 3 and 3
| Google Cloud
| n1-standard-8

| Control plane and worker
| 3 and 3
| Amazon Web Services (AWS)
| m5.2xlarge

| Control plane and worker
| 3 and 3
| Microsoft Azure
| Standard_D8s_v3
|===
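
If you already have a cluster running, a minimal sketch for checking that it matches one of these sizings is to list the nodes and the instance type label that most cloud providers set:

[source,shell]
----
# Confirm the expected 3 control plane nodes and 3 or more worker nodes
oc get nodes

# Show the cloud instance type backing each node (a well-known node label)
oc get nodes -L node.kubernetes.io/instance-type
----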
include::modules/comm-attributes.adoc[]
include::modules/medical-diagnosis/metadata-medical-diagnosis.adoc[]

[role="_additional-resources"]
.Additional resources
* link:https://aws.amazon.com/ec2/instance-types/[AWS instance types]
* link:https://learn.microsoft.com/en-us/azure/virtual-machines/sizes[Azure instance types: Sizes for virtual machines in Azure]
* link:https://cloud.google.com/compute/docs/machine-resource[Google Cloud Platform instance types: Machine families resource and comparison guide]
//Removed section for instance types as we did for MCG
include::modules/cluster-sizing-template.adoc[]
157 changes: 33 additions & 124 deletions content/patterns/medical-diagnosis/demo-script.adoc
@@ -1,6 +1,6 @@
---
title: Demo Script
weight: 60
title: Verifying the demo
weight: 20
aliases: /medical-diagnosis/demo/
---

@@ -19,148 +19,57 @@ image::../../images/medical-edge/aiml_pipeline.png[link="/images/medical-edge/ai

[NOTE]
====
We simulate the function of the remote medical facility with an application called `image-generator`
We simulate the function of the remote medical facility with an application called the `image-generator`.
====
//Module to be included
//:_content-type: PROCEDURE
//:imagesdir: ../../../images
[id="viewing-the-grafana-based-dashboard-getting-started"]
== Enabling the Grafana-based dashboard

[id="demo-objectives"]
The Grafana dashboard offers a visual representation of the AI/ML workflow, including CPU and memory metrics for the pod running the risk assessment application. Additionally, it displays a graphical overview of the AI/ML workflow, illustrating the images being generated at the remote medical facility.

== Objectives
This showcase application is deployed with self-signed certificates, which are considered untrusted by most browsers. If valid certificates have not been provisioned for your OpenShift cluster, you will need to manually accept the untrusted certificates by following these steps:

In this demo you will complete the following:
. Accept the SSL certificates in the browser for the dashboard. In the {ocp} web console, go to *Networking* > *Routes* for *All Projects*. Click the URL for the `s3-rgw` route.
+
image::../../images/medical-edge/storage-route.png[s3-rgw route]
+
Ensure that you see XML and not an access denied error message.
+
image::../../images/medical-edge/storage-rgw-route.png[link="/images/medical-edge/storage-rgw-route.png"]

* Prepare your local workstation
* Update the pattern repo with your cluster values
* Deploy the pattern
* Access the dashboard
. While still looking at *Routes*, change the project to `xraylab-1`. Click the URL for the `image-server` and ensure that you do not see an access denied error message. You should see a `Hello world` message.
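
Optionally, you can perform similar checks from the CLI. The following is a minimal sketch; `-k` is used because the certificates are self-signed, and you still need to accept the certificates once in your browser for the dashboard to load:

[source,shell]
----
# Fetch the s3-rgw route and confirm it returns XML rather than an access denied error
S3RGW_ROUTE=https://$(oc get route -n openshift-storage s3-rgw -o jsonpath='{.spec.host}')
curl -k "$S3RGW_ROUTE"

# Fetch the image-server route and confirm it returns the Hello world message
IMAGE_SERVER_ROUTE=https://$(oc get route -n xraylab-1 image-server -o jsonpath='{.spec.host}')
curl -k "$IMAGE_SERVER_ROUTE"
----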

[id="getting-started"]
This showcase application does not have access to an actual x-ray machine that we can use for this demo, so one is emulated by creating an S3 bucket and hosting the x-ray images within it. In the "real world" an x-ray would be taken at an edge medical facility and then uploaded to an OpenShift Data Foundation (ODF) S3-compatible bucket in the Core Hospital, triggering the AI/ML workflow.

== Getting Started
To emulate the edge medical facility, we use an application called `image-generator`, which, when scaled up, downloads the x-rays from S3 and puts them in an ODF S3 bucket in the cluster, triggering the AI/ML workflow.

* Follow the link:../getting-started[Getting Started Guide] to ensure that you have met all of the prerequisites
* Review link:../getting-started/#preparing-for-deployment[Preparing for Deployment] for updating the pattern with your cluster values
Turn on the image file flow. There are a couple of ways to do this, as shown in the following steps.

[NOTE]
====
This demo begins after `./pattern.sh make install` has been executed.
====
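
For context, here is a minimal sketch of the commands that are expected to have been run before this point, assuming you forked the pattern repository as described in the Getting Started Guide (the repository URL and fork name below are placeholders):

[source,shell]
----
# Clone your fork of the medical-diagnosis pattern repository (placeholder URL)
git clone git@github.com:<your-org>/medical-diagnosis.git
cd medical-diagnosis

# Deploy the pattern to the cluster you are currently logged in to
./pattern.sh make install
----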

[id="demo"]

== Demo

Now that we have deployed the pattern onto our cluster, we can explore what has changed and then move on to the dashboard.

[id="admin-view"]

=== Administrator View - Review Changes to cluster

Log in to your cluster's console as the `kubeadmin` user.

Let's check which Operators were installed. In the accordion menu on the left:

* Click *Operators*
* Click *Installed Operators*

[NOTE]

====
Ensure that **All Projects** is selected
====

image::../../images/medical-edge/admin_developer-contexts.png[link="/images/medical-edge/admin_developer-contexts.png"]


If you started with a new cluster, there were no layered products or Operators installed. With the Validated Patterns framework we describe, or declare, what our cluster's desired state is, and the GitOps engine does the rest. This includes creating the Operator instances and any additional configuration between other APIs to ensure everything works together.
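
If you prefer the CLI, the following minimal sketch shows the same information, assuming the {rh-gitops-short} Operator is installed and provides the Argo CD `Application` resource:

[source,shell]
----
# Installed Operators (cluster service versions) in all namespaces
oc get csv -A

# GitOps Applications that drive the cluster toward the declared state
oc get applications.argoproj.io -A
----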


[id="dev-view"]

=== Developer View - Review Changes to cluster

Let's switch to the developer context by clicking `Administrator` in the top-left corner of the accordion menu and then clicking `Developer`.

* Change projects to `xraylab-1`
* Click on `Topology`


image::../../images/medical-edge/dev-topology.png[link="/images/medical-edge/dev-topology.png"]

Look at all of the resources that have been created for this demo application. This interface shows the collection of components required for the AI/ML workflow to execute properly. Even more resources and configurations are deployed, but because we don't interact with them directly we won't cover them here. The takeaway is that when you use the framework, you can build in automation like this, which lets your developers focus on development rather than infrastructure.
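
A rough CLI equivalent of the Topology view is sketched below; the `ksvc` short name is only available when Knative Serving (OpenShift Serverless) is installed:

[source,shell]
----
# List the workloads, services, and routes created in the demo project
oc get all -n xraylab-1

# Knative (serverless) services used by the workflow, if Knative Serving is installed
oc get ksvc -n xraylab-1
----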


[id="certificate-warn"]

=== Invalid Certificates

We are deploying this demo using self-signed certificates that are untrusted by our browser. Unless you have provisioned valid certificates for your OpenShift cluster you must accept the invalid certificates for:

* s3-rgw | openshift-storage namespace
* grafana | xraylab-1 namespace

[source,shell]
----

# Construct the s3-rgw route URL (openshift-storage namespace) and print it
S3RGW_ROUTE=https://$(oc get route -n openshift-storage s3-rgw -o jsonpath='{.spec.host}')
echo $S3RGW_ROUTE

# Construct the Grafana route URL (xraylab-1 namespace) and print it
GRAFANA_ROUTE=https://$(oc get route -n xraylab-1 grafana -o jsonpath='{.spec.host}')
echo $GRAFANA_ROUTE
----
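
You can also probe both endpoints from the CLI first; this is only a quick check and does not replace accepting the certificates in the browser (`-k` skips TLS verification because the certificates are self-signed):

[source,shell]
----
# Quick reachability checks; you must still accept the certificates in the browser
curl -k "$S3RGW_ROUTE"
curl -k -o /dev/null -w '%{http_code}\n' "$GRAFANA_ROUTE"
----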

[WARNING]

====
You must accept the security risk of the self-signed certificates before scaling up the `image-generator` application.
====

[id="scale-up"]

=== Scale up the deployment

As we mentioned earlier, we don't have an x-ray machine available for this demo, so we emulate one by creating an S3 bucket and hosting the x-ray images within it. In the "real world" an x-ray would be taken at an edge medical facility and then uploaded to an OpenShift Data Foundation (ODF) S3-compatible bucket in the Core Hospital, triggering the AI/ML workflow.

To emulate the edge medical facility, we use an application called `image-generator` which, when scaled up, downloads the x-rays from S3 and puts them in an ODF S3 bucket in the cluster, triggering the AI/ML workflow.

Let's scale the `image-generator` DeploymentConfig up to start the pipeline.

[NOTE]
====
Make sure that you are in the `xraylab-1` project under the `Developer` context in the OpenShift Console
====

In the Topology menu under the Developer context in the OpenShift Console:
. Go to the {ocp} web console and change the view from *Administrator* to *Developer* and select *Topology*. From there select the `xraylab-1` project.

* Search for the `image-generator` application in the Topology console
. Right-click on the `image-generator` pod icon and select `Edit Pod count`.

image::../../images/medical-edge/image-generator.png[link="/images/medical-edge/image-generator.png"]
. Increase the pod count from `0` to `1` and save.

* Click on the `image-generator` application (you may have to zoom in on the highlighted application)
* Switch to the `Details` menu in the application menu context
* Click the `^` next to the pod donut
Alternatively, you can achieve the same outcome from the Administrator console.

image::../../images/medical-edge/image-generator-scale.png[link="/images/medical-edge/image-generator-scale.png"]
. In the {ocp} web console, go to *Workloads* and select *Deployments* for the *Project* `xraylab-1`.

. Click `image-generator` and increase the pod count to 1.
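
If you prefer the CLI, the following is a minimal sketch of the same scaling operation. Depending on how the chart rendered the workload, `image-generator` may be a DeploymentConfig (`dc`) or a Deployment, so both forms are shown:

[source,shell]
----
# Scale the image-generator up to start the AI/ML workflow
oc scale dc/image-generator -n xraylab-1 --replicas=1 \
  || oc scale deployment/image-generator -n xraylab-1 --replicas=1

# Scale it back down to 0 to stop generating images
oc scale dc/image-generator -n xraylab-1 --replicas=0 \
  || oc scale deployment/image-generator -n xraylab-1 --replicas=0
----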

[id="demo-dashboard"]

== Demo Dashboard
== Viewing the Grafana dashboard

Now let’s jump over to the dashboard
Access the Grafana dashboard to view the AI/ML workflow. Carry out the following steps:

* Return to the topology screen
* Select “Grafana” in the drop down for Filter by resource
* Click the grafana icon
* Open url to go open a browser for the grafana dashboard.
. In the {ocp} web console, select the nines (application launcher) menu and right-click the *Grafana* icon.

Within the grafana dashboard:
. Within the Grafana dashboard, click the *Dashboards* icon.

* click the dashboards icon
* click Manage
* select xraylab-1
* finally select the XRay Lab folder
. Select the `xraylab-1` folder and the XRay Lab menu item.

image::../../images/medical-edge/dashboard.png[link="/images/medical-edge/dashboard.png"]

@@ -176,4 +85,4 @@ You did it! You have completed the deployment of the medical diagnosis pattern!

The medical diagnosis pattern is more than just the identification and detection of pneumonia in x-ray images. It is an object detection and classification model built on top of Red Hat OpenShift that can be adapted to fit multiple use cases within the object classification paradigm. Similar use cases include detecting contraband items in postal packages or in luggage at an airport baggage scanner.

For more information on Validated Patterns visit our link:https://validatedpatterns.io/[website]
For more information about Validated Patterns, visit our link:https://validatedpatterns.io/[website].