
Commit 30647c0

Merge pull request #525 from kquinn1204/TELCODOCS-2198-docs-update

TELCODOCS-2198 Docs update for OpenShift AI

2 parents 6bcfe0e + 656b4dc

14 files changed: +456 -95 lines

content/patterns/openshift-ai/_index.adoc (+1 -1)

@@ -28,5 +28,5 @@ include::modules/rhoai-architecture.adoc[leveloffset=+1]
 [id="next-steps_rhoai-index"]
 == Next steps
 
-* link:getting-started[Deploy the Pattern] using Helm.
+* link:getting-started[Deploy the Pattern].
 
New file (+12 lines)

@@ -0,0 +1,12 @@
+---
+title: AI Demo
+weight: 20
+aliases: /rhoai/ai-demo/
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+include::modules/comm-attributes.adoc[]
+
+include::modules/rhoai-demo-app.adoc[leveloffset=1]
New file (+14 lines)

@@ -0,0 +1,14 @@
+---
+title: Cluster sizing
+weight: 50
+aliases: /openshift-ai/openshift-ai-cluster-sizing/
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+
+include::modules/comm-attributes.adoc[]
+include::modules/openshift-ai/metadata-openshift-ai.adoc[]
+
+include::modules/cluster-sizing-template.adoc[]

content/patterns/rag-llm-gitops/getting-started.md (+4 -4)

@@ -12,13 +12,13 @@ aliases: /rag-llm-gitops/getting-started/
 
 ## Procedure
 
-1. Create the installation configuration file using the steps described in [Creating the installation configuration file](https://docs.openshift.com/container-platform/4.17/installing/installing_aws/ipi/installing-aws-customizations.html#installation-initializing_installing-aws-customizations).
+1. Create the installation configuration file using the steps described in [Creating the installation configuration file](https://docs.openshift.com/container-platform/latest/installing/installing_aws/ipi/installing-aws-customizations.html#installation-initializing_installing-aws-customizations).
 
 > **Note:**
 > Supported regions are `us-east-1` `us-east-2` `us-west-1` `us-west-2` `ca-central-1` `sa-east-1` `eu-west-1` `eu-west-2` `eu-west-3` `eu-central-1` `eu-north-1` `ap-northeast-1` `ap-northeast-2` `ap-northeast-3` `ap-southeast-1` `ap-southeast-2` and `ap-south-1`. For more information about installing on AWS see, [Installation methods](https://docs.openshift.com/container-platform/latest/installing/installing_aws/preparing-to-install-on-aws.html).
 >
 
-2. Customize the generated `install-config.yaml` creating one control plane node with instance type `m5a.2xlarge` and 3 worker nodes with instance type `p3.2xlarge`. A sample YAML file is shown here:
+2. Customize the generated `install-config.yaml` creating one control plane node with instance type `m5.2xlarge` and 3 worker nodes with instance type `m5.2xlarge`. A sample YAML file is shown here:
 ```yaml
 additionalTrustBundlePolicy: Proxyonly
 apiVersion: v1

@@ -29,15 +29,15 @@ aliases: /rag-llm-gitops/getting-started/
   name: worker
   platform:
     aws:
-      type: p3.2xlarge
+      type: m5.2xlarge
   replicas: 3
 controlPlane:
   architecture: amd64
   hyperthreading: Enabled
   name: master
   platform:
     aws:
-      type: m5a.2xlarge
+      type: m5.2xlarge
   replicas: 1
 metadata:
   creationTimestamp: null

modules/rhoai-demo-app.adoc (new file, +185 lines)

:_content-type: PROCEDURE
:imagesdir: ../../../images

[id="creating-data-science-project"]
= AI Demos

== First AI demo

In this demo, you configure a Jupyter notebook server in a Data Science project, using a specified image and customizing it to meet your requirements.

.Procedure

. Click *Red Hat OpenShift AI* in the application launcher (the nine-dots menu) in the OpenShift console.

. Click *Log in with OpenShift*.

. Click the *Data Science Projects* tab.

. Click *Create project*.

.. Enter a name for the project, for example `my-first-ai-project`, in the *Name* field and click *Create*.

. Click *Create a workbench*. You are now ready to define the workbench.

.. Enter a name for the workbench.

.. Select *Standard Data Science* from the *Image selection* dropdown under *Notebook image*.

.. Set the container size to *Small* under *Deployment size*.

.. Scroll down to the *Cluster storage* section and enter a name for the new persistent storage that will be created.

.. Set the persistent storage size to 10 Gi.

.. Click the *Create workbench* button at the bottom left of the page.
+
After successful creation, the status of the workbench changes to *Running*.

.. Click the *Open↗* button beside the status.

.. Authorize access to the OpenShift cluster by clicking *Allow selected permissions*. After granting permissions, you are directed to the Jupyter notebook page.

== Accessing the current data science project within Jupyter Notebook

Jupyter Notebook can fetch or clone existing GitHub repositories, like any other standard IDE. In this section, you clone existing simple AI/ML code into the notebook using the following instructions.

. From the toolbar at the top, click the *Git clone* icon.
+
image::rhoai/git-clone-button.png[Git clone button]

. In the popup window, enter the URL of the GitHub repository in the *Git Repository URL* field:
+
[source,text]
----
https://github.com/redhat-developer-demos/openshift-ai.git
----

. Click the *Clone* button.

. After the GitHub repository is fetched, the project appears in the directory section on the left side of the notebook.

. Expand the */openshift-ai/1_First-app/* directory.

. Open the *openshift-ai-test.ipynb* file.
+
You are presented with the view of a Jupyter notebook.

== Running code in a Jupyter notebook

In the previous section, you imported and opened the notebook. To run the code within the notebook, click the *Run* icon at the top of the interface.

After clicking *Run*, the notebook automatically moves to the next cell. This is part of the design of Jupyter notebooks: scripts or code snippets are divided into multiple cells, each of which can be run independently, allowing you to test specific sections of code in isolation. This structure greatly aids in developing complex code incrementally and in debugging it effectively, because you can pinpoint errors and test solutions cell by cell.

After executing a cell, you can immediately see the output just below it. This immediate feedback loop is invaluable for iteratively testing and refining code.
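
For example, because the notebook's kernel keeps state across cells, a value defined in one cell remains available to the cells that follow. The two cells below are illustrative and are not taken from the demo notebook:

[source,python]
----
# Cell 1: define some data; the kernel retains this state after the cell runs
numbers = [1, 2, 3, 4]
----

[source,python]
----
# Cell 2: run independently later; it still sees `numbers` from Cell 1
total = sum(numbers)
print(total)  # the output, 10, appears directly below the cell
----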

[id="interactive-classification-project"]
== Performing an interactive classification with Jupyter notebook

In this section, you perform an interactive classification using a Jupyter notebook.

.Procedure

. Click *Red Hat OpenShift AI* in the application launcher (the nine-dots menu) in the OpenShift console.

. Click *Log in with OpenShift*.

. Click the *Data Science Projects* tab.

. Click *Create project*.

.. Enter a name for the project, for example `my-classification-project`, in the *Name* field and click *Create*.

. Click *Create a workbench*. You are now ready to define the workbench.

.. Give the workbench a name, for example *interactive-classification*.

.. Select *TensorFlow* from the *Image selection* dropdown under *Notebook image*.

.. Set the container size to *Medium* under *Deployment size*.

.. Scroll down to the *Cluster storage* section and enter a name for the new persistent storage that will be created.

.. Set the persistent storage size to 20 Gi.

.. Click the *Create workbench* button at the bottom of the page.
+
After successful creation, the status of the workbench changes to *Running*.

.. Click the *Open↗* button beside the status.

.. Authorize access to the OpenShift cluster by clicking *Allow selected permissions*. After granting permissions, you are directed to the Jupyter notebook page.

== Obtaining and preparing the dataset

Simplify data preparation in AI projects by automating dataset fetching with Kaggle's API, following these steps:

. Navigate to the Kaggle website and log in with your account credentials.

. Click your profile icon at the top right corner of the page, then select *Account* from the dropdown menu.

. Scroll down to the section labeled *API* and click the *Create New Token* button.

. A file named `kaggle.json`, containing your Kaggle API credentials, is downloaded to your local machine.

. Upload the `kaggle.json` file to your JupyterLab IDE environment. You can drag and drop the file into the file browser of the JupyterLab IDE. This step might look different depending on your operating system and desktop user interface. A sketch of how these credentials can be used from code follows this procedure.

. Clone the interactive image classification project from the GitHub repository using the following instructions:

.. At the top of the JupyterLab interface, click the *Git Clone* icon.

.. In the popup window, enter the URL of the GitHub repository in the *Git Repository URL* field:
+
[source,text]
----
https://github.com/redhat-developer-demos/openshift-ai.git
----

.. Click the *Clone* button.

.. After cloning, navigate to the *openshift-ai/2_interactive_classification* directory within the cloned repository.

. Open the Python notebook in the JupyterLab interface.
+
After you upload `kaggle.json` and clone the `openshift-ai` repository, the JupyterLab file browser on the left shows the `openshift-ai` directory and the `kaggle.json` file.

. Open `Interactive_Image_Classification_Notebook.ipynb` in the `openshift-ai` directory and run the notebook. The notebook contains all necessary instructions and is self-documented.

. Run the cells in the Python notebook as follows:

.. Start by executing each cell in order by pressing the play button or using the keyboard shortcut *Shift+Enter*.

.. Once you run the cell in Step 4, you should see an output as shown in the following screenshot.
+
image::rhoai/predict-step4.png[Interactive Real-Time Data Streaming and Visualization]

.. Running the cell in Step 5 produces an output of two images, one of a cat and one of a dog, with their respective predictions labeled as "Cat" and "Dog".

.. Once the code in the cell in Step 6 is executed, a *Predict* button appears, as shown in the screenshot below. The interactive session displays images with their predicted labels in real time as you click the *Predict* button. This dynamic interaction helps in understanding how well the model performs across a random set of images and provides insights into potential improvements for model training.
+
image::rhoai/predict.png[Interactive Real-Time Image Prediction with Widgets]
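
The demo notebook performs the dataset download itself; the following is only a minimal sketch of how the uploaded `kaggle.json` credentials can be used programmatically with the `kaggle` Python package. The config directory and the dataset slug are assumptions for illustration, not values taken from the notebook:

[source,python]
----
import os

# Tell the Kaggle client where the uploaded kaggle.json lives (it defaults to
# ~/.kaggle). Set this before importing the kaggle package, which
# authenticates on import.
os.environ["KAGGLE_CONFIG_DIR"] = "/opt/app-root/src"  # assumed JupyterLab home

from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads kaggle.json from KAGGLE_CONFIG_DIR

# Download and unzip a dataset; "owner/dataset-name" is a hypothetical slug.
api.dataset_download_files("owner/dataset-name", path="data", unzip=True)
----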

== Addressing misclassification in your AI model

Misclassification can significantly hinder a machine learning model's accuracy and reliability. To combat it, verify dataset balance, align preprocessing methods, and tweak model parameters. These steps are essential for ensuring that your model not only learns well, but also generalizes well to new, unseen data.

. Adjust the number of epochs to optimize training speed.
+
Changing the number of *epochs* can help you find the sweet spot where your model learns enough to perform well without overfitting. This is crucial for building a robust model that performs consistently.

. Try different values for steps per epoch.
+
Modifying *steps_per_epoch* affects how many batches of samples are used in one epoch, which influences the granularity of model updates and can help in dealing with imbalanced datasets or overfitting. For example, with 2,000 training images and a batch size of 20, one full pass over the data corresponds to `steps_per_epoch=100`.

For example, make these modifications in your notebook or another Python environment as part of *Step 3: Build and Train the Model*:

[source,python]
----
# Adjust the number of epochs and steps per epoch
model.fit(train_generator, steps_per_epoch=100, epochs=10)
----
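
If you would rather not hand-tune the epoch count, a common Keras pattern is to set a generous epoch budget and let an early-stopping callback end training once the validation loss stops improving. This is a sketch, not part of the demo notebook, and it assumes a validation generator named `val_generator` exists:

[source,python]
----
from tensorflow.keras.callbacks import EarlyStopping

# Stop when validation loss has not improved for 3 consecutive epochs,
# and roll back to the best weights seen during training.
early_stop = EarlyStopping(monitor="val_loss", patience=3,
                           restore_best_weights=True)

model.fit(
    train_generator,
    validation_data=val_generator,  # assumed to be defined with train_generator
    steps_per_epoch=100,
    epochs=50,  # upper bound; early stopping usually ends training sooner
    callbacks=[early_stop],
)
----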

[role="_additional-resources"]
.Additional resources

* link:https://developers.redhat.com/learn/openshift-ai[Red Hat OpenShift AI learning]
