
Commit 26bb523

docs-add-feedback (#64)

* docs-add-feedback
* docs-add-feedback updated navbar
* more doc updates
* docs--peer-review

1 parent cbc9204 commit 26bb523

15 files changed (+71 −47 lines)

workshop/docs/modules/ROOT/nav.adoc

Lines changed: 2 additions & 2 deletions
@@ -2,8 +2,8 @@
 ** xref:navigating-to-the-dashboard.adoc[1. The Dashboard]
 ** xref:setting-up-your-data-science-project.adoc[2. Data Science Projects]
 ** xref:storing-data-with-connections.adoc[3. Storage Data Connections]
-*** xref:creating-connections-to-storage.adoc[1. Manual]
-*** xref:running-a-script-to-install-storage.adoc[2. Scripted Local]
+*** xref:running-a-script-to-install-storage.adoc[1. Scripted Local]
+*** xref:creating-connections-to-storage.adoc[2. Manual]
 ** xref:enabling-data-science-pipelines.adoc[4. Enable Pipelines]
 
 * 2. Workbenches

workshop/docs/modules/ROOT/pages/automating-workflows-with-pipelines.adoc

Lines changed: 27 additions & 5 deletions
@@ -9,6 +9,11 @@ Your completed pipeline should look like the one in the `6 Train Save.pipeline`
 
 To explore the pipeline editor, complete the steps in the following procedure to create your own pipeline. Alternately, you can skip the following procedure and instead run the `6 Train Save.pipeline` file.
 
+== Prerequisites
+
+* You configured a pipeline server as described in xref:enabling-data-science-pipelines.adoc[Enabling data science pipelines].
+* If you configured the pipeline server after you created your workbench, you stopped and then started your workbench.
+
 == Create a pipeline
 
 . Open your workbench's JupyterLab environment. If the launcher is not visible, click *+* to open it.
@@ -178,14 +183,31 @@ Upload the pipeline on your cluster and run it. You can do so directly from the
 +
 image::pipelines/wb-pipeline-run-button.png[Pipeline Run Button, 300]
 
-
 . Enter a name for your pipeline.
-. Verify the *Runtime Configuration:* is set to `Data Science Pipeline`.
+. Verify that the *Runtime Configuration:* is set to `Data Science Pipeline`.
 . Click *OK*.
 +
-NOTE: If `Data Science Pipeline` is not available as a runtime configuration, you may have created your notebook before the pipeline server was available. You can restart your notebook after the pipeline server has been created in your data science project.
+[NOTE]
+====
+If you see an error message stating that "no runtime configuration for Data Science Pipeline is defined", you might have created your workbench before the pipeline server was available.
+
+To address this situation, you must verify that you configured the pipeline server and then restart the workbench.
+
+Follow these steps in the {productname-short} dashboard:
+
+. Check the status of the pipeline server:
+.. In your Fraud Detection project, click the *Pipelines* tab.
+** If you see the *Configure pipeline server* option, follow the steps in xref:enabling-data-science-pipelines[Enabling data science pipelines].
+** If you see the *Import a pipeline* option, the pipeline server is configured. Continue to the next step.
+. Restart your Fraud Detection workbench:
+.. Click the *Workbenches* tab.
+.. Click *Stop* and then click *Stop workbench*.
+.. After the workbench status is *Stopped*, click *Start*.
+.. Wait until the workbench status is *Running*.
+. Return to your workbench's JupyterLab environment and run the pipeline.
+====
 
-. Return to your data science project and expand the newly created pipeline.
+. In the {productname-short} dashboard, open your data science project and expand the newly created pipeline.
 +
 image::pipelines/dsp-pipeline-complete.png[New pipeline expanded, 800]
 
@@ -202,4 +224,4 @@ The result should be a `models/fraud/1/model.onnx` file in your S3 bucket which
 
 .Next step
 
-(optional) xref:running-a-pipeline-generated-from-python-code.adoc[Running a data science pipeline generated from Python code]
+(Optional) xref:running-a-pipeline-generated-from-python-code.adoc[Running a data science pipeline generated from Python code]
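The final hunk of this file notes that the pipeline run should produce a `models/fraud/1/model.onnx` object in your S3 bucket. As a rough illustration of that `models/<name>/<version>/model.onnx` layout, here is a minimal sketch; the helper names, the environment-variable names, and the use of boto3 are assumptions for illustration, not part of the workshop files:

```python
import os


def model_object_key(model_name: str, version: int = 1) -> str:
    """Build the S3 object key in the models/<name>/<version>/model.onnx
    layout that the pipeline run described above produces."""
    return f"models/{model_name}/{version}/model.onnx"


def upload_model(local_path: str, bucket: str, model_name: str) -> None:
    """Hypothetical upload of a saved ONNX model to the bucket.

    Assumes boto3 is available in the workbench image and that the
    connection injects AWS_S3_ENDPOINT / AWS_ACCESS_KEY_ID /
    AWS_SECRET_ACCESS_KEY environment variables (an assumption, not
    something this diff states).
    """
    import boto3  # imported lazily; only needed when actually uploading

    s3 = boto3.client(
        "s3",
        endpoint_url=os.environ["AWS_S3_ENDPOINT"],
        aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    )
    s3.upload_file(local_path, bucket, model_object_key(model_name))
```

The versioned key matters because model servers that consume ONNX artifacts typically expect a numeric version directory under the model name.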

workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc

Lines changed: 5 additions & 5 deletions
@@ -1,13 +1,13 @@
 [id='creating-a-workbench']
-= Creating a workbench and selecting a notebook image
+= Creating a workbench and selecting a workbench image
 
-A workbench is an instance of your development and experimentation environment. Within a workbench you can select a notebook image for your data science work.
+A workbench is an instance of your development and experimentation environment. When you create a workbench, you select a workbench image (sometimes referred to as a notebook image) that is optimized with the tools and libraries that you need for developing models.
 
 .Prerequisites
 
 * You created a `My Storage` connection as described in xref:storing-data-with-connections.adoc[Storing data with connections].
 
-* You configured a pipeline server as described in xref:enabling-data-science-pipelines.adoc[Enabling data science pipelines].
+* If you intend to complete the pipelines section of this {deliverable}, you configured a pipeline server as described in xref:enabling-data-science-pipelines.adoc[Enabling data science pipelines].
 
 
 .Procedure
@@ -22,7 +22,7 @@ image::workbenches/ds-project-create-workbench.png[Create workbench button, 800]
 +
 image::workbenches/create-workbench-form-name-desc.png[Workbench name and description, 600]
 +
-{org-name} provides several supported notebook images. In the *Notebook image* section, you can choose one of these images or any custom images that an administrator has set up for you. The *Tensorflow* image has the libraries needed for this {deliverable}.
+{org-name} provides several supported workbench images. In the *Notebook image* section, you can choose one of the default images or a custom image that an administrator has set up for you. The *Tensorflow* image has the libraries needed for this {deliverable}.
 
 . Select the latest *Tensorflow* image.
 +
@@ -57,4 +57,4 @@ image::workbenches/ds-project-workbench-list-edit.png[Workbench list edit, 350]
 
 .Next step
 
-xref:importing-files-into-jupyter.adoc[Importing the {deliverable} files into the Jupyter environment]
+xref:importing-files-into-jupyter.adoc[Importing the {deliverable} files into the JupyterLab environment]

workshop/docs/modules/ROOT/pages/creating-connections-to-storage.adoc

Lines changed: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ In the *Connections* tab for the project, check to see that your connections are
 image::projects/ds-project-connections.png[List of project connections, 500]
 
 
-.Next steps
+.Next step
 
 If you want to complete the pipelines section of this {deliverable}, go to xref:enabling-data-science-pipelines.adoc[Enabling data science pipelines].
 

workshop/docs/modules/ROOT/pages/enabling-data-science-pipelines.adoc

Lines changed: 4 additions & 4 deletions
@@ -1,13 +1,13 @@
 [id='enabling-data-science-pipelines']
 = Enabling data science pipelines
 
-NOTE: If you do not intend to complete the pipelines section of the workshop you can skip this step and move on to the next section, xref:creating-a-workbench.adoc[Create a Workbench].
+NOTE: If you do not intend to complete the pipelines section of this {deliverable} you can skip this step and move on to the next section, xref:creating-a-workbench.adoc[Create a Workbench].
 
 In this section, you prepare your {deliverable} environment so that you can use data science pipelines.
 
-In this {deliverable}, you implement an example pipeline by using the JupyterLab Elyra extension. With Elyra, you can create a visual end-to-end pipeline workflow that can be executed in OpenShift AI.
+Later in this {deliverable}, you implement an example pipeline by using the JupyterLab Elyra extension. With Elyra, you can create a visual end-to-end pipeline workflow that can be executed in {productname-short}.
 
-.Prerequisite
+.Prerequisites
 
 * You have installed local object storage buckets and created connections, as described in xref:storing-data-with-connections.adoc[Storing data with connections].
 
@@ -34,7 +34,7 @@ image::projects/ds-project-create-pipeline-server-form.png[Selecting the Pipelin
 You must wait until the pipeline configuration is complete before you continue and create your workbench. If you create your workbench before the pipeline server is ready, your workbench will not be able to submit pipelines to it.
 ====
 +
-If you have waited more than 5 minutes, and the pipeline server configuration does not complete, you can try to delete the pipeline server and create it again.
+If you have waited more than 5 minutes, and the pipeline server configuration does not complete, you can delete the pipeline server and create it again.
 +
 image::projects//ds-project-delete-pipeline-server.png[Delete pipeline server, 250]
 +

workshop/docs/modules/ROOT/pages/importing-files-into-jupyter.adoc

Lines changed: 5 additions & 5 deletions
@@ -1,9 +1,9 @@
 [id='importing-files-into-jupyter']
-= Importing the {deliverable} files into the Jupyter environment
+= Importing the {deliverable} files into the JupyterLab environment
 
 :git-version: main
 
-The Jupyter environment is a web-based environment, but everything you do inside it happens on *{productname-long}* and is powered by the *OpenShift* cluster. This means that, without having to install and maintain anything on your own computer, and without disposing of valuable local resources such as CPU, GPU and RAM, you can conduct your Data Science work in this powerful and stable managed environment.
+The JupyterLab environment is a web-based environment, but everything you do inside it happens on *{productname-long}* and is powered by the *OpenShift* cluster. This means that, without having to install and maintain anything on your own computer, and without using valuable local resources such as CPU, GPU and RAM, you can conduct your data science work in this powerful and stable managed environment.
 
 .Prerequisites
 
@@ -15,11 +15,11 @@ You created a workbench, as described in xref:creating-a-workbench.adoc[Creating
 +
 image::workbenches/ds-project-workbench-open.png[Open workbench]
 +
-Your Jupyter environment window opens.
+Your JupyterLab environment window opens.
 +
 This file-browser window shows the files and folders that are saved inside your own personal space in {productname-short}.
 
-. Bring the content of this {deliverable} inside your Jupyter environment:
+. Bring the content of this {deliverable} inside your JupyterLab environment:
 
 .. On the toolbar, click the *Git Clone* icon:
 +
@@ -36,7 +36,7 @@ https://github.com/rh-aiservices-bu/fraud-detection.git
 +
 image::workbenches/jupyter-git-modal.png[Git Modal, 200]
 
-.. Check the *Include submodules* option, and then click *Clone*.
+.. Select the *Include submodules* option, and then click *Clone*.
 
 .. In the file browser, double-click the newly-created *fraud-detection* folder.
 +
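The clone steps in this file use JupyterLab's Git extension dialog. The equivalent from a workbench terminal, including the *Include submodules* option, is a one-liner; this is a sketch of the equivalent command, not a step the workshop itself prescribes:

```shell
# Clone the workshop repository together with its submodules,
# matching the "Include submodules" checkbox in the JupyterLab Git dialog.
git clone --recurse-submodules https://github.com/rh-aiservices-bu/fraud-detection.git
cd fraud-detection
```

`--recurse-submodules` initializes and checks out any submodules in one pass, which is why the dialog exposes it as a checkbox.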

workshop/docs/modules/ROOT/pages/index.adoc

Lines changed: 2 additions & 4 deletions
@@ -8,22 +8,20 @@
 
 Welcome. In this {deliverable}, you learn how to incorporate data science and artificial intelligence and machine learning (AI/ML) into an OpenShift development workflow.
 
-You will use an example fraud detection model to complete the following tasks:
+You use an example fraud detection model to complete the following tasks in https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai[{productname-long}] without the need to install anything on your computer:
 
 * Explore a pre-trained fraud detection model by using a Jupyter notebook.
 * Deploy the model by using {productname-short} model serving.
 * Refine and train the model by using automated pipelines.
 * Learn how to train the model by using Ray, a distributed computing framework.
 
-You do not have to install anything on your own computer, thanks to https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-ai[{productname-long}].
-
 == About the example fraud detection model
 
 The example fraud detection model monitors credit card transactions for potential fraudulent activity. It analyzes the following credit card transaction details:
 
 * The geographical distance from the previous credit card transaction.
 * The price of the current transaction, compared to the median price of all the user's transactions.
-* Whether the user completed the transaction by using the hardware chip in the credit card, entered a PIN number, or for an online purchase.
+* Whether the user completed the transaction by using the hardware chip in the credit card, by entering a PIN number, or by making an online purchase.
 
 Based on this data, the model outputs the likelihood of the transaction being fraudulent.
 

workshop/docs/modules/ROOT/pages/navigating-to-the-dashboard.adoc

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@
 
 ** *If you are using the {org-name} Developer Sandbox*:
 +
-After you log in to the Sandbox, under *Available services*, in the {productname-long} card, click *Launch*.
+After you log in to the Sandbox, click *Getting Started* -> *Available services*, and then, in the {productname-long} card, click *Launch*.
 +
 image::projects/sandbox-rhoai-tile.png[{productname-short} dashboard link]
 

workshop/docs/modules/ROOT/pages/preparing-a-model-for-deployment.adoc

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ image::projects/ds-project-connections.png[Data storage in workbench]
 
 .Procedure
 
-. In your Jupyter environment, open the `2_save_model.ipynb` file.
+. In your JupyterLab environment, open the `2_save_model.ipynb` file.
 
 . Follow the instructions in the notebook to make the model accessible in storage and save it in the portable ONNX format.
 

workshop/docs/modules/ROOT/pages/running-a-pipeline-generated-from-python-code.adoc

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ In the previous section, you created a simple pipeline by using the GUI pipeline
 
 This {deliverable} does not describe the details of how to use the SDK. Instead, it provides the files for you to view and upload.
 
-. Optionally, view the provided Python code in your Jupyter environment by navigating to the `fraud-detection-notebooks` project's `pipeline` directory. It contains the following files:
+. Optionally, view the provided Python code in your JupyterLab environment by navigating to the `fraud-detection-notebooks` project's `pipeline` directory. It contains the following files:
 +
 * `7_get_data_train_upload.py` is the main pipeline code.
 * `get_data.py`, `train_model.py`, and `upload.py` are the three components of the pipeline.
