
Commit 7b42799

incorporate peer review feedback (#49)

* incorporate peer review feedback
* final peer review comments from downstream

1 parent f307763 commit 7b42799

20 files changed: +35 −34 lines changed

workshop/docs/modules/ROOT/pages/automating-workflows-with-pipelines.adoc

Lines changed: 3 additions & 3 deletions
@@ -65,13 +65,13 @@ image::pipelines/wb-pipeline-node-1.png[Select Node 1, 150]
 
 . Scroll down to the *File Dependencies* section and then click *Add*.
 +
-image::pipelines/wb-pipeline-node-1-file-dep.png[Add File Dependency, 400]
+image::pipelines/wb-pipeline-node-1-file-dep.png[Add File Dependency, 500]
 
 . Set the value to `data/*.csv` which contains the data to train your model.
 
 . Select the *Include Subdirectories* option and then click *Add*.
 +
-image::pipelines/wb-pipeline-node-1-file-dep-form.png[Set File Dependency Value, 300]
+image::pipelines/wb-pipeline-node-1-file-dep-form.png[Set File Dependency Value, 500]
 
 . Save the pipeline.

@@ -168,7 +168,7 @@ image::pipelines/wb-pipeline-kube-secret-form.png[Secret Form, 300]
 
 == Run the Pipeline
 
-Upload the pipeline on your cluster and run it. You can do so directly from the pipeline editor. You can use your own newly created pipeline for this or `6 Train Save.pipeline`.
+Upload the pipeline on your cluster and run it. You can do so directly from the pipeline editor. You can use your own newly created pipeline or the pipeline provided in the `6 Train Save.pipeline` file.
 
 .Procedure
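
For context on the `data/*.csv` file dependency set earlier in this page: a minimal sketch of how a pipeline node might load those files at run time, assuming pandas is available in the node's runtime image and the dependency files are staged into the working directory (variable names are illustrative):

import glob

import pandas as pd

# The Elyra file dependency stages data/*.csv into the node's working directory.
csv_files = sorted(glob.glob("data/*.csv"))

# Combine all staged CSV files into one training DataFrame.
df = pd.concat((pd.read_csv(path) for path in csv_files), ignore_index=True)
print(f"Loaded {len(df)} rows from {len(csv_files)} CSV files")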

workshop/docs/modules/ROOT/pages/conclusion.adoc

Lines changed: 2 additions & 1 deletion
@@ -5,10 +5,11 @@
 [.text-center.strong]
 == Conclusion
 
-Congratulations. In this {deliverable}, you learned how to incorporate data science, artificial intelligence, and machine learning into an OpenShift development workflow.
+Congratulations. In this {deliverable}, you learned how to incorporate data science, artificial intelligence, and machine learning into an OpenShift development workflow.
 
 You used an example fraud detection model and completed the following tasks:
 
 * Explored a pre-trained fraud detection model by using a Jupyter notebook.
 * Deployed the model by using {productname-short} model serving.
 * Refined and trained the model by using automated pipelines.
+* Learned how to train the model by using Ray, a distributed computing framework.
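
The new Ray bullet points at distributed training; a minimal sketch of a Ray remote task, assuming the `ray` package is installed in the workbench image (the function and shard count are illustrative, not part of the workshop materials):

import ray

ray.init()  # connects to a local Ray runtime, or to a cluster if one is configured

@ray.remote
def train_shard(shard_id: int) -> str:
    # Placeholder for per-shard training work.
    return f"trained shard {shard_id}"

# Fan out four training tasks and gather their results.
results = ray.get([train_shard.remote(i) for i in range(4)])
print(results)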

workshop/docs/modules/ROOT/pages/creating-a-workbench.adoc

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ A workbench is an instance of your development and experimentation environment.
 
 . Click the *Workbenches* tab, and then click the *Create workbench* button.
 +
-image::workbenches/ds-project-create-workbench.png[Create workbench button, 300]
+image::workbenches/ds-project-create-workbench.png[Create workbench button, 600]
 
 . Fill out the name and description.
 +

workshop/docs/modules/ROOT/pages/creating-data-connections-to-storage.adoc

Lines changed: 8 additions & 8 deletions
@@ -1,11 +1,11 @@
 [id='creating-data-connections-to-storage']
 = Creating data connections to your own S3-compatible object storage
 
-If you have existing S3-compatible storage buckets that you want to use for this {deliverable}, you must create a data connection to one storage bucket for saving your data and models and, if you want to complete the pipelines section of this {deliverable}, create another data connection to a different storage bucket for saving pipeline artifacts.
+If you have existing S3-compatible storage buckets that you want to use for this {deliverable}, you must create a data connection to one storage bucket for saving your data and models. If you want to complete the pipelines section of this {deliverable}, create another data connection to a different storage bucket for saving pipeline artifacts.
 
-NOTE: If you do not have your own s3-compatible storage, or if you want to use a disposable local Minio instance instead, skip this section and follow the steps in xref:running-a-script-to-install-storage.adoc[Running a script to install local object storage buckets and create data connections]. The provided script automatically completes the following tasks for you: creates a Minio instance in your project, creates two storage buckets in that Minio instance, creates two data connections in your project, one for each bucket and both using the same credentials, and installs required network policies for service mesh functionality.
+NOTE: If you do not have your own s3-compatible storage, or if you want to use a disposable local Minio instance instead, skip this section and follow the steps in xref:running-a-script-to-install-storage.adoc[Running a script to install local object storage buckets and create data connections]. The provided script automatically completes the following tasks for you: creates a Minio instance in your project, creates two storage buckets in that Minio instance, creates two data connections in your project, one for each bucket and both using the same credentials, and installs required network policies for service mesh functionality.
 
-.Prerequisite
+.Prerequisites
 
 To create data connections to your existing S3-compatible storage buckets, you need the following credential information for the storage buckets:
 
@@ -27,11 +27,11 @@ If you don't have this information, contact your storage administrator.
 +
 image::projects/ds-project-add-dc.png[Add data connection]
 
-.. Fill out the *Add data connection* form and name your connection *My Storage*. This connection is for saving your personal work, including data and models.
+.. Complete the *Add data connection* form and name your connection *My Storage*. This connection is for saving your personal work, including data and models.
 +
 NOTE: Skip the *Connected workbench* item. You add data connections to a workbench in a later section.
 +
-image::projects/ds-project-my-storage-form.png[Add my storage form, 400]
+image::projects/ds-project-my-storage-form.png[Add my storage form, 500]
 
 .. Click *Add data connection*.
 
@@ -41,11 +41,11 @@ NOTE: If you do not intend to complete the pipelines section of the {deliverable
 
 .. Click *Add data connection*.
 
-.. Fill out the form and name your connection *Pipeline Artifacts*.
+.. Complete the form and name your connection *Pipeline Artifacts*.
 +
 NOTE: Skip the *Connected workbench* item. You add data connections to a workbench in a later section.
 +
-image::projects/ds-project-pipeline-artifacts-form.png[Add pipeline artifacts form, 400]
+image::projects/ds-project-pipeline-artifacts-form.png[Add pipeline artifacts form, 500]
 
 .. Click *Add data connection*.
 
@@ -54,7 +54,7 @@ image::projects/ds-project-pipeline-artifacts-form.png[Add pipeline artifacts fo
 
 In the *Data connections* tab for the project, check to see that your data connections are listed.
 
-image::projects/ds-project-dc-list.png[List of project data connections, 400]
+image::projects/ds-project-dc-list.png[List of project data connections, 500]
 
 
 .Next steps
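
For reviewers trying out the data connections: once a connection is attached to a workbench, its credentials are exposed to notebooks as environment variables. A minimal sketch of reading the bucket with boto3, assuming the conventional `AWS_*` variable names a data connection injects (verify the exact names in your environment):

import os

import boto3

# Credentials and endpoint come from the data connection attached to the workbench.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["AWS_S3_ENDPOINT"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

bucket = os.environ["AWS_S3_BUCKET"]
# List the objects saved through the "My Storage" connection.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"])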

workshop/docs/modules/ROOT/pages/deploying-a-model-multi-model-server.adoc

Lines changed: 2 additions & 2 deletions
@@ -3,7 +3,7 @@
 
 {productname-short} multi-model servers can host several models at once. You create a new model server and deploy your model to it.
 
-.Prerequisite
+.Prerequisites
 
 * A user with `admin` privileges has enabled the multi-model serving platform on your OpenShift cluster.
 
@@ -43,7 +43,7 @@ image::model-serving/deploy-model-form-mm.png[Deploy model from for multi-model
 
 .Verification
 
-Notice the loading symbol under the *Status* section. It will change to a green checkmark when the deployment is completes successfully.
+Notice the loading symbol under the *Status* section. The symbol changes to a green checkmark when the deployment completes successfully.
 
 image::model-serving/ds-project-model-list-status-mm.png[Deployed model status]
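
To go one step beyond watching the *Status* checkmark, you can poll the model's readiness endpoint. A minimal sketch assuming a KServe v2-style REST endpoint; the URL is a placeholder you copy from the deployed model's details:

import requests

# Placeholder URL; use the inference endpoint shown for your deployed model.
base_url = "https://<inference-route>"

# The KServe v2 protocol exposes a per-model readiness endpoint.
resp = requests.get(f"{base_url}/v2/models/fraud/ready", timeout=10)
print("model ready" if resp.ok else f"not ready yet: {resp.status_code}")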

workshop/docs/modules/ROOT/pages/deploying-a-model-single-model-server.adoc

Lines changed: 3 additions & 6 deletions
@@ -3,10 +3,7 @@
 
 {productname-short} single-model servers host only one model. You create a new model server and deploy your model to it.
 
-NOTE: Depending on how model serving has been configured on your cluster, you might see only one model serving platform option.
-
-
-.Prerequisite
+.Prerequisites
 
 * A user with `admin` privileges has enabled the single-model serving platform on your OpenShift cluster.
 
@@ -27,13 +24,13 @@ NOTE: Depending on how model serving has been configured on your cluster, you mi
 .. Type the path that leads to the version folder that contains your model file: `models/fraud`
 .. Leave the other fields with the default settings.
 +
-image::model-serving/deploy-model-form-sm.png[Deploy model from for single-model serving, 400]
+image::model-serving/deploy-model-form-sm.png[Deploy model form for single-model serving, 500]
 
 . Click *Deploy*.
 
 .Verification
 
-Notice the loading symbol under the *Status* section. It will change to a green checkmark when the deployment is completes successfully.
+Notice the loading symbol under the *Status* section. The symbol changes to a green checkmark when the deployment completes successfully.
 
 image::model-serving/ds-project-model-list-status-sm.png[Deployed model status]
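
Once the deployment shows the green checkmark, a quick way to verify the endpoint end to end is a test inference call. A minimal sketch assuming the KServe v2 REST protocol; the URL, input name, shape, and data values are illustrative and should be taken from your deployed model:

import requests

# Placeholder URL; copy the real inference endpoint from the model's details page.
url = "https://<inference-route>/v2/models/fraud/infer"

# Illustrative input tensor; match your model's actual input name and shape.
payload = {
    "inputs": [
        {
            "name": "dense_input",
            "shape": [1, 5],
            "datatype": "FP32",
            "data": [0.3, 1.2, 0.0, 0.5, 0.9],
        }
    ]
}

resp = requests.post(url, json=payload, timeout=10)
print(resp.json())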
