Commit d7e5202
Fix a few mistakes in the workshop instructions
1 parent 6db5b28

3 files changed: +3 -3 lines changed

content/modules/ROOT/pages/03-04-comparing-model-servers.adoc
Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 = Comparing two LLMs
 include::_attributes.adoc[]
 
-So far, for this {ic-lab}, we have used the model https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2[Mistral-7B Instruct v2,window=_blank]. Although lighter than other models, it is still quite heavy and we need a large GPU to run it. Would we get as good results with a smaller model running on a CPU only? Let's try!
+So far, for this {ic-lab}, we have used the model https://huggingface.co/ibm-granite/granite-7b-instruct[Granite 7B Instruct,window=_blank]. Although lighter than other models, it is still quite heavy and we need a large GPU to run it. Would we get as good results with a smaller model running on a CPU only? Let's try!
 
In this exercise, we'll pitch our previous model against a much smaller LLM called https://huggingface.co/google/flan-t5-large[flan-t5-large,window=_blank]. We'll compare the results and see if the smaller model is good enough for our use case.
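The exercise described in this hunk amounts to sending the same prompt to two models and comparing the replies side by side. A minimal sketch of such a harness in Python, with stub callables standing in for the real Granite 7B Instruct and flan-t5-large inference endpoints (the stubs, names, and prompt are hypothetical illustrations, not the workshop's actual code):

```python
def compare_models(prompt, models):
    """Send the same prompt to every model and collect the replies by name."""
    return {name: generate(prompt) for name, generate in models.items()}


# Stub callables standing in for real inference endpoints (hypothetical);
# in the workshop these would be HTTP calls to the two served models.
def granite_stub(prompt):
    return f"[granite-7b-instruct] reply to: {prompt}"


def flan_stub(prompt):
    return f"[flan-t5-large] reply to: {prompt}"


if __name__ == "__main__":
    replies = compare_models(
        "Summarize this insurance claim in one sentence.",
        {"granite-7b-instruct": granite_stub, "flan-t5-large": flan_stub},
    )
    for name, reply in replies.items():
        print(f"{name}: {reply}")
```

Swapping a stub for a real endpoint only requires replacing the callable; the comparison loop itself stays unchanged.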

content/modules/ROOT/pages/05-05-process-claims.adoc
Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ image::05/05-new-app-claim-unprocessed.jpg[]
 
 Of course, we want to execute this processing, and it's even better if it can be fully automated!
 
-For that, we will use a pipeline that can either be run ad-hoc or scheduled just like, the confidence check pipeline. However, in this case, it won't technically be a Data Science Pipeline. It will be more of a raw Tekton Pipeline.
+For that, we will use a pipeline that can either be run ad-hoc or scheduled just like, the confidence check pipeline. However, in this case, it won't technically be a Data Science Pipeline. It will be more of a raw Argo Workflow.
 
 // This pipeline is also a good starting point for creating an ArgoCD or Tekton pipeline which can be automatically triggered.
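For readers unfamiliar with the distinction this hunk corrects: a "raw" Argo Workflow is just a Kubernetes custom resource (`apiVersion: argoproj.io/v1alpha1`, `kind: Workflow`) rather than a Data Science Pipeline authored through a pipeline SDK. A minimal sketch of such a manifest's shape, built as a Python dict; the resource name, image, and command are hypothetical placeholders, not the workshop's actual pipeline:

```python
import json

# Minimal Argo Workflow manifest as a Python dict. The generateName, image,
# and command below are hypothetical placeholders for illustration only.
workflow = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Workflow",
    "metadata": {"generateName": "process-claims-"},
    "spec": {
        "entrypoint": "process",
        "templates": [
            {
                "name": "process",
                "container": {
                    "image": "quay.io/example/claims-processor:latest",
                    "command": ["python", "process_claims.py"],
                },
            }
        ],
    },
}

if __name__ == "__main__":
    # Serialize the manifest; in practice this would be applied to the
    # cluster (e.g. via kubectl or the Kubernetes API) rather than printed.
    print(json.dumps(workflow, indent=2))
```

Scheduling such a workflow ad-hoc versus on a timer is then a matter of submitting it directly or wrapping it in a CronWorkflow resource.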

content/modules/ROOT/pages/06-01-potential-imp-ref.adoc
Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ If you want to read what **we** thought could be improved, read below! (response
 ** Mismatch in license plate, if visible in the picture.
 * We've only scratched the surface with gitops and Data Science pipelines here
 ** There was no performance testing done. If too many users connect at the same time, it might overwhelm either the app, the database, the LLM, etc...
-* Currently, most simple changes would probably end up breaking the application. And the person who, for example decides to change Mistral7B for Flan-T5-Large would not necessarily realize that.
+* Currently, most simple changes would probably end up breaking the application. And the person who, for example decides to change Granite-7B for Flan-T5-Large would not necessarily realize that.
 ** It would be critical to have multiple instances (Dev/Test/UAT/Prod) of the application.
 ** It would also be required to have integration pipelines run in these environments to confirm that changes made do not break the overall application.
 * We could ask the LLM to start writing a response to the customer.
