**README.md** (+11 −67)
```diff
@@ -10,11 +10,11 @@ The demo demonstrates two usages of GenAI:
 - Unstructured data generation: Generating audio data with ground truth metadata to evaluate the analysis.
 - Unstructured data analysis: Turning audio calls into text and tabular features.
 
-The demo contains a single [notebook](./call-center-demo.ipynb) that encompasses the entire demo.
+The demo contains two notebooks: [notebook 1](./notebook_1_generation.ipynb) and [notebook 2](./notebook_2_analysis.ipynb).
 
 Most of the functions are imported from [MLRun's hub](https://www.mlrun.org/hub/), which contains a wide range of functions and modules that can be used for a variety of use cases. See also the [MLRun hub documentation](https://docs.mlrun.org/en/stable/runtimes/load-from-hub.html). All functions used in the demo include links to their source in the hub. All of the Python source code is under [/src](./src).
 
-> **⚠️ Important** This demo can take an hour to complete when running without GPUs.
+> **⚠️ Important** This demo can take up to a couple of hours to complete when running without GPUs.
 
 ## Prerequisites
```
```diff
@@ -26,9 +26,8 @@ This demo uses:
 * [**Vizro**](https://vizro.mckinsey.com/) — To view the call center DB and transcriptions, and to play the generated conversations.
 * [**MLRun**](https://www.mlrun.org/) — The orchestrator used to operationalize the workflow. MLRun 1.9 and higher, Python 3.11, with CPU or GPU.
 * [**SQLAlchemy**](https://www.sqlalchemy.org/) — Manages the MySQL DB of calls, clients, and agents. Installed together with MLRun.
-* MySQL database. Installed together with MLRun. (SQLite is not currently supported.)
+* SQLite
 
-<a id="installation"></a>
 ## Installation
 
 This project can run in different development environments:
```
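The hunk above swaps the MySQL requirement for SQLite. As a rough sketch of what that change means in practice, SQLite needs no server at all — note that the demo's real schema lives under [/src](./src) and is managed through SQLAlchemy, so the table and column names below are purely illustrative:

```python
import sqlite3

# Hypothetical call-center schema; the demo's actual models are defined
# via SQLAlchemy in /src. SQLite runs in-process, so no DB server is needed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE client (client_id TEXT PRIMARY KEY, first_name TEXT, last_name TEXT);
CREATE TABLE agent  (agent_id  TEXT PRIMARY KEY, first_name TEXT, last_name TEXT);
CREATE TABLE call (
    call_id            TEXT PRIMARY KEY,
    client_id          TEXT REFERENCES client(client_id),
    agent_id           TEXT REFERENCES agent(agent_id),
    audio_file         TEXT,
    transcription_file TEXT
);
""")
conn.execute("INSERT INTO client VALUES ('c1', 'Ada', 'Lovelace')")
rows = conn.execute("SELECT first_name FROM client").fetchall()
print(rows)  # [('Ada',)]
```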
```diff
@@ -54,7 +53,8 @@ Make sure you open the notebooks and select the `mlrun` conda environment
 The MLRun service and computation can run locally (minimal setup) or over a remote Kubernetes environment.
 
-If your development environment supports Docker and there are sufficient CPU resources, run:
+If your development environment supports Docker and there are sufficient CPU resources (note: support for the Docker setup will be deprecated), run:
 
     make mlrun-docker
```
````diff
@@ -72,31 +72,11 @@ in this repo); see [mlrun client setup](https://docs.mlrun.org/en/stable/install
 > Note: You can also use a remote MLRun service (over Kubernetes): instead of starting a local mlrun,
 edit the [mlrun.env](./mlrun.env) and specify its address and credentials.
 
-### Install SQLAlchemy
-
-```
-!pip install SQLAlchemy==2.0.31 pymysql dotenv
-```
-
-### Setup
-
-Set the following configuration: choose the compute device (CPU or GPU), choose the language of the calls, and whether to skip the calls-generation workflow and use pre-generated data. For example:
-
-```
-# True = run with GPU, False = run with CPU
-run_with_gpu = False
-use_sqlite = False
-engine = "remote"
-language = "en"  # The language of the calls: es - Spanish, en - English
-skip_calls_generation = False
-```
+#### Setup
 
-#### Setup in Platform McK
-
-Differences between installing on Iguazio cluster and Platform McKinsey:
````
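For the remote-service path mentioned above, [mlrun.env](./mlrun.env) typically carries the service address and credentials. A hedged example — the variable names follow the MLRun client-setup docs, this repo's actual file may differ, and every value is a placeholder:

```shell
# mlrun.env — all values are placeholders; copy real ones from your deployment
MLRUN_DBPATH=<your MLRun API address, e.g. https://mlrun-api.example.com>
V3IO_USERNAME=<your username>      # only needed for an Iguazio-backed service
V3IO_ACCESS_KEY=<your access key>  # only needed for an Iguazio-backed service
```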
```diff
-**Description**: Generate the call data. (You can choose to skip this step and use call data that is already generated and available in the demo.)
-**Key steps**: To generate data, run: Agents & clients data generator, Insert agents & clients data to DB, Get agents & clients from DB, Conversation generation, Text to Audio, and Batch creation. Then run the workflow.
-**Description**: Insert the call data to the DB, use diarization to analyze when each person is speaking, transcribe and translate the calls into text and save them as text files, recognize and remove any PII, analyze the text (call center conversation) with an LLM, and postprocess the LLM's answers before updating them in the DB. Then run the full analysis workflow.
-**Key steps**: Insert the calls data to the DB, perform speech diarization, transcribe, recognize PII, analyze. Then run the workflow.
-**Description**: View the data and features, as they are collected, in the MLRun UI. Deploy [Vizro](https://vizro.mckinsey.com/) to visualize the data in the DB.
```
**notebook_2_analysis.ipynb** (+14 −25)
```diff
@@ -29,40 +29,29 @@
   },
   {
    "cell_type": "markdown",
-   "id": "0537e242-f14b-4d68-a8ca-b45357849f4c",
+   "id": "c01ac25f-2ba8-4d3a-a22d-975178ca9e53",
    "metadata": {},
    "source": [
-    "___\n",
-    "<a id=\"get_the_project\"></a>\n",
-    "## Get the project "
+    "> **⚠️ Important** Depending on the size of the data, this demo can take up to a couple of hours to complete."
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "330ea34f-2d34-472c-995f-9f171afb03cf",
-   "metadata": {},
-   "source": [
-    "- This demo is limited to Python 3.11, with CPU, and run the pipeline with `engine = \"remote\"`.\n",
-    "- GPU is not supported at the moment.\n",
-    "- Please set `run_with_gpu = False`, `engine = \"remote\"`\n",
-    "- .env include OPENAI_API_KEY, OPENAI_API_BASE"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "id": "6c44bb2b-5ded-49d6-ae4d-7fa9ca67d1e4",
+   "id": "0537e242-f14b-4d68-a8ca-b45357849f4c",
    "metadata": {},
    "source": [
-    "#### Setup in Platform McK"
+    "___\n",
+    "<a id=\"get_the_project\"></a>\n",
+    "## 1. Get the project "
    ]
   },
   {
    "cell_type": "markdown",
-   "id": "05e3b4b8-d370-4fcf-944b-e8f3b478fa24",
+   "id": "330ea34f-2d34-472c-995f-9f171afb03cf",
    "metadata": {},
    "source": [
+    "- This demo is limited to Python 3.11, with CPU, and run the pipeline with `engine = \"remote\"`.\n",
     "- GPU is not supported at the moment.\n",
-    "- sqlite is supported.\n",
     "- Please set `run_with_gpu = False`, `engine = \"remote\"`\n",
     "- .env include OPENAI_API_KEY, OPENAI_API_BASE"
    ]
```
```diff
@@ -72,7 +61,7 @@
    "id": "b5eb3156-4dba-4ef7-a406-13e89772e700",
    "metadata": {},
    "source": [
-    "### Fill the tokens and URL\n",
+    "### 1.1 Fill the tokens and URL\n",
     "\n",
     "> **⚠️ Important** Please fill the following variables in your `.env` file.\n",
     "\n",
```
```diff
@@ -130,7 +119,7 @@
    "id": "1ea33fee-ec95-48e3-aae8-9247ae182481",
    "metadata": {},
    "source": [
-    "### Get the current project\n",
+    "### 1.2 Get the current project\n",
     "\n",
     "The MLRun project is created by running the function [`mlrun.get_or_create_project`](https://docs.mlrun.org/en/latest/api/mlrun.projects.html#mlrun.projects.get_or_create_project). This creates the project (or loads it if previously created) and sets it up automatically according to the [project_setup.py](./project_setup.py) file located in this repo. \n",
     "\n",
```
```diff
@@ -181,7 +170,7 @@
    "source": [
     "___\n",
     "<a id=\"calls_analysis\"></a>\n",
-    "## 3. Calls analysis\n",
+    "## 2. Calls analysis\n",
     "\n",
     "The workflow includes multiple steps for which all of the main functions are imported from the **[MLRun Function Hub](https://www.mlrun.org/hub/)**. You can see each hub function's docstring, code, and example, by clicking the function name in the following list:\n",
     "\n",
```
```diff
@@ -227,7 +216,7 @@
    "id": "c68f2edd-0c0b-403a-8fc6-156b299dc8f1",
    "metadata": {},
    "source": [
-    "### 3.1. Run the workflow\n",
+    "### 2.1. Run the workflow\n",
     "\n",
     "Now, run the workflow using the following parameters:\n",
     "* `batch: str` — Path to the dataframe artifact that represents the batch to analyze. \n",
```
```diff
@@ -303,7 +292,7 @@
    "source": [
     "___\n",
     "<a id=\"view_the_data\"></a>\n",
-    "## 4. View the data\n",
+    "## 3. View the data\n",
     "\n",
     "While the workflow is running, you can view the data and features as they are collected.\n",
     "\n",
```
```diff
@@ -375,7 +364,7 @@
    "source": [
     "___\n",
     "<a id=\"future_work\"></a>\n",
-    "## 5. Future work\n",
+    "## 4. Future work\n",
     "\n",
     "This demo is a proof of concept for LLM's feature-extraction capabilities, while using MLRun for the orchestration from development to production. The demo continues to be developed. You are welcome to track and develop it with us:\n",
```