Commit 739f3d2

Merge pull request #40 from xsqian/main
Add a note that, depending on the size of the data, this demo can take up to a couple of hours to complete.
2 parents 6763d39 + a018944

File tree

4 files changed: +35 additions, −589 deletions

README.md

Lines changed: 11 additions & 67 deletions
@@ -10,11 +10,11 @@ The demo demonstrates two usages of GenAI:
 - Unstructured data generation: Generating audio data with ground truth metadata to evaluate the analysis.
 - Unstructured data analysis: Turning audio calls into text and tabular features.
 
-The demo contains a single [notebook](./call-center-demo.ipynb) that encompasses the entire demo.
+The demo contains two notebooks: [notebook 1](./notebook_1_generation.ipynb) and [notebook 2](./notebook_2_analysis.ipynb).
 
 Most of the functions are imported from [MLRun's hub](https://www.mlrun.org/hub/), which contains a wide range of functions and modules that can be used for a variety of use cases. See also the [MLRun hub documentation](https://docs.mlrun.org/en/stable/runtimes/load-from-hub.html). All functions used in the demo include links to their source in the hub. All of the Python source code is under [/src](./src).
 
-> **⚠️ Important:** This demo can take an hour to complete when running without GPUs.
+> **⚠️ Important:** This demo can take up to a couple of hours to complete when running without GPUs.
 
 ## Prerequisites
@@ -26,9 +26,8 @@ This demo uses:
 * [**Vizro**](https://vizro.mckinsey.com/) — To view the call center DB and transcriptions, and to play the generated conversations.
 * [**MLRun**](https://www.mlrun.org/) — The orchestrator to operationalize the workflow. MLRun 1.9 and higher, Python 3.11, with CPU or GPU.
 * [**SQLAlchemy**](https://www.sqlalchemy.org/) — Manages the MySQL DB of calls, clients, and agents. Installed together with MLRun.
-- MySQL database. Installed together with MLRun. (SQLite is not currently supported.)
+- SQLite
 
-<a id="installation"></a>
 ## Installation
 
 This project can run in different development environments:
@@ -54,7 +53,8 @@ Make sure you open the notebooks and select the `mlrun` conda environment
 The MLRun service and computation can run locally (minimal setup) or over a remote Kubernetes environment.
 
-If your development environment supports Docker and there are sufficient CPU resources, run:
+If your development environment supports Docker and there are sufficient CPU resources (support for the Docker setup will be deprecated), run:
 
 make mlrun-docker
 
@@ -72,31 +72,11 @@ in this repo); see [mlrun client setup](https://docs.mlrun.org/en/stable/install
 > Note: You can also use a remote MLRun service (over Kubernetes): instead of starting a local mlrun,
 edit the [mlrun.env](./mlrun.env) and specify its address and credentials.
 
-### Install SQLAlchemy
-
-```
-!pip install SQLAlchemy==2.0.31 pymysql dotenv
-```
-
-### Setup
-Set the following configuration: choose the compute device (CPU or GPU), the language of the calls, and whether to skip the calls-generation workflow and use pre-generated data. For example:
-
-```
-# True = run with GPU, False = run with CPU
-run_with_gpu = False
-use_sqlite = False
-engine = "remote"
-language = "en"  # The language of the calls: es - Spanish, en - English
-skip_calls_generation = False
-```
+#### Setup
-
-#### Setup in Platform McK
-
-Differences between installing on Iguazio cluster and Platform McKinsey:
-- SQLite is supported
 - Set `run_with_gpu = False`, `use_sqlite = True`, `engine = "remote"`.
-- `.env` must include `OPENAI_API_KEY`, `OPENAI_API_BASE`, and `S3_BUCKET_NAME`.
-* [S3 Bucket]() &mdash;
-* `S3_BUCKET_NAME`
+- `.env` must include `OPENAI_API_KEY`, `OPENAI_API_BASE`
 
 ### Configure the tokens and URL
 
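The README requires `OPENAI_API_KEY` and `OPENAI_API_BASE` in a `.env` file. As a stdlib-only sketch of what loading and validating such a file involves (the demo itself uses `dotenv.load_dotenv`; this parser is an illustration, not the demo's code):

```python
import os

REQUIRED_KEYS = ("OPENAI_API_KEY", "OPENAI_API_BASE")

def load_env_file(path=".env"):
    """Parse simple KEY=VALUE lines into os.environ (comments and blanks skipped)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Don't clobber variables already set in the environment.
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

def check_required(keys=REQUIRED_KEYS):
    """Fail fast if any required key is absent or empty."""
    missing = [k for k in keys if not os.environ.get(k)]
    if missing:
        raise RuntimeError(f"Missing required .env keys: {missing}")
```

This mirrors the fail-fast `assert os.environ[...]` cells that used to live in the single notebook.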
@@ -108,55 +88,19 @@ Tokens are required to run the demo end-to-end:
 * [OpenAI ChatGPT](https://chat.openai.com/) &mdash; To generate conversations, two tokens are required:
   * `OPENAI_API_KEY`
   * `OPENAI_API_BASE`
-* [MySQL](https://www.mysql.com/) &mdash; A URL with username and password for collecting the calls into the DB.
-  * `MYSQL_URL`
-
-> If you want to install MySQL using a Helm chart, use this command:
-> * `helm install -n <namespace> myrelease bitnami/mysql --set auth.rootPassword=sql123 --set auth.database=mlrun_demos --set primary.service.ports.mysql=3111 --set primary.persistence.enabled=false`
->
-> Example MYSQL_URL if you use the above command:
-> `mysql+pymysql://root:sql123@myrelease-mysql.<namespace>.svc.cluster.local:3111/mlrun_demos`
-
-### Import
-
-```
-import dotenv
-import os
-import sys
-import mlrun
-
-dotenv_file = ".env"
-sys.path.insert(0, os.path.abspath("./"))
-dotenv.load_dotenv(dotenv_file)
-```
-
-```
-assert not run_with_gpu
-assert os.environ["OPENAI_API_BASE"]
-assert os.environ["OPENAI_API_KEY"]
-```
-
-```
-if not mlrun.mlconf.is_ce_mode():
-    assert os.environ["MYSQL_URL"]
-    use_sqlite = False
-else:
-    use_sqlite = True
-```
 
 ## Demo flow
 
 1. Create the project
-   - **Notebook**: [call-center-demo.ipynb](call-center-demo.ipynb)
+   - **Notebook**: [notebook_1_generation.ipynb](notebook_1_generation.ipynb)
    - **Description**:
    - **Key steps**: Create the MLRun project.
    - **Key files**:
-     - [project.yaml](./project.yaml)
      - [project_setup.py](./project_setup.py)
 
 2. Generate the call data
 
-   - **Notebook**: [call-center-demo.ipynb](call-center-demo.ipynb)
+   - **Notebook**: [notebook_1_generation.ipynb](notebook_1_generation.ipynb)
    - **Description**: Generate the call data. (You can choose to skip this step and use call data that is already generated and available in the demo.)
    - **Key steps**: To generate data, run: agents & clients data generator, insert agents & clients data into the DB, get agents & clients from the DB, conversation generation, text to audio, and batch creation. Then run the workflow.
@@ -170,7 +114,7 @@ else:
 3. Calls analysis
 
-   - **Notebook**: [call-center-demo.ipynb](call-center-demo.ipynb)
+   - **Notebook**: [notebook_2_analysis.ipynb](notebook_2_analysis.ipynb)
    - **Description**: Insert the call data into the DB, use diarization to analyze when each person is speaking, transcribe and translate the calls into text and save them as text files, recognize and remove any PII, analyze the text (call center conversation) with an LLM, and postprocess the LLM's answers before updating them in the DB. Then run the full analysis workflow.
    - **Key steps**: Insert the calls data into the DB, perform speech diarization, transcribe, recognize PII, analyze. Then run the workflow.
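The PII step in this workflow is handled by a dedicated hub function; as a toy illustration of the recognize-and-remove idea only (the regex patterns below are invented and far cruder than the real recognizer):

```python
import re

# Toy patterns for illustration only; the demo uses a PII-recognition hub function.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text):
    """Replace each matched span with a tag naming the entity type."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text
```

A real recognizer also handles names, addresses, and account numbers, which plain regexes cannot do reliably.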
@@ -187,5 +131,5 @@ else:
 4. View the data
 
-   - **Notebook**: [call-center-demo.ipynb](call-center-demo.ipynb)
+   - **Notebook**: [notebook_2_analysis.ipynb](notebook_2_analysis.ipynb)
    - **Description**: View the data and features, as they are collected, in the MLRun UI. Deploy [Vizro](https://vizro.mckinsey.com/) to visualize the data in the DB.

call-center-demo.ipynb

Lines changed: 0 additions & 472 deletions
This file was deleted.

notebook_1_generation.ipynb

Lines changed: 10 additions & 25 deletions
@@ -22,10 +22,15 @@
    "## Table of contents:\n",
    "\n",
    "1. [Create the project](#create_the_project)\n",
-   "2. [Generate the call data](#generate_the_call_data)\n",
-   "3. [Calls analysis](#calls_analysis)\n",
-   "4. [View the data](#view_the_data)\n",
-   "5. [Future work](#future_work)"
+   "2. [Generate the call data](#generate_the_call_data)\n"
+  ]
+ },
+ {
+  "cell_type": "markdown",
+  "id": "481122ad-a858-4edf-8c63-9fbf3c8df518",
+  "metadata": {},
+  "source": [
+   "> **⚠️ Important:** Depending on the size of the data, this demo can take up to a couple of hours to complete."
   ]
  },
 {
@@ -81,7 +86,7 @@
   "id": "c1a3e034-151f-46d0-b6f7-38876c113a8b",
   "metadata": {},
   "source": [
-   "#### Setup in Iguazio Cluster"
+   "#### Setup"
   ]
  },
 {
@@ -91,26 +96,6 @@
   "source": [
    "- This demo is limited to Python 3.11, with CPU, and runs the pipeline with `engine = \"remote\"`.\n",
    "- GPU is not supported at the moment.\n",
-   "- You need to set up a MySQL database for the demo; SQLite is not supported now.\n",
-   "- Please set `run_with_gpu = False`, `engine = \"remote\"`\n",
-   "- .env include OPENAI_API_KEY, OPENAI_API_BASE"
-  ]
- },
- {
-  "cell_type": "markdown",
-  "id": "6c44bb2b-5ded-49d6-ae4d-7fa9ca67d1e4",
-  "metadata": {},
-  "source": [
-   "#### Setup in Platform McK"
-  ]
- },
- {
-  "cell_type": "markdown",
-  "id": "05e3b4b8-d370-4fcf-944b-e8f3b478fa24",
-  "metadata": {},
-  "source": [
-   "- GPU is not supported at the moment.\n",
-   "- sqlite is supported.\n",
    "- Please set `run_with_gpu = False`, `engine = \"remote\"`\n",
    "- .env include OPENAI_API_KEY, OPENAI_API_BASE"
   ]

notebook_2_analysis.ipynb

Lines changed: 14 additions & 25 deletions
@@ -29,40 +29,29 @@
  },
  {
   "cell_type": "markdown",
-  "id": "0537e242-f14b-4d68-a8ca-b45357849f4c",
+  "id": "c01ac25f-2ba8-4d3a-a22d-975178ca9e53",
   "metadata": {},
   "source": [
-   "___\n",
-   "<a id=\"get_the_project\"></a>\n",
-   "## Get the project "
+   "> **⚠️ Important:** Depending on the size of the data, this demo can take up to a couple of hours to complete."
   ]
  },
 {
   "cell_type": "markdown",
-  "id": "330ea34f-2d34-472c-995f-9f171afb03cf",
-  "metadata": {},
-  "source": [
-   "- This demo is limited to Python 3.11, with CPU, and runs the pipeline with `engine = \"remote\"`.\n",
-   "- GPU is not supported at the moment.\n",
-   "- Please set `run_with_gpu = False`, `engine = \"remote\"`\n",
-   "- .env include OPENAI_API_KEY, OPENAI_API_BASE"
-  ]
- },
- {
-  "cell_type": "markdown",
-  "id": "6c44bb2b-5ded-49d6-ae4d-7fa9ca67d1e4",
+  "id": "0537e242-f14b-4d68-a8ca-b45357849f4c",
   "metadata": {},
   "source": [
-   "#### Setup in Platform McK"
+   "___\n",
+   "<a id=\"get_the_project\"></a>\n",
+   "## 1. Get the project "
   ]
  },
  {
   "cell_type": "markdown",
-  "id": "05e3b4b8-d370-4fcf-944b-e8f3b478fa24",
+  "id": "330ea34f-2d34-472c-995f-9f171afb03cf",
   "metadata": {},
   "source": [
+   "- This demo is limited to Python 3.11, with CPU, and runs the pipeline with `engine = \"remote\"`.\n",
    "- GPU is not supported at the moment.\n",
-   "- sqlite is supported.\n",
    "- Please set `run_with_gpu = False`, `engine = \"remote\"`\n",
    "- .env include OPENAI_API_KEY, OPENAI_API_BASE"
   ]
@@ -72,7 +61,7 @@
   "id": "b5eb3156-4dba-4ef7-a406-13e89772e700",
   "metadata": {},
   "source": [
-   "### Fill the tokens and URL\n",
+   "### 1.1 Fill the tokens and URL\n",
    "\n",
    "> **⚠️ Important** Please fill the following variables in your `.env` file.\n",
    "\n",
@@ -130,7 +119,7 @@
   "id": "1ea33fee-ec95-48e3-aae8-9247ae182481",
   "metadata": {},
   "source": [
-   "### Get the current project\n",
+   "### 1.2 Get the current project\n",
    "\n",
    "The MLRun project is created by running the function [`mlrun.get_or_create_project`](https://docs.mlrun.org/en/latest/api/mlrun.projects.html#mlrun.projects.get_or_create_project). This creates the project (or loads it if previously created) and sets it up automatically according to the [project_setup.py](./project_setup.py) file located in this repo.\n",
    "\n",
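The get-or-create behavior described in that cell can be pictured with a pure-Python sketch of the pattern (this is not the mlrun implementation, only the semantics: create once, run the setup hook, and return the cached project on later calls):

```python
# Illustration of get-or-create semantics only; not the mlrun API.
_registry = {}

class Project:
    def __init__(self, name):
        self.name = name
        self.functions = []  # populated by a setup hook, like project_setup.py does

def get_or_create_project(name, setup=None):
    """Return the cached project if it exists; otherwise create and set it up."""
    if name not in _registry:
        project = Project(name)
        if setup is not None:
            setup(project)  # mirrors running project_setup.py on first creation
        _registry[name] = project
    return _registry[name]
```

Calling it twice with the same name returns the same object, and the setup hook runs only once.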
@@ -181,7 +170,7 @@
   "source": [
    "___\n",
    "<a id=\"calls_analysis\"></a>\n",
-   "## 3. Calls analysis\n",
+   "## 2. Calls analysis\n",
    "\n",
    "The workflow includes multiple steps for which all of the main functions are imported from the **[MLRun Function Hub](https://www.mlrun.org/hub/)**. You can see each hub function's docstring, code, and example by clicking the function name in the following list:\n",
    "\n",
@@ -227,7 +216,7 @@
   "id": "c68f2edd-0c0b-403a-8fc6-156b299dc8f1",
   "metadata": {},
   "source": [
-   "### 3.1. Run the workflow\n",
+   "### 2.1. Run the workflow\n",
    "\n",
    "Now, run the workflow using the following parameters:\n",
    "* `batch: str` &mdash; Path to the dataframe artifact that represents the batch to analyze.\n",
@@ -303,7 +292,7 @@
   "source": [
    "___\n",
    "<a id=\"view_the_data\"></a>\n",
-   "## 4. View the data\n",
+   "## 3. View the data\n",
    "\n",
    "While the workflow is running, you can view the data and features as they are collected.\n",
    "\n",
@@ -375,7 +364,7 @@
   "source": [
    "___\n",
    "<a id=\"future_work\"></a>\n",
-   "## 5. Future work\n",
+   "## 4. Future work\n",
    "\n",
    "This demo is a proof of concept for LLM feature-extraction capabilities, while using MLRun for the orchestration from development to production. The demo continues to be developed. You are welcome to track and develop it with us:\n",
    "\n",

0 commit comments
