Commit 6a18ff0

Merge pull request #37 from jillnogold/standardize: standardize format

2 parents 74f6375 + 9c2d6e0

2 files changed: +246, -323 lines

README.md: 144 additions & 19 deletions
# Call center demo

This demo showcases how to use LLMs to turn audio files from call center conversations between customers and agents into valuable data, all in a single workflow orchestrated by MLRun. It illustrates the potential power of LLMs for feature extraction, and the simplicity of working with MLRun.

## Overview

MLRun automates the entire workflow, auto-scales resources as needed, and automatically logs and parses values between the different workflow steps.

The demo demonstrates two usages of GenAI:
- Unstructured data generation: Generating audio data with ground truth metadata to evaluate the analysis.
- Unstructured data analysis: Turning audio calls into text and tabular features.

The demo contains a single [notebook](./call-center-demo.ipynb) that encompasses the entire demo.

Most of the functions are imported from [MLRun's hub](https://www.mlrun.org/hub/), which contains a wide range of functions and modules that can be used for a variety of use cases. See also the [MLRun hub documentation](https://docs.mlrun.org/en/stable/runtimes/load-from-hub.html). All functions used in the demo include links to their source in the hub. All of the Python source code is under [/src](./src).
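For example, a minimal sketch of importing one of the hub functions used later in this demo (the `hub://` scheme resolves against MLRun's hub):

```
import mlrun

# Import the transcription function from MLRun's hub; the returned object
# is a regular MLRun function that can be run or added to a project
transcribe_fn = mlrun.import_function("hub://transcribe")
print(transcribe_fn.metadata.name)
```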
> **⚠️ Important:** This demo can take an hour to complete when running without GPUs.

## Prerequisites

This demo uses:

* [**OpenAI's Whisper**](https://openai.com/research/whisper) &mdash; To transcribe the audio calls into text.
* [**Flair**](https://flairnlp.github.io/) and [**Microsoft's Presidio**](https://microsoft.github.io/presidio/) &mdash; To recognize PII so it can be filtered out.
* [**HuggingFace**](https://huggingface.co/) &mdash; The main machine-learning framework, used to get the model and tokenizer for feature extraction.
* [**Vizro**](https://vizro.mckinsey.com/) &mdash; To view the call center DB and transcriptions, and to play the generated conversations.
* [**MLRun**](https://www.mlrun.org/) &mdash; The orchestrator that operationalizes the workflow. Requires MLRun 1.9 and higher and Python 3.11, with CPU or GPU.
* [**SQLAlchemy**](https://www.sqlalchemy.org/) &mdash; To manage the MySQL DB of calls, clients, and agents. Installed together with MLRun.
* **MySQL** database &mdash; Installed together with MLRun. (SQLite is not currently supported.)

<a id="installation"></a>
## Installation
This project can run in different development environments:

* Inside GitHub Codespaces
* Other managed Jupyter environments

### Install the code and the MLRun client

To get started, fork this repo into your GitHub account and clone it into your development environment.
If you prefer to use Conda, use this instead (to create and configure a conda env):

```
make conda-env
```

Make sure you open the notebooks and select the `mlrun` conda environment.

### Install or connect to the MLRun service/cluster
If your development environment supports Docker and there are sufficient CPU resources, run:

```
make mlrun-docker
```

The MLRun UI can be viewed in: http://localhost:8060

If your environment is minimal, run mlrun as a process (no UI):

```
[conda activate mlrun &&] make mlrun-api
```

For MLRun to run properly, set up your client environment. This is not required when using **codespaces**, the mlrun **conda** environment, or **iguazio** managed notebooks.

Your environment should include `MLRUN_ENV_FILE=<absolute path to the ./mlrun.env file>` (point to the mlrun .env file in this repo); see the [mlrun client setup](https://docs.mlrun.org/en/stable/install/remote.html) instructions for details.

> Note: You can also use a remote MLRun service (over Kubernetes). Instead of starting a local mlrun, edit the [mlrun.env](./mlrun.env) file and specify its address and credentials.
### Install SQLAlchemy

```
!pip install SQLAlchemy==2.0.31 pymysql python-dotenv
```
### Setup

Set the following configuration: the compute device (CPU or GPU), the language of the calls, and whether to skip the calls-generation workflow and use pre-generated data. For example:

```
# True = run with GPU, False = run with CPU
run_with_gpu = False
use_sqlite = False
engine = "remote"
language = "en"  # The language of the calls: "es" = Spanish, "en" = English
skip_calls_generation = False
```
#### Setup in Platform McKinsey

Differences between installing on an Iguazio cluster and on Platform McKinsey:
- SQLite is supported.
- Set `run_with_gpu = False`, `use_sqlite = True`, `engine = "remote"`.
- `.env` must include `OPENAI_API_KEY`, `OPENAI_API_BASE`, and `S3_BUCKET_NAME`.
  - [S3 Bucket]() &mdash;
  - `S3_BUCKET_NAME`
### Configure the tokens and URL

> **⚠️ Important:** Fill in the following variables in your `.env` file.

> Note: The requirement for the OpenAI token will be removed soon in favor of an open-source LLM.

Tokens are required to run the demo end-to-end:
* [OpenAI ChatGPT](https://chat.openai.com/) &mdash; To generate conversations, two tokens are required:
   * `OPENAI_API_KEY`
   * `OPENAI_API_BASE`
* [MySQL](https://www.mysql.com/) &mdash; A URL with username and password for collecting the calls into the DB:
   * `MYSQL_URL`

> If you want to install MySQL using a helm chart, use this command:
> * `helm install -n <namespace> myrelease bitnami/mysql --set auth.rootPassword=sql123 --set auth.database=mlrun_demos --set primary.service.ports.mysql=3111 --set primary.persistence.enabled=false`
>
> Example `MYSQL_URL` if you use the above command:<br/>
> `mysql+pymysql://root:sql123@myrelease-mysql.<namespace>.svc.cluster.local:3111/mlrun_demos`
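As a quick sanity check of the DB connection, here is a minimal sketch using SQLAlchemy (assuming `MYSQL_URL` is already set in your `.env` file):

```
import os

import dotenv
import sqlalchemy

dotenv.load_dotenv(".env")

# Create an engine from the MYSQL_URL configured above and verify connectivity
engine = sqlalchemy.create_engine(os.environ["MYSQL_URL"])
with engine.connect() as connection:
    print(connection.execute(sqlalchemy.text("SELECT 1")).scalar())
```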
### Import

```
import dotenv
import os
import sys

import mlrun

# Point at the .env file and make the local source importable
dotenv_file = ".env"
sys.path.insert(0, os.path.abspath("./"))

dotenv.load_dotenv(dotenv_file)
```

Verify the configuration and the OpenAI credentials loaded from `.env`:

```
assert not run_with_gpu
assert os.environ["OPENAI_API_BASE"]
assert os.environ["OPENAI_API_KEY"]
```

Choose the DB according to the MLRun deployment mode:

```
if not mlrun.mlconf.is_ce_mode():
    # Full MLRun service: require the MySQL URL and use MySQL
    assert os.environ["MYSQL_URL"]
    use_sqlite = False
else:
    # MLRun CE (community edition): fall back to SQLite
    use_sqlite = True
```
## Demo flow

1. Create the project

   - **Notebook**: [call-center-demo.ipynb](call-center-demo.ipynb)
   - **Description**: Create the MLRun project.
   - **Key steps**: Create the MLRun project (see the sketch below).
   - **Key files**:
     - [project.yaml](./project.yaml)
     - [project_setup.py](./project_setup.py)
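   A minimal sketch of this step; the project name is illustrative, and `get_or_create_project` automatically executes [project_setup.py](./project_setup.py) when it exists in the context directory:

   ```
   import mlrun

   # Create (or load) the MLRun project; project_setup.py in the context
   # directory is executed automatically to register functions and workflows
   project = mlrun.get_or_create_project(
       name="call-center-demo",  # illustrative name
       context="./",
       user_project=True,  # suffix the project name with the user name
   )
   ```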
2. Generate the call data

   - **Notebook**: [call-center-demo.ipynb](call-center-demo.ipynb)
   - **Description**: Generate the call data. (You can choose to skip this step and use call data that is already generated and available in the demo.)
   - **Key steps**: To generate data, run: agents & clients data generation, insert agents & clients data to the DB, get agents & clients from the DB, conversation generation, text to audio, and batch creation. Then run the workflow (see the sketch below).
   - **Key files**:
     - [Insert agents & clients data to the DB and Get agents & clients from the DB](./src/calls_analysis/data_management.py)
     - [Conversation generation and Batch creation](./src/calls_generation/conversations_generator.py)
   - **MLRun hub functions:**
     - [Agents & Clients Data Generator](https://www.mlrun.org/hub/functions/master/structured_data_generator/)
     - [Text to audio](https://www.mlrun.org/hub/functions/master/text_to_audio_generator/)
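   Running the generation workflow could look like the following sketch (the workflow name and arguments are illustrative; the actual names are registered by [project_setup.py](./project_setup.py)):

   ```
   # Run the calls-generation workflow registered in the project;
   # the workflow name and arguments below are illustrative
   generation_run = project.run(
       "calls-generation",
       arguments={"language": language},
       watch=True,  # block until the workflow completes
   )
   ```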
3. Calls analysis

   - **Notebook**: [call-center-demo.ipynb](call-center-demo.ipynb)
   - **Description**: Insert the call data into the DB, use diarization to analyze when each person is speaking, transcribe and translate the calls into text and save them as text files, recognize and remove any PII, analyze the text (call center conversation) with an LLM, and postprocess the LLM's answers before updating them in the DB. Then run the full analysis workflow.
   - **Key steps**: Insert the calls data into the DB, perform speech diarization, transcribe, recognize PII, analyze. Then run the workflow (see the sketch below).
   - **Key files:**
     - [Insert the calls data to the DB](./src/calls_analysis/db_management.py)
     - [Postprocess analysis answers](./src/postprocess.py)
   - **MLRun hub functions:**
     - [Perform speech diarization](https://www.mlrun.org/hub/functions/master/silero_vad)
     - [Transcribe](https://www.mlrun.org/hub/functions/master/transcribe)
     - [Recognize PII](https://www.mlrun.org/hub/functions/master/pii_recognizer)
     - [Analysis](https://www.mlrun.org/hub/functions/master/question_answering)
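   Individual analysis steps can also be invoked directly. A sketch using the hub transcription function (the input and parameter names are illustrative; check the function's hub page for the real signature):

   ```
   # Run the hub transcription function on a folder of audio files;
   # the input and parameter names below are illustrative
   transcribe_run = project.run_function(
       "transcribe",
       inputs={"data_path": "./data/audio"},
       params={"model_name": "openai/whisper-base"},
   )
   ```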
4. View the data

   - **Notebook**: [call-center-demo.ipynb](call-center-demo.ipynb)
   - **Description**: View the data and features, as they are collected, in the MLRun UI. Deploy [Vizro](https://vizro.mckinsey.com/) to visualize the data in the DB (see the sketch below).
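   A minimal sketch of a Vizro dashboard over the calls data (the DataFrame below is a stand-in for the table read from the DB):

   ```
   import pandas as pd
   import vizro.models as vm
   import vizro.plotly.express as px
   from vizro import Vizro

   # Stand-in for the calls table read from the call center DB
   calls_df = pd.DataFrame(
       {"topic": ["billing", "billing", "support"], "duration_sec": [312, 145, 602]}
   )

   page = vm.Page(
       title="Call center calls",
       components=[vm.Graph(figure=px.histogram(calls_df, x="topic", y="duration_sec"))],
   )
   Vizro().build(vm.Dashboard(pages=[page])).run()
   ```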
