This demo showcases how to use LLMs to turn audio files from call center conversations between customers and agents into valuable data, all in a single workflow orchestrated by MLRun.
## Overview
MLRun automates the entire workflow, auto-scales resources as needed, and automatically logs and parses values between the different workflow steps.
The demo demonstrates two uses of GenAI:
- Unstructured data generation: Generating audio data with ground truth metadata to evaluate the analysis.
- Unstructured data analysis: Turning audio calls into text and tabular features.
The demo contains a single [notebook](./call-center-demo.ipynb) that encompasses the entire demo.
Most of the functions are imported from [MLRun's hub](https://www.mlrun.org/hub/), which contains a wide range of functions and modules that can be used for a variety of use cases. See also the [MLRun hub documentation](https://docs.mlrun.org/en/stable/runtimes/load-from-hub.html). All functions used in the demo include links to their source in the hub. All of the Python source code is under [/src](./src).
> **⚠️ Important:** This demo can take an hour to complete when running without GPUs.
## Prerequisites
This demo uses:
* [**OpenAI's Whisper**](https://openai.com/research/whisper) — To transcribe the audio calls into text.
* [**Flair**](https://flairnlp.github.io/) and [**Microsoft's Presidio**](https://microsoft.github.io/presidio/) — To recognize PII so it can be filtered out.
* [**HuggingFace**](https://huggingface.co/) — The main machine-learning framework, used to get the model and tokenizer for feature extraction.
* [**Vizro**](https://vizro.mckinsey.com/) — To view the call center DB and transcriptions, and to play the generated conversations.
* [**MLRun**](https://www.mlrun.org/) — The orchestrator that operationalizes the workflow. Requires MLRun 1.9 and higher and Python 3.11, with CPU or GPU.
* [**SQLAlchemy**](https://www.sqlalchemy.org/) — To manage the MySQL DB of calls, clients, and agents. Installed together with MLRun.
* A MySQL database. Installed together with MLRun. (SQLite is not currently supported.)
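To give a taste of the PII step, here is a minimal regex-based sketch that stands in for what Flair and Presidio do far more robustly (the patterns and entity tags below are illustrative assumptions, not the demo's actual recognizers):

```python
import re

# Illustrative patterns only: real PII recognition in the demo is done by
# Flair and Presidio, which handle names, addresses, and many more entities.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    # Replace each recognized entity with its tag, e.g. "<EMAIL>".
    for tag, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{tag}>", text)
    return text

print(mask_pii("Reach me at john@example.com or +1 555-123-4567."))
# Reach me at <EMAIL> or <PHONE>.
```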
<a id="installation"></a>
## Installation
This project can run in different development environments:
* Inside GitHub Codespaces
* Other managed Jupyter environments
### Install the code and the MLRun client
To get started, fork this repo into your GitHub account and clone it into your development environment.
If you prefer to use Conda, use this instead (to create and configure a conda environment):
```
make conda-env
```
Make sure you open the notebooks and select the `mlrun` conda environment.
### Install or connect to the MLRun service/cluster
If your development environment supports Docker and there are sufficient CPU resources, run:
```
make mlrun-docker
```
The MLRun UI can be viewed at http://localhost:8060.
If your environment is minimal, run mlrun as a process (no UI):
```
[conda activate mlrun &&] make mlrun-api
```
For MLRun to run properly, set up your client environment. This is not required when using **codespaces**, the mlrun **conda** environment, or **iguazio** managed notebooks.
Your environment should include `MLRUN_ENV_FILE=<absolute path to the ./mlrun.env file>` (point to the mlrun .env file
in this repo); see [mlrun client setup](https://docs.mlrun.org/en/stable/install/remote.html) instructions for details.
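For reference, a sketch of what the `mlrun.env` file typically contains (`MLRUN_DBPATH` is MLRun's standard client setting; the values below are placeholders, and the Iguazio variables apply only to managed clusters):

```
# URL of the MLRun API service (local docker or a remote cluster)
MLRUN_DBPATH=http://localhost:8080
# Required only for an Iguazio-managed cluster:
# V3IO_USERNAME=<your-username>
# V3IO_ACCESS_KEY=<your-access-key>
```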
> Note: You can also use a remote MLRun service (over Kubernetes) instead of starting a local MLRun:
> edit the [mlrun.env](./mlrun.env) and specify its address and credentials.
### Install SQLAlchemy
```
!pip install SQLAlchemy==2.0.31 pymysql dotenv
```
### Setup
Set the following configuration: the compute device (CPU or GPU), the language of the calls, and whether to skip the call generation workflow and use pre-generated data. For example:
```
# True = run with GPU, False = run with CPU
run_with_gpu = False
use_sqlite = False
engine = "remote"
language = "en"  # Language of the calls: "en" - English, "es" - Spanish
skip_calls_generation = False
```
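To make the flags concrete, here is a hypothetical sketch of how they might be resolved before building the workflow (the helper name and the torch-style device strings are assumptions; in the demo these values are passed to the functions as MLRun parameters):

```python
# Hypothetical helper -- not part of the demo's source code.
SUPPORTED_LANGUAGES = {"en", "es"}

def resolve_config(run_with_gpu: bool, language: str) -> dict:
    # Validate the language flag early, before any function runs.
    if language not in SUPPORTED_LANGUAGES:
        raise ValueError(f"unsupported language: {language!r}")
    # HuggingFace/Whisper functions typically take a torch-style device string.
    return {"device": "cuda" if run_with_gpu else "cpu", "language": language}

print(resolve_config(False, "en"))  # {'device': 'cpu', 'language': 'en'}
```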
#### Setup in Platform McKinsey
Differences between installing on an Iguazio cluster and on Platform McKinsey:
- **Description**: Generate the call data. (You can choose to skip this step and use call data that is already generated and available in the demo.)
- **Key steps**: To generate data, run: Agents & clients data generator, Insert agents & clients data to DB, Get agents & clients from DB, Conversation generation, Text to audio, and Batch creation. Then run the workflow.
- **Key files**:
- [Insert agents & clients data to the DB and Get agents & clients from the DB](./src/calls_analysis/data_management.py)
- [Conversation generation and Batch creation](./src/calls_generation/conversations_generator.py)
- **MLRun hub functions**:
- [Agents & Clients Data Generator](https://www.mlrun.org/hub/functions/master/structured_data_generator/)
- [Text to audio](https://www.mlrun.org/hub/functions/master/text_to_audio_generator/)
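The ground-truth idea behind calls generation can be sketched as follows (the topic and tone values are made up for illustration; the real demo generates far richer records with the hub's structured data generator):

```python
import random

# Hypothetical ground-truth attributes attached to each generated call, so
# the later analysis stage can be evaluated against known labels.
TOPICS = ["billing", "internet outage", "plan upgrade"]
TONES = ["angry", "neutral", "satisfied"]

def generate_call_metadata(call_id: int, rng: random.Random) -> dict:
    return {
        "call_id": call_id,
        "topic": rng.choice(TOPICS),
        "client_tone": rng.choice(TONES),
    }

rng = random.Random(0)  # seeded for reproducibility
batch = [generate_call_metadata(i, rng) for i in range(3)]
print(len(batch))  # 3
```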
- **Description**: Insert the call data into the DB, use diarization to analyze when each person is speaking, transcribe and translate the calls into text and save them as text files, recognize and remove any PII, analyze the text (the call center conversation) with an LLM, and postprocess the LLM's answers before updating them in the DB. Then run the full analysis workflow.
- **Key steps**: Insert the calls data into the DB, perform speech diarization, transcribe, recognize PII, and analyze. Then run the workflow.
- **Key files**:
- [Insert the calls data to the DB](./src/calls_analysis/db_management.py)
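The postprocessing step can be illustrated with a small sketch (the feature names and the yes/no normalization are hypothetical; the demo's actual answer schema lives in the source under [/src](./src)):

```python
import json

# A stand-in for the LLM's JSON answer about one call.
raw_answer = json.dumps({
    "topic": "billing",
    "customer_tone": "angry",
    "issue_resolved": "yes",
})

def postprocess_answer(answer: str) -> dict:
    # Parse the LLM's JSON answer into flat tabular features.
    features = json.loads(answer)
    # Normalize a free-text yes/no field into a boolean column
    # before updating the calls table in the DB.
    features["issue_resolved"] = features["issue_resolved"].strip().lower() == "yes"
    return features

row = postprocess_answer(raw_answer)
print(row["issue_resolved"])  # True
```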
- **Description**: View the data and features, as they are collected, in the MLRun UI. Deploy [Vizro](https://vizro.mckinsey.com/) to visualize the data in the DB.