This documentation provides the end-to-end flow of the application, helps you initiate the setup, and guides you in observing and validating the results.

The application can be initiated either by using **Upload Files** or by clicking **Start Recording**. This documentation walks you through the Upload Files flow.
## Step 1: Upload Files

Clicking any of the upload file buttons opens a modal for audio and video file inputs.
> **Note:** The Base Directory Path should be the folder path of the video files (the user must manually add or copy the path).
Accepted file formats:

- Audio: *.mp3*, *.wav*, or *.m4a*
- Video: *.mp4*

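If you want to pre-check files before uploading them, the accepted extensions above can be validated with a short script. This is a minimal sketch; the function name `classify_upload` is illustrative and not part of the application.

```python
from pathlib import Path

# Accepted extensions, per the upload modal described above
AUDIO_EXTS = {".mp3", ".wav", ".m4a"}
VIDEO_EXTS = {".mp4"}

def classify_upload(path: str) -> str:
    """Return 'audio' or 'video' for a supported file, else raise ValueError."""
    ext = Path(path).suffix.lower()
    if ext in AUDIO_EXTS:
        return "audio"
    if ext in VIDEO_EXTS:
        return "video"
    raise ValueError(f"Unsupported file format: {ext or path}")

print(classify_upload("lecture.mp3"))    # audio
print(classify_upload("classroom.MP4"))  # video (extension check is case-insensitive)
```
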
![upload-button](./_assets/uploadbutton.png)
![upload-modal](./_assets/uploadmodal.png)

**After a successful upload, click Apply & Start Processing.**

> **Note:** Search is enabled only after content segmentation.
## Step 2: Audio Analysis and Video Streaming

The application starts transcription after analyzing the audio, and the videos are streamed in parallel, as shown below.
### Right Panel

- **Configuration Metrics** - Details about the platform and software configuration, along with performance metrics of summarization
- **Resource Utilization** - Live monitoring of CPU, GPU, NPU, memory, and power utilization
- **Class Engagement** - Real-time statistics of student engagement and the speaker's timeline during the class
- **Pre-Validated Models** - Shows the models used for transcription and summarization
![transcript_video](./_assets/audio-video.png)
## Step 3: Tabs Switch

The user can switch between tabs as shown below.
![tab-switch](./_assets/tabs.png)
The Room View toggle allows the user to switch between full audio–video mode and audio-only mode. When disabled, the video component is hidden and only the audio panel remains visible.

## Step 4: Transcription and Speaker Timeline
*Once the Teacher is identified, the labels are updated accordingly.*

![transcription](./_assets/transcript.png)
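The speaker timeline and engagement statistics aggregate who spoke and for how long. As an illustration of that aggregation, here is a minimal sketch; the `(start, end, speaker)` tuple shape and the `talk_time` helper are assumptions for this example, not the application's actual data model.

```python
from collections import defaultdict
from typing import List, Tuple

# (start_sec, end_sec, speaker_label) - illustrative shape, not the app's API
Utterance = Tuple[float, float, str]

def talk_time(utterances: List[Utterance]) -> dict:
    """Total seconds spoken per label, as a speaker timeline might aggregate."""
    totals = defaultdict(float)
    for start, end, speaker in utterances:
        totals[speaker] += end - start
    return dict(totals)

print(talk_time([(0, 30, "Teacher"), (30, 40, "Student 1"), (40, 90, "Teacher")]))
# {'Teacher': 80.0, 'Student 1': 10.0}
```
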
## Step 5: Content Segmentation

*After the MindMap is generated and video processing is completed, content segmentation starts and video playback is enabled for video search.*

- Audio+Video: content segmentation is enabled after the MindMap is generated and video processing is completed.

![content-segmentation](./_assets/segments.png)
## Step 6: Final State

- Audio: After transcription and post-summary, the MindMap is generated.
- Video: After video processing, playback mode is enabled and the results are shown based on the topic search.
- VideoSearch: Based on the search results, the video timeline is highlighted at the respective timestamps of the topic.

![final-state](./_assets/final.png)
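The VideoSearch behavior above maps matching content segments to timeline ranges. A minimal sketch of that lookup is shown below; the segment tuple shape and the `highlight_ranges` helper are hypothetical, chosen only to illustrate the idea.

```python
from typing import List, Tuple

# (start_sec, end_sec, topic) - illustrative shape, not the application's API
Segment = Tuple[float, float, str]

def highlight_ranges(segments: List[Segment], query: str) -> List[Tuple[float, float]]:
    """Return the (start, end) ranges whose topic matches the search query."""
    q = query.lower()
    return [(start, end) for start, end, topic in segments if q in topic.lower()]

segments = [(0.0, 95.0, "Introduction"), (95.0, 300.0, "Photosynthesis"), (300.0, 420.0, "Q&A")]
print(highlight_ranges(segments, "photo"))  # [(95.0, 300.0)]
```
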