Implemented a simple voice assistant using the Whisper model. The main focus was to understand the basic underlying architecture behind voice systems such as Alexa. A TfidfVectorizer converts sentences into vector form, since machines work with numeric input. An MLP (Multi-Layer Perceptron) is then trained on this data to learn patterns and predict the user's intent. Based on the predicted intent, the action engine executes the corresponding action.
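The intent pipeline described above can be sketched roughly as follows. This is a minimal, hypothetical example using scikit-learn's `TfidfVectorizer` and `MLPClassifier`; the intent names, training phrases, and actions are illustrative assumptions, not taken from this repository.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

# Toy training data: user phrases labelled with intents (assumed examples)
phrases = [
    "what time is it", "tell me the time", "current time please",
    "what's the weather like", "is it going to rain", "weather forecast today",
]
intents = ["get_time", "get_time", "get_time",
           "get_weather", "get_weather", "get_weather"]

# Convert sentences into numeric TF-IDF vectors
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(phrases)

# Train an MLP to map TF-IDF vectors to intent labels
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, intents)

# Action engine: dispatch on the predicted intent (placeholder actions)
def handle(text):
    intent = clf.predict(vectorizer.transform([text]))[0]
    actions = {"get_time": "running clock action",
               "get_weather": "running weather action"}
    return intent, actions[intent]

print(handle("tell me the current time"))
```

In a full assistant, the input to `handle` would come from Whisper's speech-to-text output rather than typed text.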
Steps
- Clone this GitHub repository
- Install the required packages using pip: `pip install -r requirements.txt`
- Open the Jupyter notebook and execute all the cells
Note
While running cells in Jupyter, please make sure the terminal is pointing to the repository's working directory to avoid any path issues
Thank you 😄