AI workflow automation to handle and respond to customer emails using an internal knowledge base.
Businesses often receive many emails from customers inquiring about products, reporting issues, or requesting assistance. Responding to or escalating inquiries efficiently requires significant effort. This project leverages AI to automate email management, providing timely responses and improving customer satisfaction.
The goal is to develop an AI workflow that can:
- Read and classify emails based on intent or actionable categories.
- Generate responses using a company knowledge base.
- Escalate by creating tickets when necessary.
- Log actions and maintain reports.
Check out the detailed system architecture here.
The AI workflow checks for new emails at regular intervals using a scheduler. When new emails arrive, they are fetched from the email client and the reasoning engine is activated. The reasoning engine reads each email, takes the required actions, and generates a response email, which the AI workflow then sends back to the customer.
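The loop described above can be sketched as follows. `EmailClient` and `ReasoningEngine` are hypothetical stand-ins for the project's real components, not its actual API:

```python
# Minimal sketch of the polling workflow. The client and engine objects
# are assumed interfaces, not the project's real classes.
import time


def run_workflow_once(client, engine):
    """Fetch new emails, reason over each one, and send replies."""
    replies = []
    for email in client.fetch_new():                  # 1. fetch unread emails
        action = engine.decide(email)                 # 2. classify intent, pick actions
        reply = engine.generate_reply(email, action)  # 3. draft a response
        client.send_reply(email, reply)               # 4. reply to the customer
        replies.append(reply)
    return replies


def run_scheduler(client, engine, interval_sec=300, iterations=None):
    """Poll the inbox at a fixed interval (iterations=None runs forever)."""
    n = 0
    while iterations is None or n < iterations:
        run_workflow_once(client, engine)
        time.sleep(interval_sec)
        n += 1
```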
- Read the email body to classify intent and reason from the input data.
- Decide actions using an LLM:
  - Determine whether knowledge from the vector database (Chromadb) is required to generate the reply email.
  - Identify whether a ticket should be created, and create tickets in a remote SQL database.
  - Gather relevant ticket information, including the problem description, intent class, reason, and email metadata.
- Generate a response email with the LLM based on the email body, intent, and gathered context (including ticket numbers if created).
- Log activities in Google Sheets.
- Create tickets via a remote database connection if required.
- Send customer response emails, including the gathered context and the ticket number if applicable.
- Log activities in a database for tracking and reporting.
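The decision-and-ticket step above might look roughly like this. The decision schema, function names, and the SQLite stand-in for the remote SQL database are all illustrative assumptions:

```python
# Sketch of the reasoning step: ask the LLM for a structured decision,
# create a ticket if needed, and return context for the reply.
# SQLite stands in here for the project's remote SQL database.
import json
import sqlite3


def ensure_ticket_table(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tickets ("
        "id INTEGER PRIMARY KEY AUTOINCREMENT, "
        "intent TEXT, reason TEXT, sender TEXT, description TEXT)"
    )


def create_ticket(conn, decision, email):
    """Persist a ticket and return its number for use in the reply email."""
    cur = conn.execute(
        "INSERT INTO tickets (intent, reason, sender, description) "
        "VALUES (?, ?, ?, ?)",
        (decision["intent"], decision["reason"], email["from"], email["body"]),
    )
    conn.commit()
    return cur.lastrowid


def handle_email(email, llm, conn):
    """Parse the LLM's JSON decision and create a ticket when required."""
    # Assumed schema: {"intent": ..., "reason": ..., "needs_ticket": ...}
    decision = json.loads(llm(email["body"]))
    ticket_no = create_ticket(conn, decision, email) if decision["needs_ticket"] else None
    return {"decision": decision, "ticket_no": ticket_no}
```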
- Email client: fetch emails and reply to them.
- Vector database (Chromadb): extract context from the knowledge base.
- Remote SQL database: create tickets.
- Google Sheets: log AI workflow activities.
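The vector-database lookup boils down to ranking knowledge-base snippets by similarity to the incoming email. The toy below uses bag-of-words cosine similarity in pure Python purely to illustrate the retrieval idea; the real project embeds text and queries Chromadb:

```python
# Toy stand-in for the Chroma lookup: rank knowledge-base snippets by
# cosine similarity of bag-of-words vectors. Illustrative only.
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k knowledge-base documents most similar to the query."""
    qv = Counter(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: cosine(qv, Counter(d.lower().split())),
        reverse=True,
    )
    return scored[:k]
```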
- Python 3.10 and related libraries.
- FastAPI: REST API application.
- Docker: deploy the containerized application.
- LangChain: chain together the LLM, prompts, and tools.
- OpenRouter API (keys) for LLMs: LLM invocations are done through API endpoints (Llama 3.2 30b).
- Google APIs: connect to the Gmail client and to Google Sheets via the app API.
- Install Docker.
- Build the Docker image:
  ```
  docker build -t crm-ai-agent .
  ```
- Run the Docker container:
  ```
  docker run -p 8000:8000 --env-file .env --name crm-ai-agent crm-ai-agent
  ```
- Access the application at `http://localhost:8000`.
- Install dependencies:
  ```
  pip install -r requirements.txt
  ```
- Run the application:
  ```
  uvicorn app.main:app --host 0.0.0.0 --port 8000 --reload
  ```
- AWS EC2 Instance: Ensure you have an EC2 instance running with SSH access.
- SSH Key Pair: A key pair (.pem file) associated with your EC2 instance.
- Docker and Docker Compose: Installed on your local machine and EC2 instance.
- Configure SSH access: get the key pair credentials file (`.pem`) and add it as a GitHub secret named `EC2_SSH_KEY`.
- Set up GitHub secrets: add `EC2_HOST` (public IP or DNS of the EC2 instance) and `EC2_USER` (SSH username, such as `ubuntu` for an Ubuntu instance).
- When changes are pushed to the `deploy` branch (or your desired branch), the GitHub Actions workflow is triggered (located in `.github/workflows/deploy.yml`).
- Check out the code/repo.
- Set up SSH access to the EC2 instance.
- Copy files to the EC2 instance (volumes).
- Install Docker on the EC2 instance.
- Deploy the AI agent by building the image and running the Docker container.
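The steps above might look roughly like this in `.github/workflows/deploy.yml`. The action versions, secret names, and paths are assumptions based on the description, not the repository's actual file:

```yaml
name: Deploy to EC2
on:
  push:
    branches: [deploy]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Copy files to EC2
        uses: appleboy/scp-action@v0.1.7
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          source: "."
          target: /home/ubuntu/crm-ai-agent
      - name: Build and run the container
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd /home/ubuntu/crm-ai-agent
            docker build -t crm-ai-agent .
            docker rm -f crm-ai-agent || true
            docker run -d -p 80:8000 --env-file .env --name crm-ai-agent crm-ai-agent
```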
- Create an IAM role with the `AmazonEC2ContainerRegistryFullAccess` policy and attach it to the EC2 instance.
- Configure the local machine: adjust permissions of the key pair file (navigate to its location first).
  ```
  chmod 400 your-key.pem
  ```
- Connect to the EC2 instance using SSH:
  ```
  ssh -i "your-key.pem" ubuntu@<EC2-Public-IP>
  ```
- Navigate to the project on your local machine and transfer files to the EC2 instance:
  ```
  scp -i "your-key.pem" -r <my_project> ubuntu@<EC2-Public-IP>:/home/ubuntu/
  ```
- Verify the transferred files by SSHing back into the EC2 instance and checking the `/home/ubuntu` directory.
- Build and run the Dockerized app: build the Docker image and run the container (port 80 is the default HTTP port on the EC2 instance):
  ```
  docker build -t crm-ai-agent .
  docker run -p 80:8000 --env-file .env --name crm-ai-agent crm-ai-agent
  ```
- The AI agent is now running on the EC2 instance as a Docker container. Access it via `http://<EC2-Public-IP>:80`.
- Enhance the knowledge base with additional sources.
- Integrate with advanced ticketing systems.
- Add a dashboard for analytics and reporting.
- Add memory so the agent can maintain back-and-forth email conversations.
- Add a control panel to monitor agent activities.
Remarks: This project was built as a learning exercise in creating a basic AI agent. Feel free to open an issue if you find any problems or have feedback (positive or negative), and reach out with any clarifications or suggestions. Thank you for checking out this repository!
