|
\section{Demonstration and Usage}
\label{sec:demo}
|
This section outlines how to deploy and use the Personal Financial AI Agent, covering the software requirements, the installation procedure, and the integrated testing infrastructure.
|
\subsection{Software Requirements}
|
You will need \textbf{Python 3.11} or higher installed on your system. For containerized deployment, \textbf{Docker} 20.10 or later is required, along with \textbf{Docker Compose} 1.29 or later for the multi-container setup. \textbf{Git} is needed to clone the repository. No dedicated hardware is required, although local LLM inference through Ollama benefits from at least 8\,GB of RAM and, optionally, a GPU.
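
As a quick sanity check, each prerequisite reports its version from the command line (Compose v2 syntax is shown below; the legacy \texttt{docker-compose --version} applies to 1.29-era installations):

\begin{lstlisting}[language=bash]
python3 --version         # expect Python 3.11 or newer
git --version
docker --version          # expect 20.10 or newer
docker compose version    # expect Compose 1.29 or newer
\end{lstlisting}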
|
\subsection{Installation}
|
For detailed installation instructions, refer to the \textbf{README.md} file in the repository at \url{https://github.com/merendamattia/personal-financial-ai-agent}. It provides step-by-step guidance for cloning the repository, setting up a Python environment, installing dependencies, extracting the bundled ETF dataset, and configuring environment variables for each supported LLM provider (Ollama, OpenAI, and Google Gemini).
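
The following sketch condenses those steps for a Unix-like shell; a Conda environment with Python 3.11 works just as well as \texttt{venv}:

\begin{lstlisting}[language=bash]
# Clone the repository
git clone https://github.com/merendamattia/personal-financial-ai-agent.git
cd personal-financial-ai-agent

# Create and activate an isolated Python environment
python3 -m venv venv
source venv/bin/activate   # on Windows: venv\Scripts\activate

# Install dependencies
python -m pip install --upgrade pip
pip install -r requirements.txt

# Create the configuration file and extract the ETF dataset
cp .env.example .env
cd dataset && unzip ETFs.zip && cd ..
\end{lstlisting}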
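
Runtime configuration lives in the \texttt{.env} file. The repository documents the following settings for a local Ollama backend; cloud providers are configured analogously by setting \texttt{LLM\_PROVIDER} to \texttt{openai} or \texttt{google} together with the corresponding \texttt{*\_MODEL} and \texttt{*\_API\_KEY} variables:

\begin{lstlisting}
# LLM provider
LLM_PROVIDER=ollama
AGENT_NAME=FinancialAdvisor

# Ollama configuration
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL=mistral

# Logging
LOG_LEVEL=INFO

# Monte Carlo parameters
MONTECARLO_SIMULATION_SCENARIOS=1000
MONTECARLO_SIMULATION_YEARS=20
MONTECARLO_DEFAULT_INITIAL_INVESTMENT=1000
MONTECARLO_DEFAULT_MONTHLY_CONTRIBUTION=100
\end{lstlisting}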
|
For a fully local run, start an Ollama server and pull a model before launching the Streamlit interface; the application is then served at \texttt{http://localhost:8501}, as sketched below.
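
These commands mirror the workflow documented in the README (\texttt{mistral} is the repository's default model; alternatives such as \texttt{llama2} or \texttt{neural-chat} also work):

\begin{lstlisting}[language=bash]
# Install Ollama from https://ollama.com, then start the server
ollama serve

# In another terminal, pull a model
ollama pull mistral

# Launch the web interface
streamlit run app.py
\end{lstlisting}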

For deployment, we \textbf{strongly recommend} Docker Compose: it provides a consistent, isolated environment on every platform and removes the need to manage local dependencies. The bundled \texttt{docker-compose.yml} starts the Streamlit web application (port 8501) alongside an Ollama service preloaded with the Mistral model (port 11434), using persistent volumes for model caching.
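
Bringing the stack up and down is then a single command each way; alternatively, the pre-built image published on Docker Hub can be run directly against your \texttt{.env}:

\begin{lstlisting}[language=bash]
# Start the application together with Ollama, then browse to
# http://localhost:8501
docker compose up

# Stop and remove the containers
docker compose down

# Or run the pre-built image from Docker Hub
docker pull merendamattia/personal-financial-ai-agent:latest
docker run -p 8501:8501 --env-file .env \
    merendamattia/personal-financial-ai-agent:latest
\end{lstlisting}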
|
\subsection{Testing and Validation}
|
The project includes a comprehensive test suite: unit tests for the core components (financial profile, portfolio management, asset retrieval), integration tests for complete workflows such as the RAG pipeline, and tool-specific tests for the financial analysis features. All tests run automatically in the CI/CD pipeline to ensure code quality and prevent regressions.
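
The suite is driven by \texttt{pytest}; the invocations below are taken from the repository's documentation (the coverage flags assume the \texttt{pytest-cov} plugin is installed):

\begin{lstlisting}[language=bash]
# Run the full test suite
pytest tests/

# Unit tests only, with an HTML coverage report
pytest tests/unit/ --cov=src/ --cov-report=html

# A single module, e.g. the financial asset analysis tool
pytest tests/tools/test_analyze_financial_asset.py
\end{lstlisting}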