Commit d0488e1 (1 parent: 74133a4)

feat: update author information and enhance documentation in LaTeX files

5 files changed: +15 additions, -225 deletions

latex/main.tex

Lines changed: 5 additions & 4 deletions

@@ -222,14 +222,14 @@
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 %Front Matter
-\author[1]{xxx}
-\author[2]{xxx}
+\author[1]{Merenda Saverio Mattia}
+\author[2]{Crafa Raffaele}
 
 \affil[1]{
-\url{xxx}
+
 }
 \affil[2]{
-\url{xxx}
+
 }
 
 % Title
@@ -249,6 +249,7 @@
 
 % \linenumbers % used for debugging
 
+\newpage
 \printbibliography
 
 \end{document}

latex/sections/01_abstract.tex

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 \section*{Abstract}
 
-This paper presents the design, implementation, and evaluation of a Personal Financial AI Agent---a machine learning-based system designed to provide intelligent, personalized financial guidance to users. The system leverages large language models (LLMs) and retrieval-augmented generation (RAG) techniques to deliver contextualized financial advice, portfolio recommendations, and analysis. The agent supports multiple LLM providers, namely Ollama for local offline inference, Google Gemini for cloud-based solutions, and OpenAI for industry-standard capabilities, enabling users to choose between privacy-first approaches and feature-rich cloud solutions. A Streamlit-based web interface facilitates interactive conversations in multiple languages, providing an accessible entry point for diverse user bases. The system demonstrates proficiency in financial profile extraction from natural language conversations, comprehensive portfolio analysis, and Monte Carlo simulations for scenario planning and risk assessment. This comprehensive study details the system's architecture, implementation techniques, evaluation methodologies, and provides clear demonstration procedures for practitioners and researchers. The project showcases the practical application of conversational AI and retrieval-augmented generation in the financial advisory domain, demonstrating how modern AI techniques can democratize access to quality financial guidance.
+This paper presents the design, implementation, and evaluation of a Personal Financial AI Agent\footnote{Available at: \url{https://github.com/merendamattia/personal-financial-ai-agent}.}---a machine learning-based system designed to provide intelligent, personalized financial guidance to users. The system leverages large language models (LLMs) and retrieval-augmented generation (RAG) techniques to deliver contextualized financial advice, portfolio recommendations, and analysis. The agent supports multiple LLM providers, namely Ollama for local offline inference, Google Gemini for cloud-based solutions, and OpenAI for industry-standard capabilities, enabling users to choose between privacy-first approaches and feature-rich cloud solutions. A Streamlit-based web interface facilitates interactive conversations in multiple languages, providing an accessible entry point for diverse user bases. The system demonstrates proficiency in financial profile extraction from natural language conversations, comprehensive portfolio analysis, and Monte Carlo simulations for scenario planning and risk assessment. This comprehensive study details the system's architecture, implementation techniques, evaluation methodologies, and provides clear demonstration procedures for practitioners and researchers. The project showcases the practical application of conversational AI and retrieval-augmented generation in the financial advisory domain, demonstrating how modern AI techniques can democratize access to quality financial guidance.
 
 \vspace{1em}
 
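The retrieval-augmented generation pipeline named in the abstract can be illustrated with a minimal sketch. The code below is a toy keyword-overlap retriever, not the project's actual implementation; the `retrieve` function, the example documents, and the figures inside them are illustrative assumptions (a real RAG system would rank dense embeddings instead).

```python
def retrieve(query, documents, top_k=1):
    """Toy retriever: rank documents by word overlap with the query."""
    query_words = set(query.lower().split())
    scored = []
    for doc in documents:
        overlap = len(query_words & set(doc.lower().split()))
        scored.append((overlap, doc))
    # Highest-overlap documents first
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

# Hypothetical knowledge-base entries (names and numbers are made up)
docs = [
    "VWCE is a global equity ETF with a 0.22% expense ratio",
    "AGGH is a global aggregate bond ETF",
]
best = retrieve("global equity ETF expense ratio", docs)
```

The retrieved snippet would then be prepended to the LLM prompt so the answer is grounded in real asset data rather than the model's parametric memory.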
latex/sections/06_interface.tex

Lines changed: 0 additions & 6 deletions

@@ -26,9 +26,3 @@ \subsection{Settings, Configuration and Profile Management}
 The settings page empowers users to customize their experience. For LLM provider selection, users can choose between Ollama for local privacy-preserving inference, Google Gemini for advanced cloud-based reasoning, or OpenAI for highest-quality responses. Users can configure API keys for cloud providers, select specific models within each provider, and test connectivity to verify configuration before proceeding. Language selection is available through a dropdown menu supporting English, Italian, Spanish, French, and German. All agent prompts and responses adapt to the selected language automatically. Financial parameters can be customized including Monte Carlo simulation parameters such as the number of scenarios and projection years, initial investment amounts, monthly contribution levels, risk tolerance overrides, and asset class preferences. This customization allows users to tailor the analysis to their specific circumstances.
 
 Users can export their extracted financial profile as JSON, enabling them to download a structured record of their financial situation for record-keeping or sharing with professional advisors. Users can also import previously saved profiles to resume analysis or continue work from a previous session. Manual editing of extracted profiles is supported, allowing users to correct information or update their circumstances. The ability to manage profiles increases user confidence in data privacy and gives users a sense of control over their information.
-
-\subsection{Error Handling, Feedback and Accessibility}
-
-Input validation errors provide clear, actionable messages. For example, if a portfolio allocation does not sum to 100 percent, the system displays an error message indicating the actual sum and requesting correction. Long-running operations such as portfolio generation show progress indicators with spinning animations and descriptive messages like ``Generating portfolio recommendation...'', preventing users from thinking the system is frozen. Information messages provide context and guidance to users. For example, the system might display an informational message stating ``This portfolio assumes a 20-year investment horizon and moderate rebalancing''. Warning messages appear for important disclaimers such as ``Past performance does not guarantee future results'', ensuring users understand the limitations of the analysis.
-
-The interface is designed with accessibility in mind. Semantic HTML structure supports screen readers for visually impaired users. Sufficient color contrast ensures visibility for users with color blindness. Keyboard navigation support allows users to interact with the interface without a mouse. Alternative text descriptions for visualizations help users understand content without seeing the charts. The responsive design adapts to mobile devices, tablets, and desktop browsers, ensuring usability across device types.

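The allocation check described in the removed error-handling subsection (flagging portfolios whose weights do not sum to 100 percent and reporting the actual sum) can be sketched as follows. The function name and message wording are hypothetical, not the project's actual code.

```python
def check_allocation(weights):
    """Validate that portfolio weights sum to 100%.
    Returns None if valid, else an actionable error message
    reporting the actual sum, as the UI behavior describes."""
    total = sum(weights.values())
    if abs(total - 100.0) > 1e-9:
        return (f"Portfolio allocation sums to {total:.1f}%, not 100%. "
                "Please correct the weights.")
    return None

ok = check_allocation({"stocks": 60.0, "bonds": 30.0, "cash": 10.0})
err = check_allocation({"stocks": 60.0, "bonds": 30.0})  # only 90%
```

A valid allocation yields `None`, while the incomplete one yields a message containing the offending sum, which the interface can surface directly to the user.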
latex/sections/07_demo.tex

Lines changed: 7 additions & 214 deletions

@@ -1,226 +1,19 @@
 \section{Demonstration and Usage}
 \label{sec:demo}
 
-This section provides comprehensive instructions for deploying, running, and testing the Personal Financial AI Agent system.
+This section outlines how to deploy and use the Personal Financial AI Agent, including system requirements, installation procedures, and the integrated testing infrastructure.
 
-\subsection{System Requirements}
 
-\subsubsection{Hardware Requirements}
+\subsection{Software Requirements}
 
-\begin{itemize}
-\item \textbf{CPU}: Dual-core processor or better (quad-core recommended)
-\item \textbf{RAM}: 4GB minimum (8GB recommended for local LLM inference)
-\item \textbf{Storage}: 10GB free space (20GB if running Ollama with large models)
-\item \textbf{GPU}: Optional but recommended for Ollama inference
-\end{itemize}
-
-\subsubsection{Software Requirements}
-
-\begin{itemize}
-\item \textbf{Python}: 3.11 or higher
-\item \textbf{Docker}: Version 20.10+ (for containerized deployment)
-\item \textbf{Docker Compose}: Version 1.29+ (for multi-container setup)
-\item \textbf{Git}: For repository cloning
-\end{itemize}
+You will need \textbf{Python 3.11} or higher installed on your system. For containerized deployment, \textbf{Docker version 20.10+} is required, along with \textbf{Docker Compose version 1.29+} for managing multi-container setups. Finally, \textbf{Git} is necessary for cloning the repository from GitHub.
 
 \subsection{Installation}
 
-\subsubsection{Step 1: Clone Repository}
-
-\begin{lstlisting}[language=bash]
-git clone https://github.com/merendamattia/personal-financial-ai-agent.git
-cd personal-financial-ai-agent
-\end{lstlisting}
-
-\subsubsection{Step 2: Create Python Environment}
-
-Using Conda (recommended):
-
-\begin{lstlisting}[language=bash]
-conda create --name personal-financial-ai-agent python=3.11.13
-conda activate personal-financial-ai-agent
-\end{lstlisting}
-
-Or using venv:
-
-\begin{lstlisting}[language=bash]
-python3 -m venv venv
-source venv/bin/activate # On Windows: venv\Scripts\activate
-\end{lstlisting}
-
-\subsubsection{Step 3: Install Dependencies}
-
-\begin{lstlisting}[language=bash]
-python -m pip install --upgrade pip
-pip install -r requirements.txt
-\end{lstlisting}
-
-\subsubsection{Step 4: Configure Environment Variables}
-
-\begin{lstlisting}[language=bash]
-cp .env.example .env
-# Edit .env with your settings
-\end{lstlisting}
-
-Example `.env` configuration for local Ollama:
-
-\begin{lstlisting}
-# LLM Provider
-LLM_PROVIDER=ollama
-AGENT_NAME=FinancialAdvisor
-
-# Ollama Configuration
-OLLAMA_API_URL=http://localhost:11434
-OLLAMA_MODEL=mistral
-
-# Logging
-LOG_LEVEL=INFO
-
-# Monte Carlo Parameters
-MONTECARLO_SIMULATION_SCENARIOS=1000
-MONTECARLO_SIMULATION_YEARS=20
-MONTECARLO_DEFAULT_INITIAL_INVESTMENT=1000
-MONTECARLO_DEFAULT_MONTHLY_CONTRIBUTION=100
-\end{lstlisting}
-
-\subsubsection{Step 5: Extract Dataset}
-
-\begin{lstlisting}[language=bash]
-cd dataset
-unzip ETFs.zip
-cd ..
-\end{lstlisting}
-
-\subsection{Running Locally}
-
-\subsubsection{With Ollama (Recommended)}
-
-First, install and run Ollama:
-
-\begin{lstlisting}[language=bash]
-# Install Ollama from https://ollama.com
-# Start Ollama server (runs in background)
-ollama serve
-
-# In another terminal, pull a model
-ollama pull mistral # Or llama2, neural-chat, etc.
-\end{lstlisting}
-
-Then run the Streamlit app:
-
-\begin{lstlisting}[language=bash]
-streamlit run app.py
-\end{lstlisting}
-
-The application opens at `http://localhost:8501`.
-
-\subsubsection{With Cloud LLM Providers}
-
-For Google Gemini:
-
-\begin{lstlisting}[language=bash]
-# In .env file:
-# LLM_PROVIDER=google
-# GOOGLE_MODEL=gemini-pro
-# GOOGLE_API_KEY=your_api_key_here
-
-streamlit run app.py
-\end{lstlisting}
-
-For OpenAI:
-
-\begin{lstlisting}[language=bash]
-# In .env file:
-# LLM_PROVIDER=openai
-# OPENAI_MODEL=gpt-4
-# OPENAI_API_KEY=your_api_key_here
-
-streamlit run app.py
-\end{lstlisting}
-
-\subsection{Running with Docker}
-
-\subsubsection{Docker Compose (Recommended)}
-
-The simplest method with all dependencies:
-
-\begin{lstlisting}[language=bash]
-# Start application with Ollama
-docker compose up
-
-# Access at http://localhost:8501
-
-# Stop all containers
-docker compose down
-\end{lstlisting}
-
-The `docker-compose.yml` includes:
-
-\begin{itemize}
-\item Streamlit web application (port 8501)
-\item Ollama service with Mistral model (port 11434)
-\item Persistent volumes for model caching
-\end{itemize}
-
-\subsubsection{Docker Standalone}
-
-Build and run Docker image:
-
-\begin{lstlisting}[language=bash]
-# Build image
-docker build --no-cache -t financial-ai-agent:local .
-
-# Run with environment file
-docker run -p 8501:8501 --env-file .env financial-ai-agent:local
-
-# Access at http://localhost:8501
-\end{lstlisting}
-
-Or use pre-built image from Docker Hub:
-
-\begin{lstlisting}[language=bash]
-docker pull merendamattia/personal-financial-ai-agent:latest
-docker run -p 8501:8501 --env-file .env \
-merendamattia/personal-financial-ai-agent:latest
-\end{lstlisting}
-
-\subsection{Testing the System}
-
-\subsubsection{Unit Tests}
-
-Run unit tests for core components:
-
-\begin{lstlisting}[language=bash]
-# Run all unit tests
-pytest tests/unit/
-
-# Run specific test module
-pytest tests/unit/test_financial_profile.py
-
-# Run with coverage report
-pytest tests/unit/ --cov=src/ --cov-report=html
-\end{lstlisting}
-
-\subsubsection{Integration Tests}
-
-Test complete workflows:
-
-\begin{lstlisting}[language=bash]
-# Run integration tests
-pytest tests/test_rag.py
-
-# Run all tests
-pytest tests/
-\end{lstlisting}
-
-\subsubsection{Tool Tests}
+For detailed installation instructions, please refer to the comprehensive \textbf{README.md} file in the repository at \url{https://github.com/merendamattia/personal-financial-ai-agent}. The README contains step-by-step guidance for cloning the repository, setting up Python environments, installing dependencies, and configuring environment variables for all supported LLM providers.
 
-Test specific tools:
+We \textbf{strongly recommend} using Docker Compose for deployment, as it provides a consistent, isolated, and production-ready environment that works reliably across platforms without local dependency management.
 
-\begin{lstlisting}[language=bash]
-# Test financial asset analysis tool
-pytest tests/tools/test_analyze_financial_asset.py
+\subsection{Testing and Validation}
 
-# Test asset retriever
-pytest tests/unit/test_asset_retriever.py
-\end{lstlisting}
+The project includes a comprehensive test suite covering unit tests for core components (financial profile, portfolio management, asset retrieval), integration tests for complete workflows, and tool-specific tests for financial analysis features. All tests are automatically executed in the CI/CD pipeline to ensure code quality and prevent regressions.

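The Monte Carlo defaults from the removed `.env` listing above (1000 scenarios, 20 projection years, an initial investment of 1000, and a monthly contribution of 100) can be exercised with a minimal simulation sketch. This is illustrative only, not the project's simulation code; the assumed mean annual return and volatility are my own placeholders.

```python
import random

# Parameters mirroring the removed .env example
SCENARIOS = 1000   # MONTECARLO_SIMULATION_SCENARIOS
YEARS = 20         # MONTECARLO_SIMULATION_YEARS
INITIAL = 1000.0   # MONTECARLO_DEFAULT_INITIAL_INVESTMENT
MONTHLY = 100.0    # MONTECARLO_DEFAULT_MONTHLY_CONTRIBUTION
MEAN_RETURN = 0.06 # assumed annual return (illustrative)
VOLATILITY = 0.15  # assumed annual volatility (illustrative)

def run_scenario(rng):
    """Grow the portfolio year by year with a random annual return."""
    value = INITIAL
    for _ in range(YEARS):
        annual_return = rng.gauss(MEAN_RETURN, VOLATILITY)
        value = (value + 12 * MONTHLY) * (1.0 + annual_return)
    return value

rng = random.Random(42)  # fixed seed for reproducibility
outcomes = sorted(run_scenario(rng) for _ in range(SCENARIOS))
median_value = outcomes[SCENARIOS // 2]   # typical outcome
worst_decile = outcomes[SCENARIOS // 10]  # pessimistic outcome
```

Sorting the terminal values lets the agent report percentile bands (median, worst decile, and so on) for scenario planning and risk assessment, as described in the abstract.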
latex/sections/08_conclusion.tex

Lines changed: 2 additions & 0 deletions

@@ -5,3 +5,5 @@ \section{Conclusion}
 As AI increasingly shapes financial services, maintaining focus on user trust, transparency, and privacy will be crucial. This project provides a template for building AI systems in regulated domains where accuracy and trustworthiness are paramount. Financial advisory is a domain where AI's promise to democratize expertise must be balanced with responsibility to protect users from misinformation and poor recommendations. By emphasizing grounding in real data through RAG, supporting multiple inference options for privacy, and maintaining transparency about limitations, this system demonstrates an approach to trustworthy AI in high-stakes domains.
 
 Despite its strengths, the system has several inherent limitations that should be acknowledged. Model dependency means system quality is fundamentally bounded by underlying LLM capabilities, with smaller models showing limitations in financial reasoning compared to larger models. Data currency presents a challenge since historical financial data has a fixed lookback window, and real-time market data integration would improve recommendations. Limited risk modeling is employed, with simplified volatility metrics; advanced portfolio construction techniques such as mean-variance optimization and factor models could enhance recommendations. The system provides recommendations only and cannot execute trades, so integration with trading platforms would require additional compliance and security measures. Scalability is limited since the current architecture uses in-memory processing, and distributed systems would be needed for enterprise-scale deployment.
+
+The complete source code for the Personal Financial AI Agent is available at \url{https://github.com/merendamattia/personal-financial-ai-agent}.

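Mean-variance optimization, mentioned in the conclusion as a possible enhancement, starts from the two-asset portfolio variance formula. The sketch below computes expected return and volatility for a 60/40 split; all inputs are assumed, illustrative figures, not values from the project.

```python
import math

def portfolio_stats(w1, mu1, mu2, sigma1, sigma2, rho):
    """Expected return and volatility of a two-asset portfolio.
    w1 is the weight of asset 1; asset 2 receives the remainder.
    Variance: w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*rho*s1*s2."""
    w2 = 1.0 - w1
    expected = w1 * mu1 + w2 * mu2
    variance = (w1**2 * sigma1**2 + w2**2 * sigma2**2
                + 2.0 * w1 * w2 * rho * sigma1 * sigma2)
    return expected, math.sqrt(variance)

# Illustrative 60/40 stocks/bonds mix with low correlation
ret, vol = portfolio_stats(0.6, mu1=0.08, mu2=0.03,
                           sigma1=0.18, sigma2=0.06, rho=0.2)
```

Because the correlation is low, the blended volatility falls below that of the equity asset alone, which is the diversification effect an optimizer would exploit.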