The `docs` folder is meant to hold the project's documentation and diagrams. It currently contains the architecture diagram.
Install Conda and Homebrew if needed.

Note to self: Conda is only needed to match the Python version used on the server. Is that necessary?
Create a conda environment:

```bash
conda create -n energenius python=3.13
```

Activate it:

```bash
conda activate energenius
```

Install the pip packages from the requirements file into the environment:

```bash
pip install -r requirements.txt
```

In order to run the server, you need to create a file called `private_settings.py` in the same directory as `settings.py`. This file should contain the following variables:
```python
PRIVATE_SETTINGS = {
    "LLM_LOCAL": True,  # True if you are using a local LLM, False if you are using a remote LLM
    "LLM_KEY": {
        "openai": "",     # OpenAI API key
        "ollama": "",     # Ollama API key
        "anthropic": "",  # Anthropic API key
        "deepseek": "",   # DeepSeek API key
    },
    "LLM_BASE_URL": "",  # Base URL for the local LLM API
}
```

You can use the standard URLs for local deployment:
- Ollama: `"LLM_BASE_URL": "http://localhost:11434"`
- LM Studio: `"LLM_BASE_URL": "http://localhost:1234/v1"`
For now, testing is done locally with Ollama.
Models tried:
- gpt-oss
- llama3.2
- mistral
Embeddings:
- mxbai-embed-large
- nomic-embed-text
In order to run Ollama, launch the Ollama server in a separate terminal:

```bash
ollama run gpt-oss  # or llama3.2, or mistral
```
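Before wiring anything up, you can check that the Ollama endpoints answer. A minimal smoke test against Ollama's REST API (a sketch: it assumes the default port 11434 and that the models have already been pulled with `ollama pull`):

```python
# smoke_test_ollama.py -- quick sanity check that the local Ollama server responds.
import requests

BASE_URL = "http://localhost:11434"  # default Ollama port

# Text generation with one of the chat models listed above.
gen = requests.post(
    f"{BASE_URL}/api/generate",
    json={"model": "llama3.2", "prompt": "Say hello in five words.", "stream": False},
    timeout=120,
)
gen.raise_for_status()
print("generate:", gen.json()["response"])

# Embedding with one of the embedding models listed above.
emb = requests.post(
    f"{BASE_URL}/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "household energy consumption"},
    timeout=120,
)
emb.raise_for_status()
print("embedding dims:", len(emb.json()["embedding"]))
```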
To run the UI:

```bash
streamlit run streamlit_ui.py
```
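The actual interface lives in `streamlit_ui.py`. For orientation only, here is a hypothetical minimal chat loop of the same shape (this is not the project's code), talking directly to a local Ollama server:

```python
# chat_sketch.py -- hypothetical minimal Streamlit chat UI, NOT streamlit_ui.py.
import requests
import streamlit as st

st.title("Energenius (sketch)")

if "history" not in st.session_state:
    st.session_state.history = []

# Replay the conversation so far.
for role, text in st.session_state.history:
    with st.chat_message(role):
        st.write(text)

if prompt := st.chat_input("Ask about your energy usage"):
    st.session_state.history.append(("user", prompt))
    with st.chat_message("user"):
        st.write(prompt)
    # Single-turn call for simplicity; a real app would carry conversation context.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    answer = resp.json()["response"]
    st.session_state.history.append(("assistant", answer))
    with st.chat_message("assistant"):
        st.write(answer)
```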
If you use this project, please cite the companion paper:

```bibtex
@article{Campi_Giudici_Pinciroli_Vago_Brambilla_Fraternali_2025,
  title   = {Enhancing Human-AI Collaboration through a Conversational Agent for Energy Efficiency},
  author  = {Campi, Riccardo and Giudici, Mathyas and Pinciroli Vago, Nicolò Oreste and Brambilla, Marco and Fraternali, Piero},
  journal = {Proceedings of the AAAI Symposium Series},
  volume  = {5},
  number  = {1},
  pages   = {52--55},
  year    = {2025},
  month   = {May},
  doi     = {10.1609/aaaiss.v5i1.35554}
}
```