2 changes: 2 additions & 0 deletions .github/workflows/CI.yml

@@ -29,6 +29,8 @@ jobs:
           touch .env
           echo KEY_API_BDX=${{ secrets.KEY_API_BDX }} >> .env
           echo MISTRAL_API_KEY=${{ secrets.MISTRAL_API_KEY }} >> .env
+          echo PORTKEY_API_KEY=${{ secrets.PORTKEY_API_KEY }} >> .env
+          echo PORTKEY_VIRTUAL_KEY=${{ secrets.PORTKEY_VIRTUAL_KEY }} >> .env
       - name: Copy test data
         run: |
           cp -r ${{ env.ROOT_TESTS_DATA }}/* ${{ github.workspace }}/tests/
13 changes: 13 additions & 0 deletions README.md

@@ -50,6 +50,8 @@ Create a .env at the project root with:
 - MAPBOX_TOKEN="YOUR TOKEN HERE" (to use the mapbox-based charts).
 - KEY_API_BDX="YOUR KEY HERE" (to use the Bordeaux open data API. To get a [key](https://data.bordeaux-metropole.fr/opendata/key))
 - MISTRAL_API_KEY="YOUR KEY HERE" (to use the Mistral API. To get a [key](https://mistral.ai/))
+- PORTKEY_API_KEY="YOUR KEY HERE" (to use the Portkey API. To get a [key](https://app.portkey.ai/))
+- PORTKEY_VIRTUAL_KEY="YOUR KEY HERE" (a custom endpoint configured in Portkey. To get a [key](https://app.portkey.ai/))
 
 ## Studies:
 
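The new keys are consumed at runtime with the same `load_dotenv()` / `os.getenv` pattern this PR uses in `src/vcub_keeper/llm/agent.py`; a minimal sketch:

```python
import os

from dotenv import load_dotenv

# Read the .env file at the project root into the process environment
load_dotenv()

# The two keys introduced by this PR; both are None if missing from .env
PORTKEY_API_KEY = os.getenv("PORTKEY_API_KEY")
PORTKEY_VIRTUAL_KEY = os.getenv("PORTKEY_VIRTUAL_KEY")
```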
@@ -68,3 +70,14 @@ Identical to the previous image, but in 3D, to better observe certain phenomena
 ![image](https://user-images.githubusercontent.com/8374843/96337330-a2ec7680-1086-11eb-84ec-c42c4cd5f7f6.png)
 
 Anomaly detection on the `Rue de la Croix Blanche` station from its real-time data.
+
+
+### LLM:
+
+- Monitoring via [Portkey AI](https://app.portkey.ai/) (open source: https://github.com/Portkey-AI/gateway)
+- Uses the Mistral API.
+- Example use cases:
+  - Ask for the status / number of bikes available at a station (internal data via open data).
+  - Find the stations closest to an address.
+  - Compute the distance between two stations and the travel time between them.
+  - Predict the number of bikes / free slots at a station over a user-defined horizon (less than one day).
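To illustrate these use cases, a hypothetical interaction with the chat model built by `create_chat` (the model name and prompt below are illustrative assumptions, not part of this PR):

```python
from vcub_keeper.llm.agent import create_chat

# Build a chat model routed through the Portkey gateway;
# "mistral-large-latest" is an assumed model name for this sketch
chat = create_chat(model="mistral-large-latest", agent_name_monitor="readme_demo")

# Direct call to the chat model (the full agent layers pandas tools on top)
response = chat.invoke("How many bikes are available at the Meriadeck station?")
print(response.content)
```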
4 changes: 3 additions & 1 deletion pyproject.toml

@@ -30,7 +30,9 @@ dependencies = [
     "langchain-experimental==0.3.4",
     "langchain-mistralai==0.2.9",
     "tabulate==0.9.0",
-    "geopy==2.4.1"
+    "geopy==2.4.1",
+    "portkey-ai==1.11.1",
+    "langchain-openai==0.3.9"
 ]
 
 [project.optional-dependencies]
33 changes: 19 additions & 14 deletions src/vcub_keeper/llm/agent.py

@@ -8,6 +8,8 @@
 from langchain_core.rate_limiters import InMemoryRateLimiter
 from langchain_experimental.agents import create_pandas_dataframe_agent
 from langchain_mistralai.chat_models import ChatMistralAI
+from langchain_openai import ChatOpenAI
+from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
 
 from vcub_keeper.config import CONFIG_LLM
 from vcub_keeper.llm.crewai.tool_python import (
@@ -21,10 +23,11 @@
 load_dotenv()
 
 
-MISTRAL_API_KEY = os.getenv("MISTRAL_API_KEY")
+PORTKEY_API_KEY = os.getenv("PORTKEY_API_KEY")
+PORTKEY_VIRTUAL_KEY = os.getenv("PORTKEY_VIRTUAL_KEY")
 
 
-def create_chat(model: str, temperature: float = 0.1) -> ChatMistralAI:
+def create_chat(model: str, temperature: float = 0.1, agent_name_monitor: str = "chat_vcub_keeper") -> ChatMistralAI:
     """
 
 
@@ -35,6 +38,8 @@ def create_chat(model: str, temperature: float = 0.1) -> ChatMistralAI:
         _description_
     temperature : float, optional
         _description_, by default 0.0
+    agent_name_monitor : str, optional
+        User name for monitoring via Portkey, by default "chat_vcub_keeper"
 
     Returns
     -------
@@ -48,22 +53,22 @@ def create_chat(model: str, temperature: float = 0.1) -> ChatMistralAI:
     # To avoid rate limit errors (429 - Requests rate limit exceeded)
     rate_limiter = InMemoryRateLimiter(requests_per_second=3, check_every_n_seconds=0.3, max_bucket_size=4)
 
-    # Initialize memory for conversation history
-    # memory = ConversationBufferMemory(memory_key="chat_history")
+    portkey_headers = createHeaders(
+        api_key=PORTKEY_API_KEY,
+        virtual_key=PORTKEY_VIRTUAL_KEY,
+        provider="mistral",
+        metadata={"_user": agent_name_monitor},
+    )
 
-    chat_llm = ChatMistralAI(
+    chat_llm = ChatOpenAI(
+        api_key="X",
+        base_url=PORTKEY_GATEWAY_URL,
+        rate_limiter=rate_limiter,
+        default_headers=portkey_headers,
         model=model,
         temperature=temperature,
-        api_key=MISTRAL_API_KEY,
-        rate_limiter=rate_limiter,
-        # memory=memory,
-        random_seed=42,
+        seed=42,
         verbose=True,
-        # model_kwargs={
-        #     "top_p": 0.92,
-        #     "repetition_penalty": 1.1,
-        #     "max_tokens": 1024,  # Limit token generation
-        # },
     )
 
     return chat_llm
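The design choice here: the Portkey gateway exposes an OpenAI-compatible API, which is why the client becomes `ChatOpenAI` even though the upstream provider is Mistral; the real provider credential is resolved server-side from the virtual key, so `api_key="X"` is a placeholder that never reaches Mistral. A minimal smoke test of the same routing pattern (the model name and prompt are assumptions for illustration):

```python
import os

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

load_dotenv()

# Same pattern as create_chat(): route an OpenAI-compatible client through
# the Portkey gateway, which forwards to Mistral via the virtual key.
headers = createHeaders(
    api_key=os.getenv("PORTKEY_API_KEY"),
    virtual_key=os.getenv("PORTKEY_VIRTUAL_KEY"),
    provider="mistral",
    metadata={"_user": "smoke_test"},  # tags the request in Portkey logs
)

llm = ChatOpenAI(
    api_key="X",  # placeholder; the gateway injects the real credential
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=headers,
    model="mistral-small-latest",  # assumed model name for this sketch
)

print(llm.invoke("ping").content)
```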
3 changes: 2 additions & 1 deletion tests/llm/test_base_llm.py

@@ -163,7 +163,8 @@ def test_distance_calculation(agent, capfd):
         or "1 minute" in response["output"].lower()
         or "1.59" in response["output"].lower()
         or "1,59" in response["output"].lower()
-        or " 160 secondes" in response["output"].lower()
+        or "160 secondes" in response["output"].lower()
+        or "2 minutes et 40 secondes" in response["output"].lower()
     )
     assert "meriadeck" in response["output"].lower()
     assert "place gambetta" in response["output"].lower()