An AI-powered time series analysis library featuring multiple LLM providers, forecasting models, anomaly detectors, and a unique multi-agent council deliberation system.
- Multi-Provider LLM Support: Gemini, Claude, OpenAI, DeepSeek, Qwen
- Multiple Forecasting Models: Moirai2, Chronos, TimesFM, Ti-Rex, Lag-Llama, LLM-based
- Multiple Anomaly Detectors: Z-score, MAD, RuleDetector, Isolation Forest, LOF, LSTM-VAE
- Three Analysis Modes:
  - Standard: Single LLM with tool execution
  - Council: 3-role council (Forecaster, Risk Analyst, Business Explainer)
  - Advanced Council: Karpathy-style 3-stage deliberation with peer ranking
- Web UI: FastAPI-based interface with real-time progress updates
- Structured Logging: Configurable logging throughout
# From source
git clone https://github.com/salesforce/timeseries-council.git
cd timeseries-council
pip install -e ".[all]"

The library includes several foundation models for time series forecasting. Some models work out of the box, while others require additional setup.
- Chronos (Amazon) - `chronos-forecasting` package
- Merlion detectors (Salesforce) - `salesforce-merlion` package
- Statistical baselines - `statsmodels` package
| Model | Requirement | Installation |
|---|---|---|
| Moirai2 | Python 3.10+ | pip install git+https://github.com/SalesforceAIResearch/uni2ts.git |
| TimesFM | JAX runtime | pip install timesfm |
| Ti-Rex | Special install | pip install tirex-ts |
| PyOD (ECOD, COPOD, HBOS, KNN, OCSVM, LODA) | — | pip install pyod |
Note: Without special installs, TimesFM/Ti-Rex/Moirai2 will use statistical fallback methods.
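The exact fallback methods are not documented here; a seasonal-naive baseline is a typical example of the kind of statistical fallback a foundation model might degrade to (illustrative sketch only, not the library's code):

```python
# Illustrative seasonal-naive fallback: repeat the last observed season.
# This is NOT the library's implementation, just an example of the kind of
# statistical method a foundation-model fallback might use.
def seasonal_naive_forecast(history, horizon, season_length=7):
    """Forecast `horizon` steps by repeating the last `season_length` values."""
    if len(history) < season_length:
        season_length = len(history)  # degrade gracefully on short series
    last_season = history[-season_length:]
    return [last_season[i % season_length] for i in range(horizon)]

daily_sales = [100, 110, 120, 130, 140, 90, 80]  # one week of data
print(seasonal_naive_forecast(daily_sales, horizon=3))  # [100, 110, 120]
```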
Foundation models need to download weights from HuggingFace on first use. You can pre-download them:
# Download small model weights (recommended)
timeseries-council download-models
# Download specific sizes (tiny, small, base, large)
timeseries-council download-models --sizes small base
# Full setup: install packages + download all models
timeseries-council download-models --all
# Check model status
timeseries-council status
timeseries-council status --verbose

For the best experience with all models:
# 1. Install the package with all dependencies
pip install -e ".[all]"
# 2. Install Moirai2 (requires Python 3.10+)
pip install git+https://github.com/SalesforceAIResearch/uni2ts.git
# 3. Install PyOD detectors (optional)
pip install pyod
# 4. Download model weights
timeseries-council download-models --sizes small
# 5. Verify all models are available
timeseries-council status

Expected output from status:
Forecasters (available):
[OK] zscore_baseline
[OK] llm
[OK] moirai2
[OK] chronos
[OK] timesfm
[OK] lag-llama
Detectors (available):
[OK] zscore
[OK] mad
[OK] isolation-forest
[OK] lof
[OK] merlion
...
from timeseries_council import Orchestrator
from timeseries_council.providers import create_provider
from timeseries_council.forecasters import create_forecaster
from timeseries_council.detectors import create_detector
import os
# Create LLM provider
api_key = os.getenv("GEMINI_API_KEY")
provider = create_provider("gemini", api_key) # or "anthropic", "openai", etc.
# Create optional forecaster and detector
forecaster = create_forecaster("moirai")
detector = create_detector("zscore")
# Initialize orchestrator
orchestrator = Orchestrator(
llm_provider=provider,
csv_path="data/sample_sales.csv",
target_col="sales",
forecaster=forecaster,
detector=detector
)
# Chat with your data
response = orchestrator.chat("What will sales be next week?")
print(response)
# Use council mode for multi-perspective analysis
response = orchestrator.chat_with_council("Analyze the sales trend")
print(response)

# Start the web server
timeseries-council serve --host 127.0.0.1 --port 8000
# Or with uvicorn directly
uvicorn timeseries_council.web.app:create_app --factory --reload

Then open http://localhost:8000 in your browser.
# Interactive CLI session
timeseries-council chat data/sales.csv --target sales --provider gemini
# With custom models
timeseries-council chat data/sales.csv \
--target sales \
--provider anthropic \
--forecaster chronos \
--detector isolation_forest

# LLM API Keys
export GEMINI_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
export DEEPSEEK_API_KEY="your-key"
export DASHSCOPE_API_KEY="your-key" # For Qwen
# Deployment hardening (recommended for public hosting)
export TS_ENABLED_PROVIDERS="anthropic" # Comma-separated allowlist (unset = allow all)
export TS_ENABLE_RAW_SESSION_PATH="false" # Keep upload-only flow
# export TS_ADMIN_TOKEN="set-strong-token" # Required to enable /api/models/setup*
export TS_MAX_UPLOAD_MB="20" # Reject larger uploads (HTTP 413)
export TS_EXPOSE_SESSION_LIST="false" # Keep /api/sessions hidden
# export TS_SESSION_API_TOKEN="set-token" # Optional X-Session-Token gate for session/upload APIs
export TS_ENABLE_DYNAMIC_SKILLS="false" # Keep runtime code generation disabled
export TS_RATE_LIMIT_ENABLED="true" # Enable in-app per-IP rate limits
export TS_RATE_LIMIT_WINDOW_SECONDS="60" # Shared window duration
export TS_RATE_LIMIT_UPLOAD_PER_WINDOW="10" # Upload requests/IP/window
export TS_RATE_LIMIT_SESSION_PER_WINDOW="20" # Session requests/IP/window
export TS_RATE_LIMIT_CHAT_PER_WINDOW="60" # Chat requests/IP/window
export TS_RATE_LIMIT_DEFAULT_PER_WINDOW="120" # Other guarded endpoints/IP/window
# Logging
export TIMESERIES_COUNCIL_LOG_LEVEL="INFO" # DEBUG, INFO, WARNING, ERROR

default_provider: gemini
default_forecaster: moirai
default_detector: zscore
providers:
gemini:
model: gemini-2.5-flash
anthropic:
model: claude-sonnet-4-20250514
forecasters:
moirai:
context_length: 512
chronos:
model_size: small
detectors:
zscore:
sensitivity: 2.0
isolation_forest:
contamination: 0.1
logging:
level: INFO
format: "%(asctime)s | %(levelname)-8s | %(name)s | %(message)s"

For production deployments, place the app behind a reverse proxy (e.g., Nginx):
- Bind the app to `127.0.0.1:8000` (default)
- Expose only the reverse proxy on ports `80`/`443`
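A minimal Nginx server block following these two rules might look like this (a sketch; `example.com` and the certificate paths are placeholders to adjust for your deployment):

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder

    ssl_certificate     /etc/ssl/certs/example.com.pem;     # placeholder paths
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8000;  # app bound to localhost only
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```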
Single LLM analyzes your query and executes appropriate tools:
response = orchestrator.chat("Forecast the next 7 days")

Three specialized roles provide different perspectives:
- Quantitative Analyst: Focus on numbers and statistics
- Risk Analyst: Identify potential risks and uncertainties
- Business Explainer: Translate insights for stakeholders
response = orchestrator.chat_with_council("What's the outlook for Q4?")

Karpathy-style 3-stage deliberation:
- Stage 1: All models provide initial responses
- Stage 2: Models rank each other's responses (anonymized)
- Stage 3: Chairman synthesizes final answer
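Stage 2's anonymized peer ranking needs some aggregation rule to decide which responses the chairman weighs most. The library's exact rule isn't shown here; a simple Borda-count sketch illustrates one way to combine the ballots:

```python
from collections import defaultdict

# Illustrative Borda-count aggregation of anonymized peer rankings.
# Each ranker orders the anonymized response labels best-to-worst; a
# response scores (n - position) points per ballot. Sketch only -- not
# necessarily how AdvancedCouncil aggregates Stage 2 rankings.
def borda_aggregate(rankings):
    """rankings: list of ballots, each ordering labels best-to-worst."""
    scores = defaultdict(int)
    for ballot in rankings:
        n = len(ballot)
        for position, label in enumerate(ballot):
            scores[label] += n - position  # best gets n points, worst gets 1
    return sorted(scores, key=scores.get, reverse=True)

# Three anonymized responses (A, B, C) ranked by three council members:
ballots = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(borda_aggregate(ballots))  # ['A', 'B', 'C']
```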
from timeseries_council.council import AdvancedCouncil
council = AdvancedCouncil(
council_providers={
"gemini": provider1,
"claude": provider2,
"gpt4": provider3
},
chairman_name="claude"
)
result = council.run_sync(
user_query="What will sales be next month?",
context="Historical data context..."
)

| Name | Description | Dependencies |
|---|---|---|
| `moirai` | Salesforce Moirai2 via uni2ts | uni2ts, gluonts, torch |
| `chronos` | Amazon Chronos | chronos-forecasting |
| `timesfm` | Google TimesFM | timesfm |
| `tirex` | Ti-Rex | tirex-ts |
| `lag_llama` | Lag-Llama | lag-llama |
| `llm` | LLM-based forecasting | LLM provider |
| `zscore_baseline` | Simple statistical baseline | Built-in |
| Name | Description | Dependencies |
|---|---|---|
| `zscore` | Z-score detection | Built-in |
| `mad` | Median Absolute Deviation | Built-in |
| `ruledetector` | Rule-based Anomaly Detector | Built-in |
| `isolation_forest` | Isolation Forest | scikit-learn |
| `lof` | Local Outlier Factor | scikit-learn |
| `moirai` | Moirai2 back-prediction | uni2ts, gluonts, torch |
| `merlion` | Merlion ensemble | salesforce-merlion |
| `lstm_vae` | LSTM Variational Autoencoder | torch |
| `pyod` | PyOD detectors (ECOD, COPOD, HBOS, KNN, OCSVM, LODA) | pyod |
| `llm` | LLM-based detection | LLM provider |
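The two built-in statistical detectors are simple enough to sketch directly. These stdlib-only versions are illustrative; the library's detectors add DetectionMemory integration and richer output:

```python
import statistics

# Illustrative z-score and MAD anomaly detectors, mirroring the two
# built-in statistical methods. Sketches only, not the library's code.
def zscore_anomalies(series, threshold=3.0):
    mean = statistics.fmean(series)
    std = statistics.pstdev(series)
    return [i for i, x in enumerate(series)
            if std and abs(x - mean) / std > threshold]

def mad_anomalies(series, threshold=3.5):
    med = statistics.median(series)
    mad = statistics.median(abs(x - med) for x in series)
    # 0.6745 rescales MAD so scores are comparable with a standard deviation
    return [i for i, x in enumerate(series)
            if mad and 0.6745 * abs(x - med) / mad > threshold]

data = [10, 11, 9, 10, 12, 10, 95, 11, 10]
print(zscore_anomalies(data, threshold=2.5))  # [6] (the spike at 95)
print(mad_anomalies(data))                    # [6]
```

Note how the single spike inflates the mean and standard deviation, so the plain z-score needs a lower threshold to catch it, while the median-based MAD score is robust to the outlier.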
| Name | Description | Environment Variable |
|---|---|---|
| `gemini` | Google Gemini | GEMINI_API_KEY |
| `anthropic` | Anthropic Claude | ANTHROPIC_API_KEY |
| `openai` | OpenAI GPT | OPENAI_API_KEY |
| `deepseek` | DeepSeek | DEEPSEEK_API_KEY |
| `qwen` | Alibaba Qwen | DASHSCOPE_API_KEY |
The orchestrator has access to these analysis tools:
| Tool | Description |
|---|---|
| `forecast` | Generate time series forecasts |
| `describe_series` | Statistical summary of the series |
| `detect_anomalies` | Find anomalies in the data |
| `decompose_series` | Trend, seasonal, residual decomposition |
| `compare_series` | Compare multiple time periods |
| `what_if_simulation` | Scenario analysis |
The anomaly detection pipeline supports stateful memory via the `DetectionMemory` dataclass. This allows callers to pass historical context (baseline statistics, expected ranges, domain knowledge) so detectors can make more informed decisions.
| Field | Type | Description |
|---|---|---|
| `baseline_stats` | `Dict[str, float]` | Known-normal statistics: mean, std, median, mad |
| `expected_range` | `Optional[List[float]]` | Expected value range [low, high]: values inside are filtered out |
| `context` | `Any` | Free-form domain context (used by LLM detector in its prompt) |
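For experimenting outside the library, the fields above can be mirrored with a stdlib dataclass. The real `DetectionMemory` is imported from `timeseries_council.types`; this stand-in only mirrors its documented fields:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

# Stand-in mirroring the documented DetectionMemory fields; the real class
# lives in timeseries_council.types and may carry additional behavior.
@dataclass
class DetectionMemory:
    baseline_stats: Dict[str, float] = field(default_factory=dict)
    expected_range: Optional[List[float]] = None
    context: Any = None

memory = DetectionMemory(
    baseline_stats={"mean": 100.0, "std": 10.0, "median": 98.0},
    expected_range=[80, 120],
    context="Holiday season",
)
print(memory.expected_range)  # [80, 120]
```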
| Detector | Integration |
|---|---|
| Z-Score | Computes z-scores against baseline mean/std instead of batch stats |
| MAD | Uses baseline median and MAD for modified z-score computation |
| Isolation Forest | Adds baseline_zscore as an extra feature to the model |
| LOF | Adds baseline_zscore as an extra feature to the model |
| PyOD (ECOD, COPOD, HBOS, KNN, OCSVM, LODA) | Adds baseline_zscore as an extra feature |
| LSTM-VAE | Normalizes data using baseline mean/std instead of batch stats |
| Moirai2 | Boosts severity with max(model_severity, baseline_z) |
| Merlion (WindStats, Spectral, Prophet) | Scales alarm threshold by baseline/current std ratio |
| LLM | Injects baseline stats, expected range, and domain context into prompt |
All detectors also apply shared post-processing via `_apply_memory()`: baseline rescoring and expected-range filtering.
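These two shared steps amount to something like the following sketch (illustrative only, not the library's `_apply_memory` implementation):

```python
# Minimal sketch of the two shared post-processing steps:
# (1) rescore each flagged point against the known-normal baseline, and
# (2) drop flagged points whose value lies inside the expected range.
# Illustrative only -- not the library's _apply_memory implementation.
def apply_memory(anomalies, series, baseline_stats, expected_range=None):
    mean = baseline_stats["mean"]
    std = baseline_stats["std"] or 1.0  # guard against a zero-std baseline
    rescored = []
    for idx in anomalies:
        value = series[idx]
        baseline_z = abs(value - mean) / std  # baseline rescoring
        if expected_range and expected_range[0] <= value <= expected_range[1]:
            continue  # expected-range filtering: value is known-normal
        rescored.append((idx, round(baseline_z, 2)))
    return rescored

series = [100, 104, 150, 70]
candidates = [1, 2, 3]  # indices a detector flagged
print(apply_memory(candidates, series, {"mean": 100.0, "std": 10.0}, [80, 120]))
# [(2, 5.0), (3, 3.0)] -- index 1 (value 104) is filtered as in-range
```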
from timeseries_council.tools import detect_anomalies
from timeseries_council.types import DetectionMemory
# Create memory with known-normal baseline
memory = DetectionMemory(
baseline_stats={"mean": 100.0, "std": 10.0, "median": 98.0},
expected_range=[80, 120],
context="Holiday season — expect higher variability",
)
# Detection uses baseline for scoring, filters values within [80, 120]
result = detect_anomalies(series=my_series, memory=memory)

When using the Orchestrator, detection memory is automatically accumulated across calls. After each detection run, baseline stats (mean, std, median) from the result metadata are stored and passed to subsequent detection calls.
orchestrator = Orchestrator(llm_provider=provider, csv_path="data/sales.csv", target_col="sales")
# First call — detects anomalies, stores baseline stats in memory
response1 = orchestrator.chat("Are there anomalies in the data?")
# Second call — detection now uses first run's stats as baseline context
response2 = orchestrator.chat("Check for anomalies in this updated data")

The library supports real-time progress tracking via callbacks:
from timeseries_council.types import ProgressStage
def my_callback(stage: ProgressStage, message: str, progress: float):
print(f"[{stage.value}] {progress:.0%} - {message}")
orchestrator = Orchestrator(
llm_provider=provider,
csv_path="data/sales.csv",
target_col="sales",
progress_callback=my_callback
)

Progress stages:
- `INITIALIZING` - Starting up
- `TOOL_SELECTION` - LLM choosing tools
- `FORECASTING` - Running forecast
- `DETECTING` - Running anomaly detection
- `COUNCIL_STAGE_1` - Collecting opinions
- `COUNCIL_STAGE_2` - Peer ranking
- `COUNCIL_STAGE_3` - Chairman synthesis
- `COMPLETE` - Done
See the `examples/` directory for:
- `demo.ipynb` - Interactive Jupyter notebook demo
- `sample_data.py` - Generate sample datasets
- `finetune_vertex.py` - Fine-tune models on Vertex AI
- `generate_training_data.py` - Create training data
# Install development dependencies
pip install -e ".[dev]"
# Run tests
pytest tests/
# Run with coverage
pytest tests/ --cov=timeseries_council
# Type checking
mypy src/timeseries_council
# Linting
ruff check src/timeseries_council/
├── src/timeseries_council/
│ ├── providers/ # LLM provider implementations
│ ├── forecasters/ # Forecasting model implementations
│ ├── detectors/ # Anomaly detector implementations
│ ├── tools/ # Analysis tools
│ ├── council/ # Council orchestration
│ ├── web/ # Web interface
│ ├── cli/ # Command-line interface
│ ├── orchestrator.py # Main orchestration logic
│ ├── config.py # Configuration management
│ ├── types.py # Type definitions
│ ├── logging.py # Logging utilities
│ └── exceptions.py # Custom exceptions
Contributions are welcome! Please see CONTRIBUTING.md for guidelines on:
- Development setup
- Running tests and linting
- Submitting pull requests
If you use Time Series Council in your research, please cite our ICLR 2026 TSALM Workshop paper:
@inproceedings{tune_as_inference_2026,
title={Tune-as-Inference: Amortized Configuration Learning for Time Series Foundation Models},
author={Gupta, Piyush and Reddy, Sriteja and Singh, Manpreet and Sahoo, Doyen},
booktitle={ICLR 2026 Workshop on Time Series and Language Models (TSALM)},
year={2026},
url={https://openreview.net/forum?id=LycMKa0o0b}
}

Apache 2.0 License - see LICENSE for details.
- Karpathy's LLM Council for the advanced council concept
- Amazon Chronos for forecasting
- Google TimesFM for time series foundation model
- Lag-Llama for probabilistic forecasting
- GluonTS for dataset infrastructure
- IIIT Hyderabad for research collaboration