A tiny, vendor-neutral validator that runs fully offline by default, with an optional thin AWS adapter for the hackathon. It prints a JSON summary (baseline, smart pick, agent hint, final, improvement) and can generate a compact HTML report.
This repo contains:
- C++ backtester & validators
- Python AI agent for local parameter tuning
- AWS Hackathon extension:
  - Bedrock agent scaffold (`python/agent_bedrock.py`)
  - Lambda handler (`aws/lambda_handler.py`)
  - Deployment instructions (`aws/deploy_instructions.md`)

Key environment variables:

- `AGENT_IMPL=local|bedrock` (default `local`)
- `AGENT_MODE=smart|fixed` (default `smart`)
- `USE_BEDROCK=1` to call Bedrock in the bedrock agent
- `DATA_PATH=data/sample_prices.csv`
```bash
# from repo root
chmod +x run.sh
./run.sh
```

This will:

- pick a Python ≥ 3.10 (prefers 3.12 → 3.11 → 3.10 → `python3`),
- create/activate a virtualenv (`.venv` by default),
- install `requirements.txt`,
- run the CLI on the sample data.
```bash
./run.sh --report report.html
# macOS
open report.html
# Linux
xdg-open report.html
```

The report includes:

- Baseline vs Final equity (two lines; if they coincide, the Final line is dashed and a note explains they're identical),
- a metrics table,
- parameter blocks for both runs.
```bash
python cli.py --list-strategies
```

Current registry (selection is automatic):

- EWMA — Exponentially Weighted Moving Average band breakout / threshold filter (params: `alpha`, `threshold`, `window`)
- PERSIST — Directional persistence / hold strategy (params: `hold_period`)
- AUTO — Regime-aware chooser that tries an EWMA grid for trendier/low-vol regimes, otherwise a PERSIST grid. Picks the best candidate by (Sharpe, then PnL). The grids/gates live in `app/strategies/auto_select.py`.
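The (Sharpe, then PnL) ranking above can be sketched with a lexicographic key; the function and field names here are illustrative, not the actual `auto_select.py` API.

```python
# Hypothetical sketch of the AUTO chooser's ranking rule: candidates are
# compared by Sharpe first, and ties are broken by PnL.
def pick_best(candidates):
    """candidates: list of dicts with 'sharpe', 'pnl', and 'params' keys."""
    return max(candidates, key=lambda c: (c["sharpe"], c["pnl"]))

grid = [
    {"params": {"alpha": 0.2, "threshold": 0.5}, "sharpe": 1.1, "pnl": 120.0},
    {"params": {"alpha": 0.4, "threshold": 0.5}, "sharpe": 1.3, "pnl": 90.0},
    {"params": {"alpha": 0.4, "threshold": 1.0}, "sharpe": 1.3, "pnl": 140.0},
]
best = pick_best(grid)
# Equal Sharpe (1.3) is broken by higher PnL, so the third candidate wins.
```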
| Variable | Default | Meaning |
|---|---|---|
| `DATA_PATH` | `data/sample_prices.csv` | CSV of `timestamp,price` |
| `AGENT_IMPL` | `local` | `local` (offline) or `bedrock` (AWS adapter) |
| `AGENT_MODE` | `smart` | `smart` (use chooser & hints) or `fixed` |
| `REQUIRE_IMPROVEMENT` | `1` | If `1`, Final must beat Baseline Sharpe; otherwise we fall back to Baseline so demos never look worse |
| `PYTHON` | (auto) | Interpreter to use, e.g. `python3.11` |
| `VENV_DIR` | `.venv` | Virtualenv directory; set a different path to keep multiple envs |
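The `REQUIRE_IMPROVEMENT` fallback can be sketched as follows; the function and dict shapes are assumptions for illustration, not the repo's actual code.

```python
# Illustrative sketch of the REQUIRE_IMPROVEMENT safeguard: when enabled
# (the default), a "smart" run that fails to beat the baseline Sharpe is
# discarded in favor of the baseline.
import os

def choose_final(baseline, smart):
    """baseline/smart: dicts carrying at least a 'sharpe' metric."""
    require = os.environ.get("REQUIRE_IMPROVEMENT", "1") == "1"
    if require and smart["sharpe"] <= baseline["sharpe"]:
        return baseline  # fall back so the demo never looks worse
    return smart
```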
Examples
```bash
# Use a specific Python and a dedicated venv
PYTHON=python3.11 VENV_DIR=.venv311 ./run.sh --report report311.html

# Disable the “require improvement” safeguard to see raw smart output
REQUIRE_IMPROVEMENT=0 ./run.sh --report report_raw.html

# Point to your own dataset
DATA_PATH=/path/to/my_prices.csv ./run.sh --report my_report.html
```

The core stays vendor-neutral; the AWS path only swaps the decision helper behind the same interface as the local agent.
Run locally with the AWS adapter enabled:

```bash
AGENT_IMPL=bedrock ./run.sh --report report.html
```

Deploy the minimal API (SAM example):

```bash
sam build && sam deploy --guided

# After deploy, exercise the endpoint
curl -X POST "$API_URL/decision" -d '{}' -H "Content-Type: application/json"
```

Adapter pieces:

- `app/agent/bedrock.py` — Bedrock agent client (same function shape as `app/agent/local.py`: `decide(payload) -> {hint_strategy, hint_params, reason}`)
- `aws/lambda_handler.py` — Lambda entry point
- `infra/sam/template.yaml` — SAM template
- `aws/deploy_instructions.md` — more details (optional)
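The "same interface" swap can be sketched like this; the stub bodies and the dispatch helper are assumptions, but the return shape mirrors the `decide(payload)` contract listed above.

```python
# Minimal sketch of two agents behind one decide() interface, selected by
# AGENT_IMPL. Only the return shape is taken from the README; everything
# else is illustrative.
import os

def decide_local(payload):
    # Offline heuristic stand-in (app/agent/local.py shape).
    return {"hint_strategy": "EWMA",
            "hint_params": {"alpha": 0.3},
            "reason": "local heuristic"}

def decide_bedrock(payload):
    # Would call Bedrock in the real adapter; same return shape.
    return {"hint_strategy": "PERSIST",
            "hint_params": {"hold_period": 5},
            "reason": "bedrock hint"}

def get_decide():
    """Pick the decision helper based on AGENT_IMPL (default: local)."""
    impl = os.environ.get("AGENT_IMPL", "local")
    return decide_bedrock if impl == "bedrock" else decide_local
```

Because both helpers share one signature and return shape, the caller never needs to know which backend produced the hint.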
Peel back to local at any time: unset `AGENT_IMPL` or set it to `local`; no other code changes are needed.
```
.
├── app/
│   ├── agent/
│   │   ├── local.py            # local hint provider
│   │   └── bedrock.py          # optional AWS adapter
│   ├── strategies/
│   │   └── auto_select.py      # chooser + grids (EWMA/PERSIST)
│   ├── backtester.py           # strategy engines + metrics
│   └── data.py                 # CSV loader
├── aws/
│   ├── lambda_handler.py
│   ├── architecture.png        # add your diagram for the submission
│   └── deploy_instructions.md
├── infra/
│   └── sam/template.yaml
├── data/
│   └── sample_prices.csv
├── cli.py                      # CLI + report generator
├── requirements.txt
└── run.sh                      # robust runner (creates venv, installs, runs)
```
- Public repo
- Architecture diagram (`aws/architecture.png`)
- Demo video (show baseline vs agent-driven run + report)
- Deployed API Gateway endpoint (if using AWS adapter)
- README description with quantified improvements
- When Baseline and Final curves are identical, the Final line is dashed and a note appears above the chart.
- Metrics on the sample set are illustrative; bring your own data via `DATA_PATH` to validate your case.
- The AWS integration is opt-in and lives behind a clean interface, so you can remove it without touching core logic.
Drop YAML/JSON configs under `strategies/`.
Example (KD strategy runner):

```bash
python3 -c "from python.strategy_runner import run_kd_from_config; print(run_kd_from_config('strategies/strategy_mtx_kd_1m.yaml', n_ticks=3000))"
```

Each strategy config must declare `strategy.type` (e.g., `kd_cross`).

Run any strategy config via:

```bash
python3 python/strategy_runner.py --config strategies/strategy_mtx_kd_1m.yaml --n_ticks 3000
```

To add a new strategy type:
- Create a new implementation module under `python/strategy_impl_<type>.py` exposing `run(spec, df)`
- Register it in `python/strategy_registry.py` under `HANDLERS`
- Create a config file with `strategy.type: "<type>"` in `strategies/`
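A minimal implementation module following the steps above might look like this; the `buy_hold` strategy, the shapes of `spec` and `df`, and the returned fields are all assumptions about the `run(spec, df)` interface.

```python
# Hypothetical contents of python/strategy_impl_buy_hold.py: the smallest
# possible strategy module exposing the run(spec, df) entry point.
def run(spec, df):
    """spec: parsed strategy config; df: tabular prices (needs a 'price' column)."""
    prices = list(df["price"])
    pnl = prices[-1] - prices[0]  # trivial buy-and-hold PnL
    return {"pnl": pnl, "n_ticks": len(prices)}
```

With the manual registration route, this would be wired up in `python/strategy_registry.py` as an entry like `HANDLERS["buy_hold"] = run` (name assumed), then selected by a config with `strategy.type: "buy_hold"`.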
Any file matching `python/strategy_impl_*.py` is auto-registered as `strategy.type = <suffix>`.
Example: `python/strategy_impl_kd_cross.py` => `strategy.type: "kd_cross"`
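The auto-registration could plausibly be implemented with a glob-and-import loop like the sketch below; the registry shape and helper names are assumptions, and only the filename convention comes from the README.

```python
# Sketch of glob-based auto-registration: every python/strategy_impl_*.py
# becomes a handler keyed by the filename suffix.
import glob
import importlib
import os

def type_from_filename(path):
    """'python/strategy_impl_kd_cross.py' -> 'kd_cross'."""
    base = os.path.basename(path)
    return base[len("strategy_impl_"):-len(".py")]

def discover_handlers(pkg_dir="python"):
    """Map each strategy.type suffix to its module's run() callable."""
    handlers = {}
    for path in glob.glob(os.path.join(pkg_dir, "strategy_impl_*.py")):
        name = type_from_filename(path)
        # Assumes pkg_dir is an importable package (has __init__.py).
        module = importlib.import_module(f"{pkg_dir}.strategy_impl_{name}")
        handlers[name] = module.run
    return handlers
```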
Run any config:

```bash
python3 python/strategy_runner.py --config strategies/strategy_mtx_kd_1m.yaml --n_ticks 3000
```

The runner:

- Computes Sharpe-like, FSR, and max drawdown from equity for each strategy run
- Saves results to `results/strategy_runs_.csv`
- Adds a "Run ALL configs" button
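Two of the equity-curve metrics named above can be sketched as follows; the exact formulas in the repo may differ, and FSR is omitted since its definition is not given here.

```python
# Illustrative Sharpe-like ratio and max drawdown computed from an equity
# curve (a list of portfolio values over time).
import math

def sharpe_like(equity):
    """Mean over std of per-step equity changes (no annualization)."""
    rets = [b - a for a, b in zip(equity, equity[1:])]
    mean = sum(rets) / len(rets)
    var = sum((r - mean) ** 2 for r in rets) / len(rets)
    return mean / math.sqrt(var) if var > 0 else 0.0

def max_drawdown(equity):
    """Largest peak-to-trough drop along the equity curve."""
    peak, mdd = equity[0], 0.0
    for x in equity:
        peak = max(peak, x)
        mdd = max(mdd, peak - x)
    return mdd

equity = [100.0, 105.0, 103.0, 108.0, 101.0]
# max_drawdown(equity) == 7.0 (the drop from the 108 peak to 101)
```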