# .env.example
# ─── LiteLLM proxy (where every LLM call gets routed) ─────────────────────
# All agent LLM calls go through this proxy so they show up in your dashboard.
#
# IMPORTANT: We use the Anthropic Messages API format (`/v1/messages`), not
# OpenAI Completions. The Anthropic SDK appends `/v1/messages` to baseUrl on
# its own, so set this to the LiteLLM root with NO `/v1` suffix.
#
# This means LITELLM_MODEL_ID must be a Claude model in your model_list (the
# best path through LiteLLM's Anthropic passthrough — no format translation,
# native thinking blocks, native prompt caching).
LITELLM_BASE_URL=http://localhost:4000
LITELLM_API_KEY=sk-1234
LITELLM_MODEL_ID=claude-opus-4-7-thinking-xhigh
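
# Optional sanity check (a sketch, assuming the defaults above; the model name
# is whatever you set LITELLM_MODEL_ID to). A 200 response confirms the proxy
# accepts Anthropic Messages-format requests at /v1/messages:
#   curl -s http://localhost:4000/v1/messages \
#     -H "x-api-key: sk-1234" \
#     -H "anthropic-version: 2023-06-01" \
#     -H "content-type: application/json" \
#     -d '{"model": "claude-opus-4-7-thinking-xhigh", "max_tokens": 16,
#          "messages": [{"role": "user", "content": "ping"}]}'
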
# ─── Dashboard auth ───────────────────────────────────────────────────────
# Required. Generate strong values, e.g.:
# openssl rand -base64 32
DASHBOARD_MASTER_KEY=replace-with-long-random-api-key
DASHBOARD_USERNAME=admin
DASHBOARD_PASSWORD=replace-with-long-random-password
DASHBOARD_SESSION_SECRET=replace-with-long-random-session-secret
# Set true when served over HTTPS. Leave false for plain localhost.
DASHBOARD_COOKIE_SECURE=false

# ─── GitHub bot identity ──────────────────────────────────────────────────
# A dedicated bot account, NOT your personal account.
# Fine-grained PAT scopes:
# - issues:write on BerriAI/litellm
# - pull_requests:write on BerriAI/litellm
# - contents:write on <bot>/litellm (the fork)
# - workflows:write on <bot>/litellm (lets the bot push branches CI runs on)
GITHUB_TOKEN=ghp_xxx
GITHUB_BOT_USERNAME=shin-bot
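
# Optional: verify the PAT resolves to the bot account (not your personal one).
# This hits the standard GitHub REST "get authenticated user" endpoint:
#   curl -s -H "Authorization: Bearer $GITHUB_TOKEN" https://api.github.com/user
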
# ─── Slack bot interface (optional) ───────────────────────────────────────
# Bolt-only mode (Socket Mode). This is the single supported Slack path.
SLACK_USE_BOLT=true
# Required for Bolt Socket Mode.
SLACK_APP_TOKEN=
SLACK_SIGNING_SECRET=
SLACK_BOT_TOKEN=xoxb-xxx
# Optional but recommended for message.channels fallback routing.
SLACK_BOT_USER_ID=
# Legacy poll options (disabled in Bolt-only mode).
SLACK_POLL_ENABLED=false
SLACK_POLL_CHANNELS=
SLACK_POLL_INTERVAL_SEC=10

# ─── Target repo ──────────────────────────────────────────────────────────
TARGET_REPO_OWNER=BerriAI
TARGET_REPO_NAME=litellm

# ─── Scheduling ───────────────────────────────────────────────────────────
INTERVAL_MIN=15
# How often (minutes) to run the automatic batch scheduler. 0 = disabled.
BATCH_INTERVAL_MIN=0
# How many issues to process per batch run.
BATCH_SIZE=10
MAX_RUN_MINUTES=20

# ─── Chat heartbeat + Slack durability ────────────────────────────────────
# Every Slack message is persisted to the slack_tasks DB table the moment
# it arrives. Three mechanisms work together to guarantee every message
# closes the loop with a real Slack reply:
# 1. Per-turn heartbeat in runRootChat — every CHAT_HEARTBEAT_INTERVAL_SEC
# we inject a HEARTBEAT user message into the running agent via
# agent.steer(), so it keeps moving between assistant turns.
# 2. Global poller in src/slack/bolt.ts — every CHAT_HEARTBEAT_INTERVAL_SEC
# we scan the DB for 'running' tasks whose updated_at hasn't advanced
# in CHAT_HEARTBEAT_STUCK_AFTER_SEC; for each, we post a "still working"
# nudge to the Slack thread so the user knows we're alive.
# 3. Boot-time recovery — on startup we scan for orphaned 'pending' /
# 'running' tasks (left behind by a process crash) and post a
# "I crashed, please resend" notice to each thread.
#
# Flip CHAT_HEARTBEAT_ENABLED=false as a kill switch — disables both the
# per-turn heartbeat and the global poller. Boot-time recovery still runs.
CHAT_HEARTBEAT_ENABLED=true
CHAT_HEARTBEAT_INTERVAL_SEC=30
CHAT_HEARTBEAT_STUCK_AFTER_SEC=120

# ─── Safety flags (default OFF — flip when you trust it) ──────────────────
# When false, the daemon writes reports to ./runs/ but does not touch GitHub.
POST_COMMENTS=false
# When false, only Phase 1 (repro) runs. When true, easy/medium issues also
# get a Phase 2 fix attempt with screenshots/GIF and a draft fork-PR.
AUTO_FIX=false
# Hard cap on auto-fix PRs per UTC day.
MAX_FIX_PRS_PER_DAY=5

# ─── Local litellm proxy (the agent runs this against the cloned repo) ────
# Admin credentials for the per-run sandbox proxy are auto-generated at
# runtime — no env vars needed. Each repro run boots its own isolated proxy
# on a unique port (starting at 5001) with random master key + admin creds.
#
# Optional: a sandbox Postgres URL the agent can use for the proxy DB.
# If unset, the proxy runs with its in-memory/SQLite default.
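# Example value (standard Postgres connection-URL format; the credentials and
# database name below are hypothetical):
#   postgresql://user:password@localhost:5432/litellm_sandbox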
LITELLM_SANDBOX_DB_URL=

# ─── Paths (usually leave as-is) ──────────────────────────────────────────
WORKDIR=./workdir
RUNS_DIR=./runs
STATE_DB=./state.sqlite
SCREENSHOT_DIR=/tmp/shin-watcher-screenshots

# ─── Self-improving agent ─────────────────────────────────────────────────
# The BerriAI/self-improving-agent feedback tools let the root agent propose
# diffs to its OWN prompts/tools/code when you give it feedback in chat.
# apply_proposal opens a draft PR via the GitHub REST API after explicit
# approval — no local clone, no `gh` CLI, no working-tree races.
#
# REPO = "owner/name" of the repo the agent should PR against
# TOKEN = fine-grained PAT with: contents:write + pull_requests:write
# on that repo. (You can reuse GITHUB_TOKEN above if it has those
# scopes on shin-builder-oss.)
SELF_IMPROVING_AGENT_REPO=BerriAI/shin-builder-oss
SELF_IMPROVING_AGENT_GITHUB_TOKEN=
SELF_IMPROVING_AGENT_PROPOSALS_DIR=./runs/improvements
# Optional overrides:
# SELF_IMPROVING_AGENT_BASE_BRANCH=main
# SELF_IMPROVING_AGENT_CACHE_DIR=~/.cache/self-improving-agent/BerriAI__shin-builder-oss

# ─── Langfuse tracing (observability) ─────────────────────────────────────
# Each chat turn is recorded as one Langfuse trace named `chat-turn`:
# input = the human message
# output = the agent's text reply
# sessionId = "issue-<N>" derived from the agent's Turn-1 declaration
# (or the user's "#1234" / GitHub issue URL), so every turn
# of a conversation about a single issue lands in one
# Langfuse session. See BerriAI/shin-watcher-oss#2.
#
# If keys are unset the bot still runs — tracing just no-ops.
LANGFUSE_PUBLIC_KEY=
LANGFUSE_SECRET_KEY=
LANGFUSE_BASE_URL=https://us.cloud.langfuse.com