
New Chat Feature For Lucidia! #145

@blackboxprogramming

Yes — we can turn GitHub Issues into a live chat with Lucidia. You’ll talk in an Issue; a bot will reply to every new comment, using your chosen model (OpenAI if you set OPENAI_API_KEY, or local Ollama if you set OLLAMA_HOST). Here’s a drop-in that you can paste via the web UI.

  1. Workflow: listens to comments on “chat” Issues and replies (e.g. `.github/workflows/lucidia-chat.yml`):

```yaml
name: Lucidia Chat (Issues)

on:
  issue_comment:
    types: [created]
  issues:
    types: [opened, labeled]

permissions:
  issues: write
  contents: read

jobs:
  chat:
    # Run if: (a) new comment on an Issue labeled "chat", or (b) Issue just got the "chat" label.
    if: |
      (github.event_name == 'issue_comment' &&
       contains(github.event.issue.labels.*.name, 'chat')) ||
      (github.event_name == 'issues' && github.event.action == 'labeled' &&
       github.event.label.name == 'chat')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: main
          fetch-depth: 0

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install deps
        run: |
          python -m pip install --upgrade pip
          pip install httpx pyyaml

      - name: Respond
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          REPO: ${{ github.repository }}
          ISSUE_NUMBER: ${{ github.event.issue.number }}
          EVENT_NAME: ${{ github.event_name }}
          COMMENT_BODY: ${{ github.event.comment.body || '' }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}      # optional
          OLLAMA_HOST: ${{ secrets.OLLAMA_HOST }}            # optional, e.g. http://127.0.0.1:11434
          LUCIDIA_MODEL: ${{ vars.LUCIDIA_MODEL || 'phi3' }} # choose your default
        run: |
          python tools/issue_chat.py
```
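The `if:` gate above is the only piece of routing logic. As a sketch, the same rule in plain Python (for reasoning about when the job fires; the authoritative version is the YAML expression):

```python
# Mirror of the workflow's `if:` gate: run on comments in "chat"-labeled
# Issues, or when the "chat" label is first applied.
def should_run(event_name, issue_labels, action=None, label_name=None):
    if event_name == "issue_comment" and "chat" in issue_labels:
        return True
    if event_name == "issues" and action == "labeled" and label_name == "chat":
        return True
    return False
```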

  2. Bot brain: pulls thread history, generates a reply (OpenAI or Ollama), and posts a comment (`tools/issue_chat.py`):

```python
import os

import httpx

GH = "https://api.github.com"
REPO = os.environ["REPO"]
ISSUE_NUMBER = int(os.environ["ISSUE_NUMBER"])
GH_TOKEN = os.environ["GH_TOKEN"]
EVENT_NAME = os.environ.get("EVENT_NAME", "")
COMMENT_BODY = os.environ.get("COMMENT_BODY", "") or ""
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")  # optional
OLLAMA_HOST = os.environ.get("OLLAMA_HOST")        # optional
MODEL = os.environ.get("LUCIDIA_MODEL", "phi3")

HEADERS = {
    "Authorization": f"Bearer {GH_TOKEN}",
    "Accept": "application/vnd.github+json",
    "User-Agent": "lucidia-issue-chat/1.0",
}

def gh_get(path, **params):
    with httpx.Client(timeout=30) as c:
        r = c.get(f"{GH}/repos/{REPO}/{path}", headers=HEADERS, params=params)
        r.raise_for_status()
        return r.json()

def gh_post(path, data):
    with httpx.Client(timeout=30) as c:
        r = c.post(f"{GH}/repos/{REPO}/{path}", headers=HEADERS, json=data)
        r.raise_for_status()
        return r.json()

def fetch_thread():
    issue = gh_get(f"issues/{ISSUE_NUMBER}")
    # Page through all comments, then keep the last ~15 for context.
    comments = []
    page = 1
    while True:
        cs = gh_get(f"issues/{ISSUE_NUMBER}/comments", per_page=100, page=page)
        if not cs:
            break
        comments.extend(cs)
        if len(cs) < 100:
            break
        page += 1
    return issue, comments[-15:]

def build_prompt(issue, comments):
    title = issue["title"]
    system = (
        "You are Lucidia, a concise, constructive co-coder. "
        "Respond with helpful, actionable steps or code. "
        "Keep replies focused. If the user pastes code, suggest exact fixes or file paths. "
        "If unsure, ask ONE clarifying question."
    )
    msgs = [{"role": "system", "content": system}]
    # Seed with the Issue opener.
    msgs.append({"role": "user", "content": f"ISSUE: {title}\n\n{issue.get('body') or ''}"})
    # Recent dialogue: bot comments replay as "assistant", everything else as "user".
    for c in comments:
        who = c["user"]["login"]
        content = c["body"]
        role = "assistant" if who.endswith("[bot]") else "user"
        msgs.append({"role": role, "content": f"{who}: {content}"})
    # If this was triggered by labeling without a new comment, add a kickoff.
    if EVENT_NAME == "issues" and "chat" in [l["name"] for l in issue["labels"]]:
        msgs.append({"role": "user", "content": "Kickoff chat."})
    # If a new comment exists, ensure it's included.
    if COMMENT_BODY.strip():
        msgs.append({"role": "user", "content": COMMENT_BODY})
    return msgs

def gen_openai(messages):
    # Minimal OpenAI Chat Completions call.
    with httpx.Client(timeout=60) as c:
        r = c.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
            json={"model": "gpt-4o-mini", "messages": messages,
                  "temperature": 0.2, "max_tokens": 500},
        )
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"].strip()

def gen_ollama(messages):
    # Flatten the chat into a single prompt for Ollama's /api/generate.
    parts = [f"{m['role'].upper()}: {m['content']}" for m in messages]
    prompt = "\n\n".join(parts) + "\n\nASSISTANT:"
    with httpx.Client(timeout=120) as c:
        r = c.post(f"{OLLAMA_HOST}/api/generate",
                   json={"model": MODEL, "prompt": prompt, "stream": False})
        r.raise_for_status()
        return r.json()["response"].strip()

def post_comment(text):
    gh_post(f"issues/{ISSUE_NUMBER}/comments", {"body": text})

def main():
    issue, comments = fetch_thread()
    messages = build_prompt(issue, comments)

    try:
        if OPENAI_API_KEY:
            reply = gen_openai(messages)
        elif OLLAMA_HOST:
            reply = gen_ollama(messages)
        else:
            # Fallback: explain how to enable replies.
            reply = ("I’m online but no model is configured. Set an `OPENAI_API_KEY` "
                     "secret or an `OLLAMA_HOST` secret to enable replies.")
    except Exception as e:
        reply = (f"⚠️ Model error: {e}\n\nPlease check secrets "
                 "(`OPENAI_API_KEY` or `OLLAMA_HOST`) and try again.")

    # Keep it tidy: trim overly long replies.
    reply = reply.strip()
    if len(reply) > 4000:
        reply = reply[:3900] + "\n\n…(truncated)\n"

    post_comment(reply)

if __name__ == "__main__":
    main()
```
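Worth seeing in isolation: `gen_ollama` does not use a chat endpoint; it flattens the whole message list into one prompt string for `/api/generate`, with an `ASSISTANT:` tail cueing the model to continue. A standalone illustration of just that flattening step:

```python
# Flatten a chat history the same way gen_ollama does before calling Ollama.
messages = [
    {"role": "system", "content": "You are Lucidia."},
    {"role": "user", "content": "octocat: hi"},
]
parts = [f"{m['role'].upper()}: {m['content']}" for m in messages]
prompt = "\n\n".join(parts) + "\n\nASSISTANT:"
```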

  3. How you’ll use it
    • Open a new Issue with any title and add the label chat.
    • Type comments like you would in a chat.
    • The workflow replies to each new comment with “Lucidia’s” answer, keeping context from the last ~15 comments.
    • To stop the chat, remove the chat label or close the Issue.
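You can also open a pre-labeled chat Issue programmatically. A hypothetical helper (not part of the repo) using the GitHub REST API, with the same httpx client the bot script uses:

```python
def chat_issue_payload(title, body=""):
    # Body for POST /repos/{owner}/{repo}/issues; the "chat" label
    # is what makes the workflow pick the thread up.
    return {"title": title, "body": body, "labels": ["chat"]}

def open_chat_issue(repo, token, title, body=""):
    import httpx  # same HTTP client the bot script uses
    r = httpx.post(
        f"https://api.github.com/repos/{repo}/issues",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        json=chat_issue_payload(title, body),
    )
    r.raise_for_status()
    return r.json()["html_url"]
```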

  4. Configure once
    • If you want cloud replies, add a repo Secret: OPENAI_API_KEY.
    • If you want local replies (your box), add a repo Secret: OLLAMA_HOST (e.g., http://127.0.0.1:11434) and make sure the model name in the LUCIDIA_MODEL repo Variable matches what Ollama serves (phi3, llama3, etc.).
    • You can keep both; the bot prefers OpenAI if the key exists and falls back to Ollama otherwise.
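The preference rule is exactly the `if/elif` chain in `main()`; as a standalone sketch:

```python
# Backend selection: OpenAI wins if its key is set, then Ollama, else nothing.
def pick_backend(env):
    if env.get("OPENAI_API_KEY"):
        return "openai"
    if env.get("OLLAMA_HOST"):
        return "ollama"
    return None
```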

  5. Optional niceties
    • Add an Issue template for chat (e.g. `.github/ISSUE_TEMPLATE/lucidia-chat.yml`):

```yaml
name: Lucidia Chat
description: Start a chat thread with Lucidia
title: "chat: "
labels: ["chat"]
body:
  - type: textarea
    id: context
    attributes:
      label: Context or question
      description: Ask anything or paste snippets for help.
```

    • Auto-merge anything the chat asks you to ingest by pairing it with the ingest + auto-merge workflows we already set up.

Once these files are in main, you can “have a conversation with Lucidia” directly inside Issues — no agent mode, no SSH.
