51 changes: 51 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,51 @@
name: CI

on:
push:
branches:
- main
pull_request:

jobs:
ts-lint-and-typecheck:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 20
- name: Install pnpm
uses: pnpm/action-setup@v3
with:
version: 10.33.0 # pin to the pnpm version used locally
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Lint (ESLint)
run: pnpm run lint
- name: Format Check (Prettier)
run: pnpm run format --check .
- name: Type Check (TypeScript)
# Note: the 'check' script in package.json covers Prettier and ESLint only;
# there is no dedicated typecheck script, so we run tsc directly here.
run: pnpm exec tsc --noEmit

py-lint-and-typecheck:
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./adk
steps:
- uses: actions/checkout@v4
- name: Setup uv
uses: astral-sh/setup-uv@v3
with:
enable-cache: true
- name: Setup Python
run: uv python install 3.11
- name: Install dependencies
run: uv sync --all-extras
- name: Ruff format check
run: uv run ruff format --check .
- name: Ruff check
run: uv run ruff check .
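
Since the workflow above runs `tsc` directly, one alternative is a dedicated script in package.json so CI and local runs stay in sync. A sketch only; the script name is a suggestion, not the repo's actual config:

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit"
  }
}
```

The CI step could then become `run: pnpm run typecheck`.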
2 changes: 1 addition & 1 deletion .github/workflows/claude.yml
@@ -42,5 +42,5 @@ jobs:
# ANTHROPIC_VERTEX_PROJECT_ID: "${{ secrets.GC_PROJECT_ID }}"
# CLOUD_ML_REGION: "global"
with:
# use_vertex: "true"
# use_vertex: "true"
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
1 change: 1 addition & 0 deletions .husky/pre-commit
@@ -0,0 +1 @@
pnpm exec lint-staged
5 changes: 4 additions & 1 deletion .prettierignore
@@ -1,3 +1,6 @@
package-lock.json
pnpm-lock.yaml
yarn.lock
yarn.lock
*.yaml
*.yml
Comment on lines +4 to +5
medium

It's generally a good practice to format configuration files like docker-compose.yml to ensure consistency across the project. By adding *.yaml and *.yml to .prettierignore, you are excluding all YAML files from formatting. This might be unintentional, especially since docker-compose.yml was modified in this PR with what appear to be formatting changes.

I recommend removing these lines to allow Prettier to format YAML files. You can then add a rule for YAML files to your lint-staged configuration in package.json to automatically format them on commit.
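
For example, the lint-staged configuration in package.json could gain a YAML rule along these lines (a sketch only; the existing glob patterns and commands are assumptions, not the repo's actual config):

```json
{
  "lint-staged": {
    "*.{ts,tsx,js,jsx}": ["eslint --fix", "prettier --write"],
    "*.{yml,yaml}": ["prettier --write"]
  }
}
```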

service.template.yaml
3 changes: 3 additions & 0 deletions README.md
@@ -70,6 +70,7 @@ docker compose up --build
```

This will:

- Build the **frontend** image from the root `Dockerfile`
- Build the **backend** image from `adk/Dockerfile`
- Start both containers, with the frontend waiting for the backend to be healthy
@@ -83,11 +84,13 @@ docker compose down
```

### Staging Environment

- **Trigger**: Every push or merge to the `master` branch.
- **URL**: cofacts-ai-236494820908.asia-east1.run.app
- **Traffic**: The `master` version always receives 100% of the traffic.

### PR Previews

- **Trigger**: Every Pull Request (opened or updated).
- **Behavior**: A dedicated revision is created for each PR with a unique tag.
- **URL**: You can find the preview URL in the GitHub PR comments or the "Deployments" section of the PR sidebar.
38 changes: 20 additions & 18 deletions adk/cofacts_ai/agent.py
@@ -8,9 +8,8 @@
- AI Proof-readers: Role-play different political perspectives to test reply effectiveness
"""

from typing import Dict, Optional
from typing import Optional
import re
import json

from dotenv import load_dotenv
from google.adk.agents import LlmAgent
@@ -23,8 +22,7 @@
from .tools import (
search_cofacts_database,
get_single_cofacts_article,
submit_cofacts_reply,
resolve_vertex_redirect
resolve_vertex_redirect,
)
from .instrumentation import setup_instrumentation

Expand All @@ -35,8 +33,7 @@


async def append_grounding_sources(
callback_context: CallbackContext,
llm_response: LlmResponse
callback_context: CallbackContext, llm_response: LlmResponse
) -> Optional[LlmResponse]:
"""
After-model callback to append grounding sources to the response.
Expand Down Expand Up @@ -65,7 +62,7 @@ async def append_grounding_sources(

output_parts.append(f"**Source {len(seen_urls)}**: {title}")
output_parts.append(f"- **URL**: {display_uri}")
output_parts.append("") # Extra newline
output_parts.append("") # Extra newline

# 2. Perform "markdown work" in response text (formerly resolve_investigator_urls)
# Replace occurrences of grounding redirect URLs in the main text
@@ -77,12 +74,18 @@
if resolved_url != original_url:
# If the URL is already inside a markdown link [label](original),
# replace the entire markdown link with our resolved one.
markdown_pattern = re.compile(r'\[[^\]]*\]\(' + re.escape(original_url) + r'\)')
markdown_pattern = re.compile(
r"\[[^\]]*\]\(" + re.escape(original_url) + r"\)"
)
if markdown_pattern.search(part.text):
part.text = markdown_pattern.sub(f"[{resolved_url}]({original_url})", part.text)
part.text = markdown_pattern.sub(
f"[{resolved_url}]({original_url})", part.text
)
else:
# Otherwise just replace the raw URL
part.text = part.text.replace(original_url, f"[{resolved_url}]({original_url})")
part.text = part.text.replace(
original_url, f"[{resolved_url}]({original_url})"
)

# 3. Append Search Widget if present (Policy requirement)
if metadata.search_entry_point and metadata.search_entry_point.rendered_content:
@@ -133,7 +136,7 @@ async def append_grounding_sources(

Focus on providing comprehensive, well-sourced research content.
""",
tools=[google_search]
tools=[google_search],
)


@@ -182,7 +185,7 @@ async def append_grounding_sources(

This verification is critical for combating misinformation that relies on fake or misleading citations.
""",
tools=[url_context]
tools=[url_context],
)


@@ -228,7 +231,7 @@ async def append_grounding_sources(

Provide respectful, measured analysis that helps ensure fact-checking is credible across political divides.
""",
tools=[]
tools=[],
)

ai_proofreader_dpp = LlmAgent(
@@ -272,7 +275,7 @@ async def append_grounding_sources(

Provide engaged, democratic analysis that helps ensure fact-checking resonates with progressive audiences.
""",
tools=[]
tools=[],
)

ai_proofreader_tpp = LlmAgent(
@@ -316,7 +319,7 @@ async def append_grounding_sources(

Provide rational, balanced analysis that helps ensure fact-checking appeals to moderate voters seeking practical solutions.
""",
tools=[]
tools=[],
)

ai_proofreader_minor_parties = LlmAgent(
@@ -360,7 +363,7 @@ async def append_grounding_sources(

Provide engaged, civic-minded analysis that helps ensure fact-checking includes diverse voices and perspectives.
""",
tools=[]
tools=[],
)


@@ -541,9 +544,8 @@ async def append_grounding_sources(
AgentTool(agent=ai_proofreader_kmt),
AgentTool(agent=ai_proofreader_dpp),
AgentTool(agent=ai_proofreader_tpp),
AgentTool(agent=ai_proofreader_minor_parties)
AgentTool(agent=ai_proofreader_minor_parties),
],
)

root_agent = ai_writer
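
The URL-rewriting rule in `append_grounding_sources` above can be exercised in isolation. A minimal sketch of the same logic as a standalone helper; the function name and the `url_map` shape (original redirect URL mapped to its resolved target) are assumptions for illustration:

```python
import re


def rewrite_grounding_urls(text: str, url_map: dict[str, str]) -> str:
    """Rewrite grounding redirect URLs in text, mirroring the diff above."""
    for original_url, resolved_url in url_map.items():
        if resolved_url == original_url:
            continue
        # If the URL is already inside a markdown link [label](original),
        # replace the whole link with [resolved](original).
        markdown_pattern = re.compile(
            r"\[[^\]]*\]\(" + re.escape(original_url) + r"\)"
        )
        if markdown_pattern.search(text):
            text = markdown_pattern.sub(
                f"[{resolved_url}]({original_url})", text
            )
        else:
            # Otherwise wrap the bare URL in a markdown link.
            text = text.replace(
                original_url, f"[{resolved_url}]({original_url})"
            )
    return text
```

Note that, as in the diff, the resolved URL becomes the link *label* while the original redirect URL stays as the link target.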

1 change: 1 addition & 0 deletions adk/cofacts_ai/instrumentation.py
@@ -5,6 +5,7 @@

logger = logging.getLogger(__name__)


def setup_instrumentation():
"""
Sets up Langfuse instrumentation for Google ADK.