
Commit 379c32b

Revise README for Aegis project overview and details
Updated README to reflect project name change and detailed features.
1 parent 81ee743 commit 379c32b

File tree

1 file changed: +118 −2 lines changed


README.md

Lines changed: 118 additions & 2 deletions
@@ -1,2 +1,118 @@

-# junction-hack
-Junction Hack Finland

<img width="1104" height="639" alt="{1AEBB550-3E8D-40BD-B4A7-0D42D4ADA5D4}" src="https://github.com/user-attachments/assets/4e8936ad-7206-48ba-9ce2-c5202ece5e80" />

# Aegis - Trust Evaluation Platform

**"Security decisions in seconds"** - Aegis is an AI-powered security assessment platform that automates vendor trust evaluations. It generates instant, source-grounded reports with transparent trust scores and includes an interactive chat agent that provides further insights and answers specific questions, enabling security teams to make fast, informed decisions.

## 🚀 Overview

![Structural Diagram](https://github.com/user-attachments/assets/d23e5034-869f-411f-9b26-beac3a536c44)

Aegis consists of two main components:

- **Web-Client** - A Next.js web application providing an easy-to-use interface for evaluating products, discussing evaluations with an agent, and a vault that gives oversight of prior scoring.
- **Deep Research Agent** - An agent built on OpenAI & LangGraph that uses various specialist APIs to thoroughly investigate the submitted product.

## Highlights

- 🔒 **Firebase Auth + Profiles** – Email/password and Google SSO with enriched user metadata captured in Firestore.
- 📥 **Submission Hub** – Text prompt + binary upload workflow for requesting assessments.
- 🤖 **Multi-LLM Research Agent** – Configurable OpenAI/Anthropic stacks for summarize→research→compress→report loops.
- 🔎 **Search + MCP Integrations** – Pluggable Tavily, OpenAI native search, Anthropic native search, and custom MCP toolchains.
- 📊 **Reports Vault** – High-signal trust brief cards with risk tags, source counts, and sharing links.
- 🧪 **Benchmark Harness** – Pre-wired Deep Research Bench evaluation scripts to validate agent quality.

## 🏗️ Project Structure

```
junction-hack/
├── junction-app/                  # Next.js frontend
│   ├── app/                       # App Router routes (landing, auth, dashboard, reports)
│   ├── components/                # Shared UI (AppChrome, landing sections)
│   ├── contexts/AuthContext.tsx   # Client-side auth/session provider
│   ├── lib/firebase.ts            # Firebase initialization
│   ├── public/                    # Static assets
│   └── README.md
├── deep_security/                 # LangGraph / Open Deep Research backend
│   ├── src/open_deep_research/    # Config + runtime
│   ├── src/security/              # Auth helpers
│   ├── tests/                     # Benchmark + evaluation scripts
│   ├── README.md
│   └── pyproject.toml
└── example_data.csv               # Sample assessment data
```

## 🎨 Frontend (Next.js)

Modern App Router experience focused on security analyst workflows:

- **Tech Stack:** Next.js 15, TypeScript, Tailwind, shadcn/ui, Lucide icons.
- **Auth Flow:** AuthContext wraps Firebase Auth and guards the dashboard and reports routes (see the sketch after this list).
- **Key Screens:**
  - Landing page with hero/demo/trust-score highlights.
  - /auth – multi-step login/register with Google SSO fallback.
  - /dashboard – submission form (text + file upload) and quick links to reports.
  - /reports – gallery of trust briefs with status, sources, and risk chips.

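As an illustration of that guard pattern, here is a minimal sketch of a protected layout built on the AuthContext provider. The `useAuth` hook name, its return shape, and the `/auth` redirect target are assumptions for illustration, not the repo's actual implementation.

```tsx
// Hypothetical guarded layout; check contexts/AuthContext.tsx for the real
// hook name and fields.
"use client";

import { useEffect, type ReactNode } from "react";
import { useRouter } from "next/navigation";
import { useAuth } from "@/contexts/AuthContext";

export default function DashboardLayout({ children }: { children: ReactNode }) {
  const { user, loading } = useAuth();
  const router = useRouter();

  useEffect(() => {
    // Redirect unauthenticated visitors once auth state resolves.
    if (!loading && !user) router.push("/auth");
  }, [user, loading, router]);

  if (loading || !user) return null; // avoid flashing protected content
  return <>{children}</>;
}
```
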
## ⚙️ Deep Research Service

LangGraph-backed agent toolbox housed in deep_security/:

- **Configuration Surface:** src/open_deep_research/configuration.py exposes sliders/toggles for structured-output retries, concurrency, model choices, search providers, and MCP settings.
- **Model Pipeline:** Separate slots for the summarization, researcher, compression, and final-report models (defaults to OpenAI gpt-4.1 / gpt-4.1-mini, but swappable to Anthropic, GPT-5, etc.).
- **Search & MCP:** Built-in support for Tavily, OpenAI native, and Anthropic native search, plus external MCP servers for custom tools/data.
- **Evaluation:** tests/run_evaluate.py and tests/extract_langsmith_data.py automate Deep Research Bench submissions (LangSmith integration).

## 🧭 Data Flow

1. **User Authenticates** – Firebase Auth session hydrates AuthContext.
2. **Submission** – Dashboard posts the text/binary payload to a Next.js API route or edge function (placeholder today).
3. **Assessment Orchestration** – The API proxies the request to the LangGraph runtime (Deep Research service); a sketch of this handoff follows the list.
4. **LLM + Search Loop** – The agent fans out to configured LLMs, search APIs, and MCP tools, storing intermediate notes.
5. **Report Storage** – The final trust brief, scores, and citation metadata are saved back to Firestore.
6. **Consumption** – The reports UI reads Firestore entries for sharing/export.

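To make steps 2–3 concrete, here is a minimal sketch of what the (currently placeholder) API route could look like. The route path, the LANGGRAPH_API_URL variable, and the payload shape are illustrative assumptions, not the repo's actual contract.

```ts
// app/api/assessments/route.ts - hypothetical submission proxy.
// LANGGRAPH_API_URL and the payload/response shapes are assumptions; the real
// route is still a placeholder per the data-flow notes above.
import { NextRequest, NextResponse } from "next/server";

export async function POST(req: NextRequest) {
  const form = await req.formData();
  const prompt = form.get("prompt"); // text portion of the submission
  const file = form.get("file");     // optional binary upload

  // Forward the request to the LangGraph runtime (Deep Research service).
  const upstream = await fetch(`${process.env.LANGGRAPH_API_URL}/runs`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, hasFile: file instanceof File }),
  });

  if (!upstream.ok) {
    return NextResponse.json({ error: "assessment service unavailable" }, { status: 502 });
  }
  return NextResponse.json(await upstream.json());
}
```
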
## 🚀 Quick Start

### Prerequisites

- Node.js 18+ (or Bun), npm/yarn/pnpm.
- Python 3.11, uv or pip.
- Firebase project (Auth + Firestore) + service credentials.
- OpenAI and/or Anthropic API keys (plus a Tavily key if using the default search).
- LangSmith account if running benchmarks.

### Frontend Setup

```bash
cd junction-app
cp .env.example .env.local   # fill Firebase + API vars
npm install
npm run dev
```

Visit [http://localhost:3000](http://localhost:3000).

### Backend Setup

```bash
cd deep_security
uv venv && source .venv/bin/activate   # or python -m venv
uv sync                                # installs LangChain/LangGraph deps
cp .env.example .env                   # configure LLM/search/MCP keys
uvx --from "langgraph-cli[inmem]" langgraph dev --allow-blocking
```

The LangGraph Studio UI is available at the printed URL (default [http://127.0.0.1:2024](http://127.0.0.1:2024)).

## 🛠️ Environment Variables

| Component | Variable | Description |
| --- | --- | --- |
| junction-app | `NEXT_PUBLIC_FIREBASE_*` | Firebase web config (auth domain, project ID…) |
| junction-app | `NEXT_PUBLIC_ASSESSMENT_API_URL` | (Future) API route for submissions |
| deep_security | `SUMMARIZATION_MODEL`, `RESEARCH_MODEL`… | Override default LLMs per stage |
| deep_security | `SEARCH_API` | tavily, openai, anthropic, or none |
| deep_security | `MCP_CONFIG_URL`, `MCP_CONFIG_TOOLS` | Optional MCP server info |
| Shared | `OPENAI_API_KEY`, `ANTHROPIC_API_KEY` | Provider credentials |
| Shared | `TAVILY_API_KEY` | Web search enrichment |

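As one example of how the `NEXT_PUBLIC_FIREBASE_*` values are consumed, here is a minimal sketch in the style of lib/firebase.ts; the exact variable names and exports are assumptions based on the standard Firebase web setup, not the repo's actual file.

```ts
// Hypothetical lib/firebase.ts-style initialization; variable names follow the
// common NEXT_PUBLIC_FIREBASE_* convention and may differ from .env.example.
import { initializeApp, getApps } from "firebase/app";
import { getAuth } from "firebase/auth";
import { getFirestore } from "firebase/firestore";

const firebaseConfig = {
  apiKey: process.env.NEXT_PUBLIC_FIREBASE_API_KEY,
  authDomain: process.env.NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN,
  projectId: process.env.NEXT_PUBLIC_FIREBASE_PROJECT_ID,
  appId: process.env.NEXT_PUBLIC_FIREBASE_APP_ID,
};

// Reuse the existing app instance during hot reloads instead of re-initializing.
const app = getApps().length ? getApps()[0] : initializeApp(firebaseConfig);

export const auth = getAuth(app);
export const db = getFirestore(app);
```
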
## 📚 Documentation

- junction-app/README.md – Frontend development tips.
- deep_security/README.md – LangGraph configuration, benchmarking, LangSmith usage.
- LangChain docs for MCP + multi-provider LLM setup.
- Firebase docs for Auth/Firestore provisioning.

## 🚢 Deployment

| Layer | Recommended Target |
| --- | --- |
| Frontend | Vercel / Netlify (set Firebase/public env vars) |
| API Routes | Vercel Edge Functions or Next.js serverless runtime |
| LangGraph | Dockerized service on a cloud VM or LangGraph Platform |
| Firebase | Managed (Auth + Firestore) |

1. Build the frontend: npm run build → deploy.
2. Package the LangGraph service with uv + langgraph dev, or containerize it for production.
3. Wire the API route to call the LangGraph service; secure it with bearer tokens (see the sketch below).
4. Point the frontend env vars to the production endpoints.

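A minimal sketch of the bearer-token check from step 3, assuming a hypothetical LANGGRAPH_API_TOKEN shared secret; the real service may use a different auth scheme.

```ts
// Hypothetical bearer-token guard for the assessment proxy route.
// LANGGRAPH_API_TOKEN is an assumed shared secret, not a documented variable.
import { NextRequest, NextResponse } from "next/server";

export function requireBearerToken(req: NextRequest): NextResponse | null {
  const header = req.headers.get("authorization") ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : "";

  // Reject requests whose token does not match the configured secret.
  if (!token || token !== process.env.LANGGRAPH_API_TOKEN) {
    return NextResponse.json({ error: "unauthorized" }, { status: 401 });
  }
  return null; // caller proceeds with the proxied request
}
```
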
## 🧪 Testing & Evaluation

- **Frontend:** npm run lint / npm run test (if configured), plus manual UI smoke tests.
- **Backend:** Run python tests/run_evaluate.py for Deep Research Bench; extract results via tests/extract_langsmith_data.py.
- **Integration:** Validate that Firestore entries appear when manual assessments are triggered (mock the API route until the backend is wired); a minimal check is sketched below.

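One way the integration check could look, as a hedged sketch: the "reports" collection name, the createdAt field, and the "@/lib/firebase" import path are assumptions, not the repo's actual schema.

```ts
// Hypothetical smoke check: confirm a new trust brief landed in Firestore after
// a manual assessment. Collection name and fields are assumed.
import { collection, getDocs, limit, orderBy, query } from "firebase/firestore";
import { db } from "@/lib/firebase";

export async function latestReportExists(): Promise<boolean> {
  const q = query(collection(db, "reports"), orderBy("createdAt", "desc"), limit(1));
  const snap = await getDocs(q);
  snap.forEach((doc) => console.log("latest report:", doc.id, doc.data()));
  return !snap.empty;
}
```
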
## 🤝 Contributing

1. Fork and branch (git checkout -b feature/<name>).
2. Keep frontend TypeScript strict and follow existing Tailwind patterns.
3. For backend changes, update the configuration.py docs + README when adding config knobs.
4. Add tests or LangSmith eval notes for new research behaviors.
5. Submit a PR with a concise summary and screenshots if UI-related.

## 📄 License

MIT – see LICENSE.

## 🙋 Support & Questions

- Open an issue in this repo.
- Check the LangGraph + Firebase docs linked above.
- Reach out on the project Slack/Discord (if applicable) for architecture questions.
