feat: SaaS Production Ready - Version/Metrics Endpoints & Documentation Updates #34
Conversation
- Mark completed Phase 2 items (auth, grant sources, export, kanban)
- Add version and metrics endpoint documentation
- Update known issues section
📝 Walkthrough

Added backend monitoring endpoints (`/version`, `/metrics`) with request-count and startup-time tracking; added a supabase re-export; added new email helper wrappers; Firecrawl now tolerates a missing API key via availability checks; updated deployment docs to prefer Railway and expanded PRD statuses.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
There was a problem hiding this comment.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@backend/src/server.js`:
- Around line 69-75: The request-counting middleware (the module that increments
requestCount using app.use) is registered after the /health and /version route
handlers so those endpoints are not counted; move the requestCount and its
app.use middleware (the block that declares startTime and let requestCount = 0
and the app.use((req, res, next) => { requestCount++; next(); })) to earlier in
the middleware chain—specifically immediately after rate limiting and before any
route definitions (i.e., before the /health and /version route registrations) so
all incoming requests are counted.
- Around line 60-66: The buildTime is being recomputed on every request; move
its computation to module startup by creating a module-level constant (e.g.,
BUILD_TIME or buildTimeAtStartup) set to process.env.BUILD_TIME || new
Date().toISOString() and then use that constant in the res.json call (where
buildTime is currently computed inline). Update the response in server.js to
reference the new constant (instead of calling new Date().toISOString() per
request) so the value is stable across requests.
In `@docs/DEPLOYMENT.md`:
- Around line 133-149: Update the /metrics documentation to explicitly state
that memory fields (memory.heapUsed, memory.heapTotal, memory.rss) are returned
as formatted strings in megabytes with an " MB" suffix (e.g., "123.45 MB") and
that uptimeFormatted is a human-readable string, or alternatively change the API
to return numeric byte values plus an explicit unit field; specifically, either
adjust the docs under the `/metrics` section to show the memory fields as
strings with " MB" units and an example response like memory: { heapUsed:
"123.45 MB", heapTotal: "256.00 MB", rss: "300.12 MB" } or modify the /metrics
response implementation to return numeric values (bytes) and add a memory.unit =
"bytes" field so the contract is unambiguous.
🧹 Nitpick comments (2)
backend/src/server.js (2)
61-62: Avoid hard-coded version string. Consider sourcing the version from a build-time env var (or package metadata) to prevent drift across releases.
♻️ Example tweak

```diff
- version: "1.0.0",
+ version: process.env.APP_VERSION || "1.0.0",
```
58-92: Consider protecting `/version` and `/metrics` in production. These endpoints expose environment details and runtime info. A simple allowlist, auth token, or stricter rate limit would reduce unnecessary surface area.
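One way the token option could look — a minimal guard middleware. `METRICS_TOKEN` and the `x-metrics-token` header are assumed names for illustration, not part of the existing codebase.

```javascript
// Hypothetical guard for /version and /metrics. When no token is configured
// the endpoints stay open (matching current behavior); when METRICS_TOKEN is
// set, requests must send it in the x-metrics-token header.
function requireMetricsToken(req, res, next) {
  const expected = process.env.METRICS_TOKEN;
  if (!expected) return next(); // unset token = endpoints stay public
  if (req.headers["x-metrics-token"] === expected) return next();
  return res.status(403).json({ error: "forbidden" });
}

// Usage sketch: app.get("/metrics", requireMetricsToken, metricsHandler);
```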
```js
res.json({
  version: "1.0.0",
  name: "ai-grant-crawler-backend",
  buildTime: process.env.BUILD_TIME || new Date().toISOString(),
  nodeVersion: process.version,
  environment: env.nodeEnv,
});
```
Make buildTime stable across requests.
The buildTime value is computed on every request since BUILD_TIME is not set as an environment variable. This means the endpoint returns the current time instead of the actual build time, and recalculates it unnecessarily for each request. Compute it once at module startup.
🔧 Suggested fix
```diff
-// Version endpoint
-app.get("/version", (req, res) => {
+const buildTime = process.env.BUILD_TIME || new Date().toISOString();
+// Version endpoint
+app.get("/version", (req, res) => {
   res.json({
     version: "1.0.0",
     name: "ai-grant-crawler-backend",
-    buildTime: process.env.BUILD_TIME || new Date().toISOString(),
+    buildTime,
     nodeVersion: process.version,
     environment: env.nodeEnv,
   });
 });
```
```js
// Metrics endpoint (basic)
const startTime = Date.now();
let requestCount = 0;
app.use((req, res, next) => {
  requestCount++;
  next();
});
```
Move the request counter middleware before the /health and /version routes to count all requests.
The middleware at line 72 is registered after these routes, so requests to /health and /version never increment requestCount. Move the middleware to lines 48-51 (after rate limiting, before route definitions) if you intended to count all requests.
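The intended ordering can be sketched with a framework-agnostic counter factory (the names are illustrative, not code from this PR): the middleware is registered before any routes, so requests to `/health` and `/version` are counted too.

```javascript
// A minimal request-counting middleware, returned together with a getter so
// the count can be read from a /metrics handler registered later.
function makeRequestCounter() {
  let requestCount = 0;
  const middleware = (req, res, next) => {
    requestCount++; // every request passing through the chain is counted
    next();
  };
  return { middleware, count: () => requestCount };
}

// Registration order sketch (Express assumed):
//   const { middleware, count } = makeRequestCounter();
//   app.use(middleware);   // after rate limiting, BEFORE any routes
//   app.get("/health", healthHandler);   // now counted
//   app.get("/metrics", (req, res) => res.json({ requestCount: count() }));
```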
The backend exposes several monitoring endpoints:

**`/health`** - Basic health check
- Returns: `{ status: "ok", timestamp, environment }`

**`/version`** - Application version info
- Returns: `{ version, name, buildTime, nodeVersion, environment }`

**`/metrics`** - Runtime metrics
- Returns: `{ uptime, uptimeFormatted, requestCount, memory: { heapUsed, heapTotal, rss }, environment }`

Example:

```bash
curl https://your-app.fly.dev/health
curl https://your-app.fly.dev/version
curl https://your-app.fly.dev/metrics
```
Clarify /metrics memory units/format in docs.
The implementation returns memory values as strings with “ MB”, but the doc implies raw values. Please document the units/format (or adjust the API to return numeric values with explicit units).
📝 Suggested doc tweak

```diff
 **`/metrics`** - Runtime metrics
-- Returns: `{ uptime, uptimeFormatted, requestCount, memory: { heapUsed, heapTotal, rss }, environment }`
+- Returns: `{ uptime, uptimeFormatted, requestCount, memory: { heapUsed, heapTotal, rss }, environment }` (memory values are strings in MB)
```
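For reference, a sketch of the formatting the comment describes: `process.memoryUsage()` returns byte counts, and each field is rendered as a string with an " MB" suffix. The helper names are illustrative, not the PR's actual code.

```javascript
// Format a byte count as a string in megabytes with an " MB" suffix.
function formatMB(bytes) {
  return `${(bytes / 1024 / 1024).toFixed(2)} MB`;
}

// Build the memory section of a /metrics response. The usage argument
// defaults to the live process stats but can be injected for testing.
function memorySnapshot(usage = process.memoryUsage()) {
  return {
    heapUsed: formatMB(usage.heapUsed),
    heapTotal: formatMB(usage.heapTotal),
    rss: formatMB(usage.rss),
  };
}
```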
…ports for backend startup
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@backend/src/services/emailService.js`:
- Around line 244-247: In sendWeeklyDigest replace the unused proposals
parameter or document it as reserved (add a JSDoc comment on sendWeeklyDigest if
you want to keep it for future use) and guard against null/undefined grants
before slicing: ensure grants is an array (e.g., default to [] or validate with
Array.isArray) before calling grants.slice(0, 5); call createWeeklyDigestEmail
with the safe grants array so createWeeklyDigestEmail and sendEmail usage remain
unchanged.
🧹 Nitpick comments (1)
backend/src/services/firecrawl.js (1)
27-30: Return type inconsistency between error and success paths. The guard returns `{ success: false, error }`, but the success path (line 35) returns the raw Firecrawl response directly. This inconsistency may confuse callers expecting a uniform response shape. Current callers in `grantDiscovery.js` check for `.markdown`, which handles this implicitly, but consider normalizing returns for a cleaner contract.

♻️ Optional: Normalize success response

```diff
   const scrapeResult = await firecrawl.scrapeUrl(url, {
     formats: ["markdown"],
   });
-  return scrapeResult;
+  return { success: true, ...scrapeResult };
 } catch (error) {
```
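A minimal sketch of the normalization the comment suggests, isolated as a hypothetical helper (the name `normalizeScrape` is illustrative, not code from this PR):

```javascript
// Ensure every scrape result carries an explicit success flag, so the error
// path ({ success: false, error }) and the raw Firecrawl response share one
// shape. Assumes the raw response is a plain object without a success field.
function normalizeScrape(result) {
  if (result && result.success === false) return result; // error path unchanged
  return { success: true, ...result };
}
```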
```js
export async function sendWeeklyDigest(email, grants, proposals, stats, unsubscribeToken = 'default') {
  const { subject, html } = createWeeklyDigestEmail(stats, grants.slice(0, 5), unsubscribeToken);
  return sendEmail(email, subject, html);
}
```
Unused proposals parameter and missing null guard on grants.
- The `proposals` parameter is declared but never used. Either remove it or document it as reserved for future use.
- `grants.slice(0, 5)` will throw a `TypeError` if `grants` is `null` or `undefined`.
🛠️ Proposed fix
```diff
-export async function sendWeeklyDigest(email, grants, proposals, stats, unsubscribeToken = 'default') {
-  const { subject, html } = createWeeklyDigestEmail(stats, grants.slice(0, 5), unsubscribeToken);
+export async function sendWeeklyDigest(email, grants, stats, unsubscribeToken = 'default') {
+  const topGrants = Array.isArray(grants) ? grants.slice(0, 5) : [];
+  const { subject, html } = createWeeklyDigestEmail(stats, topGrants, unsubscribeToken);
   return sendEmail(email, subject, html);
 }
```

If `proposals` is intentionally reserved for future use, keep it but add a JSDoc comment explaining its purpose.
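The guard from the proposed fix, isolated as a hypothetical helper to show its behavior on missing input (the name `topGrants` is illustrative):

```javascript
// Return at most `limit` grants; tolerate null/undefined/non-array input
// by falling back to an empty list instead of throwing.
function topGrants(grants, limit = 5) {
  return Array.isArray(grants) ? grants.slice(0, limit) : [];
}
```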
Summary
This PR adds basic monitoring endpoints to the backend and updates documentation to reflect the current state of Phase 2 implementation.
Backend Changes:
- `/version` endpoint returning app version, build time, node version, and environment
- `/metrics` endpoint returning uptime, request count, and memory usage (heap, RSS)

Documentation Updates:
Review & Testing Checklist for Human
- `/version` and `/metrics` endpoints return expected JSON when the backend is running locally (`curl http://localhost:3000/version` and `curl http://localhost:3000/metrics`)
- `/metrics` endpoint should be protected or rate-limited more aggressively in production (currently public)

Recommended Test Plan:
1. `cd backend && npm run dev`
2. `curl localhost:3000/health`, `curl localhost:3000/version`, `curl localhost:3000/metrics`

Notes
The metrics counter is in-memory and resets on server restart - this is intentional for simplicity. For production monitoring, consider integrating with Fly.io's built-in metrics or a service like Prometheus.
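If richer monitoring is wanted later, the in-memory counter could also be exposed in the Prometheus text exposition format. A dependency-free sketch follows (a real setup would more likely use `prom-client`; the metric names here are illustrative assumptions):

```javascript
// Render the request counter and uptime as Prometheus text exposition
// format: HELP/TYPE comment lines followed by "name value" sample lines.
function prometheusMetrics(requestCount, startTimeMs, nowMs = Date.now()) {
  const uptimeSeconds = (nowMs - startTimeMs) / 1000;
  return [
    "# HELP http_requests_total Total HTTP requests received.",
    "# TYPE http_requests_total counter",
    `http_requests_total ${requestCount}`,
    "# HELP process_uptime_seconds Seconds since the server started.",
    "# TYPE process_uptime_seconds gauge",
    `process_uptime_seconds ${uptimeSeconds}`,
  ].join("\n");
}

// Usage sketch: app.get("/metrics", (req, res) =>
//   res.type("text/plain").send(prometheusMetrics(requestCount, startTime)));
```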
Deployment to Fly.io and Vercel is pending - credentials are needed to proceed with the live deployment phase.
Link to Devin run: https://app.devin.ai/sessions/9573820f0f6d41dab1cff8a61c26c981
Requested by: @Datakult0r
Summary by CodeRabbit
New Features
Improvements
Documentation
✏️ Tip: You can customize this high-level summary in your review settings.