feat(model-service): add OpenAI-compatible wrapper (+ pm2 + env example) and update ignores #254
base: main
Changes from 1 commit
Author
The `.env.example` entries, annotated:
- `OPENAI_BASE_URL=https://api.openai.com/v1` — OpenAI API base URL
- `OPENAI_API_KEY=sk-REPLACE_ME` — replace with your OpenAI API key
- `MODEL_NAME=gpt-4o-mini` — model name to use for inference
- `PORT=7071` — port to run the service on
.env.example
```
@@ -0,0 +1,5 @@
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_KEY=sk-REPLACE_ME
MODEL_NAME=gpt-4o-mini
PORT=7071
```
Author
The selected line — Context:
.gitignore
```
@@ -0,0 +1,5 @@
package-lock.json
node_modules/
.env
.DS_Store
```
Author
This line defines a process type for deployment using a Procfile, which is commonly used with platforms like Heroku. The selected line, `web: node server.js`, specifies that the `web` process should be started by running the command `node server.js`. This means that when the application is deployed, the platform will launch the web server by executing server.js with Node.js.
Procfile
```
@@ -0,0 +1,2 @@
web: node server.js
```
Author
The earlier explanation treated this as a line from a Procfile, but `apps: [` is part of a Node.js configuration file for PM2, a process manager for Node.js applications. It starts the definition of an array of application configurations to be managed by PM2. Each object in this array specifies how PM2 should run a particular Node.js app (its name, script, environment variables, etc.). In this file, the array contains the configuration for running the `reentry-model-service` app. So, while both are related to starting Node.js services, this line is not for a Procfile or Heroku; it is part of a PM2 configuration specifying which applications PM2 should manage.
ecosystem.config.js
```js
@@ -0,0 +1,16 @@
module.exports = {
  apps: [
    {
      name: 'reentry-model-service',
      script: 'server.js',
      instances: 1,
      exec_mode: 'fork',
      env: {
        // server.js loads dotenv; these are optional overrides
        PORT: process.env.PORT || 7071,
      },
      watch: false,
    },
  ],
};
```
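Since `ecosystem.config.js` is an ordinary CommonJS module, its shape can be sanity-checked before handing it to pm2. A minimal sketch, with the config object inlined for illustration (in the repo you would `require('./ecosystem.config.js')` instead):

```javascript
// Sketch: pm2 expects `apps` to be an array of app definitions,
// each with at least a `script` to launch. Config inlined here.
const config = {
  apps: [
    {
      name: 'reentry-model-service',
      script: 'server.js',
      instances: 1,
      exec_mode: 'fork',
      env: { PORT: process.env.PORT || 7071 },
      watch: false,
    },
  ],
};

// A quick structural check before `pm2 start ecosystem.config.js`.
console.log(Array.isArray(config.apps));   // true
console.log(config.apps[0].script);        // server.js
```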
package.json
```json
@@ -0,0 +1,20 @@
{
  "name": "reentry-model-service",
  "version": "1.0.0",
  "private": true,
  "main": "server.js",
  "license": "UNLICENSED",
  "scripts": {
    "dev": "node server.js",
    "start": "node server.js",
    "pm2": "pm2 start ecosystem.config.js"
  },
  "dependencies": {
    "dotenv": "^16.4.5",
    "express": "^4.19.2",
    "node-fetch": "^2.7.0"
  },
  "engines": {
    "node": ">=18"
  }
}
```
Author
```js
const express = require('express');
const fetch = require('node-fetch');
const dotenv = require('dotenv');

dotenv.config();

const app = express();
app.use(express.json());

const PORT = process.env.PORT || 7071;
const OPENAI_BASE_URL = (process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1').replace(/\/+$/, '');
const OPENAI_API_KEY = process.env.OPENAI_API_KEY || '';
const MODEL_NAME = process.env.MODEL_NAME || 'gpt-4o-mini';
```
Line-by-Line Breakdown
Purpose and Best Practices
Suggestions
Summary:
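One detail worth calling out in the quoted setup: the `.replace(/\/+$/, '')` applied to `OPENAI_BASE_URL` strips any trailing slashes, so building `${OPENAI_BASE_URL}/chat/completions` never yields a double slash. A standalone sketch (the `normalizeBase` name is hypothetical, not from the PR):

```javascript
// Sketch: the trailing-slash normalization server.js applies to OPENAI_BASE_URL.
const normalizeBase = (url) => url.replace(/\/+$/, '');

console.log(normalizeBase('https://api.openai.com/v1/'));  // https://api.openai.com/v1
console.log(normalizeBase('https://api.openai.com/v1'));   // https://api.openai.com/v1 (unchanged)
```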
server.js
```js
@@ -0,0 +1,64 @@
const express = require('express');
const fetch = require('node-fetch');
const dotenv = require('dotenv');

dotenv.config();

const app = express();
app.use(express.json());

const PORT = process.env.PORT || 7071;
const OPENAI_BASE_URL = (process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1').replace(/\/+$/, '');
const OPENAI_API_KEY = process.env.OPENAI_API_KEY || '';
const MODEL_NAME = process.env.MODEL_NAME || 'gpt-4o-mini';

// Health endpoint
app.get('/health', (_req, res) => {
  res.json({ ok: true, model: MODEL_NAME, base: OPENAI_BASE_URL });
});

// Generate endpoint - forwards to provider-compatible chat/completions
app.post('/generate', async (req, res) => {
  try {
    if (!OPENAI_API_KEY) {
      return res.status(400).json({ error: 'Missing OPENAI_API_KEY in environment' });
    }

    const body = req.body || {};
    const messages = body.messages;
    const model = body.model || MODEL_NAME;
    const extra = body.extra || {}; // allow optional pass-through params

    if (!Array.isArray(messages) || messages.length === 0) {
      return res.status(400).json({ error: 'Body must include messages: [{ role, content }]' });
    }

    const url = `${OPENAI_BASE_URL}/chat/completions`;

    const providerResp = await fetch(url, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${OPENAI_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ model, messages, ...extra })
    });

    const data = await providerResp.json();

    if (!providerResp.ok) {
      // Pass through provider error
      return res.status(providerResp.status).json({ error: data.error || data });
    }

    const text = data?.choices?.[0]?.message?.content || '';
    return res.json({ text });
  } catch (err) {
    return res.status(500).json({ error: 'Upstream error', detail: String(err && err.message || err) });
  }
});

app.listen(PORT, () => {
  console.log(`model-service listening on http://localhost:${PORT}`);
});
```
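The response-shaping step in `/generate` (`data?.choices?.[0]?.message?.content || ''`) can be illustrated as a pure function; `extractText` and `sample` are hypothetical names introduced here, with `sample` mimicking a minimal chat/completions payload:

```javascript
// Sketch: the optional-chaining fallback used by /generate, pulled out
// so the degenerate cases are visible.
const extractText = (data) => data?.choices?.[0]?.message?.content || '';

const sample = { choices: [{ message: { role: 'assistant', content: 'Hello!' } }] };
console.log(extractText(sample));              // Hello!
console.log(JSON.stringify(extractText({})));  // "" — any missing field falls back to the empty string
```

Once the service is running, the endpoint can be exercised with e.g. `curl -s -X POST http://localhost:7071/generate -H 'Content-Type: application/json' -d '{"messages":[{"role":"user","content":"Hi"}]}'` (assuming the default PORT).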
It looks like you're referencing the `.gitignore` file and mentioning the addition of the following lines.

Explanation in context: these lines tell Git to ignore the `.env` file located in the `model-service` directory. The `.env` file typically contains environment variables, which often include sensitive information like API keys, database credentials, or other configuration details that should not be committed to source control.

Code Review Feedback & Suggestions:
- Duplication: `model-service/.env` appears twice. Remove the duplicate to keep the `.gitignore` clean.
- Security: it is best practice to ignore `.env` files. This helps prevent accidentally pushing sensitive data to the repository.
- Clarity: if you want to ignore `.env` files in multiple subdirectories, consider using a pattern like `**/.env`. However, if you only want to ignore it in `model-service`, your current approach is correct.
- Team Communication: ignoring `.env` means everyone needs to create their own local version or use a sample file (like `.env.example`) to share the required variables.

Suggested Correction: just keep one entry, `model-service/.env`.

Summary: your intent is good for security and standard practice; just remove the duplicate line for neatness. If you need further help with `.env` handling or best practices, let me know!
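The suggested correction amounts to a one-line `.gitignore` fragment (a sketch, assuming only `model-service` needs its own env file; use `**/.env` instead if other subdirectories will have one too):

```
# one entry is enough to keep the service's env file out of Git
model-service/.env
```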