2 changes: 2 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -158,3 +158,5 @@ cython_debug/
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
This diff adds the following lines to .gitignore:

model-service/.env
model-service/.env

Explanation in context:
These lines tell Git to ignore the .env file located in the model-service directory. The .env file typically contains environment variables, which often include sensitive information like API keys, database credentials, or other configuration details that should not be committed to source control.
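As an illustration of the KEY=value format that ends up in process.env, here is a hand-rolled sketch of parsing one .env line (parseEnvLine is hypothetical, not the dotenv API):

```javascript
// Parse a single "KEY=value" line from a .env file into a key/value pair.
// Returns null for lines without an "=" separator.
function parseEnvLine(line) {
  const idx = line.indexOf('=');
  if (idx === -1) return null;
  return { key: line.slice(0, idx).trim(), value: line.slice(idx + 1).trim() };
}

const entry = parseEnvLine('OPENAI_API_KEY=sk-REPLACE_ME');
console.log(entry); // { key: 'OPENAI_API_KEY', value: 'sk-REPLACE_ME' }
```

Because these values are secrets, the file holding them belongs in .gitignore, not in the repository.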

Code Review Feedback & Suggestions:

  1. Duplication:

    • You have the same line twice. You only need one entry for model-service/.env. Remove the duplicate to keep the .gitignore clean.
  2. Security:

    • Good practice to ignore .env files. This helps prevent accidentally pushing sensitive data to the repository.
  3. Clarity:

    • If you expect .env files in multiple subdirectories, consider using a pattern like **/.env. However, if you only want to ignore it in model-service, your current approach is correct.
  4. Team Communication:

    • Remind your team that ignoring .env means everyone needs to create their own local version or use a sample file (like .env.example) to share the required variables.

Suggested Correction:
Just keep one entry:

model-service/.env

Summary:
Your intent is good for security and standard practice. Just remove the duplicate line for neatness. If you need further help with .env handling or best practices, let me know!

model-service/.env
model-service/.env
5 changes: 5 additions & 0 deletions model-service/.env.example

# OpenAI API base URL
OPENAI_BASE_URL=https://api.openai.com/v1

# Replace with your OpenAI API key
OPENAI_API_KEY=sk-REPLACE_ME

# Model name to use for inference
MODEL_NAME=gpt-4o-mini

# Port to run the service on
PORT=7071

@@ -0,0 +1,5 @@
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_API_KEY=sk-REPLACE_ME
MODEL_NAME=gpt-4o-mini
PORT=7071

5 changes: 5 additions & 0 deletions model-service/.gitignore

The selected line node_modules/ in a .gitignore file tells Git to ignore the entire node_modules directory. This means that all files and subdirectories inside node_modules/ will not be tracked by Git or included in commits.

Context:
node_modules/ is the default directory where the Node.js package manager (npm or yarn) installs project dependencies. It can contain thousands of files and is typically very large. These files are generated from package.json and can always be recreated, so it's best practice not to include them in version control. This keeps the repository size manageable and avoids sharing unnecessary files.

@@ -0,0 +1,5 @@
package-lock.json
node_modules/
.env
.DS_Store

2 changes: 2 additions & 0 deletions model-service/Procfile

This line defines a process type for deployment using a Procfile, which is commonly used with platforms like Heroku. The selection:

web: node server.js

specifies that the web process should be started by running the command node server.js. This means that when the application is deployed, the platform will launch the web server by executing server.js with Node.js.

@@ -0,0 +1,2 @@
web: node server.js

16 changes: 16 additions & 0 deletions model-service/ecosystem.config.js

The previous explanation applies to a line typically found in a Procfile (e.g., web: node server.js). However, in the provided diff, the selected line is:

apps: [

This line is part of a Node.js configuration file, specifically for PM2, a process manager for Node.js applications. In this context, the line:

apps: [

starts the definition of an array of application configurations to be managed by PM2. Each object in this array specifies details about how PM2 should run a particular Node.js app (like its name, script, environment variables, etc.). In this file, the array contains the configuration for running the reentry-model-service by executing server.js as a single instance in fork mode.

So, while both are related to starting Node.js services, this line is not for a Procfile or Heroku, but is part of a PM2 configuration specifying which applications PM2 should manage.

@@ -0,0 +1,16 @@
module.exports = {
  apps: [
    {
      name: 'reentry-model-service',
      script: 'server.js',
      instances: 1,
      exec_mode: 'fork',
      env: {
        // server.js loads dotenv; these are optional overrides
        PORT: process.env.PORT || 7071,
      },
      watch: false,
    },
  ],
};

20 changes: 20 additions & 0 deletions model-service/package.json
@@ -0,0 +1,20 @@
{
  "name": "reentry-model-service",
  "version": "1.0.0",
  "private": true,
  "main": "server.js",
  "license": "UNLICENSED",
  "scripts": {
    "dev": "node server.js",
    "start": "node server.js",
    "pm2": "pm2 start ecosystem.config.js"
  },
  "dependencies": {
    "dotenv": "^16.4.5",
    "express": "^4.19.2",
    "node-fetch": "^2.7.0"
  },
  "engines": {
    "node": ">=18"
  }
}
64 changes: 64 additions & 0 deletions model-service/server.js

const express = require('express');
const fetch = require('node-fetch');
const dotenv = require('dotenv');

dotenv.config();

const app = express();
app.use(express.json());

const PORT = process.env.PORT || 7071;
const OPENAI_BASE_URL = (process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1').replace(/\/+$/, '');
const OPENAI_API_KEY = process.env.OPENAI_API_KEY || '';
const MODEL_NAME = process.env.MODEL_NAME || 'gpt-4o-mini';

Line-by-Line Breakdown

  1. Import Express:
    const express = require('express');
    Loads the Express framework, used for creating and managing the HTTP server and API endpoints.

  2. Import node-fetch:
    const fetch = require('node-fetch');
    Imports the node-fetch library, allowing the app to make HTTP requests, such as forwarding payloads to external APIs (here, OpenAI).

  3. Import dotenv:
    const dotenv = require('dotenv');
    Loads the dotenv library, which enables loading environment variables from a .env file into process.env.

  4. Load Environment Variables:
    dotenv.config();
    Executes dotenv’s config method, which reads the .env file and populates process.env with its values.

  5. Initialize Express App:
    const app = express();
    Creates an Express application instance, which will be used to set up routes/endpoints.

  6. Enable JSON Parsing:
    app.use(express.json());
    Sets up middleware to automatically parse incoming request bodies that contain JSON, making it easier to handle API requests.

  7. Set PORT Variable:
    const PORT = process.env.PORT || 7071;
    Determines the port for the server to listen on, first trying the value from environment variables and defaulting to 7071 if not set.

  8. Configure OpenAI Base URL:
    const OPENAI_BASE_URL = (process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1').replace(/\/+$/, '');
    Sets the base URL for the OpenAI API. If an environment variable is provided, it uses that; otherwise, it defaults to https://api.openai.com/v1. The .replace(/\/+$/, '') part removes any trailing slashes for consistency.

  9. Get OpenAI API Key:
    const OPENAI_API_KEY = process.env.OPENAI_API_KEY || '';
    Retrieves the OpenAI API key from environment variables, defaulting to an empty string if not set.

  10. Set Model Name:
    const MODEL_NAME = process.env.MODEL_NAME || 'gpt-4o-mini';
    Sets the model name to use for API requests. Uses an environment variable if available, otherwise defaults to gpt-4o-mini.

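The base-URL normalization described in step 8 can be sketched in isolation (normalizeBaseUrl is a hypothetical helper, not part of the PR):

```javascript
// Mirror of the normalization in server.js: strip trailing slashes from
// the base URL so endpoint paths join with exactly one slash.
function normalizeBaseUrl(raw) {
  return (raw || 'https://api.openai.com/v1').replace(/\/+$/, '');
}

const base = normalizeBaseUrl('https://api.openai.com/v1///');
const url = `${base}/chat/completions`;
console.log(url); // https://api.openai.com/v1/chat/completions
```

Without this normalization, a base URL ending in "/" would produce a double slash in the request path.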

Purpose and Best Practices

  • Modularity & Configuration:
    The use of environment variables for sensitive information (API keys, URLs, port numbers) allows for secure and easy configuration across different environments (development, production, etc.).
  • Middleware Usage:
    The Express JSON middleware ensures that any incoming API requests with JSON bodies are parsed automatically, reducing boilerplate code and potential errors.
  • Sanitization:
    Stripping trailing slashes from the API base URL prevents issues with endpoint construction later in the code.
  • Default Values:
    Providing sensible defaults (like port 7071 and the default model name) ensures the service can start with minimal configuration for development or testing.

Suggestions

  • Validation:
    Consider adding explicit validation for required environment variables (like the API key), logging a warning or error if they’re missing.
  • Security:
    Never commit actual .env files or secrets to version control.
  • Scalability:
    If more configuration options are added later, consider using a configuration management library or schema validation.
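The validation suggestion above could look like the following sketch (requireEnv is a hypothetical helper, not code from the PR):

```javascript
// Startup validation for required environment variables. Failing fast at
// boot gives a clearer error than a 400 on the first incoming request.
function requireEnv(name, env = process.env) {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage at the top of server.js:
// const OPENAI_API_KEY = requireEnv('OPENAI_API_KEY');
```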

Summary:
These lines set up the foundation of an Express-based API service, configuring it for integration with OpenAI’s API, and ensuring that configuration is flexible and secure via environment variables.

@@ -0,0 +1,64 @@
const express = require('express');
const fetch = require('node-fetch');
const dotenv = require('dotenv');

dotenv.config();

const app = express();
app.use(express.json());

const PORT = process.env.PORT || 7071;
const OPENAI_BASE_URL = (process.env.OPENAI_BASE_URL || 'https://api.openai.com/v1').replace(/\/+$/, '');
const OPENAI_API_KEY = process.env.OPENAI_API_KEY || '';
const MODEL_NAME = process.env.MODEL_NAME || 'gpt-4o-mini';

// Health endpoint
app.get('/health', (_req, res) => {
  res.json({ ok: true, model: MODEL_NAME, base: OPENAI_BASE_URL });
});

// Generate endpoint - forwards to provider-compatible chat/completions
app.post('/generate', async (req, res) => {
  try {
    if (!OPENAI_API_KEY) {
      return res.status(400).json({ error: 'Missing OPENAI_API_KEY in environment' });
    }

    const body = req.body || {};
    const messages = body.messages;
    const model = body.model || MODEL_NAME;
    const extra = body.extra || {}; // allow optional pass-through params

    if (!Array.isArray(messages) || messages.length === 0) {
      return res.status(400).json({ error: 'Body must include messages: [{ role, content }]' });
    }

    const url = `${OPENAI_BASE_URL}/chat/completions`;

    const providerResp = await fetch(url, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${OPENAI_API_KEY}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ model, messages, ...extra })
    });

    const data = await providerResp.json();

    if (!providerResp.ok) {
      // Pass through provider error
      return res.status(providerResp.status).json({ error: data.error || data });
    }

    const text = data?.choices?.[0]?.message?.content || '';
    return res.json({ text });
  } catch (err) {
    return res.status(500).json({ error: 'Upstream error', detail: String(err && err.message || err) });
  }
});

app.listen(PORT, () => {
  console.log(`model-service listening on http://localhost:${PORT}`);
});
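For reference, a request body that passes the /generate validation above can be sketched as follows (the message content and temperature are illustrative values):

```javascript
// Payload matching what server.js accepts: messages must be a non-empty
// array of { role, content } objects; extra is spread into the provider request.
const payload = {
  messages: [{ role: 'user', content: 'Say hello in one word.' }],
  extra: { temperature: 0.2 }, // optional pass-through parameters
};
const body = JSON.stringify(payload);
console.log(body);
```

A missing or empty messages array would be rejected with a 400 before any provider call is made.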