
duinness/hack2025-Modo


OpenAI Chat Service & MODO Project

This repository contains two projects:

  1. A simple Node.js service that communicates with OpenAI's chat endpoint
  2. MODO - A hackathon project for assessment generation

MODO Project (Hackathon Demo)

The MODO project lives in the `project` directory and consists of a React-based frontend and an Express-based backend.

Running MODO Backend

1. Navigate to the server directory:

   ```shell
   cd project/server
   ```

2. Install dependencies:

   ```shell
   yarn install
   ```

3. Create a `.env` file with your OpenAI API key:

   ```
   OPENAI_API_KEY=your_api_key_here
   ```

4. Start the development server:

   ```shell
   yarn dev
   ```

The backend will be available at http://localhost:3000

Running MODO Frontend

1. Navigate to the project directory:

   ```shell
   cd project
   ```

2. Install dependencies:

   ```shell
   yarn install
   ```

3. Start the development server:

   ```shell
   yarn dev
   ```

The frontend will be available at http://localhost:5173

Note: Both the frontend and backend need to be running simultaneously for the full application to work.

Original OpenAI Chat Service

A simple Node.js service that communicates with OpenAI's chat endpoint using Koa and node-fetch.

A very basic chat UI was added as an easy way to send prompts and see how OpenAI responds. System prompts can be added to the `messages` array in `public/app.js`. The conversation is stored in memory, so refreshing the page restarts the conversation, but the system prompts are always present.
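For orientation, adding a system prompt might look like the following. This is a minimal sketch that assumes the `messages` array in `public/app.js` uses the standard OpenAI chat shape; the actual variable layout in the repo may differ:

```javascript
// Hypothetical sketch of the messages array in public/app.js.
// A system prompt placed first steers every reply in the conversation.
const messages = [
  {
    role: 'system',
    content: 'You are a concise, friendly assistant.'
  }
];

// User turns are appended as the chat progresses:
messages.push({ role: 'user', content: 'Hello, how are you?' });
```

Because the conversation lives in memory, this array grows with each exchange until the page is refreshed.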

Setup

1. Install dependencies:

   ```shell
   yarn install
   ```

2. Create a `.env` file in the root directory with your OpenAI API key:

   ```
   OPENAI_API_KEY=your_api_key_here
   PORT=3000
   ```

3. Start the server:

   ```shell
   node src/server.js
   ```
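The `.env` file holds plain `KEY=value` pairs. The repo likely loads them with a library such as dotenv; the tiny parser below is only a sketch of the same idea, not the project's actual code:

```javascript
// Illustration of the KEY=value format a .env file uses. A real project
// would typically load this with a library such as dotenv; this parser
// is only a sketch of the idea.
function parseEnv(text) {
  const env = {};
  for (const line of text.split('\n')) {
    const match = line.match(/^([A-Za-z_][A-Za-z0-9_]*)=(.*)$/);
    if (match) env[match[1]] = match[2];
  }
  return env;
}

const env = parseEnv('OPENAI_API_KEY=your_api_key_here\nPORT=3000');
// env.PORT and env.OPENAI_API_KEY now hold the configured strings.
```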

UI Usage

Open a web browser, navigate to http://localhost:3000, and chat away.

API Usage

Chat Endpoint

POST /chat

Request body:

```json
{
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "stream": false
}
```

The `stream` field is optional and defaults to `false`.

The `messages` array should follow OpenAI's chat format, with a `role` and `content` for each message.

If `stream` is set to `true`, the response is streamed as Server-Sent Events (SSE).

Example Usage

```javascript
// Non-streaming request
const response = await fetch('http://localhost:3000/chat', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    messages: [
      {
        role: 'user',
        content: 'Hello, how are you?'
      }
    ]
  })
});
```

```javascript
// Streaming request
const response = await fetch('http://localhost:3000/chat', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    messages: [
      {
        role: 'user',
        content: 'Hello, how are you?'
      }
    ],
    stream: true
  })
});
```
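To consume the streamed variant, the SSE text arriving on the response body has to be decoded and its `data:` lines extracted. The sketch below assumes the server emits standard `data: ...` lines and that Node 18+ is used (where `response.body` is async-iterable); the exact payload format is an assumption, not taken from the repo:

```javascript
// Extract the payload from each "data: ..." line in a chunk of SSE text.
function parseSseData(chunkText) {
  return chunkText
    .split('\n')
    .filter((line) => line.startsWith('data: '))
    .map((line) => line.slice('data: '.length));
}

// Read the streamed /chat response chunk by chunk (Node 18+).
async function streamChat(messages) {
  const response = await fetch('http://localhost:3000/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages, stream: true })
  });

  const decoder = new TextDecoder();
  for await (const chunk of response.body) {
    for (const data of parseSseData(decoder.decode(chunk, { stream: true }))) {
      process.stdout.write(data);
    }
  }
}
```

In a browser, `response.body` is a `ReadableStream` that is not universally async-iterable, so a `getReader()` loop would be needed instead.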

About

Repo to store code for the Holtzbrinck Hackathon 2025
