High-Performance In-Memory System with Event-Driven I/O

Overview

This project demonstrates an in-memory data retrieval system optimized for low latency and high throughput under heavy concurrency.

It combines:

  • In-memory architecture for fast data retrieval
  • Event-driven I/O with epoll for scalable, non-blocking operations
  • I/O multiplexing to handle thousands of concurrent connections efficiently
  • Pipelined request-response protocols to reduce I/O bottlenecks in single-threaded applications

Features

  • Low-latency in-memory data retrieval
  • High concurrency support using non-blocking event-driven I/O
  • Request pipelining for efficient request-response handling
  • Single-threaded event loop with high throughput
  • TTL (Time-To-Live) expiration - Automatic cleanup of expired entries
  • LRU (Least Recently Used) eviction - Remove least recently accessed entries
  • LFU (Least Frequently Used) eviction - Remove least frequently accessed entries
  • Background cleanup thread - Automatic expired entry removal

Architecture

  1. In-Memory Data Store – Minimizes disk I/O by keeping data in memory.
  2. Event-Driven I/O – Uses epoll to manage concurrent client connections.
  3. Request Pipelining – Processes multiple requests without waiting for sequential responses.
  4. Expiration Mechanisms – TTL, LRU, and LFU for automatic entry management.
  5. Thread-Safe Operations – Mutex-protected data structures for concurrent access.
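
As a rough illustration of points 1, 4, and 5, a mutex-protected entry table with TTL metadata might look like the sketch below. The names and layout here are assumptions for illustration, not the project's actual code:

#include <chrono>
#include <mutex>
#include <string>
#include <unordered_map>

// Sketch: one entry per key, with optional TTL metadata.
struct Entry {
    std::string value;
    bool has_ttl;
    std::chrono::steady_clock::time_point expires_at;
};

class Store {
public:
    void set(const std::string& key, const std::string& value,
             int ttl_seconds = -1) {
        std::lock_guard<std::mutex> lock(mu_);            // thread-safe access
        Entry e;
        e.value = value;
        e.has_ttl = (ttl_seconds >= 0);
        if (e.has_ttl)
            e.expires_at = std::chrono::steady_clock::now()
                         + std::chrono::seconds(ttl_seconds);
        map_[key] = e;
    }

    bool get(const std::string& key, std::string& out) {
        std::lock_guard<std::mutex> lock(mu_);
        auto it = map_.find(key);
        if (it == map_.end()) return false;
        if (it->second.has_ttl &&
            std::chrono::steady_clock::now() >= it->second.expires_at) {
            map_.erase(it);                                // lazy expiration
            return false;
        }
        out = it->second.value;
        return true;
    }

private:
    std::mutex mu_;
    std::unordered_map<std::string, Entry> map_;
};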

Workflow

  1. Listening socket is created.
  2. In every loop iteration, a new epoll fd is created.
  3. The listening socket and all active connections are added to this epoll instance.
  4. epoll_wait waits for events from either the listening socket or the active connections.
  5. If the event is on the listening socket → accept new connections and add them to the connections vector.
  6. If the event is on a connection → read request data.
  7. Requests follow a custom length-prefixed protocol (see the encoding sketch after this list):
    • Number of strings
    • Length of first string, first string
    • Length of second string, second string
    • … and so on.
  8. The server parses the request according to this protocol.
  9. The request is processed (get, set, del) and a response is generated with a response code and optional payload.
  10. Response is written back to the client.
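
The framing in step 7 can be illustrated with a small encoder. The 4-byte host-order length fields below are an assumption; the exact field width and byte order are defined by the server's parser:

#include <cstdint>
#include <string>
#include <vector>

// Illustrative encoder for the framing described above:
// [nstr][len1][str1][len2][str2]...
// Field width and endianness are assumptions; check the server source.
std::string encode_request(const std::vector<std::string>& args) {
    std::string out;
    uint32_t n = static_cast<uint32_t>(args.size());
    out.append(reinterpret_cast<const char*>(&n), sizeof n);
    for (size_t i = 0; i < args.size(); ++i) {
        uint32_t len = static_cast<uint32_t>(args[i].size());
        out.append(reinterpret_cast<const char*>(&len), sizeof len);
        out += args[i];
    }
    return out;
}

// Pipelining: several encoded requests can be written back-to-back before
// reading any response, e.g.
//   std::string batch = encode_request({"set", "k", "v"})
//                     + encode_request({"get", "k"});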

Visual Workflow (Mermaid Diagram)

flowchart TD
    A[Create socket] --> B[Event Loop iteration]
    B --> C[Create new epoll fd & clear events]
    C --> D[Add listening socket + connections]
    D --> E[Wait for events epoll_wait]

    E -->|Listening socket event| F[Accept new connection]
    E -->|Connection event| G[Read request buffer]

    F --> H[Track connection in vector]
    G --> I[Parse request → Process request]
    I --> J[Generate Response]
    J --> K[Send back to client]

    K --> B
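
In code, the loop above reduces to roughly the following sketch. Error handling, reads, and writes are omitted, and handle_request is a hypothetical stand-in for parsing and responding:

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

// Sketch of the event loop described in the workflow: a fresh epoll fd is
// created each iteration, the listening socket and all active connections
// are registered, and epoll_wait drives accepts and reads.
void event_loop(int listen_fd, std::vector<int>& conns) {
    while (true) {
        int epfd = epoll_create1(0);           // new epoll fd per iteration

        epoll_event ev = {};
        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (size_t i = 0; i < conns.size(); ++i) {
            epoll_event cev = {};
            cev.events = EPOLLIN;
            cev.data.fd = conns[i];
            epoll_ctl(epfd, EPOLL_CTL_ADD, conns[i], &cev);
        }

        epoll_event events[64];
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                int conn = accept(listen_fd, nullptr, nullptr);
                if (conn >= 0) conns.push_back(conn);  // track new connection
            } else {
                // read, parse per the protocol, process, write response
                // handle_request(fd);                 // hypothetical
            }
        }
        close(epfd);
    }
}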

Getting Started

Prerequisites

  • Linux system with epoll support
  • g++ compiler (C++11 or later)

Build

Compile the server:

make

Run

Start the server:

make run-server

In another terminal, run the client:

make run-client ARGS='<cmd>'

The client will send requests, and the server will respond using the in-memory, event-driven architecture.


Expiration Commands

TTL (Time-To-Live) Commands

# Set a key with TTL expiration (expires in 60 seconds)
./client set ex mykey myvalue 60

# Check remaining TTL for a key
./client ttl mykey

# Regular set (no expiration)
./client set mykey myvalue
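
The background cleanup thread listed under Features complements the lazy TTL check performed on access. A minimal sketch, assuming the store exposes a remove_expired() helper (a hypothetical name):

#include <atomic>
#include <chrono>
#include <thread>

// Sketch: periodically purge expired entries in the background.
// StoreT::remove_expired() is a hypothetical helper that scans the map
// under its mutex and erases entries whose TTL has passed.
template <typename StoreT>
void cleanup_loop(StoreT& store, std::atomic<bool>& running) {
    while (running.load()) {
        store.remove_expired();
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}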

LRU (Least Recently Used) Commands

# Evict the least recently used entry
./client lru_evict
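
One common way to make lru_evict O(1) is a recency list plus an index map, as in this sketch (illustrative only; the project's actual structures may differ):

#include <list>
#include <string>
#include <unordered_map>

// Sketch: a doubly linked list holds keys in recency order; the map points
// each key at its list node, so both touch and evict run in O(1).
class LruTracker {
public:
    void touch(const std::string& key) {             // called on get/set
        auto it = pos_.find(key);
        if (it != pos_.end()) order_.erase(it->second);
        order_.push_front(key);
        pos_[key] = order_.begin();
    }
    bool evict(std::string& victim) {                // called on lru_evict
        if (order_.empty()) return false;
        victim = order_.back();                      // least recently used
        pos_.erase(victim);
        order_.pop_back();
        return true;
    }
private:
    std::list<std::string> order_;                   // front = most recent
    std::unordered_map<std::string,
                       std::list<std::string>::iterator> pos_;
};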

LFU (Least Frequently Used) Commands

# Evict the least frequently used entry
./client lfu_evict
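
A minimal LFU tracker just counts accesses and evicts the minimum; real implementations often bucket keys by frequency for O(1) eviction. An illustrative sketch, not the project's actual code:

#include <cstddef>
#include <limits>
#include <string>
#include <unordered_map>

// Sketch: count accesses per key; evict the key with the smallest count
// via a linear scan (simple, O(n) per eviction).
class LfuTracker {
public:
    void touch(const std::string& key) { ++counts_[key]; }
    bool evict(std::string& victim) {
        if (counts_.empty()) return false;
        std::size_t best = std::numeric_limits<std::size_t>::max();
        for (auto it = counts_.begin(); it != counts_.end(); ++it) {
            if (it->second < best) { best = it->second; victim = it->first; }
        }
        counts_.erase(victim);                       // least frequently used
        return true;
    }
private:
    std::unordered_map<std::string, std::size_t> counts_;
};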

Standard Commands (with expiration support)

# Get a value (updates LRU and LFU tracking)
./client get mykey

# Set a value (adds to LRU and LFU tracking)
./client set mykey myvalue

# Delete a value (removes from all tracking structures)
./client del mykey

Testing Expiration

Run the test script to see all expiration mechanisms in action:

./test_expiration.sh

Future Work

  • Create a custom hashmap for optimal in-memory retrieval and support for additional data structures
  • Implement distributed in-memory caching for scalability across multiple nodes
  • Add load balancing and sharding to support large-scale deployments
