teolex2020/aura-core

Aura Core

Ultra-fast cognitive memory engine based on Sparse Distributed Representations (SDR).

Aura Core encodes text into high-dimensional sparse binary vectors (256K bits, 512 active) and retrieves semantically similar memories using Tanimoto similarity with inverted bitmap indexing.
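
Tanimoto similarity over two sparse SDRs is the ratio of shared active bits to distinct active bits, |A ∩ B| / |A ∪ B|. A minimal standalone sketch over sorted active-bit index lists (independent of the crate's actual `tanimoto_sparse` implementation):

```rust
/// Tanimoto (Jaccard) similarity between two sorted lists of active-bit
/// indices: |A ∩ B| / |A ∪ B|. Runs in O(|A| + |B|) via a two-pointer merge.
fn tanimoto(a: &[u32], b: &[u32]) -> f64 {
    let (mut i, mut j, mut both) = (0usize, 0usize, 0usize);
    while i < a.len() && j < b.len() {
        match a[i].cmp(&b[j]) {
            std::cmp::Ordering::Equal => { both += 1; i += 1; j += 1; }
            std::cmp::Ordering::Less => i += 1,
            std::cmp::Ordering::Greater => j += 1,
        }
    }
    let union = a.len() + b.len() - both;
    if union == 0 { 0.0 } else { both as f64 / union as f64 }
}

fn main() {
    let a = vec![1, 5, 9, 200_000];
    let b = vec![1, 9, 77, 200_000];
    // 3 shared bits out of 5 distinct active bits -> 0.6
    println!("{}", tanimoto(&a, &b));
}
```

Because both SDRs carry only ~512 active bits out of 256K, the merge touches at most ~1K entries per comparison regardless of vector dimensionality.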

Performance

Operation        Latency   Throughput
SDR encode       ~17 µs    58K texts/sec
Retrieve top-5   ~74 µs    13K queries/sec
Batch ingest     -         ~100K records/sec
Storage append   ~5 µs     200K writes/sec

Benchmarked on an Intel i7-13700K, single-threaded, with 10K records in the database.

Features

  • SDR Engine (sdr.rs) - xxHash3-based n-gram hashing into 256K-bit sparse vectors
  • Append-Only Storage (storage.rs) - Binary format with RAM header cache for zero-disk-IO retrieval
  • Inverted Index (index.rs) - RoaringBitmap-backed inverted index for sub-millisecond search
  • Memory API (memory.rs) - High-level process/retrieve/delete interface
  • Lite Mode - 16K-bit SDR for embedded/IoT devices (feature flag: lite)
  • Unicode Support - Full UTF-8: Cyrillic, CJK, emoji, mixed scripts
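
The encoding idea behind the SDR engine — hash character n-grams into a fixed bit space and record which bits become active — can be sketched as follows. This is an illustrative stand-in, not the crate's code: the real engine uses xxHash3 and fixes the active-bit count at 512, whereas this sketch uses `std`'s `DefaultHasher` over trigrams and lets density vary with input length.

```rust
use std::collections::BTreeSet;
use std::hash::{Hash, Hasher};

const SDR_BITS: u64 = 262_144; // 256K-bit space

/// Hash each character trigram of `text` into the SDR bit space and return
/// the sorted set of active bit indices. Deterministic for a given build.
fn encode(text: &str) -> Vec<u32> {
    let chars: Vec<char> = text.chars().collect();
    let mut bits = BTreeSet::new();
    for gram in chars.windows(3) {
        let mut h = std::collections::hash_map::DefaultHasher::new();
        gram.hash(&mut h);
        bits.insert((h.finish() % SDR_BITS) as u32);
    }
    bits.into_iter().collect()
}

fn main() {
    let a = encode("machine learning");
    let b = encode("machine learning");
    assert_eq!(a, b); // same text -> identical SDR
    println!("{} active bits", a.len());
}
```

Overlapping n-grams are what make similar strings share active bits: "machine learning" and "machine learning algorithms" hash many identical trigrams, so their SDRs intersect heavily.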

Quick Start

Add to your Cargo.toml:

```toml
[dependencies]
aura-core = "0.1"
```

Basic Usage

```rust
use aura_core::AuraMemory;

fn main() -> anyhow::Result<()> {
    let mem = AuraMemory::new("./my_brain")?;

    // Store memories
    mem.process("Rust is a systems programming language")?;
    mem.process("Python is great for data science")?;
    mem.process("TypeScript powers modern web apps")?;

    // Retrieve relevant memories
    let results = mem.retrieve("programming", 5)?;
    for text in &results {
        println!("  {}", text);
    }

    // Batch ingest (100x faster)
    let texts = vec![
        "Neural networks learn from data".into(),
        "Transformers use attention mechanisms".into(),
        "SDR is biologically inspired".into(),
    ];
    mem.ingest_batch(texts)?;

    // Flush to disk
    mem.flush()?;

    Ok(())
}
```

SDR Direct Usage

```rust
use aura_core::SDRInterpreter;

let sdr = SDRInterpreter::default(); // 256K bits

let a = sdr.text_to_sdr("machine learning", false);
let b = sdr.text_to_sdr("machine learning algorithms", false);
let c = sdr.text_to_sdr("banana smoothie recipe", false);

let sim_ab = sdr.tanimoto_sparse(&a, &b);
let sim_ac = sdr.tanimoto_sparse(&a, &c);

println!("ML vs ML algorithms: {:.3}", sim_ab);  // ~0.6+
println!("ML vs banana: {:.3}", sim_ac);          // ~0.01
```

Architecture

```text
text --> [SDR Encoder] --> sparse bits (Vec<u16>)
                              |
                    +---------+---------+
                    |                   |
            [Inverted Index]    [Binary Storage]
            (RoaringBitmap)     (brain.aura file)
                    |                   |
                    +----> [Retrieve] <-+
                           Tanimoto ranking
```
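
The retrieval path in the diagram can be sketched as a toy in-memory version: look up each active query bit in the inverted index, union the posting lists into a candidate pool, then rank candidates by overlap. A plain `HashMap` stands in for the RoaringBitmap postings, and simple bit-overlap counting stands in for full Tanimoto ranking; names and structure here are illustrative, not the crate's API.

```rust
use std::collections::{HashMap, HashSet};

/// Toy inverted index: active bit -> ids of records containing that bit.
#[derive(Default)]
struct Index {
    postings: HashMap<u32, Vec<usize>>,
    records: Vec<Vec<u32>>, // each record's sorted active bits
}

impl Index {
    fn insert(&mut self, bits: Vec<u32>) -> usize {
        let id = self.records.len();
        for &b in &bits {
            self.postings.entry(b).or_default().push(id);
        }
        self.records.push(bits);
        id
    }

    /// Union the postings of every query bit, then rank the candidates by
    /// overlap with the query, descending (ties broken by record id).
    fn retrieve(&self, query: &[u32], k: usize) -> Vec<usize> {
        let mut candidates: HashSet<usize> = HashSet::new();
        for b in query {
            if let Some(ids) = self.postings.get(b) {
                candidates.extend(ids);
            }
        }
        let mut ranked: Vec<(usize, usize)> = candidates
            .into_iter()
            .map(|id| {
                let overlap =
                    self.records[id].iter().filter(|&&b| query.contains(&b)).count();
                (overlap, id)
            })
            .collect();
        ranked.sort_by(|x, y| y.0.cmp(&x.0).then(x.1.cmp(&y.1)));
        ranked.into_iter().take(k).map(|(_, id)| id).collect()
    }
}

fn main() {
    let mut idx = Index::default();
    idx.insert(vec![1, 2, 3]);
    idx.insert(vec![2, 3, 4]);
    idx.insert(vec![9, 10]);
    println!("{:?}", idx.retrieve(&[2, 3], 2)); // records 0 and 1 share bits 2, 3
}
```

Because only records sharing at least one active bit with the query ever enter the candidate pool, the expensive similarity ranking runs on a small fraction of the database, which is what makes sub-millisecond search feasible.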

Feature Flags

Flag      Description
default   Full 256K-bit SDR
lite      Reduced 16K-bit SDR for embedded devices
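
To depend on the reduced 16K-bit build from another crate, the lite flag can also be enabled in Cargo.toml instead of on the command line (equivalent to cargo build --features lite):

```toml
[dependencies]
aura-core = { version = "0.1", features = ["lite"] }
```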

License

Licensed under the Apache License, Version 2.0. Copyright 2026 Oleksandr Tepliuk.

You are free to use, modify, and distribute Aura Core for any purpose — commercial or non-commercial — subject to the terms of the Apache 2.0 license.

See LICENSE for full terms.

Commercial Extensions

The full Aura Memory platform includes additional capabilities not in this core engine:

  • ChaCha20-Poly1305 encryption at rest
  • P2P synchronization with CRDT merge
  • Temporal sequence prediction (predict/surprise)
  • Homeostatic plasticity (GRPO reinforcement learning)
  • Active Cortex (O(1) reflex cache)
  • Federated learning with differential privacy
  • Neuromorphic export (SpiNNaker, Loihi 2, FPGA)
  • HTTP/REST dashboard server
  • Python bindings (PyO3 + NumPy)

For commercial licensing inquiries, contact the author.

Building

```sh
# Standard build
cargo build --release

# Lite mode (embedded)
cargo build --release --features lite

# Run tests
cargo test

# Run benchmarks
cargo run --release --example benchmark
```

About

Ultra-fast offline pattern classification engine. 60-200µs, 3MB, zero cloud. Patent Pending
