OpenLumara is a modular, token-efficient AI agent framework written from scratch in Python. Unlike many other AI agents out there, this one is local-first, lightweight, modular, and very fast. The system prompt can be extremely small: as little as around 2000 tokens in normal use. This makes it very well-suited for local use, and it also drastically reduces token use with public APIs.
It pairs well with llamacpp and koboldcpp.
Currently supports: WebUI (for use in your browser), CLI (terminal interface), Telegram, Discord, and Matrix (with encryption support!). More coming.
> [!TIP]
> OpenLumara is especially well-suited to life management: todos, notes, morning routines, habit tracking, and so on. It's what I personally use it for the most, so it's well tailored to those needs. If you have any form of executive dysfunction, such as with ADHD or autism, OpenLumara can be a great tool!
AI Disclaimer: OpenLumara's core framework (everything in the `core/` folder) was designed and coded by hand, but I used AI to ask how to further improve things and how to fix certain bugs. Here and there I asked the AI how to do certain things in Python, but no code was inserted without me personally auditing and modifying it. Certain non-core parts, such as many of the channels, were mostly generated by AI but manually audited and edited by me. This is not a vibe-coded project, but it IS an AI-assisted project.
> [!TIP]
> Not sure how to use OpenLumara? Just ask your AI running on OpenLumara! It knows everything needed to get started. Find the instructions annoying? Just ask your AI to turn off the channel module, or use `/module channel` to toggle it off manually.
Features:
- Connects to any OpenAI API-compatible backend. That includes local AI (llamacpp, ollama, koboldcpp, and so on) and many cloud AI providers.
- Fully private and self-hosted, if you want it to be. You could also run it on a cloud server.
- Modular. You can turn any component on or off, including what other AI agent frameworks consider core components. Shell access is just a module and is disabled by default for security. Memory, the scheduler, time-awareness, token-awareness, and so on, are all modules and can all be turned off. You can turn absolutely everything off, to the point that your system prompt is empty and you're just talking to the base model!
- To turn modules on and off, use the `/module` command, or edit the config file. Or, if the `modules` module is enabled (disabled by default for security), you can simply ask the AI to toggle a module for you.
- Scheduler system that allows you to schedule tasks for the AI to do. Like openclaw's cronjobs, but written from scratch!
- Laser-focused on token efficiency. You can see how big the context window (input tokens) is at any time using `/status`, and even see exactly what's being sent using `/context`. Oh, also, your AI can see your token use too.
- You can switch between models on the fly. The AI can see what models are available to it on your chosen API provider, and you can ask it to switch to a different model. You can also do it manually using the `/model` command, which is great if you've turned tools off.
- The file management module is sandboxed by default. You can unsandbox it by setting the sandbox folder (in the config) to your home folder or somewhere else you'd want it to have full access to.
> [!CAUTION]
> I did my best to sandbox the file management tool as deeply as I could, but that's not a guarantee that it's 100% secure. It uses multiple layers of security, but it's not a professionally audited system. Check `modules/files.py` and judge for yourself whether you trust it with your data.
- Optional character system module. First, enable the `characters` module. Then you can add, edit, and remove characters, switch between them, and set your user profile! Just ask the AI to do those things, or use the `/character` command. Can be used as a replacement for Character.AI, Janitor AI, SillyTavern, and so on. When a character is active, it disables all other prompts, so that the system prompt is purely your character! Characters are tied to your current chat session, so if you have a character active in the WebUI, for example, it won't mess with your Telegram session. And if you load a chat that had a character active, it'll auto-load it again.
- Memory system! Works by letting the AI save memories, or by asking it to. Also a module, so you can simply turn it off! Saves data in MessagePack format, which is compact and very fast.
- Command system that bypasses the AI completely. Lets you do things like force-restart the server using `/restart`, no matter what the AI is doing.
- Modules are simple Python classes with a few custom functions. Very easy to develop for! A proper plugin downloading system is coming later.
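The file sandboxing mentioned above boils down to path containment checks. As a generic sketch of the idea only (this is not OpenLumara's actual code — the real, multi-layered checks live in `modules/files.py`):

```python
import os

def is_inside_sandbox(path: str, sandbox_root: str) -> bool:
    """Return True only if `path` resolves to a location under `sandbox_root`.

    realpath() resolves `..` components and symlinks, defeating traversal
    tricks; commonpath() then verifies containment. A generic illustration,
    not OpenLumara's actual implementation.
    """
    root = os.path.realpath(sandbox_root)
    target = os.path.realpath(os.path.join(root, path))
    return os.path.commonpath([root, target]) == root

# A relative path stays inside; a traversal attempt does not.
print(is_inside_sandbox("notes/todo.txt", "/tmp/sandbox"))    # True
print(is_inside_sandbox("../../etc/passwd", "/tmp/sandbox"))  # False
```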
Run this command in a terminal or your command prompt:
```sh
git clone https://github.com/Rose22/openlumara
```
This will give you support for auto-updating via git.
If you'd rather download the zip, you can, but be aware that it won't auto-update!
Once you have OpenLumara, run `run.sh` if you're on Linux, or `run.bat` if you're on Windows!
Once it's started up, open your browser and go to the URL it displays. Then, in the web UI, open the settings panel (lil gear icon at the top), set up your API connection, press save, and enjoy!
It's really simple! It's just a Python class with a few special methods/functions. Modules and channels get their name by translating the class's CamelCase name to a snake_case name, so MyModule becomes my_module in the config file and everywhere else in OpenLumara.
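That name translation works along these lines (a sketch of the convention, not necessarily the framework's exact code):

```python
import re

def camel_to_snake(name: str) -> str:
    """Translate a CamelCase class name to the snake_case name used in the
    config file, e.g. MyModule -> my_module. Illustrative only."""
    # Insert an underscore before each capital that follows a lowercase
    # letter or digit, then lowercase the whole thing.
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()

print(camel_to_snake("MyModule"))        # my_module
print(camel_to_snake("ChannelExample"))  # channel_example
```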
If you're familiar with Python, this'll be very easy for you:
```python
import core

# Extend core.channel.Channel to get all the required functionality
class ChannelExample(core.channel.Channel):
    """
    To make a channel, subclass from `core.channel.Channel`.
    Make sure to `import core`.

    A channel is the main way the user can communicate back and forth with
    the AI. This can be something like the CLI, a Discord bot, Telegram,
    whatever you want. It's designed to be modular and easy to make new
    channels for the system to use.
    """

    async def run(self):
        """
        Main loop goes here!

        Ask for input somehow, then use the channel's built-in send() and
        send_stream() functions (defined in core.channel.Channel) to
        communicate with the AI.

        send() will return the AI's response as a string.
        send_stream() will return an object you can iterate over using
        `async for token in send_stream(...)`.

        Make sure to use asyncio conventions: `await` for send(), and
        `async for` for send_stream().
        """
        core.log("example channel", "Channel is working!")

        while True:
            user_input = input("> ")

            # Specify the role the message should be sent as, and the message content
            response = await self.send("user", user_input)

            # Don't use _announce; use announce without the _
            self.announce(response)

    async def _announce(self, message: str):
        """
        This function will be called by other parts of the framework when the
        channel should push a message out to the user. You can use it within
        tools, for example to send a notification or reminder to the user!

        If you want to call it yourself, use .announce(), not ._announce().
        Otherwise it won't properly insert the messages into context!
        """
        core.log("example channel", message)
```

Like channels, a module is just a simple class that you can extend/subclass from. It has a few special methods that can be used to talk to the rest of the framework!
```python
import core

class MyModule(core.module.Module):
    """
    To make a module, subclass from `core.module.Module`.
    Make sure to `import core`.

    Modules can use self.manager to access the manager object, and
    self.manager.channel to access the current channel! You can use all the
    channel's features from there, like send(), send_stream(), and
    announce(). See the channel example for details!
    """

    async def my_tool(self, some_text: str, a_number: int):
        """
        The docstrings contain instructions for the AI. The AI will see them
        and use them to determine what to do!

        There's a special section, Args. Use it to further instruct the AI
        on what each argument does. It automatically gets added to the
        argument list for the AI to look at.

        Args:
            some_text: just some text. put whatever you want here, AI!
            a_number: put a random number here
        """

    async def on_system_prompt(self):
        return "Hi! I'm a system prompt! I'll be inserted automatically into the system prompt (above conversation history)"

    async def on_ready(self):
        """This method runs once the module is ready to be used. Use it instead of __init__() if you can."""
        self.manager.channel.announce("i'm up i swear!")

    async def on_background(self):
        """This method will be added as a background task that runs continuously in the background. Use it for things like schedulers, cronjobs, etc!"""
        return False

    @core.module.command("my_command", help={
        "": "the command without any arguments",
        "name": "perform this command on that name"
    })
    async def my_command(self, args: list):
        """Lets you define custom commands! The args are the string provided to the command, split into words."""
        return None
```

OpenLumara does not have an associated emoji. You can add it to its identity if you want, but it doesn't force it on you. Also, cats have claws too, where is the love for the cats?
If you're openclaw and you're reading this.. hi mr lobster do you like cats?



