QELM (Quantum-Enhanced Language Model) combines quantum computing and NLP to create compact yet powerful language models.
- Main script (current): `Qelm2.py` (trainer + GUI + utilities)
- Legacy script: `QelmT.py` (older unified trainer/inference)
The latest versions feature:
- Multi-block quantum transformer architecture with advanced multi-head quantum attention.
- Novel techniques such as sub-bit encoding and entropy-mixed gates that allow more representational power per qubit.
- Parameter-shift gradient training (with support for Adam and advanced quantum training modes).
- A unified GUI-first workflow in `Qelm2.py` for training, saving/loading, token maps, and advanced toggles.
- Noise mitigation options: Pauli twirling and zero-noise extrapolation (ZNE) with user-configurable scaling factors.
- Utility modes for dataset/token preprocessing, including local and HuggingFace prep flags.
QELM Quantum (connect to IBM quantum computers):
- Must have an IBM Quantum account
- Must have a basic understanding of running circuits
- Must be familiar with quantum computers (you can switch backends in the UI; mind shot/runtime budgets)
TensorFlow is not yet compatible with the latest Python releases. If you need a compatible Python version, download it from the official Python FTP archive, since python.org no longer provides an installer executable for those older releases.
Note: QELM’s core trainer does not require TensorFlow; TensorFlow is optional for experimental modules.
- What’s New in Qelm2.py?
- Architecture Overview
- Feature Matrix
- Features
- Installation
5.1. Prerequisites
5.2. Easy Installation
5.3. Cloning the Repository
5.4. Virtual Environment Setup
5.5. Dependency Installation
- Training with Qelm2.py
- Chatting with QELMChatUI.py
- Benchmarks & Metrics
- Running on Real QPUs (IBM, etc.)
- Project Structure
- Roadmap
- License
- Contact
- Unified GUI workflow: configure the model, train, save/load `.qelm`, manage token maps, and run inference from one interface.
- Noise mitigation: GUI toggles for Pauli twirling and ZNE, plus a scaling-factor field (e.g., `1,3,5`); see the extrapolation sketch after this list.
- Token/dataset tooling: built-in prep modes for generating token streams:
  - `--qelm_prep_tokens` for local text → token stream
  - `--qelm_prep_hf` for HuggingFace datasets → token stream
- LLM → QELM conversion: import LLM weights, then convert using your selected encoder/architecture options (where supported by your import path).
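To make the ZNE option concrete, here is a minimal sketch of zero-noise extrapolation under assumed scaling factors `1,3,5`; the measured values and the linear fit are illustrative, not the actual Qelm2.py implementation:

```python
# Zero-noise extrapolation (ZNE) sketch: evaluate an observable at several
# amplified noise scales, then extrapolate back to the zero-noise limit.
# The "measurements" below are made-up numbers for illustration.
import numpy as np

scales = np.array([1.0, 3.0, 5.0])           # noise scaling factors (GUI field "1,3,5")
noisy_values = np.array([0.82, 0.61, 0.44])  # expectation value at each scale (illustrative)

coeffs = np.polyfit(scales, noisy_values, deg=1)  # linear Richardson-style fit
zero_noise_estimate = np.polyval(coeffs, 0.0)     # extrapolate to scale -> 0
print(f"zero-noise estimate: {zero_noise_estimate:.3f}")
```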
QELM mirrors a transformer but swaps heavy linear algebra blocks for compact quantum circuits:
- Classical Embeddings → token → vector
- Quantum Attention (per head) → encode vector into qubits, entangle, extract features
- Quantum Feed-Forward / Channel Mixing → circuit blocks with trainable parameters
- Residual / Combine → classical post-processing
- Output Projection → vocab logits
Optional add-ons depend on your enabled flags (encoding modes, memory/context, mitigation, conversion encoders, etc.).
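To make the per-head attention step concrete, here is a hypothetical Qiskit sketch of the "encode → entangle → extract features" pattern; the function name, circuit layout, and readout choice are assumptions for illustration, not the circuits Qelm2.py actually builds:

```python
# Hypothetical quantum-attention head: angle-encode a classical vector into
# qubit rotations, entangle in a ring, apply a trainable rotation layer, and
# read out per-qubit Pauli-Z expectation values as the extracted features.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Pauli, Statevector

def quantum_attention_head(vec: np.ndarray, thetas: np.ndarray) -> np.ndarray:
    n = len(vec)
    qc = QuantumCircuit(n)
    for i, x in enumerate(vec):        # encode the input vector
        qc.ry(float(x), i)
    for i in range(n):                 # ring entanglement
        qc.cx(i, (i + 1) % n)
    for i, t in enumerate(thetas):     # trainable parameters
        qc.ry(float(t), i)
    state = Statevector.from_instruction(qc)
    return np.array([state.expectation_value(Pauli("Z"), [i]).real
                     for i in range(n)])

rng = np.random.default_rng(0)
print(quantum_attention_head(rng.uniform(0, np.pi, 4), rng.uniform(0, np.pi, 4)))
```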
| Area | Feature | Old (qelm.py / QelmT.py) | New (Qelm2.py) |
|---|---|---|---|
| Encoding | Scalar RY / basic encoding | ✔ | ✔ |
| | Sub-bit encoding | ✔ | ✔ (toggle) |
| | Advanced encoder options | limited | expanded |
| Attention | Single-block fallback | ✔ | Multi-block |
| Training | Parameter-shift gradients | ✔ | ✔ |
| Optimizers | Adam + advanced modes | ✔ | ✔ |
| GUI | Trainer UI | ✔ | New consolidated UI |
| Utilities | Token/dataset prep modes | limited | ✔ (`--qelm_prep_tokens`, `--qelm_prep_hf`) |
| Noise | Pauli twirling & ZNE | ✔ / partial | ✔ (GUI toggle + scaling) |
- Quantum Circuit Transformers:
  - Multi-block transformer architecture with quantum attention and feed-forward layers
  - Ring entanglement, data reuploading (when enabled), and residual connections
- Quantum Training Optimizations:
  - Parameter-shift gradient training with Adam and advanced training modes (see the gradient sketch after this list)
- Advanced Quantum Techniques:
  - Sub-bit encoding and entropy-controlled quantum channels
  - Multiple ansatz/encoding options for experimental setups
  - Noise mitigation: Pauli twirling and zero-noise extrapolation (ZNE), with selectable scaling factors
- Unified Script (`Qelm2.py`):
  - One consolidated script for training, inference, model save/load, token maps, and utilities
  - CLI tool modes for dataset/token prep
- Modern Chat UI (`QELMChatUI.py`):
  - ChatGPT-style conversation interface with message bubbles and session handling (where implemented)
  - Loads `.qelm` models + token maps to generate readable natural language
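For readers new to parameter-shift training, here is a minimal sketch of the rule itself; the `expectation` stand-in below replaces a real circuit evaluation and is purely illustrative:

```python
# Parameter-shift rule: for gates generated by a Pauli operator (e.g., RY),
# dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2, evaluated exactly
# from two circuit runs rather than finite differences.
import numpy as np

def expectation(theta: float) -> float:
    # Stand-in for a circuit run: <Z> after RY(theta)|0> equals cos(theta).
    return np.cos(theta)

def parameter_shift_grad(theta: float) -> float:
    s = np.pi / 2
    return (expectation(theta + s) - expectation(theta - s)) / 2.0

theta = 0.3
print(parameter_shift_grad(theta), -np.sin(theta))  # the two should match
```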
- Python 3.7+ (commonly tested up to 3.11)
- Qiskit and Qiskit Aer
- NumPy
- Tkinter (usually included with Python)
- psutil (optional, for CPU usage monitoring)
- datasets (optional; only required for `--qelm_prep_hf`)
Easy installation:

```bash
pip install qelm
```

Cloning the repository:

```bash
git clone https://github.com/R-D-BioTech-Alaska/QELM.git
cd QELM
```

Virtual environment setup:

```bash
python -m venv qelm_env
# On Linux/Mac:
source qelm_env/bin/activate
# On Windows:
qelm_env\Scripts\activate
```

Dependency installation:

```bash
pip install --upgrade pip
pip install -r requirements.txt
```

Run the trainer UI:

```bash
python Qelm2.py
```

Outputs:
- `.qelm` model file
- `<modelname>_token_map.json`
- Training logs (loss/perplexity where enabled)
(This model is 23 KB in size.)

The QELMChatUI.py script provides a ChatGPT-style interface for interacting with your QELM models.
- Model and Token Mapping: Load your `.qelm` model file along with the matching token map file (`*_token_map.json`) so responses map to real words.
- Modern Chat Interface: Message bubbles, history/session behavior, and UI features as implemented in your current chat build.
To run the chat UI:

```bash
python QELMChatUI.py
```

Core metrics to report:
- Loss / Cross-Entropy
- Perplexity
- Optional text metrics (BLEU / distinct-n) if you enable them in your evaluation workflow
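For reference, perplexity follows directly from the mean token-level cross-entropy; a minimal sketch with made-up numbers:

```python
# Perplexity = exp(mean cross-entropy over tokens). The NLLs are illustrative.
import numpy as np

token_nlls = np.array([2.1, 1.7, 2.4, 1.9])  # per-token negative log-likelihoods
cross_entropy = float(token_nlls.mean())
perplexity = float(np.exp(cross_entropy))
print(f"CE={cross_entropy:.3f}  PPL={perplexity:.2f}")
```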
If you run against IBM backends, ensure credentials are configured and select the backend you want.
Minimal example:
```python
from qiskit_ibm_runtime import QiskitRuntimeService

service = QiskitRuntimeService(channel="ibm_quantum", token="YOUR_TOKEN")
backend = service.backend("BACKEND_NAME")
```

Project structure:

```
QELM/
├── Qelm2.py          # Main consolidated trainer + GUI + utilities
├── QelmT.py          # Legacy trainer/inference (reference)
├── QELMChatUI.py     # Chat interface for QELM models
├── requirements.txt
├── Datasets/
├── docs/
│   └── images/
│       ├── qelm_logo_small.png
│       ├── qelmtrainer.png
│       ├── QELM_Diagram.png
│       ├── quantum.png
│       ├── chat.png
│       └── ctheo.jpg
├── README.md
└── LICENSE
```
- Backend abstraction beyond Aer/IBM
- Automated benchmark script: perplexity/BLEU/top-k in one JSON report
- Tokenizer upgrades: plug-in BPE/Unigram tokenizers
- Auto circuit diagrams per block for documentation
This project is licensed under the MIT License. See the LICENSE file for details.
For additional guidance, collaboration, or bug reports:
- Email: [email protected]
- Email: [email protected]
- GitHub: R-D-BioTech-Alaska
- Website: RDBioTech.org
- Website: Qelm.org
(Disclaimer: QELM is experimental; community feedback is greatly appreciated.)



