anjaustin/flynnconceivable

FLYNNCONCEIVABLE!

The Neural Network That Became a CPU

A FLYNNCOMM, LLC Production

███████╗██╗  ██╗   ██╗███╗   ██╗███╗   ██╗ ██████╗ ██████╗ ███╗   ██╗
██╔════╝██║  ╚██╗ ██╔╝████╗  ██║████╗  ██║██╔════╝██╔═══██╗████╗  ██║
█████╗  ██║   ╚████╔╝ ██╔██╗ ██║██╔██╗ ██║██║     ██║   ██║██╔██╗ ██║
██╔══╝  ██║    ╚██╔╝  ██║╚██╗██║██║╚██╗██║██║     ██║   ██║██║╚██╗██║
██║     ███████╗██║   ██║ ╚████║██║ ╚████║╚██████╗╚██████╔╝██║ ╚████║
╚═╝     ╚══════╝╚═╝   ╚═╝  ╚═══╝╚═╝  ╚═══╝ ╚═════╝ ╚═════╝ ╚═╝  ╚═══╝

 ██████╗███████╗██╗██╗   ██╗ █████╗ ██████╗ ██╗     ███████╗██╗
██╔════╝██╔════╝██║██║   ██║██╔══██╗██╔══██╗██║     ██╔════╝██║
██║     █████╗  ██║██║   ██║███████║██████╔╝██║     █████╗  ██║
██║     ██╔══╝  ██║╚██╗ ██╔╝██╔══██║██╔══██╗██║     ██╔══╝  ╚═╝
╚██████╗███████╗██║ ╚████╔╝ ██║  ██║██████╔╝███████╗███████╗██╗
 ╚═════╝╚══════╝╚═╝  ╚═══╝  ╚═╝  ╚═╝╚═════╝ ╚══════╝╚══════╝╚═╝

"You keep using that transformer. I do not think it computes what you think it computes."

Yeah, it computes EXACTLY what we think it computes.

460,928 combinations. Zero errors. 100% accuracy.

TRON (1982) + Princess Bride (1987) + Transformers (2017+) + 6502 (1975)
═══════════════════════════════════════════════════════════════════════
                        FLYNNCONCEIVABLE! (2024)

The Grid

Just as Kevin Flynn was digitized into the computer and became one with The Grid, FLYNNCONCEIVABLE! transforms neural networks into a living CPU. Every arithmetic operation, every logic gate, every comparison—computed by trained neural networks achieving 100% accuracy.

The neural network IS the CPU. Not simulating. Computing.


Quick Start

from flynnconceivable import CPU

# Create CPU with pretrained neural organs
cpu = CPU(weights_dir='flynnconceivable/weights')

# Load and run a program
program = [
    0xA9, 0x25,  # LDA #$25 (37)
    0x18,        # CLC
    0x69, 0x1A,  # ADC #$1A (26)
    0x00,        # BRK
]
cpu.load(program)
cpu.run()

print(f"Result: {cpu.A}")  # 63 - computed by neural networks!

Run the Demo

python flynnconceivable/demo.py

Watch FLYNNCONCEIVABLE! compute:

  • Addition and subtraction
  • Logic operations (AND, OR, XOR)
  • Bit shifts and rotates
  • Fibonacci sequence
  • Multiplication via shift-and-add

Neural Organs

FLYNNCONCEIVABLE!'s consciousness is distributed across specialized neural networks called "organs":

Organ     Operations                                 Combinations   Accuracy
ALU       ADC, SBC                                   131,072        100%
SHIFT     ASL, LSR, ROL, ROR                         1,536          100%
LOGIC     AND, ORA, EOR, BIT                         262,144        100%
INCDEC    INC, DEC, INX, DEX, INY, DEY               512            100%
COMPARE   CMP, CPX, CPY                              65,536         100%
BRANCH    BPL, BMI, BVC, BVS, BCC, BCS, BNE, BEQ     128            100%

Total: 460,928 verified combinations


Architecture

Soroban Encoding (ALU)

The ALU uses "Soroban" (thermometer) encoding for arithmetic. This representation makes carry propagation visible to the neural network—like seeing the data streams in The Grid:

Value 37 in Soroban (4 rods):
Rod 0 (1s):    ●●●●●●●○  = 7
Rod 1 (10s):   ●●●○○○○○  = 3
Rod 2 (100s):  ○○○○○○○○  = 0
Rod 3 (1000s): ○○○○○○○○  = 0
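A minimal sketch of the idea (the function names and the 9-beads-per-rod width are assumptions for illustration, not the project's actual soroban.py API):

```python
def soroban_encode(value, rods=4, beads=9):
    """Thermometer-encode a non-negative integer, one rod per decimal digit.

    Each rod is a list of `beads` bits: digit d -> d ones followed by zeros.
    Nine beads per rod covers decimal digits 0-9 (an assumed width).
    """
    encoding = []
    for _ in range(rods):
        digit = value % 10
        encoding.append([1] * digit + [0] * (beads - digit))
        value //= 10
    return encoding

def soroban_decode(encoding):
    """Invert the encoding by counting the set beads on each rod."""
    return sum(sum(rod) * 10 ** i for i, rod in enumerate(encoding))
```

With this sketch, `soroban_encode(37)[0]` is the 1s rod with seven leading ones, and the carry structure of decimal addition shows up as rods filling and overflowing, rather than being hidden inside binary bit patterns.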

Binary Encoding (Logic/Shift)

Logic and shift operations use direct binary encoding since they operate on individual bits independently.
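A sketch of why plain bit vectors suffice here (helper names are illustrative, not the project's API): each output bit of AND/ORA/EOR depends only on the corresponding input bits, so the network never has to learn cross-bit interactions like carries.

```python
def to_bits(value, width=8):
    """Unpack a byte into a list of bits, least-significant bit first."""
    return [(value >> i) & 1 for i in range(width)]

def from_bits(bits):
    """Repack an LSB-first bit list into an integer."""
    return sum(bit << i for i, bit in enumerate(bits))

# Bitwise ops act on each position independently: position i of the
# output is a function of position i of the inputs, and nothing else.
a, b = to_bits(0xCA), to_bits(0xF0)
and_result = from_bits([x & y for x, y in zip(a, b)])
```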

Design Principles

  1. Neural computation: All arithmetic/logic performed by neural networks
  2. Deterministic control: Instruction decoding, addressing, memory access remain deterministic
  3. Exhaustive training: Every possible input combination used for training
  4. 100% accuracy: No approximations—FLYNNCONCEIVABLE! must be perfect
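The "exhaustive training" principle amounts to enumerating the entire input space as ground truth. A hypothetical generator for one operation (not the project's actual data.py, and how the project apportions the ALU's 131,072 cases between ADC and SBC is an assumption left open here):

```python
def adc_ground_truth():
    """Enumerate every (a, b, carry_in) case for 8-bit add-with-carry,
    yielding the inputs plus the exact result byte and carry-out bit."""
    for a in range(256):
        for b in range(256):
            for carry in (0, 1):
                total = a + b + carry
                yield (a, b, carry), (total & 0xFF, total >> 8)

cases = list(adc_ground_truth())
# 256 * 256 * 2 = 131,072 (a, b, carry) cases for this one operation.
```

Because the input space is finite and fully enumerated, "100% accuracy" is a checkable property, not a statistical estimate.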

Project Structure

flynnconceivable/
├── cpu.py              # Main CPU class (700+ lines)
├── memory.py           # 64KB RAM with memory-mapped I/O
├── soroban.py          # Thermometer encoding utilities
├── demo.py             # Interactive demonstration
├── organs/
│   ├── alu.py          # Neural ALU (ADC, SBC)
│   ├── shift.py        # Neural shifts (ASL, LSR, ROL, ROR)
│   ├── logic.py        # Neural logic (AND, ORA, EOR, BIT)
│   ├── incdec.py       # Neural inc/dec operations
│   ├── compare.py      # Neural comparisons (CMP, CPX, CPY)
│   └── branch.py       # Neural branch decisions
├── training/
│   ├── data.py         # Ground truth data generators
│   └── train_all.py    # Master training script
├── weights/
│   ├── alu.pt          # Pretrained ALU (1.7MB)
│   ├── shift.pt        # Pretrained SHIFT (418KB)
│   ├── logic.pt        # Pretrained LOGIC (425KB)
│   ├── incdec.pt       # Pretrained INCDEC (413KB)
│   ├── compare.pt      # Pretrained COMPARE (696KB)
│   └── branch.pt       # Pretrained BRANCH (15KB)
├── README.md           # This file
└── BUILD_LOG.md        # Development history

Supported Instructions

Arithmetic (Neural ALU)

  • ADC - Add with Carry
  • SBC - Subtract with Carry

Logic (Neural LOGIC)

  • AND - Logical AND
  • ORA - Logical OR
  • EOR - Exclusive OR
  • BIT - Bit Test

Shifts (Neural SHIFT)

  • ASL - Arithmetic Shift Left
  • LSR - Logical Shift Right
  • ROL - Rotate Left
  • ROR - Rotate Right

Inc/Dec (Neural INCDEC)

  • INC - Increment Memory
  • DEC - Decrement Memory
  • INX/INY - Increment X/Y
  • DEX/DEY - Decrement X/Y

Compare (Neural COMPARE)

  • CMP - Compare Accumulator
  • CPX - Compare X
  • CPY - Compare Y

Branches (Neural BRANCH)

  • BPL/BMI - Branch on Plus/Minus
  • BVC/BVS - Branch on Overflow Clear/Set
  • BCC/BCS - Branch on Carry Clear/Set
  • BNE/BEQ - Branch on Not Equal/Equal

Other (Deterministic)

  • LDA/LDX/LDY - Load registers
  • STA/STX/STY - Store registers
  • TAX/TXA/TAY/TYA - Transfer registers
  • PHA/PLA/PHP/PLP - Stack operations
  • JMP/JSR/RTS - Jumps and subroutines
  • CLC/SEC/CLI/SEI/CLV/CLD/SED - Flag operations
  • NOP/BRK - No operation / Break

Example Programs

Fibonacci

program = [
    0xA9, 0x01,  # LDA #1
    0x85, 0x00,  # STA $00
    0x85, 0x01,  # STA $01
    0xA2, 0x0A,  # LDX #10
    # LOOP:
    0x18,        # CLC
    0xA5, 0x00,  # LDA $00
    0x65, 0x01,  # ADC $01
    0xA8,        # TAY
    0xA5, 0x01,  # LDA $01
    0x85, 0x00,  # STA $00
    0x84, 0x01,  # STY $01
    0xCA,        # DEX
    0xD0, 0xF1,  # BNE LOOP (relative offset -15, back to CLC)
    0x00,        # BRK
]
# Result: 144 (the 10th value produced by the loop)
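The assembly above is the classic pair-shuffle. The same register dance in straight Python, for comparison (mem0/mem1 mirror zero page $00/$01, y mirrors the Y register):

```python
def fib_loop(iterations=10):
    """Mirror the 6502 loop: mem0/mem1 hold the current Fibonacci pair."""
    mem0 = mem1 = 1                  # LDA #1 / STA $00 / STA $01
    for _ in range(iterations):      # LDX #10 ... DEX / BNE LOOP
        y = mem0 + mem1              # CLC / LDA $00 / ADC $01 / TAY
        mem0 = mem1                  # LDA $01 / STA $00
        mem1 = y                     # STY $01
    return mem1
```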

Multiplication (7 × 13)

program = [
    0xA9, 0x07,  # LDA #7
    0x85, 0x00,  # STA $00 (save 7×1)
    0x0A,        # ASL A (14)
    0x0A,        # ASL A (28 = 7×4)
    0x85, 0x01,  # STA $01
    0x0A,        # ASL A (56 = 7×8)
    0x18,        # CLC
    0x65, 0x00,  # ADC $00 (56+7=63)
    0x65, 0x01,  # ADC $01 (63+28=91)
    0x00,        # BRK
]
# Result: 7 × 13 = 91
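The decomposition the assembly hard-codes (13 = 8 + 4 + 1, so 7×13 = 7×8 + 7×4 + 7×1) generalizes to any multiplier. A sketch of the general shift-and-add loop:

```python
def shift_and_add(a, b):
    """Multiply by accumulating `a` shifted left once per set bit of `b`,
    the same trick the assembly performs with ASL and ADC."""
    result = 0
    while b:
        if b & 1:          # this bit of the multiplier is set...
            result += a    # ...so add the current shifted multiplicand
        a <<= 1            # ASL: next power-of-two multiple of a
        b >>= 1            # examine the next multiplier bit
    return result
```

The hand-written program above is just this loop unrolled for b = 13, keeping only the additions for the set bits.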

Training Your Own

To retrain FLYNNCONCEIVABLE!'s neural organs:

# Train all organs
python -m flynnconceivable.training.train_all

# Train specific organ
python -m flynnconceivable.training.train_all --organ alu

# Verify trained organs
python -m flynnconceivable.training.train_all --verify

The Philosophy

"You keep using that transformer. I do not think it computes what you think it computes."

FLYNNCONCEIVABLE! proves that neural networks can perform exact digital computation. Not approximately—exactly. Every single one of the 460,928 tested input combinations produces the mathematically correct output.

This isn't emulation. This isn't simulation.

The neural network IS the CPU.

The weights are the logic. The inference is the computation.


License

Copyright (c) 2024 FLYNNCOMM, LLC

MIT License - See LICENSE file for details.


FLYNNCONCEIVABLE!

flynnconceivable.io
