blakete/mit-9-660-final-project


Modeling Human Inference of Simple Dynamical Rules: A Bayesian Program Induction Approach to Cellular Automata


Bayesian inference over CA rules


Paper


Read the Full Paper (PDF)


Overview

Humans can infer rich underlying structure from limited sequential data, often in ways that resemble approximate Bayesian inference over compositional hypothesis spaces. This project investigates that capacity in a tightly controlled dynamical domain: one-dimensional binary cellular automata (1D CAs). Five participants completed a sequential prediction task in which they observed partial CA evolutions generated by unknown update rules and predicted the next state from multiple-choice options with graded confidence ratings.

In parallel, we implemented a resource-rational Bayesian program-induction model in Gen.jl that performs online inference over a grammar of Boolean CA rules using Sequential Monte Carlo (SMC) with MCMC rejuvenation. Human responses correlated significantly with model predictions (Pearson r = 0.39), and both showed increasing confidence as evidence accumulated. Human accuracy (59%) exceeded chance (25%) but remained below the model's (95%), with the gap varying by rule complexity class.

Key Features

  • Grammar-Based Program Induction: Probabilistic context-free grammar over Boolean ASTs representing all 256 Wolfram CA rules
  • SMC with MCMC Rejuvenation: Online Bayesian inference with five structure-modifying proposal kernels (grow, prune, swap-op, swap-atom, toggle-not)
  • Human Behavioral Experiment: Browser-based sequential prediction task with 7-point confidence ratings across 7 CA rules and 6 time steps
  • Human vs. Model Comparison: Systematic evaluation across Wolfram complexity classes (I–IV)
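As a concrete reference for the hypothesis space above: each of the 256 elementary (Wolfram) rules is fully specified by an 8-entry lookup table over 3-cell neighborhoods. The sketch below is illustrative only (the actual model is a Boolean-AST grammar in Julia/Gen.jl, not a lookup table); the `wolfram_rule` name and the periodic boundary are assumptions of this sketch.

```python
def wolfram_rule(rule_number):
    """Return a step function for one of the 256 elementary CA rules.

    Bit k of `rule_number` is the output for the 3-cell neighborhood
    whose binary value (4*left + 2*center + right) equals k.
    """
    table = [(rule_number >> k) & 1 for k in range(8)]

    def step(row):
        n = len(row)
        # Periodic (wrap-around) boundary: an assumption of this sketch.
        return [table[4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n]]
                for i in range(n)]

    return step

step110 = wolfram_rule(110)
step110([0, 0, 0, 0, 1, 0, 0, 0, 0])  # -> [0, 0, 0, 1, 1, 0, 0, 0, 0]
```

The rule number itself is the truth table read as a binary integer, which is why the space is exactly 2^8 = 256 rules.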

Experiment

Try the Experiment

We selected 7 CA rules spanning all four Wolfram complexity classes. Participants observed partial evolutions (width = 17 cells) and rated four candidate next-row continuations on a 1–7 Likert scale.
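Stimuli of this form can be produced by iterating a rule forward from an initial row. The sketch below is not the experiment's actual stimulus generator: the `evolve` name, the random initial condition, the seed handling, and the periodic boundary are all assumptions made for illustration.

```python
import random

def evolve(rule_number, width=17, steps=6, seed=0):
    """Produce a partial CA evolution: a random initial row of `width`
    cells followed by `steps` successive states under the given rule."""
    rng = random.Random(seed)
    table = [(rule_number >> k) & 1 for k in range(8)]
    row = [rng.randint(0, 1) for _ in range(width)]
    history = [row]
    for _ in range(steps):
        # Periodic boundary: an assumption of this sketch.
        row = [table[4 * row[(i - 1) % width] + 2 * row[i] + row[(i + 1) % width]]
               for i in range(width)]
        history.append(row)
    return history

history = evolve(30)  # Rule 30 (Class III): 17 cells, 6 steps after the initial row
```

On each trial, a participant would see a prefix of such a history and choose among four candidate next rows.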


Examples from each Wolfram class. Left to right: Class I (Rule 32), Class II (Rule 5), Class III (Rule 30), Class IV (Rule 110).


Web experiment interface. Left: start page. Center: early trial (t=1). Right: late trial (t=6) with a chaotic rule.


Results

Accuracy Over Time

Accuracy by time step

Both human and model accuracy exceed chance (25%). Human accuracy increases from 20% at t=1 to ~70% by t=3, then plateaus. The model reaches near-perfect accuracy after a single observation.
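To see why a single observation can be this informative: with only 256 candidate rules and a noiseless likelihood, one observed transition fixes every rule-table bit whose neighborhood appears somewhere in the row. The sketch below uses brute-force enumeration rather than the paper's grammar-based SMC, and assumes a uniform prior and periodic boundaries; the function names are illustrative.

```python
def rule_step(rule_number, row):
    """One CA update under an elementary rule (periodic boundary assumed)."""
    table = [(rule_number >> k) & 1 for k in range(8)]
    n = len(row)
    return [table[4 * row[(i - 1) % n] + 2 * row[i] + row[(i + 1) % n]]
            for i in range(n)]

def posterior(observed_pairs):
    """Exact posterior over all 256 rules under a uniform prior, given
    observed (row, next_row) transitions. With a noiseless likelihood,
    the posterior is uniform over the rules consistent with every pair."""
    consistent = [r for r in range(256)
                  if all(rule_step(r, x) == y for x, y in observed_pairs)]
    p = 1.0 / len(consistent)
    return {r: p for r in consistent}

# A row that cyclically contains all 8 neighborhoods (a de Bruijn sequence):
row = [0, 0, 0, 1, 0, 1, 1, 1]
posterior([(row, rule_step(110, row))])  # -> {110: 1.0}
```

By contrast, a single-seed row exposes only four neighborhoods (000, 001, 010, 100), leaving 2^4 = 16 rules consistent, so how quickly the posterior collapses depends on which neighborhoods the stimulus happens to contain.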

Confidence in the Correct Option

Correct option confidence over time

Human confidence in the correct answer rises steadily from ~4 (unsure) to ~6 (likely), while model probability increases more rapidly. Both exhibit evidence accumulation consistent with Bayesian updating.

Human vs. Model by Wolfram Class

Accuracy by Wolfram class

Class I (Fixed) rules are easiest for both humans (80%) and model (83%). The human–model gap widens with complexity: Class IV (Complex) rules yield only 42% human accuracy versus 100% for the model.

Accuracy by Individual Rule

Accuracy by rule


Citation

If you use this work, please cite:

@techreport{edwards2025modeling,
  title={Modeling Human Inference of Simple Dynamical Rules: A Bayesian Program Induction Approach to Cellular Automata},
  author={Edwards, Blake},
  institution={Massachusetts Institute of Technology},
  year={2025}
}

License

This project is licensed under the MIT License — see the LICENSE file for details.

About

Bayesian program induction over Boolean cellular automata rules using SMC with MCMC rejuvenation in Gen.jl, compared against human behavioral data across Wolfram complexity classes. MIT 9.660 Final Project.
