Can Language Models formulate ML problems from the messy data of interacting subsystems? Or, at a more fundamental level: can Language Models understand the deep, implicit causal relationships within evolving world models?
This is the problem we're exploring here. When multiple interacting subsystems are bound by implicit causal relationships spanning every component of a world model, a single change to any one component can send ripple effects through the entire system.
Tracking these effects requires causal understanding that goes beyond surface-level correlations. How can a single language model track them, ask the right fundamental questions, and hypothesise and test its understanding of a structured world model? This understanding of deep causal structure is the foundation for AI for Scientific Discovery, and for agents that can produce novel insights from dynamically interwoven world models across any domain.
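To make the idea of ripple effects concrete, here is a minimal, self-contained sketch of a toy structural causal model: intervening on one upstream variable shifts every downstream variable, even those that never reference it directly. The graph, variable names, and linear mechanisms are hypothetical and chosen purely for illustration; they are not taken from the papers.

```python
# Illustrative toy example: a tiny structural causal model (SCM) in which an
# intervention on one component propagates to every downstream component.
# The graph, variable names, and linear mechanisms are assumptions made
# purely for this sketch.

# Each node's value is a deterministic function of its parents.
mechanisms = {
    "supply":  lambda v: 10.0,                       # exogenous root
    "price":   lambda v: 100.0 - 2.0 * v["supply"],  # depends on supply
    "demand":  lambda v: 50.0 - 0.5 * v["price"],    # depends on price
    "revenue": lambda v: v["price"] * v["demand"],   # depends on both
}
order = ["supply", "price", "demand", "revenue"]     # topological order

def simulate(interventions=None):
    """Evaluate the SCM, optionally overriding nodes with do()-style interventions."""
    interventions = interventions or {}
    values = {}
    for node in order:
        values[node] = interventions.get(node, mechanisms[node](values))
    return values

baseline = simulate()
shocked = simulate(interventions={"supply": 5.0})    # do(supply = 5)

# A single upstream change shifts every downstream variable,
# even though 'revenue' never references 'supply' directly.
for node in order:
    print(f"{node:8s} baseline={baseline[node]:8.2f} after do(supply=5)={shocked[node]:8.2f}")
```

Recovering this kind of interventional structure from observational traces, rather than being handed it, is exactly what distinguishes causal understanding from surface-level correlation.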
[In Preparation]
All code for the five papers will be made open source after peer review.