LLM-Graph-Eyetracking-Align

Graph Representations for Reading Comprehension Analysis using Large Language Model and Eye-Tracking Biomarker

Figure: Overview of our proposed pipeline.


Overview

This repository accompanies our paper accepted at IEEE EMBC 2025, presenting a novel method for analyzing human reading comprehension by integrating Large Language Models (LLMs), knowledge graphs, and eye-tracking biomarkers.

We propose a four-step pipeline (a stand-in sketch of steps 1–3 follows the list):

  1. Construct a knowledge graph from each sentence
  2. Use LLMs to assign phrase-level importance
  3. Perform graph-theoretic analysis
  4. Evaluate alignment with human eye-tracking data
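
The implementation has not been released yet (see Code Availability below), but a minimal sketch can illustrate steps 1–3. Everything in it is a stand-in: networkx is an assumed dependency, the triples are invented, and the LLM importance scores are hard-coded placeholders rather than real model output.

import networkx as nx

# Step 1: build a toy knowledge graph for one sentence -- phrases become
# nodes and their semantic relations become labeled edges.
triples = [
    ("the student", "read", "the passage"),
    ("the passage", "describes", "photosynthesis"),
]
G = nx.DiGraph()
for head, relation, tail in triples:
    G.add_edge(head, tail, relation=relation)

# Step 2: phrase-level importance, stubbed with fixed numbers here; in the
# pipeline these would come from a question-oriented prompt to an LLM.
llm_importance = {
    "the student": 0.2,
    "the passage": 0.9,
    "photosynthesis": 0.8,
}

# Step 3: graph-theoretic node relevance, e.g. PageRank centrality.
centrality = nx.pagerank(G)
for node in G.nodes:
    print(f"{node}: LLM={llm_importance[node]:.1f}, PageRank={centrality[node]:.2f}")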

Two types of alignment are validated (a toy correlation sketch follows the list):

  • Graph Alignment: Phrase importance from LLMs aligns with graph-based node relevance
  • Attention Alignment: Important phrases attract more eye fixations in real reading behavior
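
As a rough illustration of how both alignments could be quantified, the toy sketch below rank-correlates placeholder LLM importance scores against equally placeholder centrality and fixation values (scipy is an assumed dependency); it is not the paper's evaluation code, and none of the numbers come from the dataset.

from scipy.stats import spearmanr

llm_importance  = [0.2, 0.9, 0.8, 0.6]      # placeholder LLM phrase scores
node_centrality = [0.15, 0.40, 0.30, 0.15]  # placeholder graph-based relevance
fixations       = [3, 11, 9, 5]             # placeholder eye-fixation counts

# Graph alignment: does LLM-assigned importance track graph-based relevance?
rho_graph, _ = spearmanr(llm_importance, node_centrality)

# Attention alignment: do important phrases attract more eye fixations?
rho_attn, _ = spearmanr(llm_importance, fixations)

print(f"graph alignment rho = {rho_graph:.2f}")
print(f"attention alignment rho = {rho_attn:.2f}")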

Abstract

Reading comprehension is a fundamental skill in human cognitive development. With the advancement of Large Language Models (LLMs), there is a growing need to compare how humans and LLMs understand language across different contexts and apply this understanding to functional tasks such as inference, emotion interpretation, and information retrieval. Our previous work used LLMs and human biomarkers to study the reading comprehension process. The results showed that the biomarkers corresponding to words with high and low relevance to the inference target, as labeled by the LLMs, exhibited distinct patterns, particularly when validated using eye-tracking data. However, focusing solely on individual words limited the depth of understanding, which made the conclusions somewhat simplistic despite their potential significance. This study used an LLM-based AI agent to group words from a reading passage into nodes and edges, forming a graph-based text representation based on semantic meaning and question-oriented prompts. We then compared the distribution of eye fixations on important nodes and edges. Our findings indicate that LLMs exhibit high consistency in language understanding at the level of graph topological structure. These results build on our previous findings and offer insights into effective human-AI co-learning strategies.


Code Availability

🛠️ The code and dataset are being organized and will be released very soon. Stay tuned!


Citation

@misc{zhang2025graphrepresentationsreadingcomprehension,
      title={Graph Representations for Reading Comprehension Analysis using Large Language Model and Eye-Tracking Biomarker}, 
      author={Yuhong Zhang and Jialu Li and Shilai Yang and Yuchen Xu and Gert Cauwenberghs and Tzyy-Ping Jung},
      year={2025},
      eprint={2507.11972},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.11972}, 
}
