
Code for: EEG Blink Artifacts Can Identify Read Music in Listening and Imagery

Link to paper: https://doi.org/10.48550/arXiv.2508.13807

[Fig. 1 image: eye blinks and saccades while reading a piece of music]

Fig. 1. a One chorale (CHOR-038) from the music imagery dataset [1,2]. b–c Top independent component analysis (ICA) component corresponding to eye blinks for Subject 1, extracted from 11 listening trials with synchronous reading of CHOR-038's sheet music. d–e Second ICA component for the same subject, chorale and condition (CHOR-038, listening), showing diagonal saccades (eye sweeps) between the two lines of music, occurring near the end of the first line (expected duration = 9 bars × 2.4 s/bar = 21.6 s).

Listen to CHOR-038.wav | remaining stimuli: drive | Score.pdf

Overview

This repo documents the steps needed to reproduce our classification results for mapping eye blinks during music reading to the piece of music being read. Our motivation is to explore whether eye blinks, given their relatively high amplitudes and ease of recording (compared to EEG activity from cortical sources), can still be useful, especially in wearable contexts where actual EEG may be hard to capture.

We use a public EEG dataset [1, 2] of professional musicians listening to or imagining music, synchronously with reading from a score. More information and decoding results using EEG are available in the publications associated with this dataset [3-5].

Here, we use the CND version of the dataset and import it into EEGLAB [6] and MNE-Python [7] for exploratory analysis and blink identification. EEGLAB is used for the convenience of running ICA [8] and ICLabel [9], followed by BLINKER [10] for extracting blinks from the eye-related ICs. The MNE-related code is supplied for the convenience of Jupyter notebooks and downstream processing in Python.
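As a starting point, a minimal sketch of importing one subject's CND .mat file into MNE-Python is shown below. The file name and the eeg.data / eeg.fs field layout are assumptions based on the CND convention; check the dataset's readme.txt for the actual structure.

```python
# Minimal sketch: one CND .mat file -> an MNE Raw object.
# Field names (eeg.data, eeg.fs) follow the CND convention but are
# assumptions here; verify against the dataset's readme.txt.
import numpy as np
import mne
from scipy.io import loadmat

mat = loadmat('dataSub1.mat', squeeze_me=True, struct_as_record=False)
eeg = mat['eeg']

# eeg.data is assumed to hold one (time x channels) array per trial;
# concatenate trials along time, then transpose to channels x samples.
data = np.concatenate([np.atleast_2d(tr) for tr in eeg.data], axis=0).T
info = mne.create_info(
    ch_names=[f'EEG{i + 1:03d}' for i in range(data.shape[0])],
    sfreq=float(eeg.fs), ch_types='eeg')
raw = mne.io.RawArray(data * 1e-6, info)  # assuming source units of microvolts
```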

Depending on how you prefer to work with the source data and on your preferred tool (EEGLAB or MNE-Python), you can execute or skip some of the steps below, which cover:

  • reading the public dataset (.mat files, as filtered and downsampled by the dataset authors; see readme.txt),
  • extracting eye blinks (or any other features), and
  • using blinks (or your features of choice) for music classification.

Alternatively, you could start from the numpy arrays of extracted blinks (by subject and condition) that we include in this repo. As the dataset provides no ground truth for verifying extracted blinks, we manually inspect and accept/reject the blink candidates proposed by BLINKER. Given the uncertainty and subjectivity of judging blinks, we limit this preliminary analysis to Subjects 1 through 8. The full dataset has N=21 professional musicians; our notebooks can load all .mat files into EEGLAB / MNE as needed for extending the analysis.
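If you take this shortcut, loading the arrays might look like the sketch below; the file name and array layout here are illustrative assumptions, so check the repo's notebooks for the actual paths and structure.

```python
# Hypothetical sketch of loading extracted blink times from this repo.
# The file name and layout (one array of blink times per trial) are
# illustrative assumptions; see the notebooks for the real structure.
import numpy as np

blinks = np.load('blinks_sub01_listening.npy', allow_pickle=True)
for i, trial in enumerate(blinks[:3]):
    print(f'trial {i}: blink times (s) = {trial}')
```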

In our analysis, we first identify blinks for the first 8 subjects in the dataset, and then try to identify which piece of music a subject was reading, separately for the listening and imagery trials (within subject). We compare blink timings across trials by calculating a neuronal spike-train distance, the Victor-Purpura metric [11], here applied to blink trains. This metric defines the distance between two blink trains (trials) as the minimum cost of converting one train into the other, where costs are incurred by adding, removing, and time-shifting blinks. We use the Victor-Purpura implementation from Elephant [12].
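In code, computing the distance between two blink trains with Elephant could look like the following sketch; the blink times are made-up values for illustration.

```python
# Sketch: Victor-Purpura distance between two blink trains via Elephant.
# Blink times below are made-up values for illustration only.
import quantities as pq
from neo import SpikeTrain
from elephant.spike_train_dissimilarity import victor_purpura_distance

t_stop = 30 * pq.s  # illustrative trial duration
trial_a = SpikeTrain([1.2, 4.7, 9.3, 15.0] * pq.s, t_stop=t_stop)
trial_b = SpikeTrain([1.5, 5.1, 14.2] * pq.s, t_stop=t_stop)

# cost_factor is the q discussed below: the price (in Hz) of shifting a
# blink in time, relative to a unit cost for adding/removing one blink.
D = victor_purpura_distance([trial_a, trial_b], cost_factor=1.0 * pq.Hz)
print(D[0, 1])  # entry of the pairwise distance matrix
```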

Of the 8 subjects analyzed for blinking activity, two were dropped because blinks could not be identified for all of their trials. The remaining 6 subjects showed above-chance decoding accuracy for identifying which of the four chorales was being read on a given (left-out) trial, by comparing its distance to the 43 other trials for the same subject and condition (listening/imagery). We also swept the cost factor q of the Victor-Purpura distance, which scales the cost of time-shifting blinks relative to adding/dropping them as one blink train is morphed into the other. We see the best accuracies when blinks are free to shift (q = 0), that is, when only the total blink counts are compared across different chorales:

[Fig. 2 image: per-subject classification accuracies for the listening and imagery conditions]

Fig. 2. Intra-subject sight-read music classification accuracies using the Victor-Purpura distance between blink times, with leave-one-trial-out cross-validation. The cost factor q was swept for each subject, with the highest accuracies seen for q = 0 Hz (equivalently 1/q = ∞ s), except for Subject 3 in the imagery condition.
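For reference, the leave-one-trial-out evaluation can be sketched as below, assuming a precomputed pairwise distance matrix. The 1-nearest-neighbor decision rule shown is one plausible choice, not necessarily the exact rule used in the paper.

```python
# Sketch: leave-one-trial-out classification from a precomputed
# Victor-Purpura distance matrix D (n_trials x n_trials) and chorale
# labels y. Names, shapes, and the 1-NN rule are illustrative.
import numpy as np

def loo_accuracy(D, y):
    y = np.asarray(y)
    correct = 0
    for i in range(len(y)):
        d = D[i].astype(float).copy()
        d[i] = np.inf                 # never match a trial to itself
        correct += (y[np.argmin(d)] == y[i])
    return correct / len(y)

# Sweeping q amounts to recomputing D for each cost factor, e.g.:
# for q in (0.0, 0.1, 1.0, 10.0):  # Hz
#     D = victor_purpura_distance(trials, cost_factor=q * pq.Hz)
#     print(q, loo_accuracy(D, y))
```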

Here's a step-by-step breakdown of all the scripts and notebooks in this repo:

Steps & Outputs

1. Environment setup

2. Downloading the CND dataset (.mat and stimuli files)

3. Converting the dataset to numpy arrays and FIFs (MNE-Python)

4. Extracting blinks from the .mat data using BLINKER, with visual inspection in MNE-Python

5. Computing the Victor-Purpura distance between paired trials (within subject & condition)

Supplemental

  • TBD: figures comparing blink variations across subjects/chorales/conditions

References

[1] Music listening / imagery dataset from Marion et al. in the CND format, https://cnsp-resources.readthedocs.io/en/latest/datasetsPage.html

[2] Marion, G., Di Liberto, G., & Shamma, S. (2021). Data from: The music of silence. Part I: Responses to musical imagery encode melodic expectations and acoustics (Version 4) [Dataset]. Dryad. https://doi.org/10.5061/dryad.dbrv15f0j

[3] Marion, G., Di Liberto, G. M., & Shamma, S. A. (2021). The Music of Silence. Part I: Responses to Musical Imagery Encode Melodic Expectations and Acoustics. Journal of Neuroscience, 41(35), 7435–7448. https://doi.org/10.1523/JNEUROSCI.0183-21.2021

[4] Di Liberto, G. M., Marion, G., & Shamma, S. A. (2021). The Music of Silence. Part II: Music Listening Induces Imagery Responses. Journal of Neuroscience, 41(35), 7449–7460. https://doi.org/10.1523/JNEUROSCI.0184-21.2021

[5] Di Liberto, G. M., Marion, G., & Shamma, S. A. (2021). Accurate Decoding of Imagined and Heard Melodies. Frontiers in Neuroscience, 15. https://www.frontiersin.org/articles/10.3389/fnins.2021.673401

[6] Delorme, A., & Makeig, S. (2004). EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods, 134(1), 9–21.

[7] Gramfort, A., Luessi, M., Larson, E., Engemann, D. A., Strohmeier, D., Brodbeck, C., Goj, R., Jas, M., Brooks, T., Parkkonen, L., & Hämäläinen, M. S. (2013). MEG and EEG data analysis with MNE-Python. Frontiers in Neuroscience, 7(267), 1–13. https://doi.org/10.3389/fnins.2013.00267

[8] Makeig, S., Bell, A. J., Jung, T.-P., & Sejnowski, T. J. (1996). Independent component analysis of electroencephalographic data. In D. Touretzky, M. Mozer, & M. Hasselmo (Eds.), Advances in Neural Information Processing Systems 8 (pp. 145–151). MIT Press.

[9] Pion-Tonachini, L., Kreutz-Delgado, K., & Makeig, S. (2019). ICLabel: An automated electroencephalographic independent component classifier, dataset, and website. NeuroImage, 198, 181–197.

[10] Kleifges, K., Bigdely-Shamlo, N., Kerick, S. E., & Robbins, K. A. (2017). BLINKER: Automated Extraction of Ocular Indices from EEG Enabling Large-Scale Analysis. Frontiers in Neuroscience, 11. https://www.frontiersin.org/articles/10.3389/fnins.2017.00012

[11] Victor, J. D., & Purpura, K. P. (1997). Metric-space analysis of spike trains: Theory, algorithms and application. Network: Computation in Neural Systems, 8(2), 127–164. https://doi.org/10.1088/0954-898X_8_2_003

[12] Denker, M., Yegenoglu, A., & Grün, S. (2018). Collaborative HPC-enabled workflows on the HBP Collaboratory using the Elephant framework. Neuroinformatics 2018, P19. doi:10.12751/incf.ni2018.0019
