saurabhr/predict_perception_imagination_tgm


Predict imagined and perceived experiences using temporal generalization analysis of an MEG working-memory imagination task.

Slides

NOTES

  • MEG setup/analysis in Dijkstra et al. (2018) and Dijkstra et al. (2020) (copied directly from the research article):
    • The following five sensors were not recorded: MRF66, MLC11, MLC32, MLF62, MLO33.

    • MEG used: data were recorded at 1200 Hz with a 275-channel MEG system with axial gradiometers (VSM/CTF Systems, Coquitlam, BC, Canada).

    • Throughout the experiment head motion was monitored using a real-time head localizer (Stolk et al., 2013). If necessary, the experimenter instructed the participant back to the initial head position during the breaks. This way, head movement was kept below 8 mm in most participants. Furthermore, horizontal and vertical electrooculograms (EOGs) and an electrocardiogram (ECG) were recorded for subsequent offline removal of the eye-related and heart-related artifacts. Eye position and pupil size were also measured for control analyses using an Eye Link 1000 Eye tracker (SR Research).

    • Per trial, three events were defined. The first event was defined as 200 ms prior to onset of the first image until 200 ms after the offset of the first image. The second event was defined similarly for the second image. Further analyses focused only on the first event, because the neural response to the second image is contaminated by the neural response to the first image. Finally, the third event was defined as 200 ms prior to the onset of the retro-cue until 500 ms after the offset of the imagery frame. As a baseline correction, for each event, the activity during 300 ms from the onset of the initial fixation of that trial was averaged per channel and subtracted from the corresponding signals.

    • The data were down-sampled to 300 Hz to reduce memory and CPU load. Line noise at 50 Hz was removed from the data using a DFT notch filter. To identify artifacts, the variance of each trial was calculated. Trials with high variance were visually inspected and removed if they contained excessive artifacts. After artifact rejection, on average 108 perception face trials (±11), 107 perception house trials (±12) and 105 imagery face trials (±16) and 106 imagery house trials (±13) remained for analysis. To remove eye movement and heart rate artifacts, independent components of the MEG data were calculated and correlated with the EOG and ECG signals. Components with high correlations were manually inspected before removal. The eye tracker data was cleaned separately by inspecting trials with high variance and removing them if they contained blinks or other excessive artifacts.

      data =
               hdr: [1x1 struct]   % header information
             label: {187x1 cell}   % channel labels
             trial: {1x266 cell}   % data (Nchans*Nsamples) for each trial
              time: {1x266 cell}   % time axis for each trial
           fsample: 300            % sampling frequency
              grad: [1x1 struct]   % gradiometer structure
               cfg: [1x1 struct]   % the configuration used for processing the data
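
The FieldTrip-style structure above can be read into Python for further analysis. Below is a minimal sketch, assuming the structure was saved as a non-v7.3 MAT-file; the file name "subject01_preproc.mat", the variable name "data", and the baseline window are illustrative assumptions, not taken from this repository.

```python
# Minimal sketch: load a FieldTrip-style data structure (as shown above) into NumPy.
# Assumes a non-v7.3 .mat file; file and variable names are illustrative.
import numpy as np
from scipy.io import loadmat

mat = loadmat("subject01_preproc.mat", squeeze_me=True, struct_as_record=False)
data = mat["data"]

labels = list(data.label)                                   # 187 channel names
fsample = float(data.fsample)                               # 300 Hz after down-sampling
trials = [np.asarray(t, dtype=float) for t in data.trial]   # each (n_channels, n_samples)
times = [np.asarray(t, dtype=float) for t in data.time]     # matching time axis per trial

# Illustrative per-channel baseline correction over an assumed 300 ms window at the
# start of each epoch. (The paper baselines on the first 300 ms of the trial's initial
# fixation, which may fall elsewhere relative to these epochs.)
n_base = int(round(0.3 * fsample))
trials = [tr - tr[:, :n_base].mean(axis=1, keepdims=True) for tr in trials]

print(f"{len(trials)} trials, {len(labels)} channels at {fsample:g} Hz")
```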
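The temporal generalization analysis named in the project description trains a classifier at each training time point and evaluates it at every testing time point, producing a train-time × test-time score matrix (TGM); training on perception trials and testing on imagery trials asks whether perceptual representations re-activate during imagery. The sketch below uses MNE-Python's GeneralizingEstimator with synthetic stand-in data; the array shapes, labels, and the choice of MNE are assumptions, not this repository's actual pipeline.

```python
# Minimal temporal generalization sketch with MNE-Python. Synthetic arrays stand in
# for the real perception/imagery epochs (face vs. house labels); nothing here is
# taken from this repository's code.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from mne.decoding import GeneralizingEstimator, cross_val_multiscore

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 187, 60
X_per = rng.standard_normal((n_trials, n_channels, n_times))  # perception epochs
y_per = rng.integers(0, 2, n_trials)                          # face (0) vs. house (1)
X_img = rng.standard_normal((n_trials, n_channels, n_times))  # imagery epochs
y_img = rng.integers(0, 2, n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(solver="liblinear"))
time_gen = GeneralizingEstimator(clf, scoring="roc_auc", n_jobs=1)

# Within-condition TGM: train and test on perception with 5-fold cross-validation.
scores = cross_val_multiscore(time_gen, X_per, y_per, cv=5, n_jobs=1)
tgm_within = scores.mean(axis=0)              # (n_train_times, n_test_times)

# Cross-condition TGM: train on perception, test on imagery.
time_gen.fit(X_per, y_per)
tgm_cross = time_gen.score(X_img, y_img)      # (n_train_times, n_test_times)
```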

About

In search of the "Reality Generator" in the brain!
