
Commit 5278020

Deploying to gh-pages from @ 6093127 🚀
0 parents  commit 5278020

721 files changed

Lines changed: 193933 additions & 0 deletions


.gitignore

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
*.pyc
*.swp
.cache
.ipynb_checkpoints
*.bbl
*-blx.bib
*.swp
*.aux
*.log
*.out
*.run.xml
*.blg
*.bcf
*.synctex.gz
*.fls
*.fdb_latexmk
*.swo
*#
*.toc
*.egg-info
.doit*
doc/_build
doc/auto_examples
doc/generated
build
dist
html
doc/functions
.vscode
.coverage*
.noseids
junit-results.xml
TAGS
tags
tests/.coverage*

.nojekyll

Whitespace-only changes.

0.10/_downloads/01b66faa2c7ff6340cb83a39f4121888/01_sensor_level_tutorial.ipynb

Lines changed: 363 additions & 0 deletions
Large diffs are not rendered by default.
Binary file not shown.
Lines changed: 169 additions & 0 deletions
@@ -0,0 +1,169 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n# Source-level RSA using a searchlight on volumetric data\n\nThis example demonstrates how to perform representational similarity analysis (RSA) on\nvolumetric source localized MEG data, using a searchlight approach.\n\nIn the searchlight approach, representational similarity is computed between the model\nand searchlight \"patches\". A patch is defined by a seed voxel in the source space and\nall voxels within a given radius. By default, patches are created using each voxel as a\nseed point, so you can think of it as a \"searchlight\" that scans through the brain.\n\nThe radius of a searchlight can be defined in space, in time, or both. In this example,\nour searchlight will have a spatial radius of 1 cm. To save computation time, we will\nonly perform the RSA on a single time point, but feel free to experiment with specifying\na temporal radius and performing the RSA in time as well.\n\nThe dataset will be the MNE-sample dataset: a collection of 288 epochs in which the\nparticipant was presented with an auditory beep or visual stimulus to either the left or\nright ear or visual field.\n\n| Authors:\n| Marijn van Vliet <marijn.vanvliet@aalto.fi>\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# sphinx_gallery_thumbnail_number=2\n\n# Import required packages\nimport mne\nimport mne_rsa\nfrom nilearn.plotting import plot_stat_map\n\nmne.set_log_level(False)  # Be less verbose"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We'll be using the data from the MNE-sample set.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "sample_root = mne.datasets.sample.data_path(verbose=True)\nsample_path = sample_root / \"MEG\" / \"sample\"\nmri_dir = sample_root / \"subjects\" / \"sample\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Creating epochs from the continuous (raw) data. We downsample to 100 Hz to speed up\nthe RSA computations later on.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "raw = mne.io.read_raw_fif(sample_path / \"sample_audvis_filt-0-40_raw.fif\")\nevents = mne.read_events(sample_path / \"sample_audvis_filt-0-40_raw-eve.fif\")\nevent_id = {\"audio/left\": 1, \"audio/right\": 2, \"visual/left\": 3, \"visual/right\": 4}\nepochs = mne.Epochs(raw, events, event_id, preload=True)\nepochs.resample(100)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "It's important that the model RDM and the epochs are in the same order, so that each\nrow in the model RDM will correspond to an epoch. The model RDM will be easier to\ninterpret visually if the data is ordered such that all epochs belonging to the same\nexperimental condition are right next to each other, so that patterns jump out. This can\nbe achieved by first splitting the epochs by experimental condition and then\nconcatenating them together again.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "epoch_splits = [\n    epochs[cl] for cl in [\"audio/left\", \"audio/right\", \"visual/left\", \"visual/right\"]\n]\nepochs = mne.concatenate_epochs(epoch_splits)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now that the epochs are in the proper order, we can create an RDM based on the\nexperimental conditions. This type of RDM is referred to as a \"sensitivity RDM\". Let's\ncreate a sensitivity RDM that will pick up the left auditory response when RSA-ed\nagainst the MEG data. Since we want to capture areas where left beeps generate a large\nsignal, we specify that left beeps should be similar to other left beeps. Since we do\nnot want to pick up areas where visual stimuli generate a large signal, we specify that\nbeeps must be different from visual stimuli. Furthermore, since in areas where visual\nstimuli generate only a small signal, random noise will dominate, we also specify that\nvisual stimuli are different from other visual stimuli. Finally, left and right\nauditory beeps will be somewhat similar.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "def sensitivity_metric(event_id_1, event_id_2):\n    \"\"\"Determine similarity between two epochs, given their event ids.\"\"\"\n    if event_id_1 == 1 and event_id_2 == 1:\n        return 0  # Completely similar\n    if event_id_1 == 2 and event_id_2 == 2:\n        return 0.5  # Somewhat similar\n    elif event_id_1 == 1 and event_id_2 == 2:\n        return 0.5  # Somewhat similar\n    elif event_id_1 == 2 and event_id_2 == 1:\n        return 0.5  # Somewhat similar\n    else:\n        return 1  # Not similar at all\n\n\nmodel_rdm = mne_rsa.compute_rdm(epochs.events[:, 2], metric=sensitivity_metric)\nmne_rsa.plot_rdms(model_rdm, title=\"Model RDM\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "This example works at the source level, so let's load the inverse operator and\napply it to obtain a volumetric source estimate for each epoch.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "inv = mne.minimum_norm.read_inverse_operator(\n    sample_path / \"sample_audvis-meg-vol-7-meg-inv.fif\"\n)\nepochs_stc = mne.minimum_norm.apply_inverse_epochs(epochs, inv, lambda2=0.1111)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Performing the RSA. This will take some time. Consider increasing ``n_jobs`` to\nparallelize the computation across multiple CPUs.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "rsa_vals = mne_rsa.rsa_stcs(\n    epochs_stc,  # The source localized epochs\n    model_rdm,  # The model RDM we constructed above\n    src=inv[\"src\"],  # The inverse operator has our source space\n    stc_rdm_metric=\"correlation\",  # Metric to compute the MEG RDMs\n    rsa_metric=\"kendall-tau-a\",  # Metric to compare model and MEG RDMs\n    spatial_radius=0.01,  # Spatial radius of the searchlight patch\n    temporal_radius=None,  # Don't perform the searchlight over time\n    tmin=0.09,\n    tmax=0.11,  # Time interval to analyze\n    n_jobs=1,  # Only use one CPU core. Increase this for more speed.\n    verbose=False,  # Set to True to display a progress bar\n)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Here is how to plot the result using nilearn.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "img = rsa_vals.as_volume(inv[\"src\"], mri_resolution=False)\nt1_fname = mri_dir / \"mri\" / \"T1.mgz\"\nplot_stat_map(img, t1_fname, threshold=0.1)"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.11"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
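
An editorial aside on the notebook above: it leaves ``temporal_radius=None`` and invites readers to experiment with a searchlight in time as well. Below is a minimal sketch of what that might look like. It assumes the ``epochs_stc``, ``model_rdm``, and ``inv`` objects created in the notebook are in scope, and that the temporal radius, like ``tmin``/``tmax``, is given in seconds; the 50 ms radius and the 0 to 0.4 s window are illustrative choices, not values from the notebook.

```python
import mne_rsa

# Sketch: extend the searchlight over time as well as space, reusing the
# `epochs_stc`, `model_rdm`, and `inv` objects from the notebook above.
rsa_vals_time = mne_rsa.rsa_stcs(
    epochs_stc,
    model_rdm,
    src=inv["src"],
    stc_rdm_metric="correlation",
    rsa_metric="kendall-tau-a",
    spatial_radius=0.01,   # same 1 cm spatial patches as the notebook
    temporal_radius=0.05,  # hypothetical 50 ms window slid through time
    tmin=0.0,
    tmax=0.4,              # a longer interval than the notebook's single time point
    n_jobs=1,              # increase to parallelize across CPU cores
    verbose=True,          # show a progress bar; this computation takes a while
)
```

The result is then a time-resolved source estimate of RSA scores rather than a single map, at a correspondingly higher computational cost.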
Lines changed: 133 additions & 0 deletions
@@ -0,0 +1,133 @@
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n# Sensor-level RSA using mixed sensor types\n\nThis example demonstrates how to perform representational similarity analysis (RSA) on\nMEEG data containing magnetometers, gradiometers and EEG channels. In this scenario\nthere are important things we need to keep in mind:\n\n1. Different sensor types see the underlying sources from different perspectives, hence\n   spatial searchlight patches based on the sensor positions are a bad idea. We will\n   perform a searchlight over time only, pooling data from all sensors at all times.\n2. The sensors have different units of measurement, hence the numeric data is in\n   different orders of magnitude. If we don't compensate for this, only the sensors with\n   data in the highest order of magnitude will matter when computing RDMs. We will\n   compute a noise covariance matrix and perform data whitening to compensate for this.\n\nThe dataset will be the MNE-sample dataset: a collection of 288 epochs in which the\nparticipant was presented with an auditory beep or visual stimulus to either the left or\nright ear or visual field.\n\n| Authors:\n| Marijn van Vliet <marijn.vanvliet@aalto.fi>\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# sphinx_gallery_thumbnail_number=2\n\n# Import required packages\nimport operator\n\nimport mne\nimport mne_rsa\nimport numpy as np\n\nmne.set_log_level(False)  # Be less verbose"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We'll be using the data from the MNE-sample set.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "sample_root = mne.datasets.sample.data_path(verbose=True)\nsample_path = sample_root / \"MEG\" / \"sample\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Creating epochs from the continuous (raw) data. We downsample to 100 Hz to speed up\nthe RSA computations later on.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "raw = mne.io.read_raw_fif(sample_path / \"sample_audvis_filt-0-40_raw.fif\")\nevents = mne.read_events(sample_path / \"sample_audvis_filt-0-40_raw-eve.fif\")\nevent_id = {\"audio/left\": 1, \"visual/left\": 3}\nepochs = mne.Epochs(raw, events, event_id, preload=True)\nepochs.resample(100)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Plotting the evokeds for each sensor type. Note the difference in scaling of the values\n(= the y-limits of the plot).\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "epochs.average().plot()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To estimate the differences in signal amplitude between the different sensor types, we\ncompute the (co-)variance during a period of relative rest in the signal: the baseline\nperiod (-200 to 0 milliseconds). See [MNE-Python's covariance tutorial](https://mne.tools/stable/auto_tutorials/forward/90_compute_covariance.html) for\ndetails.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "noise_cov = mne.compute_covariance(\n    epochs, tmin=-0.2, tmax=0, method=\"shrunk\", rank=\"info\"\n)\nnoise_cov.plot(epochs.info)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Now we compute a reference RDM (simply encoding visual vs audio condition) and RSA it\nagainst the sensor data, which we will do in a sliding window across time.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "# Sort the epochs by condition\nepochs = mne.concatenate_epochs([epochs[\"audio\"], epochs[\"visual\"]])\n\n# Compute model RDM\nmodel_rdm = mne_rsa.compute_rdm(epochs.events[:, 2], metric=operator.ne)\nmne_rsa.plot_rdms(model_rdm)\n\n# Perform RSA across time\nrsa_scores = mne_rsa.rsa_epochs(\n    epochs,\n    model_rdm,\n    noise_cov=noise_cov,\n    temporal_radius=0.02,\n    y=np.arange(len(epochs)),\n)\nrsa_scores.plot(units=dict(misc=\"Spearman correlation\"))"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.11"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
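
An aside on the model RDM in this second notebook: with ``operator.ne`` as the metric, pairs of epochs from the same condition get dissimilarity 0 and pairs from different conditions get 1, so the model RDM is a binary block matrix. A self-contained toy sketch, using hypothetical event-id labels in place of the real epochs and assuming ``compute_rdm`` returns the RDM in scipy's condensed (pdist-style) form:

```python
import operator

import numpy as np
import mne_rsa
from scipy.spatial.distance import squareform

# Hypothetical event ids: three "audio" epochs (1) followed by three
# "visual" epochs (3), mimicking the condition-sorted epochs above.
labels = np.array([1, 1, 1, 3, 3, 3])

# operator.ne yields 0 for same-condition pairs and 1 for different-condition
# pairs, producing a block-structured binary model RDM.
toy_rdm = mne_rsa.compute_rdm(labels, metric=operator.ne)

# Expand the condensed form to the full 6 x 6 matrix for inspection: zeros in
# the two on-diagonal blocks, ones in the off-diagonal blocks.
print(squareform(toy_rdm))
```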
