Commit c74e235 (1 parent: a2a24b9)

FIX use solver="svd" in notebook 4

15 files changed: +44 −45 lines

setup.py (+1 −1)

@@ -31,7 +31,7 @@
 ]

 extras_require = {
-    "docs": ["sphinx", "sphinx_gallery", "numpydoc"],
+    "docs": ["sphinx", "sphinx_gallery", "numpydoc", "nbformat"],
     "github": ["pytest"],
 }

tutorials/notebooks/shortclips/00_download_shortclips.ipynb (+2 −2)

@@ -15,7 +15,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "\n# Download the data set\n\nIn this script, we download the data set from Wasabi or GIN. No account is\nrequired.\n\n## Cite this data set\n\nThis tutorial is based on publicly available data `published on GIN\n<https://gin.g-node.org/gallantlab/shortclips>`_. If you publish any work using\nthis data set, please cite the original publication [1]_, and the data set\n[2]_.\n"
+ "\n# Download the data set\n\nIn this script, we download the data set from Wasabi or GIN. No account is\nrequired.\n\n## Cite this data set\n\nThis tutorial is based on publicly available data [published on GIN](https://gin.g-node.org/gallantlab/shortclips). If you publish any work using\nthis data set, please cite the original publication [1]_, and the data set\n[2]_.\n"
  ]
  },
  {
@@ -89,7 +89,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.10.9"
  }
  },
  "nbformat": 4,

tutorials/notebooks/shortclips/00_setup_colab.ipynb (+2 −2)

@@ -15,7 +15,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "\n# Setup Google Colab\n\nIn this script, we setup a Google Colab environment. This script will only work\nwhen run from `Google Colab <https://colab.research.google.com/>`_). You can\nskip it if you run the tutorials on your machine.\n"
+ "\n# Setup Google Colab\n\nIn this script, we setup a Google Colab environment. This script will only work\nwhen run from [Google Colab](https://colab.research.google.com/)). You can\nskip it if you run the tutorials on your machine.\n"
  ]
  },
  {
@@ -132,7 +132,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.10.9"
  }
  },
  "nbformat": 4,

tutorials/notebooks/shortclips/01_plot_explainable_variance.ipynb (+3 −3)

@@ -177,7 +177,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "## Map to subject flatmap\n\nTo better understand the distribution of explainable variance, we map the\nvalues to the subject brain. This can be done with `pycortex\n<https://gallantlab.github.io/pycortex/>`_, which can create interactive 3D\nviewers to be displayed in any modern browser. ``pycortex`` can also display\nflattened maps of the cortical surface to visualize the entire cortical\nsurface at once.\n\nHere, we do not share the anatomical information of the subjects for privacy\nconcerns. Instead, we provide two mappers:\n\n- to map the voxels to a (subject-specific) flatmap\n- to map the voxels to the Freesurfer average cortical surface (\"fsaverage\")\n\nThe first mapper is 2D matrix of shape (n_pixels, n_voxels) that maps each\nvoxel to a set of pixel in a flatmap. The matrix is efficiently stored in a\n``scipy`` sparse CSR matrix. The function ``plot_flatmap_from_mapper``\nprovides an example of how to use the mapper and visualize the flatmap.\n\n"
+ "## Map to subject flatmap\n\nTo better understand the distribution of explainable variance, we map the\nvalues to the subject brain. This can be done with [pycortex](https://gallantlab.github.io/pycortex/), which can create interactive 3D\nviewers to be displayed in any modern browser. ``pycortex`` can also display\nflattened maps of the cortical surface to visualize the entire cortical\nsurface at once.\n\nHere, we do not share the anatomical information of the subjects for privacy\nconcerns. Instead, we provide two mappers:\n\n- to map the voxels to a (subject-specific) flatmap\n- to map the voxels to the Freesurfer average cortical surface (\"fsaverage\")\n\nThe first mapper is 2D matrix of shape (n_pixels, n_voxels) that maps each\nvoxel to a set of pixel in a flatmap. The matrix is efficiently stored in a\n``scipy`` sparse CSR matrix. The function ``plot_flatmap_from_mapper``\nprovides an example of how to use the mapper and visualize the flatmap.\n\n"
  ]
  },
  {
@@ -195,7 +195,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "This figure is a flattened map of the cortical surface. A number of regions\nof interest (ROIs) have been labeled to ease interpretation. If you have\nnever seen such a flatmap, we recommend taking a look at a `pycortex brain\nviewer <https://www.gallantlab.org/brainviewer/Deniz2019>`_, which displays\nthe brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\nflatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\ncursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\nThis viewer should help you understand the correspondance between the flatten\nand the folded cortical surface of the brain.\n\n"
+ "This figure is a flattened map of the cortical surface. A number of regions\nof interest (ROIs) have been labeled to ease interpretation. If you have\nnever seen such a flatmap, we recommend taking a look at a [pycortex brain\nviewer](https://www.gallantlab.org/brainviewer/Deniz2019), which displays\nthe brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\nflatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\ncursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\nThis viewer should help you understand the correspondance between the flatten\nand the folded cortical surface of the brain.\n\n"
  ]
  },
  {
@@ -337,7 +337,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.10.9"
  }
  },
  "nbformat": 4,

tutorials/notebooks/shortclips/02_plot_ridge_regression.ipynb (+1 −1)

@@ -406,7 +406,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.7.12"
  }
  },
  "nbformat": 4,

tutorials/notebooks/shortclips/03_plot_wordnet_model.ipynb (+1 −1)

@@ -653,7 +653,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.7.12"
  }
  },
  "nbformat": 4,

tutorials/notebooks/shortclips/04_plot_hemodynamic_response.ipynb (+4 −4)

@@ -235,7 +235,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "In the next cell we are plotting six lines. The subplot at the top shows the\nsimulated BOLD response, while the other subplots show the simulated feature\nat different delays. The effect of the delayer is clear: it creates multiple\ncopies of the original feature shifted forward in time by how many samples we\nrequested (in this case, from 0 to 4 samples, which correspond to 0, 2, 4, 6,\nand 8 s in time with a 2 s TR).\n\nWhen these delayed features are used to fit a voxelwise encoding model, the\nbrain response $y$ at time $t$ is simultaneously modeled by the\nfeature $x$ at times $t-0, t-2, t-4, t-6, t-8$. In the remaining\nof this example we will see that this method improves model prediction accuracy\nand it allows to account for the underlying shape of the hemodynamic response\nfunction.\n\n"
+ "In the next cell we are plotting six lines. The subplot at the top shows the\nsimulated BOLD response, while the other subplots show the simulated feature\nat different delays. The effect of the delayer is clear: it creates multiple\ncopies of the original feature shifted forward in time by how many samples we\nrequested (in this case, from 0 to 4 samples, which correspond to 0, 2, 4, 6,\nand 8 s in time with a 2 s TR).\n\nWhen these delayed features are used to fit a voxelwise encoding model, the\nbrain response $y$ at time $t$ is simultaneously modeled by the\nfeature $x$ at times $t-0, t-2, t-4, t-6, t-8$. In the remaining\nof this example we will see that this method improves model prediction\naccuracy and it allows to account for the underlying shape of the hemodynamic\nresponse function.\n\n"
  ]
  },
  {
@@ -253,7 +253,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "## Compare with a model without delays\n\nWe define here another model without feature delays (i.e. no ``Delayer``).\nBecause the BOLD signal is inherently slow due to the dynamics of\nneuro-vascular coupling, this model is unlikely to perform well.\n\nNote that if we remove the feature delays, we wil have more fMRI samples (3600) than\nnumber of features (1705). In this case, running a kernel version of ridge regression\nis computationally suboptimal. Thus, to create a model without delays we are using\n`RidgeCV` instead of `KernelRidgeCV`.\n\n"
+ "## Compare with a model without delays\n\nWe define here another model without feature delays (i.e. no ``Delayer``).\nBecause the BOLD signal is inherently slow due to the dynamics of\nneuro-vascular coupling, this model is unlikely to perform well.\n\nNote that if we remove the feature delays, we will have more fMRI samples\n(3600) than number of features (1705). In this case, running a kernel version\nof ridge regression is computationally suboptimal. Thus, to create a model\nwithout delays we are using `RidgeCV` instead of `KernelRidgeCV`.\n\n"
  ]
  },
  {
@@ -264,7 +264,7 @@
  },
  "outputs": [],
  "source": [
- "pipeline_no_delay = make_pipeline(\n    StandardScaler(with_mean=True, with_std=False),\n    RidgeCV(\n        alphas=alphas, cv=cv,\n        solver_params=dict(n_targets_batch=500, n_alphas_batch=5,\n                           n_targets_batch_refit=100)),\n)\npipeline_no_delay"
+ "pipeline_no_delay = make_pipeline(\n    StandardScaler(with_mean=True, with_std=False),\n    RidgeCV(\n        alphas=alphas, cv=cv, solver=\"svd\",\n        solver_params=dict(n_targets_batch=500, n_alphas_batch=5,\n                           n_targets_batch_refit=100)),\n)\npipeline_no_delay"
  ]
  },
  {
@@ -352,7 +352,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.10.9"
  }
  },
  "nbformat": 4,
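The core fix in this commit is passing `solver="svd"` to `RidgeCV` in the hunk above. As a rough illustration of why an SVD-based solver suits this setting (more samples than features, many candidate regularization values), here is a minimal NumPy sketch. This is an assumed reimplementation for illustration only, not himalaya's actual `RidgeCV`: the SVD of the design matrix is computed once and reused for every alpha, instead of solving a fresh linear system per alpha.

```python
import numpy as np

def ridge_svd(X, Y, alphas):
    """Ridge weights for several alphas, reusing one SVD of X."""
    # X = U @ diag(s) @ Vt, so (X.T X + a I)^-1 X.T Y = V diag(s/(s^2+a)) U.T Y
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    UtY = U.T @ Y
    return [Vt.T @ ((s / (s**2 + alpha))[:, None] * UtY) for alpha in alphas]

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))   # n_samples > n_features, as in the notebook
Y = rng.standard_normal((50, 2))   # two "voxels"
alphas = [0.1, 1.0, 10.0]
weights = ridge_svd(X, Y, alphas)

# check against the direct normal-equations solution for one alpha
direct = np.linalg.solve(X.T @ X + 1.0 * np.eye(5), X.T @ Y)
print(np.allclose(weights[1], direct))  # True
```

The per-alpha work after the factorization is only a diagonal rescaling and two small matrix products, which is why sweeping a grid of alphas is cheap with this solver.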

tutorials/notebooks/shortclips/05_plot_motion_energy_model.ipynb (+1 −1)

@@ -341,7 +341,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.7.12"
  }
  },
  "nbformat": 4,

tutorials/notebooks/shortclips/06_plot_banded_ridge_model.ipynb (+1 −1)

@@ -510,7 +510,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.7.12"
  }
  },
  "nbformat": 4,

tutorials/notebooks/shortclips/07_extract_motion_energy.ipynb (+2 −2)

@@ -15,7 +15,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "\n# Extract motion energy features from the stimuli\n\nThis script describes how to extract motion-energy features from the stimuli.\n\n.. Note:: The public data set already contains precomputed motion-energy.\n   Therefore, you do not need to run this script to fit motion-energy models\n   in other part of this tutorial.\n\n*Motion-energy features:* Motion-energy features result from filtering a video\nstimulus with spatio-temporal Gabor filters. A pyramid of filters is used to\ncompute the motion-energy features at multiple spatial and temporal scales.\nMotion-energy features were introduced in [1]_.\n\nThe motion-energy extraction is performed by the package `pymoten\n<https://github.com/gallantlab/pymoten>`_. Check the pymoten `gallery of\nexamples <https://gallantlab.github.io/pymoten/auto_examples/index.html>`_ for\nvisualizing motion-energy filters, and for pymoten API usage examples.\n\n## Running time\nExtracting motion energy is a bit longer than the other examples. It typically\ntakes a couple hours to run.\n"
+ "\n# Extract motion energy features from the stimuli\n\nThis script describes how to extract motion-energy features from the stimuli.\n\n.. Note:: The public data set already contains precomputed motion-energy.\n   Therefore, you do not need to run this script to fit motion-energy models\n   in other part of this tutorial.\n\n*Motion-energy features:* Motion-energy features result from filtering a video\nstimulus with spatio-temporal Gabor filters. A pyramid of filters is used to\ncompute the motion-energy features at multiple spatial and temporal scales.\nMotion-energy features were introduced in [1]_.\n\nThe motion-energy extraction is performed by the package [pymoten](https://github.com/gallantlab/pymoten). Check the pymoten [gallery of\nexamples](https://gallantlab.github.io/pymoten/auto_examples/index.html) for\nvisualizing motion-energy filters, and for pymoten API usage examples.\n\n## Running time\nExtracting motion energy is a bit longer than the other examples. It typically\ntakes a couple hours to run.\n"
  ]
  },
  {
@@ -136,7 +136,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.10.9"
  }
  },
  "nbformat": 4,

tutorials/notebooks/shortclips/merged_for_colab.ipynb (+13 −14)

@@ -27,7 +27,7 @@
  "# Setup Google Colab\n",
  "\n",
  "In this script, we setup a Google Colab environment. This script will only work\n",
- "when run from `Google Colab <https://colab.research.google.com/>`_). You can\n",
+ "when run from [Google Colab](https://colab.research.google.com/)). You can\n",
  "skip it if you run the tutorials on your machine.\n"
  ]
  },
@@ -458,8 +458,7 @@
  "## Map to subject flatmap\n",
  "\n",
  "To better understand the distribution of explainable variance, we map the\n",
- "values to the subject brain. This can be done with `pycortex\n",
- "<https://gallantlab.github.io/pycortex/>`_, which can create interactive 3D\n",
+ "values to the subject brain. This can be done with [pycortex](https://gallantlab.github.io/pycortex/), which can create interactive 3D\n",
  "viewers to be displayed in any modern browser. ``pycortex`` can also display\n",
  "flattened maps of the cortical surface to visualize the entire cortical\n",
  "surface at once.\n",
@@ -498,8 +497,8 @@
  "source": [
  "This figure is a flattened map of the cortical surface. A number of regions\n",
  "of interest (ROIs) have been labeled to ease interpretation. If you have\n",
- "never seen such a flatmap, we recommend taking a look at a `pycortex brain\n",
- "viewer <https://www.gallantlab.org/brainviewer/Deniz2019>`_, which displays\n",
+ "never seen such a flatmap, we recommend taking a look at a [pycortex brain\n",
+ "viewer](https://www.gallantlab.org/brainviewer/Deniz2019), which displays\n",
  "the brain in 3D. In this viewer, press \"I\" to inflate the brain, \"F\" to\n",
  "flatten the surface, and \"R\" to reset the view (or use the ``surface/unfold``\n",
  "cursor on the right menu). Press \"H\" for a list of all keyboard shortcuts.\n",
@@ -2737,9 +2736,9 @@
  "When these delayed features are used to fit a voxelwise encoding model, the\n",
  "brain response $y$ at time $t$ is simultaneously modeled by the\n",
  "feature $x$ at times $t-0, t-2, t-4, t-6, t-8$. In the remaining\n",
- "of this example we will see that this method improves model prediction accuracy\n",
- "and it allows to account for the underlying shape of the hemodynamic response\n",
- "function.\n",
+ "of this example we will see that this method improves model prediction\n",
+ "accuracy and it allows to account for the underlying shape of the hemodynamic\n",
+ "response function.\n",
  "\n"
  ]
  },
@@ -2780,10 +2779,10 @@
  "Because the BOLD signal is inherently slow due to the dynamics of\n",
  "neuro-vascular coupling, this model is unlikely to perform well.\n",
  "\n",
- "Note that if we remove the feature delays, we will have more fMRI samples (3600) than\n",
- "number of features (1705). In this case, running a kernel version of ridge regression\n",
- "is computationally suboptimal. Thus, to create a model without delays we are using\n",
- "`RidgeCV` instead of `KernelRidgeCV`.\n",
+ "Note that if we remove the feature delays, we will have more fMRI samples\n",
+ "(3600) than number of features (1705). In this case, running a kernel version\n",
+ "of ridge regression is computationally suboptimal. Thus, to create a model\n",
+ "without delays we are using `RidgeCV` instead of `KernelRidgeCV`.\n",
  "\n"
  ]
  },
@@ -2798,7 +2797,7 @@
  "pipeline_no_delay = make_pipeline(\n",
  "    StandardScaler(with_mean=True, with_std=False),\n",
  "    RidgeCV(\n",
- "        alphas=alphas, cv=cv,\n",
+ "        alphas=alphas, cv=cv, solver=\"svd\",\n",
  "        solver_params=dict(n_targets_batch=500, n_alphas_batch=5,\n",
  "                           n_targets_batch_refit=100)),\n",
  ")\n",
@@ -4288,7 +4287,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.10.9"
  },
  "name": "_merged"
  },
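The hunks above reflow the markdown that explains how the ``Delayer`` step builds time-shifted copies of each feature before ridge regression. A minimal sketch of that idea (an assumed reimplementation for illustration, not himalaya's actual ``Delayer`` class): each feature column is copied and shifted forward in time by 0, 1, 2, ... samples, zero-padding the start.

```python
import numpy as np

def delay_features(X, delays):
    """Stack copies of X shifted forward in time; early samples are zero-padded."""
    n_samples, n_features = X.shape
    out = np.zeros((n_samples, n_features * len(delays)))
    for i, d in enumerate(delays):
        # column block i holds X delayed by d samples
        out[d:, i * n_features:(i + 1) * n_features] = X[:n_samples - d]
    return out

x = np.arange(1.0, 7.0).reshape(-1, 1)   # one feature, 6 time samples
xd = delay_features(x, delays=[0, 1, 2])
print(xd.shape)  # (6, 3)
```

With a 2 s TR and delays of 1 to 4 samples, the regression then sees the stimulus at t−2, t−4, t−6 and t−8 seconds simultaneously, which is what lets the fitted weights absorb the shape of the hemodynamic response.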

tutorials/notebooks/vim2/00_download_vim2.ipynb (+2 −2)

@@ -15,7 +15,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "\n# Download the data set from CRCNS\n\nIn this script, we download the data set from CRCNS.\nA (free) account is required.\n\n## Cite this data set\n\nThis tutorial is based on publicly available data\n`published on CRCNS <https://crcns.org/data-sets/vc/vim-2/about-vim-2>`_.\nIf you publish any work using this data set, please cite the original\npublication [1]_, and the data set [2]_.\n"
+ "\n# Download the data set from CRCNS\n\nIn this script, we download the data set from CRCNS.\nA (free) account is required.\n\n## Cite this data set\n\nThis tutorial is based on publicly available data\n[published on CRCNS](https://crcns.org/data-sets/vc/vim-2/about-vim-2).\nIf you publish any work using this data set, please cite the original\npublication [1]_, and the data set [2]_.\n"
  ]
  },
  {
@@ -100,7 +100,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.10.9"
  }
  },
  "nbformat": 4,

tutorials/notebooks/vim2/01_extract_motion_energy.ipynb (+2 −2)

@@ -15,7 +15,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
- "\n# Extract motion energy features from the stimuli\n\nThis script describes how to extract motion-energy features from the stimuli.\n\n*Motion-energy features:* Motion-energy features result from filtering a video\nstimulus with spatio-temporal Gabor filters. A pyramid of filters is used to\ncompute the motion-energy features at multiple spatial and temporal scales.\nMotion-energy features were introduced in [1]_.\n\nThe motion-energy extraction is performed by the package `pymoten\n<https://github.com/gallantlab/pymoten>`_. Check the pymoten `gallery of\nexamples <https://gallantlab.github.io/pymoten/auto_examples/index.html>`_ for\nvisualizing motion-energy filters, and for pymoten API usage examples.\n\n## Running time\nExtracting motion energy is a bit longer than the other examples. It typically\ntakes a couple hours to run.\n"
+ "\n# Extract motion energy features from the stimuli\n\nThis script describes how to extract motion-energy features from the stimuli.\n\n*Motion-energy features:* Motion-energy features result from filtering a video\nstimulus with spatio-temporal Gabor filters. A pyramid of filters is used to\ncompute the motion-energy features at multiple spatial and temporal scales.\nMotion-energy features were introduced in [1]_.\n\nThe motion-energy extraction is performed by the package [pymoten](https://github.com/gallantlab/pymoten). Check the pymoten [gallery of\nexamples](https://gallantlab.github.io/pymoten/auto_examples/index.html) for\nvisualizing motion-energy filters, and for pymoten API usage examples.\n\n## Running time\nExtracting motion energy is a bit longer than the other examples. It typically\ntakes a couple hours to run.\n"
  ]
  },
  {
@@ -143,7 +143,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.10.9"
  }
  },
  "nbformat": 4,

tutorials/notebooks/vim2/02_plot_ridge_model.ipynb (+1 −1)

@@ -334,7 +334,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.8.3"
+ "version": "3.7.12"
  }
  },
  "nbformat": 4,
