
Commit 9b1401f
Process tutorial notebooks
1 parent 1c5cade
File tree: 12 files changed (+267, -978 lines)
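The hunks below all follow the same pattern: `execution_count` values are reset to `null`, saved outputs and trailing whitespace are stripped, and the recorded `metadata.language_info.version` is pinned to 3.9.22. The repository's actual processing tool is not part of this commit; as a minimal sketch, the transformation can be done with nothing beyond the standard library, since `.ipynb` files are plain JSON (the function name here is illustrative):

```python
import json

def process_notebook(text: str) -> str:
    """Normalize a Jupyter notebook given as its raw JSON text.

    A sketch of the kind of cleanup this commit applies; the repo's
    real tooling is not shown in the diff.
    """
    nb = json.loads(text)  # .ipynb files are plain JSON
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["execution_count"] = None  # mirrors the `1 -> null` hunks
            cell["outputs"] = []            # drop any saved output
    # Pin the recorded interpreter version, as the version hunks do
    info = nb.setdefault("metadata", {}).setdefault("language_info", {})
    info["version"] = "3.9.22"
    return json.dumps(nb, indent=1)
```

Markdown cells pass through untouched, which matches the diffs: only code-cell execution state and the notebook-level version metadata change.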


tutorials/W2D1_ModelingPractice/W2D1_DaySummary.ipynb

Lines changed: 1 addition & 1 deletion
@@ -96,7 +96,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.17"
+   "version": "3.9.22"
   }
  },
  "nbformat": 4,

tutorials/W2D1_ModelingPractice/W2D1_Intro.ipynb

Lines changed: 1 addition & 1 deletion
@@ -112,7 +112,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.17"
+   "version": "3.9.22"
   }
  },
  "nbformat": 4,

tutorials/W2D1_ModelingPractice/W2D1_Outro.ipynb

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@
  },
  {
   "cell_type": "code",
-  "execution_count": 1,
+  "execution_count": null,
   "metadata": {
    "cellView": "form",
    "execution": {}
@@ -165,7 +165,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.11"
+   "version": "3.9.22"
   }
  },
  "nbformat": 4,

tutorials/W2D1_ModelingPractice/W2D1_Tutorial1.ipynb

Lines changed: 15 additions & 8 deletions
@@ -784,8 +784,8 @@
     "<br>\n",
     "<font size='3pt'>\n",
     "\n",
-    "The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem. \n",
-    "We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing). \n",
+    "The idea of our project is that the vestibular signals are noisy so that they might be mis-interpreted by the brain. We did some brainstorming and think that we need to somehow extract the self-motion judgements from the spike counts of our neurons. Based on that, our algorithm needs to make a decision: was there self motion or not? This is a classical 2-choice classification problem.\n",
+    "We will have to transform the raw spike data into the right input for the algorithm (spike pre-processing).\n",
     "\n",
     "In order to address our question we need to design an appropriate computational data analysis pipeline.\n",
     "\n",
@@ -968,7 +968,7 @@
     "\n",
     "**Come up with hypotheses focussing on specific details of our overall research question.**\n",
     "\n",
-    "Can you write down your hypotheses mathematically, using for example the ingredients and variables from the previous step? \n",
+    "Can you write down your hypotheses mathematically, using for example the ingredients and variables from the previous step?\n",
     "\n",
     "*Work on this for 15 minutes.*\n",
     "\n",
@@ -1019,7 +1019,9 @@
  },
  {
   "cell_type": "markdown",
-  "metadata": {},
+  "metadata": {
+   "execution": {}
+  },
   "source": [
    "If you want to learn more about steps 5-10 of modelling from ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)), you can later read the paper, or check out the optional notebook available [here](https://compneuro.neuromatch.io/projects/modelingsteps/ModelingSteps_5through10.html). However, at the Neuromatch Academy, we would like you to use a new app built specifically to help with projects, the NMA project planner."
   ]
@@ -1039,7 +1041,9 @@
  },
  {
   "cell_type": "markdown",
-  "metadata": {},
+  "metadata": {
+   "execution": {}
+  },
   "source": [
    "----\n",
    "# NMA project planner\n",
@@ -1054,7 +1058,10 @@
  {
   "cell_type": "code",
   "execution_count": null,
-  "metadata": {},
+  "metadata": {
+   "cellView": "form",
+   "execution": {}
+  },
   "outputs": [],
   "source": [
    "# @title Video 6: The NMA project planner\n",
@@ -1081,7 +1088,7 @@
    " with out:\n",
    " if video_ids[i][0] == 'Youtube':\n",
    " video = YouTubeVideo(id=video_ids[i][1], width=W,\n",
-   " height=H, fs=fs, rel=0) \n",
+   " height=H, fs=fs, rel=0)\n",
    "\n",
    " print(f'Video available at https://youtube.com/watch?v={video.id}')\n",
    " else:\n",
@@ -1167,7 +1174,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.11"
+   "version": "3.9.22"
   }
  },
  "nbformat": 4,

tutorials/W2D1_ModelingPractice/instructor/W2D1_DaySummary.ipynb

Lines changed: 1 addition & 20 deletions
@@ -54,25 +54,6 @@
    "feedback_prefix = \"W2D1_DaySummary\""
   ]
  },
- {
-  "cell_type": "code",
-  "execution_count": null,
-  "metadata": {
-   "cellView": "form",
-   "execution": {},
-   "pycharm": {
-    "name": "#%%\n"
-   }
-  },
-  "outputs": [],
-  "source": [
-   "# @title Slides\n",
-   "from IPython.display import IFrame\n",
-   "link_id = \"3j2cn\"\n",
-   "print(f\"If you want to download the slides: https://osf.io/download/{link_id}/\")\n",
-   "IFrame(src=f\"https://mfr.ca-1.osf.io/render?url=https://osf.io/{link_id}/?direct%26mode=render%26action=download%26mode=render\", width=854, height=480)"
-  ]
- },
  {
   "cell_type": "code",
   "execution_count": null,
@@ -115,7 +96,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.17"
+   "version": "3.9.22"
   }
  },
  "nbformat": 4,

tutorials/W2D1_ModelingPractice/instructor/W2D1_Intro.ipynb

Lines changed: 4 additions & 106 deletions
@@ -34,11 +34,11 @@
   "source": [
    "## Overview\n",
    "\n",
-   "During the first day, you learned that different models can answer different questions. That means that depending on your question, goals, and hypotheses, you will need to develop different kinds of models. How to best approach this is the goal of today. We will use this as an opportunity to kick off your group projects simultaneously. To do so, we will start walking you through a 10-steps guide of how-to-model. This guide is applicable for both computational modeling and data neuroscience projects, and we will discuss similarities and differences between both project types. So today, we will start with what you now know determines the choice of modeling or data analysis pipeline you will need to make: how to develop a good question and goal, do the literature review, think about what ingredients you need, and what hypotheses you would like to evaluate.\n",
+   "During the first day, you learned that different models can answer different questions. That means that depending on your question, goals, and hypotheses, you will need to develop different kinds of models. How to best approach this is the goal of today. We will use this as an opportunity to kick off your group projects simultaneously. To do so, we will start walking you through a step-by-step guide of how-to-model. Today, we will start with the following steps: how to develop a good question and goal, how to do a literature review, how to determine what ingredients you need, and what hypotheses you would like to evaluate.\n",
    "\n",
-   "Today’s tutorial focuses on the first 4 steps of _how-to-model_ by demonstrating the thought process based on a simple phenomenon known as the train illusion. We will first introduce the phenomenon and then provide a step-by-step guide on thinking about and developing the 4 first steps of framing your project. To help you, we will roleplay an example thought process, focussing on typical pitfalls that groups often encounter. Groups will then think about their own projects and develop first answers to each step’s questions. Importantly, this is to get you started systematically with your projects; you will have to revisit those steps as your thinking evolves, possibly multiple times. We will also provide similar guidance for the remaining 6 steps of the how-to-model guide that you can work through with your group when you’re reaching that stage of your project. The accompanying answers to each step in our demo project and toy example code for our two different projects (model and data analytics) should help you understand the practical side of the process better.\n",
+   "Today’s tutorial focuses on the first 4 steps of how-to-model by demonstrating the thought process based on a simple phenomenon known as the train illusion. We will first introduce the phenomenon and then provide a step-by-step guide on thinking about and developing the 4 first steps of framing your project. Groups will then think about their own projects and develop answers to each step’s questions. We will introduce an online project planner based on a large language model that can give you feedback to these answers, which you can use iteratively to develop the full plan of your project. Different types of projects will probably go through the steps in a different order, but at the end, all projects should complete all steps. \n",
    "\n",
-   "How to model is rarely, if ever, taught systematically. Our guide is not the only way to approach modeling, but it’s one way to ensure you don’t miss anything important. Going through all the steps also makes publication much easier because you have already explicitly thought about all the elements you will ultimately need to communicate (see Step 10 later for our examples). Personally, I often take shortcuts in this process and then regret it later… mostly because I forgot to do the one most important thing: be precise about the framing of the project, i.e., the four first steps you will walk through today. Importantly this will set you up to develop any kind of model using any of the tools you will learn about during the remainder of NMA."
+   "How to model is rarely, if ever, taught systematically. Our guide and project planner is not the only way to approach modeling, but it’s one way to ensure you don’t miss anything important. Going through all the steps also makes publication much easier because you have already explicitly thought about all the elements you will ultimately need to communicate (see Abstract section later for example). \n"
   ]
  },
  {
@@ -70,108 +70,6 @@
   "feedback_prefix = \"W2D1_Intro\""
  ]
 },
- {
-  "cell_type": "markdown",
-  "metadata": {
-   "execution": {},
-   "pycharm": {
-    "name": "#%% md\n"
-   }
-  },
-  "source": [
-   "## Video"
-  ]
- },
- {
-  "cell_type": "code",
-  "execution_count": null,
-  "metadata": {
-   "cellView": "form",
-   "execution": {},
-   "pycharm": {
-    "name": "#%%\n"
-   }
-  },
-  "outputs": [],
-  "source": [
-   "# @markdown\n",
-   "from ipywidgets import widgets\n",
-   "from IPython.display import YouTubeVideo\n",
-   "from IPython.display import IFrame\n",
-   "from IPython.display import display\n",
-   "\n",
-   "\n",
-   "class PlayVideo(IFrame):\n",
-   " def __init__(self, id, source, page=1, width=400, height=300, **kwargs):\n",
-   " self.id = id\n",
-   " if source == 'Bilibili':\n",
-   " src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'\n",
-   " elif source == 'Osf':\n",
-   " src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'\n",
-   " super(PlayVideo, self).__init__(src, width, height, **kwargs)\n",
-   "\n",
-   "\n",
-   "def display_videos(video_ids, W=400, H=300, fs=1):\n",
-   " tab_contents = []\n",
-   " for i, video_id in enumerate(video_ids):\n",
-   " out = widgets.Output()\n",
-   " with out:\n",
-   " if video_ids[i][0] == 'Youtube':\n",
-   " video = YouTubeVideo(id=video_ids[i][1], width=W,\n",
-   " height=H, fs=fs, rel=0)\n",
-   " print(f'Video available at https://youtube.com/watch?v={video.id}')\n",
-   " else:\n",
-   " video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,\n",
-   " height=H, fs=fs, autoplay=False)\n",
-   " if video_ids[i][0] == 'Bilibili':\n",
-   " print(f'Video available at https://www.bilibili.com/video/{video.id}')\n",
-   " elif video_ids[i][0] == 'Osf':\n",
-   " print(f'Video available at https://osf.io/{video.id}')\n",
-   " display(video)\n",
-   " tab_contents.append(out)\n",
-   " return tab_contents\n",
-   "\n",
-   "\n",
-   "video_ids = [('Youtube', 'vgX4l7U8bsg'), ('Bilibili', 'BV1MB4y1T76U')]\n",
-   "tab_contents = display_videos(video_ids, W=854, H=480)\n",
-   "tabs = widgets.Tab()\n",
-   "tabs.children = tab_contents\n",
-   "for i in range(len(tab_contents)):\n",
-   " tabs.set_title(i, video_ids[i][0])\n",
-   "display(tabs)"
-  ]
- },
- {
-  "cell_type": "markdown",
-  "metadata": {
-   "execution": {},
-   "pycharm": {
-    "name": "#%% md\n"
-   }
-  },
-  "source": [
-   "## Slides"
-  ]
- },
- {
-  "cell_type": "code",
-  "execution_count": null,
-  "metadata": {
-   "cellView": "form",
-   "execution": {},
-   "pycharm": {
-    "name": "#%%\n"
-   }
-  },
-  "outputs": [],
-  "source": [
-   "# @markdown\n",
-   "from IPython.display import IFrame\n",
-   "link_id = \"kmwus\"\n",
-   "print(f\"If you want to download the slides: https://osf.io/download/{link_id}/\")\n",
-   "IFrame(src=f\"https://mfr.ca-1.osf.io/render?url=https://osf.io/{link_id}/?direct%26mode=render%26action=download%26mode=render\", width=854, height=480)"
-  ]
- },
  {
   "cell_type": "code",
   "execution_count": null,
@@ -214,7 +112,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.17"
+   "version": "3.9.22"
   }
  },
  "nbformat": 4,

tutorials/W2D1_ModelingPractice/instructor/W2D1_Outro.ipynb

Lines changed: 2 additions & 34 deletions
@@ -123,38 +123,6 @@
   "display(tabs)"
  ]
 },
- {
-  "cell_type": "markdown",
-  "metadata": {
-   "execution": {},
-   "pycharm": {
-    "name": "#%% md\n"
-   }
-  },
-  "source": [
-   "## Slides"
-  ]
- },
- {
-  "cell_type": "code",
-  "execution_count": null,
-  "metadata": {
-   "cellView": "form",
-   "execution": {},
-   "pycharm": {
-    "name": "#%%\n"
-   }
-  },
-  "outputs": [],
-  "source": [
-   "\n",
-   "# @markdown\n",
-   "from IPython.display import IFrame\n",
-   "link_id = \"agrp6\"\n",
-   "print(f\"If you want to download the slides: https://osf.io/download/{link_id}/\")\n",
-   "IFrame(src=f\"https://mfr.ca-1.osf.io/render?url=https://osf.io/{link_id}/?direct%26mode=render%26action=download%26mode=render\", width=854, height=480)"
-  ]
- },
 {
  "cell_type": "code",
  "execution_count": null,
@@ -183,7 +151,7 @@
   "name": "python3"
  },
  "kernelspec": {
-  "display_name": "Python 3",
+  "display_name": "cellpose",
   "language": "python",
   "name": "python3"
  },
@@ -197,7 +165,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.9.21"
+  "version": "3.9.22"
  }
 },
 "nbformat": 4,
