|
105 | 105 | "source": [ |
106 | 106 | "### **Decision making task**\n", |
107 | 107 | "\n", |
108 | | - "In the IBL decision making task a visual stimulus appears at the edge of the screen and mice must bring the stimulus to the screen centre by moving a wheel with their front two paws. If they successfully bring the stimulus to the screen centre, they recieve a reward of sugar water ; if however they move the wheel in the wrong direction until the stimulus reaches beyond the screen edge, a white noise tone is played. To initiate a trial, the mouse must hold the wheel still for a continous period between 0.4-0.7s ; they are then alerted of the start of the trial by the simultaneous presentation of the stimulus on the screen and a go cue tone. Mice have a maximum of 60s to make a decision before the trial times out and a white noise tone is played.\n", |
| 108 | + "In the IBL decision making task a visual stimulus appears at the edge of the screen and mice must bring the stimulus to the screen centre by moving a wheel with their front two paws. If they successfully bring the stimulus to the screen centre, they receive a reward of sugar water; if, however, they move the wheel in the wrong direction until the stimulus passes beyond the screen edge, a white noise tone is played. To initiate a trial, the mouse must hold the wheel still for a continuous period of between 0.4 and 0.7 s; they are then alerted to the start of the trial by the simultaneous presentation of the stimulus on the screen and a go cue tone. Mice have a maximum of 60 s to make a decision before the trial times out and a white noise tone is played.\n", |
109 | 109 | "\n", |
110 | 110 | "Varying contrasts of visual stimulus are shown throughout the session (100%, 25%, 12.5%, 6.25% and 0%). The probability of the stimulus appearing on the left or the right changes between blocks of trials. During 0% contrast trials (where no stimulus appears on the screen but a wheel response is required), the mice can use the inferred block structure to guide their decision.\n", |
111 | 111 | "\n", |
|
122 | 122 | "### **Accessing the data**\n", |
123 | 123 | "For the purposes of this course, we have precomputed task-aligned peri-stimulus time histograms (PSTHs) for all good clusters in the dataset. This allows you to quickly begin working with the data using a simplified and accessible format. The tutorial below is based on this preprocessed data.\n", |
124 | 124 | "\n", |
125 | | - "To access the full dataset, including raw electrophysiology (action potential and LFP bands), spike sorting output, wheel movement, video recordings, and pose estimation data please refer to this [this introductary notebook](https://colab.research.google.com/drive/1_1qfa-DLDbezyFXguFOnJJWF5aJ5AH0i#scrollTo=-TJR7XEgtBxS) and [this tutorial](https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/main/projects/neurons/IBL_ONE_tutorial.ipynb).\n", |
| 125 | + "To access the full dataset, including raw electrophysiology (action potential and LFP bands), spike sorting output, wheel movement, video recordings, and pose estimation data, please refer to [this introductory notebook](https://colab.research.google.com/drive/1_1qfa-DLDbezyFXguFOnJJWF5aJ5AH0i#scrollTo=-TJR7XEgtBxS) and [this tutorial](https://colab.research.google.com/drive/1y3sRI1wC7qbWqN6skvulzPOp6xw8tLm7#scrollTo=hRZA78AoaBIC).\n", |
126 | 126 | "\n", |
127 | 127 | "\n", |
128 | 128 | "\n", |
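A PSTH is simply the trial-averaged spike count per time bin, aligned to some event such as stimulus onset. As a rough illustration of what the precomputed arrays represent, here is a minimal sketch on toy spike times; the function, window sizes, and data are hypothetical and not the IBL preprocessing pipeline:

```python
import numpy as np

def compute_psth(spike_times, event_times, pre=0.5, post=1.0, bin_size=0.05):
    """Trial-averaged firing rate (spikes/s) per bin, aligned to events."""
    bins = np.arange(-pre, post + bin_size, bin_size)
    counts = np.zeros(len(bins) - 1)
    for t in event_times:
        counts += np.histogram(spike_times - t, bins=bins)[0]
    rates = counts / (len(event_times) * bin_size)
    return rates, bins[:-1]  # rates and left bin edges

# Toy neuron: background spikes plus a burst just after each "stimulus onset"
rng = np.random.default_rng(0)
events = np.arange(1.0, 11.0)                       # 10 events, 1 s apart
baseline = rng.uniform(0, 12, 40)                   # background spikes
evoked = np.concatenate([e + rng.uniform(0, 0.2, 5) for e in events])
rates, t = compute_psth(np.sort(np.concatenate([baseline, evoked])), events)
```

With this toy data the bins just after time zero show a clearly elevated rate relative to the pre-event baseline.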
|
176 | 176 | { |
177 | 177 | "cell_type": "code", |
178 | 178 | "source": [ |
| 179 | + "# When running in Jupyter, set the number of download threads to 1\n", |
| 180 | + "import os\n", |
| 181 | + "os.environ.setdefault('ONE_HTTP_DL_THREADS', '1')\n", |
| 182 | + "\n", |
179 | 183 | "from one.api import ONE\n", |
180 | 184 | "ONE.setup(base_url='https://openalyx.internationalbrainlab.org', silent=True)\n", |
181 | 185 | "one = ONE(password='international')" |
|
2287 | 2291 | }, |
2288 | 2292 | { |
2289 | 2293 | "cell_type": "markdown", |
2290 | | - "source": [ |
2291 | | - "### **Exploring the visualisation webiste**" |
2292 | | - ], |
| 2294 | + "source": "### **Exploring the visualisation website**", |
2293 | 2295 | "metadata": { |
2294 | 2296 | "id": "rr7GcJXgb09h" |
2295 | 2297 | } |
|
3530 | 3532 | }, |
3531 | 3533 | { |
3532 | 3534 | "cell_type": "markdown", |
3533 | | - "source": [ |
3534 | | - "Simliar to above we can compute the **modulation index** to identify **responsive cells**." |
3535 | | - ], |
| 3535 | + "source": "Similar to above we can compute the **modulation index** to identify **responsive cells**.", |
3536 | 3536 | "metadata": { |
3537 | 3537 | "id": "DNpIAFXK72mg" |
3538 | 3538 | } |
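The formula is not restated here, but one common definition of a modulation index is the normalised rate difference (post − pre) / (post + pre), which lies in [−1, 1]. A minimal sketch with made-up firing rates; the notebook's exact formula and the 0.25 responsiveness threshold may differ:

```python
import numpy as np

def modulation_index(pre_rate, post_rate, eps=1e-9):
    """Normalised contrast of firing rates after vs before an event.
    One common definition; the notebook's exact formula may differ."""
    return (post_rate - pre_rate) / (post_rate + pre_rate + eps)

pre = np.array([5.0, 10.0, 2.0])    # hypothetical baseline rates (Hz)
post = np.array([15.0, 10.0, 0.5])  # hypothetical post-event rates (Hz)
mi = modulation_index(pre, post)

# Flag cells whose rate changes substantially (arbitrary threshold)
responsive = np.abs(mi) > 0.25
```

Cells with a strong increase or decrease in rate get an index near +1 or −1, while unmodulated cells sit near 0.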
|
3717 | 3717 | { |
3718 | 3718 | "cell_type": "markdown", |
3719 | 3719 | "source": [ |
3720 | | - "One analysis that we can perform is to examine **when** the neural activity representing left versus right choices begins to diverge before the first movement is made. We will use a dimentionality reduction approach (PCA) to measure the **distance between the left and right choice representations** over a time window of interest.\n", |
| 3720 | + "One analysis that we can perform is to examine **when** the neural activity representing left versus right choices begins to diverge before the first movement is made. We will use a dimensionality reduction approach (PCA) to measure the **distance between the left and right choice representations** over a time window of interest.\n", |
3721 | 3721 | "\n", |
3722 | 3722 | "\n", |
3723 | 3723 | "We start by loading data from a specific brain region of interest, **GRN**. The trials are then split based on the **choice** made." |
|
3774 | 3774 | { |
3775 | 3775 | "cell_type": "code", |
3776 | 3776 | "source": [ |
3777 | | - "# Apply PCA to reduce to 2 dimentions\n", |
| 3777 | + "# Apply PCA to reduce to 2 dimensions\n", |
3778 | 3778 | "pca = PCA(n_components=2)\n", |
3779 | 3779 | "trajs = pca.fit_transform(all_psth.T).T\n", |
3780 | 3780 | "\n", |
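The divergence analysis described above can be sketched end-to-end on synthetic data. The `all_psth`, `pca`, and `trajs` names mirror the notebook snippet; the data shapes and the left/right divergence structure are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_neurons, n_bins = 50, 40
t = np.linspace(-0.5, 0.5, n_bins)   # time relative to first movement (s)

# Synthetic per-condition PSTHs (neurons x time): identical before t = 0,
# diverging along a random neural direction afterwards
common = rng.normal(0, 1, (n_neurons, n_bins))
signal = 3 * np.outer(rng.normal(0, 1, n_neurons), np.clip(t, 0, None))
psth_left = common + signal
psth_right = common - signal

# Stack conditions in time and project into 2D, as in the notebook snippet
all_psth = np.hstack([psth_left, psth_right])    # neurons x (2 * time)
pca = PCA(n_components=2)
trajs = pca.fit_transform(all_psth.T).T          # 2 x (2 * time)
traj_left, traj_right = trajs[:, :n_bins], trajs[:, n_bins:]

# Euclidean distance between the two trajectories at each time bin
dist = np.linalg.norm(traj_left - traj_right, axis=0)
```

Here the distance is exactly zero before the (synthetic) movement onset and grows afterwards; on real data the time at which `dist` departs from baseline is the quantity of interest.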
|
4105 | 4105 | "\n", |
4106 | 4106 | " all_modulated.append(df)\n", |
4107 | 4107 | "\n", |
4108 | | - "# Concatentate results into a single dataframe\n", |
| 4108 | + "# Concatenate results into a single dataframe\n", |
4109 | 4109 | "all_modulated = pd.concat(all_modulated)" |
4110 | 4110 | ], |
4111 | 4111 | "metadata": { |
|
4567 | 4567 | "🟨 **Note**\n", |
4568 | 4568 | "* This analysis requires access to the full dataset, rather than the pre-processed version used above. To get started with accessing the full IBL dataset, please refer to:\n", |
4569 | 4569 | " * [This introductory notebook](https://colab.research.google.com/drive/1_1qfa-DLDbezyFXguFOnJJWF5aJ5AH0i#scrollTo=-TJR7XEgtBxS)\n", |
4570 | | - " * [This tutorial](https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/main/projects/neurons/IBL_ONE_tutorial.ipynb)\n", |
| 4570 | + " * [This tutorial](https://colab.research.google.com/drive/1y3sRI1wC7qbWqN6skvulzPOp6xw8tLm7#scrollTo=hRZA78AoaBIC)\n", |
4571 | 4571 | "\n", |
4572 | 4572 | "\n", |
4573 | 4573 | "The `Bayes Optimal` model is really the best an animal could do in estimating the block prior. However, mice are likely not that optimal.\n", |
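To make the idea concrete, here is a much-simplified sketch of prior estimation from observed stimulus sides, assuming a single fixed block with p(side) = 0.8/0.2 (matching the IBL biased blocks) and ignoring block switches entirely; the real Bayes-optimal observer also models the block transition structure, so treat this only as an illustration of the evidence-accumulation step:

```python
import numpy as np

def block_posterior(sides, p_high=0.8):
    """P(block = 'left-biased') after each trial, given observed stimulus
    sides (+1 = left, -1 = right). Simplified: assumes one fixed block and
    ignores the switching structure a full Bayes-optimal model handles."""
    llr = np.log(p_high / (1 - p_high))  # log-likelihood ratio per trial
    log_odds = 0.0                       # flat prior over the two blocks
    out = []
    for s in sides:
        log_odds += llr if s == 1 else -llr
        out.append(1.0 / (1.0 + np.exp(-log_odds)))  # sigmoid -> probability
    return np.array(out)

# Mostly-left evidence drives the posterior toward the left-biased block
post = block_posterior([1, 1, -1, 1, 1])
```

Each left stimulus adds log(0.8/0.2) to the log-odds and each right stimulus subtracts it, so the posterior converges quickly when the evidence is consistent.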
|