| 
112 | 112 |    "source": [  | 
113 | 113 |     "### **Activate the DataJoint Pipeline**\n",  | 
114 | 114 |     "\n",  | 
115 |  | -    "This tutorial activates the `ephys-acute.py` module from `element-array-ephys`, along\n",  | 
 | 115 | +    "This tutorial activates the `ephys_acute.py` module from `element-array-ephys`, along\n",  | 
116 | 116 |     "with upstream dependencies from `element-animal` and `element-session`. Please refer to the\n",  | 
117 | 117 |     "[`tutorial_pipeline.py`](./tutorial_pipeline.py) for the source code."  | 
118 | 118 |    ]  | 
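For readers following along outside the notebook, here is a minimal sketch of what activating the pipeline typically looks like; the module names are assumed from the standard Elements tutorial layout, and [`tutorial_pipeline.py`](./tutorial_pipeline.py) remains the authoritative source.

```python
# Minimal sketch, assuming tutorial_pipeline.py activates these schemas
# as in the standard DataJoint Elements tutorial layout.
from tutorial_pipeline import lab, subject, session, probe, ephys

# Each imported name is an activated DataJoint schema module; for example,
# preview the Session table to confirm the pipeline is connected:
print(session.Session())
```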
 | 
1065 | 1065 |    "cell_type": "markdown",  | 
1066 | 1066 |    "metadata": {},  | 
1067 | 1067 |    "source": [  | 
1068 |  | -    "Every experimental session produces a set of data files. The purpose of the `SessionDirectory` table is to locate these files. It references a directory path relative to a root directory, defined in `dj.config[\\\"custom\\\"]`. More information about `dj.config` is provided in the [documentation](https://datajoint.com/docs/elements/user-guide/)."  | 
 | 1068 | +    "Every experimental session produces a set of data files. The purpose of the `SessionDirectory` table is to locate these files. It references a directory path relative to a root directory, defined in `dj.config[\"custom\"]`. More information about `dj.config` is provided in the [documentation](https://datajoint.com/docs/elements/user-guide/)."  | 
1069 | 1069 |    ]  | 
1070 | 1070 |   },  | 
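For illustration, a hedged sketch of how that root directory is usually configured; the `custom` key names follow the convention used by `element-array-ephys`, and the prefix and paths below are placeholders.

```python
import datajoint as dj

# Placeholder values -- replace with your own database prefix and raw-data root(s).
dj.config["custom"] = {
    "database.prefix": "neuro_",
    "ephys_root_data_dir": ["/path/to/ephys_root_data_dir"],
}

# A SessionDirectory entry then stores a path relative to that root, e.g.:
# session.SessionDirectory.insert1({**session_key, "session_dir": "subject5/session1"})
```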
1071 | 1071 |   {  | 
 | 
1557 | 1557 |    "source": [  | 
1558 | 1558 |     "### **Populate electrophysiology recording metadata**\n",  | 
1559 | 1559 |     "\n",  | 
1560 |  | -    "In the upcoming cells, populate the `ephys.EphysRecording` table and its part table `ephys.EphysRecording.EphysFile` will extract and store the recording information from a given experimental session."  | 
 | 1560 | +    "In the upcoming cells, the `.populate()` method will automatically extract and store the\n",  | 
 | 1561 | +    "recording metadata for each experimental session in the `ephys.EphysRecording` table and its part table `ephys.EphysRecording.EphysFile`."  | 
1561 | 1562 |    ]  | 
1562 | 1563 |   },  | 
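A minimal sketch of such a populate call, assuming a `session_key` restriction dictionary whose values are placeholders:

```python
# `ephys` comes from the activated pipeline (see tutorial_pipeline.py).
# Placeholder key restricting populate to one session; omit it to process all sessions.
session_key = dict(subject="subject5", session_datetime="2023-01-01 00:00:00")

ephys.EphysRecording.populate(session_key, display_progress=True)
```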
1563 | 1564 |   {  | 
 | 
2194 | 2195 |    "cell_type": "markdown",  | 
2195 | 2196 |    "metadata": {},  | 
2196 | 2197 |    "source": [  | 
2197 |  | -    "Now that we've inserted kilosort parameters into the `ClusteringParamSet` table,\n",  | 
2198 |  | -    "we're almost ready to sort our data. DataJoint uses a `ClusteringTask` table to\n",  | 
 | 2198 | +    "DataJoint uses a `ClusteringTask` table to\n",  | 
2199 | 2199 |     "manage which `EphysRecording` and `ClusteringParamSet` should be used during processing. \n",  | 
2200 | 2200 |     "\n",  | 
2201 | 2201 |     "This table defines several important aspects of\n",  | 
 | 
2235 | 2235 |    "metadata": {},  | 
2236 | 2236 |    "source": [  | 
2237 | 2237 |     "The `ClusteringTask` table contains two important attributes: \n",  | 
2238 |  | -    "+ `paramset_idx` \n",  | 
2239 |  | -    "+ `task_mode` \n",  | 
2240 |  | -    "\n",  | 
2241 |  | -    "The `paramset_idx` attribute tracks\n",  | 
2242 |  | -    "your kilosort parameter sets. You can choose the parameter set using which \n",  | 
2243 |  | -    "you want spike sort ephys data. For example, `paramset_idx=0` may contain\n",  | 
2244 |  | -    "default parameters for kilosort processing whereas `paramset_idx=1` contains your custom parameters for sorting. This\n",  | 
2245 |  | -    "attribute tells the `Processing` table which set of parameters you are processing in a given `populate()`.\n",  | 
2246 |  | -    "\n",  | 
2247 |  | -    "The `task_mode` attribute can be set to either `load` or `trigger`. When set to `load`,\n",  | 
2248 |  | -    "running the processing step initiates a search for exisiting kilosort output files. When set to `trigger`, the\n",  | 
2249 |  | -    "processing step will run kilosort on the raw data. "  | 
 | 2238 | +    "+ `paramset_idx` - Allows the user to choose the parameter set with which to run\n",  | 
 | 2239 | +    "  spike sorting.\n",  | 
 | 2240 | +    "+ `task_mode` - Can be set to `load` or `trigger`. When set to `load`, running the\n",  | 
 | 2241 | +    "  Clustering step initiates a search for existing output files of the spike sorting\n",  | 
 | 2242 | +    "  algorithm defined in `ClusteringParamSet`. When set to `trigger`, the processing step\n",  | 
 | 2243 | +    "  will run spike sorting on the raw data."  | 
2250 | 2244 |    ]  | 
2251 | 2245 |   },  | 
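For illustration, a hedged sketch of how a clustering task entry might be registered; the attribute names follow `element-array-ephys`, while the key values and output directory below are placeholders.

```python
# `ephys` and `session_key` as set up when activating the pipeline above;
# all concrete values here are illustrative, not from the tutorial dataset.
ephys.ClusteringTask.insert1(
    dict(
        **session_key,                 # placeholder session key
        insertion_number=0,            # which probe insertion to sort
        paramset_idx=0,                # which ClusteringParamSet to use
        clustering_output_dir="processed/subject5/probe0/kilosort",  # placeholder path
        task_mode="load",              # "load" existing results or "trigger" spike sorting
    )
)
```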
2252 | 2246 |   {  | 
 | 
2266 | 2260 |     ")"  | 
2267 | 2261 |    ]  | 
2268 | 2262 |   },  | 
 | 2263 | +  {  | 
 | 2264 | +   "cell_type": "markdown",  | 
 | 2265 | +   "metadata": {},  | 
 | 2266 | +   "source": [  | 
 | 2267 | +    "Let's call `populate()` on the `Clustering` table, which searches for existing kilosort results since `task_mode=load`."  | 
 | 2268 | +   ]  | 
 | 2269 | +  },  | 
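A minimal sketch of that call; the restriction key is a placeholder and can be omitted to process every pending task.

```python
# `ephys` and `session_key` as defined above.
ephys.Clustering.populate(session_key, display_progress=True)
```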
2269 | 2270 |   {  | 
2270 | 2271 |    "cell_type": "code",  | 
2271 | 2272 |    "execution_count": 28,  | 
 | 