Frequently Asked Questions
--------------------------

Cropped field-of-view
~~~~~~~~~~~~~~~~~~~~~

Some issues on this topic: `#273 <https://github.com/MouseLand/suite2p/issues/273>`_,
`#207 <https://github.com/MouseLand/suite2p/issues/207>`_,
`#125 <https://github.com/MouseLand/suite2p/issues/125>`_.

Why does this happen? suite2p crops the field of view so that regions that move out of
view at the edges are not used for ROI detection and signal extraction. These regions
are excluded because they are not always in the FOV: they move in and out, so
activity there cannot be estimated reliably.

suite2p determines the region to crop from the maximum rigid shifts in X and Y. You can
view these shifts alongside the movie in the "View registered binary" window. If the
shifts are too large and do not seem accurate (a low-SNR regime), you can lower the
maximum shift suite2p is allowed to estimate by setting ``ops['maxregshift']`` below its
default of 0.1 (10% of the FOV size). suite2p also excludes some of the large outlier
shifts when computing the crop, and it determines the threshold for what counts as an
"outlier" with the parameter ``ops['th_badframes']``. Set this lower to flag more frames
as outliers. These frames are recorded in ``ops['badframes']`` and are also excluded
from ROI detection.
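
The parameter changes above can be sketched as follows. This is a minimal sketch using a
plain dict; in practice you would start from ``suite2p.default_ops()`` and the exact
values here are hypothetical examples, not recommendations:

```python
# Minimal sketch: override the registration parameters that control the crop
# before running suite2p. Defaults shown are those described in the text.
ops = {
    'maxregshift': 0.1,    # default: max rigid shift = 10% of FOV size
    'th_badframes': 1.0,   # default outlier threshold for shifts
}

# In a low-SNR recording where large shifts look spurious, tighten both so
# outlier shifts don't inflate the cropped region:
ops['maxregshift'] = 0.05   # allow shifts up to at most 5% of the FOV
ops['th_badframes'] = 0.5   # lower -> more frames flagged as "badframes"
```

These ops are then passed to ``suite2p.run_s2p`` along with your data paths.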

You can add frames to ``ops['badframes']`` yourself by creating a numpy array of frame
indices (0-based, so the first frame is 0) and saving it as ``bad_frames.npy`` in the
folder with your tiffs (if you have multiple folders, save it in the FIRST folder with
tiffs; if you use subfolders with ``look_one_level_down``, put it in the parent folder).
See :ref:`inputs` for more info.
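
A minimal sketch of creating that file (the frame indices here are hypothetical
examples):

```python
import numpy as np

# Frames to exclude, 0-based: frame 0 is the first frame of the recording.
bad_frames = np.array([0, 1, 2, 500, 501])

# Save next to your tiffs (in the FIRST data folder if there are several).
np.save('bad_frames.npy', bad_frames)
```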

What does deconvolution mean?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There is a lot of misinformation about what deconvolution is and is not. Some issues on this:
`#267 <https://github.com/MouseLand/suite2p/issues/267>`_,
`#202 <https://github.com/MouseLand/suite2p/issues/202>`_,
`#169 <https://github.com/MouseLand/suite2p/issues/169>`_.

TL;DR: Deconvolution will NOT tell you how many spikes occurred in a neuron; there is
too much variability in the calcium signal to know that. Our deconvolution has NO
sparsity constraints, and we recommend against thresholding the output values, because
they carry information about approximately how many spikes occurred. We found that using
the raw deconvolved values gave the most reliable responses to stimuli (as measured by
signal variance).

See this figure from our `review paper <https://www.sciencedirect.com/science/article/pii/S0959438818300977>`_ for reference:

.. image:: _static/fig4_review.png
   :width: 600


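To make the TL;DR concrete, here is a hypothetical sketch of computing stimulus
responses from the raw deconvolved trace without any thresholding. The array shapes, the
stimulus onset frames, and the window length are all made up for illustration; in
practice ``spks`` would come from suite2p's ``spks.npy`` output:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for suite2p's deconvolved output (spks.npy): n_cells x n_frames.
spks = rng.random((10, 1000))

# Hypothetical stimulus onset frames and response window length (in frames).
onsets = np.arange(100, 900, 100)
win = 10

# Average the RAW deconvolved values in each window -- no thresholding, so
# small and large events keep their relative amplitudes.
responses = np.stack([spks[:, t:t + win].mean(axis=1) for t in onsets], axis=1)
# responses has shape n_cells x n_trials
```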

Long answer (mostly from #267):

There is an unknown scaling factor between fluorescence and the number of spikes, and it
is very hard to estimate. This is true for the raw dF and dF/F as well as for the
deconvolved amplitudes, which we usually treat as arbitrary units. The same calcium
transient may have been generated by a single spike or by a burst of many spikes, and
for many neurons it is very hard to disentangle these, so we don't try. A few
spike-deconvolution algorithms do try to estimate single-spike amplitude (look up
"MLspike"), but we are generally suspicious of the results, and usually have no need for
absolute numbers of spikes.

As for thresholding, we always recommend against it, because you will lose information.
More importantly, you would treat 1-spike events the same as 10-spike events, which
isn't right. There are several L0-based methods that return discrete spike times,
including one we developed in the past, which we have since shown to be worse than the
vanilla OASIS method (see our
`J Neurosci paper <https://www.jneurosci.org/content/38/37/7976.abstract>`_).
We do not use L1 penalties either, departing from the original OASIS paper, because we
found that they hurt in all cases (see the same paper).

How do you compare across cells, then, if these values are arbitrary to some extent?

If you need to compare between cells, you would usually compare effect sizes such as
tuning width, SNR, or choice index, which are relative quantities (i.e. firing rate 1 /
firing rate 2). If you really need to compare absolute firing rates, then you need to
normalize the deconvolved events by the F0 of the fluorescence trace, because dF/F
should be more closely related to absolute firing rate. Computing F0 has problems of its
own: it may sometimes be estimated as negative or near zero for high-SNR sensors like
GCaMP6 and GCaMP7. You could take the mean F0 before neuropil subtraction, normalize by
that, and then decide on a threshold to use across all cells, but at that point these
choices will affect your results and interpretation, so you cannot put much weight on
them. For these reasons, I would avoid making statements about absolute firing rates
from calcium imaging data, and I don't know of many papers that make such statements.
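
One possible sketch of the F0 normalization described above. This is not suite2p's
method; the percentile choice and the synthetic ``F`` and ``spks`` arrays are
illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for suite2p outputs: raw fluorescence F (before neuropil
# subtraction) and deconvolved events spks, both n_cells x n_frames.
F = 100.0 + 10.0 * rng.random((5, 2000))
spks = rng.random((5, 2000))

# One possible F0: a low percentile of the raw trace, computed BEFORE
# neuropil subtraction so it stays positive for high-SNR sensors.
F0 = np.percentile(F, 10, axis=1, keepdims=True)

# Put deconvolved events in roughly dF/F-like units for cross-cell comparison.
spks_norm = spks / F0
```

Keep in mind that, as the text says, any such normalization choice will affect
downstream results and interpretation.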


Multiple functional channels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you have two channels and both have functional activity, then to process both you
need to run suite2p from a jupyter notebook. Here is an example notebook for that
purpose:
`multiple_functional_channels.ipynb <https://github.com/MouseLand/suite2p/blob/master/jupyter/multiple_functional_channels.ipynb>`_

Z-drift
~~~~~~~

It's not frequently asked about, but it should be :)

In the "View registered binary" window of the GUI, you can now load a z-stack
and compute the z-position of the recording over time.

ScanImage can now do z-correction ONLINE for you!

.. image:: _static/scanimage.png
   :width: 600