Curation Tutorial

After spike sorting and computing quality metrics, you can automatically curate the spike sorting output using the quality metrics that you have calculated.

Import the necessary modules and/or functions from spikeinterface:

import spikeinterface.core as si

from spikeinterface.qualitymetrics import compute_quality_metrics

Let’s generate a simulated dataset, and imagine that the ground-truth sorting is in fact the output of a sorter.

recording, sorting = si.generate_ground_truth_recording()
print(recording)
print(sorting)
GroundTruthRecording (InjectTemplatesRecording): 4 channels - 25.0kHz - 1 segments
                      250,000 samples - 10.00s - float32 dtype - 3.81 MiB
GroundTruthSorting (NumpySorting): 10 units - 1 segments - 25.0kHz
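The printed summary can be sanity-checked with simple arithmetic: the duration is the number of samples divided by the sampling frequency, and the in-memory size is samples × channels × bytes per float32 sample. A minimal check, using the numbers from the output above:

```python
# Sanity-check the printed recording summary:
# duration = samples / sampling rate; size = samples * channels * dtype bytes.
num_samples = 250_000
sampling_frequency_hz = 25_000.0
num_channels = 4
bytes_per_sample = 4  # float32

duration_s = num_samples / sampling_frequency_hz
size_mib = num_samples * num_channels * bytes_per_sample / (1024 ** 2)

print(duration_s)          # 10.0
print(round(size_mib, 2))  # 3.81
```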

Create SortingAnalyzer

For this example, we first need a SortingAnalyzer with some extensions computed:

analyzer = si.create_sorting_analyzer(sorting=sorting, recording=recording, format="memory")
analyzer.compute(["random_spikes", "waveforms", "templates", "noise_levels"])

analyzer.compute("principal_components", n_components=3, mode="by_channel_local")
print(analyzer)
estimate_sparsity (no parallelization): 100%|██████████| 10/10 [00:00<00:00, 396.31it/s]
compute_waveforms (no parallelization): 100%|██████████| 10/10 [00:00<00:00, 308.40it/s]
noise_level (no parallelization): 100%|██████████| 20/20 [00:00<00:00, 250.20it/s]
Fitting PCA: 100%|██████████| 10/10 [00:00<00:00, 134.31it/s]
Projecting waveforms: 100%|██████████| 10/10 [00:00<00:00, 1286.91it/s]
SortingAnalyzer: 4 channels - 10 units - 1 segments - memory - sparse - has recording
Loaded 5 extensions: random_spikes, waveforms, templates, noise_levels, principal_components
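The "noise_levels" extension computed above estimates per-channel noise with a robust statistic so that spikes themselves do not inflate the estimate. Below is a library-free sketch of the underlying idea, a median-absolute-deviation (MAD) noise estimate scaled to match a Gaussian standard deviation; it is illustrative only and is not SpikeInterface's actual implementation:

```python
import statistics

def mad_noise_level(trace):
    """MAD of a trace, scaled to be comparable to a Gaussian std.

    Robust to outliers (e.g. spikes), unlike a plain standard deviation.
    """
    med = statistics.median(trace)
    mad = statistics.median(abs(x - med) for x in trace)
    return mad / 0.6744897501960817  # Gaussian consistency factor

# Toy trace, not real data:
print(round(mad_noise_level([-2.0, -1.0, 0.0, 1.0, 2.0]), 3))  # 1.483
```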

Then we compute some quality metrics:

metrics = compute_quality_metrics(analyzer, metric_names=["snr", "isi_violation", "nearest_neighbor"])
print(metrics)
calculate pc_metrics: 100%|██████████| 10/10 [00:00<00:00, 37.68it/s]
         snr  isi_violations_ratio  ...  nn_hit_rate  nn_miss_rate
0   5.069769                   0.0  ...     0.705556      0.027962
1   6.367202                   0.0  ...     0.753185      0.033096
2  13.487492                   0.0  ...     0.831560      0.019586
3   5.370398                   0.0  ...     0.741776      0.027757
4  19.533713                   0.0  ...     0.891369      0.011312
5  27.209492                   0.0  ...     0.911074      0.009851
6   5.797605                   0.0  ...     0.770134      0.029182
7  37.457651                   0.0  ...     0.930195      0.005784
8  50.711499                   0.0  ...     0.963576      0.005957
9  21.644221                   0.0  ...     0.891304      0.007743

[10 rows x 5 columns]
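To build intuition for the `isi_violations_ratio` column above: it is based on counting inter-spike intervals (ISIs) shorter than a refractory period, which a well-isolated unit should almost never produce. A simplified, pure-Python sketch of that idea (not SpikeInterface's actual metric, which uses a more careful rate-based estimator):

```python
def isi_violation_fraction(spike_times_s, refractory_s=0.0015):
    """Fraction of inter-spike intervals shorter than the refractory period.

    spike_times_s must be sorted in ascending order (seconds).
    """
    isis = [b - a for a, b in zip(spike_times_s, spike_times_s[1:])]
    if not isis:
        return 0.0
    return sum(isi < refractory_s for isi in isis) / len(isis)

# Toy spike train (seconds): two of the four ISIs violate the refractory period.
spikes = [0.010, 0.020, 0.0205, 0.050, 0.051]
print(isi_violation_fraction(spikes))  # 0.5
```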

We can now threshold each quality metric and select units based on some rules.

The easiest and most intuitive way is boolean masking with the metrics DataFrame; from the mask we then build a list of the unit ids that we want to keep:

keep_mask = (metrics["snr"] > 7.5) & (metrics["isi_violations_ratio"] < 0.2) & (metrics["nn_hit_rate"] > 0.90)
print(keep_mask)

keep_unit_ids = keep_mask[keep_mask].index.values
keep_unit_ids = list(keep_unit_ids)  # convert to a plain Python list
print(keep_unit_ids)
0    False
1    False
2    False
3    False
4    False
5     True
6    False
7     True
8     True
9    False
dtype: bool
['5', '7', '8']
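If you apply the same curation rules across many recordings, it can help to express them as data rather than hard-coded expressions. Below is a minimal, library-free sketch of that pattern; the metric values are illustrative numbers loosely based on the table above, and `select_units` is a hypothetical helper, not a SpikeInterface function:

```python
def select_units(metrics, rules):
    """Return unit ids whose metrics satisfy every (metric, op, threshold) rule."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    keep = []
    for unit_id, values in metrics.items():
        if all(ops[op](values[name], thr) for name, op, thr in rules):
            keep.append(unit_id)
    return keep

# Illustrative per-unit metrics (not real measurements):
metrics = {
    "5": {"snr": 27.2, "isi_violations_ratio": 0.0, "nn_hit_rate": 0.91},
    "6": {"snr": 5.8,  "isi_violations_ratio": 0.0, "nn_hit_rate": 0.77},
    "7": {"snr": 37.5, "isi_violations_ratio": 0.0, "nn_hit_rate": 0.93},
}
rules = [
    ("snr", ">", 7.5),
    ("isi_violations_ratio", "<", 0.2),
    ("nn_hit_rate", ">", 0.90),
]
print(select_units(metrics, rules))  # ['5', '7']
```

The same rule list can then be versioned alongside your analysis code and reused across datasets.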

Now let's create a sorting that contains only the curated units, and save it:

curated_sorting = sorting.select_units(keep_unit_ids)
print(curated_sorting)


curated_sorting.save(folder="curated_sorting")
GroundTruthSorting (UnitsSelectionSorting): 3 units - 1 segments - 25.0kHz
NumpyFolder (NumpyFolderSorting): 3 units - 1 segments - 25.0kHz
Unit IDs
    ['5' '7' '8']
Annotations
  • name : GroundTruthSorting
Properties
    gt_unit_locations: [[-5.1932306 -9.537159 25.08609 ] [-9.463798 28.898481 10.197775 ] [27.645708 26.040428 13.223235 ]]


We can also save the analyzer with only these units:

clean_analyzer = analyzer.select_units(unit_ids=keep_unit_ids, format="zarr", folder="clean_analyzer")

print(clean_analyzer)
SortingAnalyzer: 4 channels - 3 units - 1 segments - zarr - sparse - has recording
Loaded 6 extensions: random_spikes, waveforms, templates, noise_levels, principal_components, quality_metrics
