brainbox.task.closed_loop
Computes task-related output
Functions
compute_comparison_statistics
Compute a statistical test between two arrays.

differentiate_units
Determine units which significantly differentiate between two task events (e.g. stimulus left/right).

generate_pseudo_blocks
Generate a pseudo block structure.

generate_pseudo_session
Generate a complete pseudo session with biased blocks, all stimulus contrasts, choices, rewards, and omissions.

generate_pseudo_stimuli
Generate a block structure with stimuli.

get_impostor_target
Generate impostor targets by selecting from a list of current targets of variable length.

responsive_units
Determine responsive neurons with a Wilcoxon signed-rank test between a baseline period before a certain task event (e.g. stimulus onset) and a period after it.

roc_between_two_events
Calculate the area under the ROC curve that indicates how well the activity of the neuron distinguishes between two events (e.g. movement to the right vs. left).

roc_single_event
Determine how well neurons respond to a certain task event by calculating the area under the ROC curve between a baseline period before the event and a period after the event.
 responsive_units(spike_times, spike_clusters, event_times, pre_time=[0.5, 0], post_time=[0, 0.5], alpha=0.05, fdr_corr=False, use_fr=False)[source]
Determine responsive neurons by performing a Wilcoxon signed-rank test between a baseline period before a certain task event (e.g. stimulus onset) and a period after the task event.
 Parameters
spike_times (1D array) – spike times (in seconds)
spike_clusters (1D array) – cluster ids corresponding to each spike in spike_times
event_times (1D array) – times (in seconds) of the events from the two groups
pre_time (two-element array) – time (in seconds) preceding the event to get the baseline (e.g. [0.5, 0.2] would be a window starting 0.5 seconds before the event and ending 0.2 seconds before the event)
post_time (two-element array) – time (in seconds) to follow the event times
alpha (float) – alpha to use for statistical significance
fdr_corr (boolean) – whether to apply an FDR (Benjamini-Hochberg) correction for multiple testing
use_fr (bool) – whether to use the firing rate instead of total spike count
 Returns
significant_units (ndarray) – an array with the indices of clusters that are significantly modulated
stats (1D array) – the statistic of the test that was performed
p_values (ndarray) – the p-values of all the clusters
cluster_ids (ndarray) – cluster ids of the p-values
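As a minimal illustration of the windowing convention behind the signed-rank comparison (the helper names below are hypothetical, not part of brainbox, and the real implementation is vectorized with NumPy), the paired baseline/response counts could be gathered like this:

```python
def count_spikes_in_window(spike_times, start, stop):
    """Count spikes with start <= t < stop."""
    return sum(1 for t in spike_times if start <= t < stop)

def baseline_and_response_counts(spike_times, event_times,
                                 pre_time=(0.5, 0), post_time=(0, 0.5)):
    """Return one (baseline, response) spike-count pair per event.
    pre_time=(0.5, 0) is the window from 0.5 s before the event
    up to the event itself, matching the docstring convention."""
    pairs = []
    for ev in event_times:
        base = count_spikes_in_window(spike_times, ev - pre_time[0], ev - pre_time[1])
        resp = count_spikes_in_window(spike_times, ev + post_time[0], ev + post_time[1])
        pairs.append((base, resp))
    return pairs

# One spike in the 0.5 s baseline, three in the 0.5 s response window
print(baseline_and_response_counts([9.7, 10.1, 10.2, 10.3], [10.0]))  # [(1, 3)]
```

The signed-rank test is then applied to the paired baseline and response counts (or rates, when use_fr=True) across events.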
 differentiate_units(spike_times, spike_clusters, event_times, event_groups, pre_time=0, post_time=0.5, test='ranksums', alpha=0.05, fdr_corr=False)[source]
Determine units which significantly differentiate between two task events (e.g. stimulus left/right) by performing a statistical test between the spike rates elicited by the two events. The default is a Wilcoxon rank-sum test.
 Parameters
spike_times (1D array) – spike times (in seconds)
spike_clusters (1D array) – cluster ids corresponding to each spike in spike_times
event_times (1D array) – times (in seconds) of the events from the two groups
event_groups (1D array) – group identities of the events as either 0 or 1
pre_time (float) – time (in seconds) to precede the event times to get the baseline
post_time (float) – time (in seconds) to follow the event times
test (string) – which statistical test to use; options are:
'ranksums' : Wilcoxon rank-sum test
'signrank' : Wilcoxon signed-rank test (for paired observations)
'ttest' : independent-samples t-test
'paired_ttest' : paired t-test
alpha (float) – alpha to use for statistical significance
fdr_corr (boolean) – whether to apply an FDR (Benjamini-Hochberg) correction for multiple testing
 Returns
significant_units (1D array) – an array with the indices of clusters that are significantly modulated
stats (1D array) – the statistic of the test that was performed
p_values (1D array) – the p-values of all the clusters
cluster_ids (ndarray) – cluster ids of the p-values
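The fdr_corr option refers to the Benjamini-Hochberg procedure. As a sketch of what that correction does (brainbox may delegate to a library routine; this pure-Python version is only illustrative):

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg FDR control: return a list of booleans,
    True where the null hypothesis is rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    # ... then reject every hypothesis with rank <= k.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6]))
# [True, True, False, False, False]
```

Note that 0.039 and 0.041 would survive an uncorrected alpha of 0.05 but not the step-up thresholds of 0.03 and 0.04.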
 compute_comparison_statistics(value1, value2, test='ranksums', alpha=0.05, fdr_corr=False)[source]
Compute a statistical test between two arrays
 Parameters
value1 (1D array) – first array of values to compare
value2 (1D array) – second array of values to compare
test (string) – which statistical test to use; options are:
'ranksums' : Wilcoxon rank-sum test
'signrank' : Wilcoxon signed-rank test (for paired observations)
'ttest' : independent-samples t-test
'paired_ttest' : paired t-test
alpha (float) – alpha to use for statistical significance
fdr_corr (boolean) – whether to apply an FDR (Benjamini-Hochberg) correction for multiple testing
 Returns
significant_units (1D array) – an array with the indices of values that are significantly modulated
stats (1D array) – the statistic of the test that was performed
p_values (1D array) – the p-values of all the values
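For intuition about the default test='ranksums', here is a self-contained normal-approximation version of the Wilcoxon rank-sum test (function names are hypothetical; the behavior is intended to mirror scipy.stats.ranksums, which uses the same approximation without tie or continuity corrections):

```python
import math

def average_ranks(values):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # 1-based average rank
        i = j + 1
    return ranks

def ranksum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns (z statistic, p-value)."""
    n1, n2 = len(x), len(y)
    ranks = average_ranks(list(x) + list(y))
    w = sum(ranks[:n1])                      # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal tail
    return z, p

z, p = ranksum_test([1, 2, 3], [4, 5, 6])   # z about -1.96, p about 0.05
```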
 roc_single_event(spike_times, spike_clusters, event_times, pre_time=[0.5, 0], post_time=[0, 0.5])[source]
Determine how well neurons respond to a certain task event by calculating the area under the ROC curve between a baseline period before the event and a period after the event. Values > 0.5 indicate the neuron responds positively to the event and values < 0.5 indicate a negative response.
 Parameters
spike_times (1D array) – spike times (in seconds)
spike_clusters (1D array) – cluster ids corresponding to each spike in spike_times
event_times (1D array) – times (in seconds) of the events from the two groups
pre_time (two-element array) – time (in seconds) preceding the event to get the baseline (e.g. [0.5, 0.2] would be a window starting 0.5 seconds before the event and ending 0.2 seconds before the event)
post_time (two-element array) – time (in seconds) to follow the event times
 Returns
auc_roc (1D array) – the area under the ROC curve
cluster_ids (1D array) – cluster ids of the AUC values
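The AUC here has a simple pairwise interpretation via the Mann-Whitney statistic: it is the probability that a response-window count exceeds a baseline-window count, with ties counted half. A hypothetical pure-Python sketch (the real function computes this per cluster from binned spike counts):

```python
def auc_from_counts(baseline_counts, response_counts):
    """AUC as P(response > baseline) + 0.5 * P(response == baseline),
    computed over all baseline/response count pairs."""
    wins = ties = 0.0
    for b in baseline_counts:
        for r in response_counts:
            if r > b:
                wins += 1
            elif r == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(baseline_counts) * len(response_counts))

print(auc_from_counts([0, 1], [2, 3]))  # 1.0 -> strong positive response
print(auc_from_counts([1, 2], [1, 2]))  # 0.5 -> no response
```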
 roc_between_two_events(spike_times, spike_clusters, event_times, event_groups, pre_time=0, post_time=0.25)[source]
Calculate the area under the ROC curve that indicates how well the activity of the neuron distinguishes between two events (e.g. movement to the right vs. left). A value of 0.5 indicates the neuron cannot distinguish between the two events. A value of 0 or 1 indicates maximum distinction. Significance is determined by bootstrapping the ROC curves: if 0.5 falls outside the central 95% of the bootstrapped distribution, the neuron is deemed significant.
 Parameters
spike_times (1D array) – spike times (in seconds)
spike_clusters (1D array) – cluster ids corresponding to each spike in spike_times
event_times (1D array) – times (in seconds) of the events from the two groups
event_groups (1D array) – group identities of the events as either 0 or 1
pre_time (float) – time (in seconds) to precede the event times
post_time (float) – time (in seconds) to follow the event times
 Returns
auc_roc (1D array) – an array of the area under the ROC curve for every neuron
cluster_ids (1D array) – cluster ids of the AUC values
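The bootstrap significance criterion described above can be sketched as follows (hypothetical function names; brainbox's vectorized implementation and its exact resampling scheme may differ):

```python
import random

def auc(group0, group1):
    """Mann-Whitney form of the AUC between two groups of spike counts."""
    wins = ties = 0.0
    for a in group0:
        for b in group1:
            if b > a:
                wins += 1
            elif b == a:
                ties += 1
    return (wins + 0.5 * ties) / (len(group0) * len(group1))

def bootstrap_auc_significant(group0, group1, n_boot=1000, seed=0):
    """True if 0.5 falls outside the central 95% of bootstrapped AUCs."""
    rng = random.Random(seed)
    aucs = []
    for _ in range(n_boot):
        resampled0 = [rng.choice(group0) for _ in group0]
        resampled1 = [rng.choice(group1) for _ in group1]
        aucs.append(auc(resampled0, resampled1))
    aucs.sort()
    lo = aucs[int(0.025 * n_boot)]       # 2.5th percentile
    hi = aucs[int(0.975 * n_boot) - 1]   # 97.5th percentile
    return not (lo <= 0.5 <= hi)
```

Well-separated groups yield a bootstrap interval that excludes 0.5 (significant); identical groups yield an interval straddling 0.5 (not significant).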
 generate_pseudo_blocks(n_trials, factor=60, min_=20, max_=100, first5050=90)[source]
Generate a pseudo block structure
 Parameters
n_trials (int) – how many trials to generate
factor (int) – scale (mean) of the exponential distribution from which block lengths are drawn
min_ (int) – minimum number of trials per block
max_ (int) – maximum number of trials per block
first5050 (int) – number of trials with 50/50 left/right probability at the beginning of the session
 Returns
probabilityLeft – array with probability left per trial
 Return type
1D array
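A minimal sketch of the block-generation logic, assuming block lengths are drawn from an exponential distribution with mean factor and resampled until they fall within [min_, max_] (the function name and seed parameter here are illustrative, not brainbox's):

```python
import random

def pseudo_blocks(n_trials, factor=60, min_=20, max_=100, first5050=90, seed=42):
    """After first5050 trials at p(left)=0.5, alternate 0.2/0.8 biased blocks
    whose lengths are exponential(mean=factor) draws truncated to [min_, max_]."""
    rng = random.Random(seed)
    p_left = [0.5] * first5050
    side = rng.choice([0.2, 0.8])             # first biased block side is random
    while len(p_left) < n_trials:
        length = -1
        while not (min_ <= length <= max_):   # resample until within bounds
            length = int(rng.expovariate(1 / factor))
        p_left.extend([side] * length)
        side = 0.8 if side == 0.2 else 0.2    # alternate block identity
    return p_left[:n_trials]

blocks = pseudo_blocks(400)
```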
 generate_pseudo_stimuli(n_trials, contrast_set=[0, 0.06, 0.12, 0.25, 1], first5050=90)[source]
Generate a block structure with stimuli
 Parameters
n_trials (int) – number of trials to generate
contrast_set (1D array) – the contrasts that are presented. The default is [0, 0.06, 0.12, 0.25, 1].
first5050 (int) – Number of 50/50 trials at the beginning of the session. The default is 90.
 Returns
p_left (1D array) – probability of left stimulus
contrast_left (1D array) – contrast on the left
contrast_right (1D array) – contrast on the right
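The stimulus-assignment step can be sketched as follows, assuming for simplicity a uniform draw over the contrast set (the helper name is hypothetical; brainbox would store NaN rather than None for the absent side):

```python
import random

def pseudo_stimuli(p_left_per_trial, contrast_set=(0, 0.06, 0.12, 0.25, 1), seed=0):
    """For each trial, draw the stimulus side from that trial's p(left) and a
    contrast uniformly from contrast_set; the unused side is set to None."""
    rng = random.Random(seed)
    contrast_left, contrast_right = [], []
    for p in p_left_per_trial:
        contrast = rng.choice(contrast_set)
        if rng.random() < p:                  # stimulus appears on the left
            contrast_left.append(contrast)
            contrast_right.append(None)
        else:
            contrast_left.append(None)
            contrast_right.append(contrast)
    return contrast_left, contrast_right
```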
 generate_pseudo_session(trials, generate_choices=True, contrast_distribution='nonuniform')[source]
Generate a complete pseudo session with biased blocks, all stimulus contrasts, choices, rewards, and omissions. Biased blocks and stimulus contrasts are generated using the same statistics as the actual task. The animal's choices are generated using its actual psychometrics in the session: for each synthetic trial the choice is drawn from a Bernoulli distribution biased according to the proportion of times the animal chose left for that stimulus contrast, side, and block probability. No-go trials are ignored when generating the synthetic choices.
 Parameters
trials (DataFrame) – Pandas dataframe with columns as trial vectors loaded using ONE
generate_choices (bool) – whether to generate the choices (runs faster without)
contrast_distribution (str ['uniform', 'nonuniform']) – the absolute contrast distribution. If 'uniform', the zero contrast is as likely as the other contrasts (BiasedChoiceWorld task). If 'nonuniform', the zero contrast is half as likely to occur (EphysChoiceWorld task). ('biased' is kept for compatibility, but is deprecated as it is confusing.)
 Returns
pseudo_trials – a trials dataframe with synthetically generated trials
 Return type
DataFrame
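The choice-generation step can be sketched as below. The trial representation and choice encoding (-1 = left, 1 = right, 0 = no-go) are hypothetical simplifications of the ONE trials dataframe, chosen only to illustrate the Bernoulli draw described above:

```python
import random

def empirical_p_left(trials, signed_contrast, block_p_left):
    """Fraction of left choices among matching non-no-go trials; falls back
    to 0.5 when the condition never occurred in the real session.
    trials: list of (signed_contrast, block_p_left, choice) tuples."""
    matches = [c for sc, pb, c in trials
               if sc == signed_contrast and pb == block_p_left and c != 0]
    if not matches:
        return 0.5
    return sum(1 for c in matches if c == -1) / len(matches)

def draw_choice(p_left, rng):
    """Bernoulli draw biased by the empirical proportion of left choices."""
    return -1 if rng.random() < p_left else 1

trials = [(0.25, 0.8, -1), (0.25, 0.8, -1), (0.25, 0.8, 1), (0.25, 0.8, 0)]
print(empirical_p_left(trials, 0.25, 0.8))  # 2/3: the no-go trial is ignored
```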
 get_impostor_target(targets, labels, current_label=None, seed_idx=None, verbose=False)[source]
Generate impostor targets by selecting from a list of current targets of variable length. Targets are selected and stitched together to the length of the current labeled target, aka ‘Frankenstein’ targets, often used for evaluating a null distribution while decoding.
 Parameters
targets (list of all targets) – targets may be arrays of any dimension (a,b,…,z) but must have the same shape except for the last dimension, z. All targets must have z > 0.
labels (numpy array of strings) – labels corresponding to each target, e.g. session eid. Only targets with unique labels are used to create the impostor target. Typically, use the eid as the label because each eid has a unique target.
current_label (string) – targets with the current label are not used to create impostor target. Size of corresponding target is used to determine size of impostor target. If None, a random selection from the set of unique labels is used.
 Returns
impostor_final
 Return type
numpy array, same shape as all targets except last dimension
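The stitching idea can be sketched for 1-D targets (the function name and seed handling here are hypothetical; the real function works on arrays of any leading dimensionality and stitches along the last axis):

```python
import random

def impostor_target(targets, labels, current_label, seed=0):
    """Concatenate randomly chosen targets from *other* labels, then trim
    to the length of the current label's target ('Frankenstein' target)."""
    rng = random.Random(seed)
    needed = len(targets[labels.index(current_label)])
    pool = [t for t, lab in zip(targets, labels) if lab != current_label]
    stitched = []
    while len(stitched) < needed:
        stitched.extend(rng.choice(pool))  # stitch whole targets end to end
    return stitched[:needed]               # trim to the required length

targets = [[1, 1, 1, 1, 1], [2, 2, 2], [3, 3, 3, 3]]
frank = impostor_target(targets, ['a', 'b', 'c'], 'a')
```

Because the impostor is built only from other sessions' targets, it preserves their temporal statistics while being unrelated to the current session, which is what makes it useful as a decoding null.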