brainbox.quality.permutation_test

Quality control for arbitrary metrics, using permutation testing.

Written by Sebastian Bruijns

Functions

permut_test

Compute the probability of observing the metric difference between datasets, via permutation testing.

plot_permut_test

Plot permutation test result.

permut_test(data1, data2, metric, n_permut=1000, show=False, title=None)[source]

Compute the probability of observing the metric difference between datasets, via permutation testing.

We take the absolute value of the difference, because the order of the dataset inputs shouldn’t matter. We currently only compute means; what if we want to apply a more complicated function to the permutation result? Pay attention to always pass a list (even if it’s just one dataset, though then the test doesn’t make much sense anyway…).
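As a rough illustration of the procedure described above, here is a minimal sketch of a permutation test of this form. It is a hypothetical helper, not the brainbox implementation; the function name and the convention that the metric reduces a whole dataset to one number are assumptions.

import numpy as np

def permutation_p_value(data1, data2, metric, n_permut=1000, rng=None):
    # Illustrative sketch only (hypothetical helper, not the brainbox code).
    rng = np.random.default_rng() if rng is None else rng
    # Observed statistic: absolute difference, so input order does not matter.
    true_diff = abs(metric(data1) - metric(data2))
    pooled = list(data1) + list(data2)
    n1 = len(data1)
    permut_diffs = np.empty(n_permut)
    for i in range(n_permut):
        perm = rng.permutation(len(pooled))
        shuffled1 = [pooled[j] for j in perm[:n1]]
        shuffled2 = [pooled[j] for j in perm[n1:]]
        permut_diffs[i] = abs(metric(shuffled1) - metric(shuffled2))
    # p-value: fraction of shuffled differences at least as large as the observed one.
    return float(np.mean(permut_diffs >= true_diff))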

Parameters:
  • data1 (array-like) – First data set: a list or array of data-entities to use for the permutation test (make data2 optional and then the permutation test is more similar to tuning sensitivity?)

  • data2 (array-like) – Second data set: also a list or array of data-entities to use for the permutation test

  • metric (function, array-like -> float) – Metric to use for the permutation test; it is used to reduce the elements of data1 and data2 to one number

  • n_permut (integer (optional)) – Number of permutations to use for the test

  • show (Boolean (optional)) – Whether or not to show a plot of the permutation distribution, with a marker for the position of the true difference relative to that distribution

Returns:

p – p-value of true difference in permutation distribution

Return type:

float

See also

TODO

Examples

TODO:
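A minimal usage sketch, assuming the signature shown above; the data and the choice of metric are hypothetical and only meant to show the expected shapes (a list of data-entities per dataset, a metric that reduces one such list to a single number).

import numpy as np
from brainbox.quality.permutation_test import permut_test

# Hypothetical data: two groups of per-entity measurement arrays.
rng = np.random.default_rng(0)
data1 = [rng.normal(5.0, 1.0, size=50) for _ in range(10)]
data2 = [rng.normal(5.5, 1.0, size=50) for _ in range(10)]

# Metric reduces a whole group (a list of arrays) to a single number.
metric = lambda data: np.mean([np.mean(d) for d in data])

p = permut_test(data1, data2, metric, n_permut=1000, show=True,
                title='Permutation test of group means')
print(p)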

plot_permut_test(permut_diffs, true_diff, p, title=None)[source]

Plot permutation test result.
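A minimal usage sketch, assuming the signature shown above; the permutation distribution, observed difference, and p-value below are made up for illustration.

import numpy as np
from brainbox.quality.permutation_test import plot_permut_test

# Hypothetical inputs: null distribution of shuffled differences,
# the observed (true) difference, and its p-value.
permut_diffs = np.abs(np.random.default_rng(0).normal(0.0, 1.0, size=1000))
true_diff = 2.5
p = float(np.mean(permut_diffs >= true_diff))

plot_permut_test(permut_diffs, true_diff, p, title='Permutation distribution')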