brainbox.behavior.dlc

Set of functions for working with dlc (DeepLabCut) data.

Functions

get_dlc_everything

Extract the features of interest from the dlc output

get_feature_event_times

Detect events from the dlc traces.

get_licks

Compute lick times from the tongue dlc points

get_pupil_diameter

Estimates the pupil diameter as the median of several independent computations.

get_smooth_pupil_diameter

Smooth raw pupil diameter estimates, removing outliers and interpolating short runs of NaNs.

get_sniffs

Compute sniff times from the nose tip

get_speed

FIXME Document and add unit test!

get_speed_for_features

Wrapper to compute speed for a number of dlc features and add them to the dlc table

insert_idx

likelihood_threshold

Set dlc points with likelihood less than threshold to nan.

plot_lick_hist

Plots histogram of lick events aligned to feedback time, separately for correct and incorrect trials

plot_lick_raster

Plots lick raster for correct trials

plot_motion_energy_hist

Plots mean motion energy of given cameras, aligned to stimulus onset.

plot_pupil_diameter_hist

Plots histogram of pupil diameter aligned to stimulus onset and feedback time.

plot_speed_hist

Plots speed histogram of a given dlc feature, aligned to stimulus onset, separately for correct and incorrect trials

plot_trace_on_frame

Plots dlc traces as scatter plots on a frame of the video.

plot_wheel_position

Plots wheel position across trials, colored by which side was chosen

plt_window

plt_window(x)
insert_idx(array, values)
likelihood_threshold(dlc, threshold=0.9)

Set dlc points with likelihood less than threshold to nan.

FIXME Add unit test.

Parameters:
  • dlc – dlc pqt object

  • threshold – likelihood threshold

Returns:
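
Example (a minimal sketch; the parquet file name is hypothetical and the return value is assumed to be the thresholded table):

  import pandas as pd
  from brainbox.behavior.dlc import likelihood_threshold

  # hypothetical camera DLC file with <feature>_x, <feature>_y, <feature>_likelihood columns
  dlc = pd.read_parquet('_ibl_leftCamera.dlc.pqt')
  # x/y coordinates of points tracked with likelihood < 0.9 are set to NaN
  dlc = likelihood_threshold(dlc, threshold=0.9)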

get_speed(dlc, dlc_t, camera, feature='paw_r')

FIXME Document and add unit test!

Parameters:
  • dlc – dlc pqt table

  • dlc_t – dlc time points

  • camera – camera type, e.g. ‘left’, ‘right’, ‘body’

  • feature – dlc feature to compute speed over

Returns:
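
Example (a sketch assuming the dlc table and camera timestamps have been loaded from hypothetical files; the return value is taken to be a per-frame speed trace):

  import numpy as np
  import pandas as pd
  from brainbox.behavior.dlc import get_speed

  dlc = pd.read_parquet('_ibl_leftCamera.dlc.pqt')   # hypothetical dlc pqt table
  dlc_t = np.load('_ibl_leftCamera.times.npy')       # hypothetical camera timestamps
  paw_speed = get_speed(dlc, dlc_t, camera='left', feature='paw_r')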

get_speed_for_features(dlc, dlc_t, camera, features=['paw_r', 'paw_l', 'nose_tip'])

Wrapper to compute speed for a number of dlc features and add them to the dlc table

Parameters:
  • dlc – dlc pqt table

  • dlc_t – dlc time points

  • camera – camera type, e.g. ‘left’, ‘right’, ‘body’

  • features – dlc features to compute speed for

Returns:

get_feature_event_times(dlc, dlc_t, features)

Detect events from the dlc traces, based on the standard deviation between frames.

Parameters:
  • dlc – dlc pqt table

  • dlc_t – dlc times

  • features – features to consider

Returns:

get_licks(dlc, dlc_t)

Compute lick times from the tongue dlc points

Parameters:
  • dlc – dlc pqt table

  • dlc_t – dlc times

Returns:
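
Example (a sketch assuming hypothetical file names; the dlc table is likelihood thresholded first and the return value is taken to be an array of lick times):

  import numpy as np
  import pandas as pd
  from brainbox.behavior.dlc import likelihood_threshold, get_licks

  dlc = likelihood_threshold(pd.read_parquet('_ibl_leftCamera.dlc.pqt'))   # hypothetical dlc pqt table
  dlc_t = np.load('_ibl_leftCamera.times.npy')                             # hypothetical camera timestamps
  lick_times = get_licks(dlc, dlc_t)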

get_sniffs(dlc, dlc_t)

Compute sniff times from the nose tip

Parameters:
  • dlc – dlc pqt table

  • dlc_t – dlc times

Returns:

get_dlc_everything(dlc_cam, camera)

Extract the features of interest from the dlc output

Parameters:
  • dlc_cam – dlc object

  • camera – camera type, e.g. ‘left’, ‘right’

Returns:

get_pupil_diameter(dlc)

Estimates the pupil diameter as the median of several independent computations.

The two most straightforward estimates are d1 = top - bottom and d2 = left - right. In addition, the pupil is assumed to be a circle and its diameter is estimated from other pairs of points.

Parameters:

dlc – dlc pqt table with pupil estimates, should be likelihood thresholded (e.g. at 0.9)

Returns:

np.array, pupil diameter estimate for each time point, shape (n_frames,)

get_smooth_pupil_diameter(diameter_raw, camera, std_thresh=5, nan_thresh=1)
Parameters:
  • diameter_raw – np.array, raw pupil diameters, calculated from (thresholded) dlc traces

  • camera – str (‘left’, ‘right’), which camera to run the smoothing for

  • std_thresh – threshold (in standard deviations) beyond which a point is labeled as an outlier

  • nan_thresh – threshold (in seconds) above which we will not interpolate nans, but keep them (for long stretches interpolation may not be appropriate)

Returns:
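
Example (a sketch chaining the raw and smoothed estimates for the left camera; the file name is hypothetical and the return value is assumed to be the smoothed diameter trace):

  import pandas as pd
  from brainbox.behavior.dlc import likelihood_threshold, get_pupil_diameter, get_smooth_pupil_diameter

  dlc = likelihood_threshold(pd.read_parquet('_ibl_leftCamera.dlc.pqt'), threshold=0.9)  # hypothetical file
  diameter_raw = get_pupil_diameter(dlc)   # np.array, shape (n_frames,)
  diameter = get_smooth_pupil_diameter(diameter_raw, camera='left', std_thresh=5, nan_thresh=1)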

plot_trace_on_frame(frame, dlc_df, cam)

Plots dlc traces as scatter plots on a frame of the video. For the left and right videos, zooms of the whisker pad, eye and tongue are also plotted.

Parameters:
  • frame – np.array, single video frame to plot on

  • dlc_df – pd.DataFrame, dlc traces with _x, _y and _likelihood info for each trace

  • cam – str, which camera to process (‘left’, ‘right’, ‘body’)

Returns:

matplotlib.axis
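
Example (a sketch using a blank placeholder array in place of a real video frame; the file name and frame size are hypothetical):

  import numpy as np
  import pandas as pd
  from brainbox.behavior.dlc import plot_trace_on_frame

  dlc_df = pd.read_parquet('_ibl_leftCamera.dlc.pqt')   # hypothetical dlc pqt table
  frame = np.zeros((1024, 1280), dtype=np.uint8)        # placeholder for a single left video frame
  ax = plot_trace_on_frame(frame, dlc_df, cam='left')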

plot_wheel_position(wheel_position, wheel_time, trials_df)

Plots wheel position across trials, colored by which side was chosen

Parameters:
  • wheel_position – np.array, interpolated wheel position

  • wheel_time – np.array, interpolated wheel timestamps

  • trials_df – pd.DataFrame, with column ‘stimOn_times’ (time of stimulus onset for each trial)

Returns:

matplotlib.axis

plot_lick_hist(lick_times, trials_df)

Plots histogram of lick events aligned to feedback time, separately for correct and incorrect trials

Parameters:
  • lick_times – np.array, timestamps of lick events

  • trials_df – pd.DataFrame, with column ‘feedback_times’ (time of feedback for each trial) and ‘feedbackType’ (1 for correct, -1 for incorrect trials)

Returns:

matplotlib.axis
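
Example (a sketch assuming hypothetical file names; lick times are computed with get_licks and the trials table provides ‘feedback_times’ and ‘feedbackType’):

  import numpy as np
  import pandas as pd
  from brainbox.behavior.dlc import get_licks, plot_lick_hist

  dlc = pd.read_parquet('_ibl_leftCamera.dlc.pqt')   # hypothetical dlc pqt table
  dlc_t = np.load('_ibl_leftCamera.times.npy')       # hypothetical camera timestamps
  trials_df = pd.read_parquet('trials.pqt')          # hypothetical trials table

  lick_times = get_licks(dlc, dlc_t)
  ax = plot_lick_hist(lick_times, trials_df)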

plot_lick_raster(lick_times, trials_df)

Plots lick raster for correct trials

Parameters:
  • lick_times – np.array, timestamps of lick events

  • trials_df – pd.DataFrame, with column ‘feedback_times’ (time of feedback for each trial) and ‘feedbackType’ (1 for correct, -1 for incorrect trials)

Returns:

matplotlib.axis

plot_motion_energy_hist(camera_dict, trials_df)

Plots mean motion energy of given cameras, aligned to stimulus onset.

Parameters:
  • camera_dict – dict, one key for each camera to be plotted (e.g. ‘left’), value is another dict with items ‘motion_energy’ (np.array, motion energy calculated from this camera) and ‘times’ (np.array, camera timestamps)

  • trials_df – pd.DataFrame, with column ‘stimOn_times’ (time of stimulus onset for each trial)

Returns:

matplotlib.axis
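
Example (a sketch showing the expected camera_dict structure; file names are hypothetical):

  import numpy as np
  import pandas as pd
  from brainbox.behavior.dlc import plot_motion_energy_hist

  camera_dict = {
      'left': {'motion_energy': np.load('leftCamera.ROIMotionEnergy.npy'),   # hypothetical file
               'times': np.load('_ibl_leftCamera.times.npy')},               # hypothetical file
  }
  trials_df = pd.read_parquet('trials.pqt')   # hypothetical trials table with 'stimOn_times'
  ax = plot_motion_energy_hist(camera_dict, trials_df)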

plot_speed_hist(dlc_df, cam_times, trials_df, feature='paw_r', cam='left', legend=True)

Plots speed histogram of a given dlc feature, aligned to stimulus onset, separately for correct and incorrect trials

Parameters:
  • dlc_df – pd.DataFrame, dlc traces with _x, _y and _likelihood info for each trace

  • cam_times – np.array, camera timestamps

  • trials_df – pd.DataFrame, with column ‘stimOn_times’ (time of stimulus onset for each trial)

  • feature – str, feature with trace in dlc_df for which to plot speed hist, default is ‘paw_r’

  • cam – str, camera to use (‘body’, ‘left’, ‘right’) default is ‘left’

  • legend – bool, whether to add legend to the plot, default is True

Returns:

matplotlib.axis
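
Example (a sketch assuming hypothetical file names; the trials table provides ‘stimOn_times’):

  import numpy as np
  import pandas as pd
  from brainbox.behavior.dlc import plot_speed_hist

  dlc_df = pd.read_parquet('_ibl_leftCamera.dlc.pqt')   # hypothetical dlc pqt table
  cam_times = np.load('_ibl_leftCamera.times.npy')      # hypothetical camera timestamps
  trials_df = pd.read_parquet('trials.pqt')             # hypothetical trials table
  ax = plot_speed_hist(dlc_df, cam_times, trials_df, feature='paw_r', cam='left')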

plot_pupil_diameter_hist(pupil_diameter, cam_times, trials_df, cam='left')

Plots histogram of pupil diameter aligned to stimulus onset and feedback time.

Parameters:
  • pupil_diameter – np.array, (smoothed) pupil diameter estimate

  • cam_times – np.array, camera timestamps

  • trials_df – pd.DataFrame, with column ‘stimOn_times’ (time of stimulus onset for each trial) and ‘feedback_times’ (time of feedback for each trial)

  • cam – str, camera to use (‘body’, ‘left’, ‘right’) default is ‘left’

Returns:

matplotlib.axis