brainbox.io.one

Functions for loading IBL ephys and trial data using the Open Neurophysiology Environment.

Functions

channel_locations_interpolation

The channel map may differ between spike sorters, so the alignment is interpolated onto the given channel map. If there is no spike sorting in the base folder, the alignment doesn't have the localCoordinates field, so it is reconstructed from the Neuropixel map.

load_channel_locations

Load the brain locations of each channel for a given session/probe

load_channels_from_insertion

load_ephys_session

From an eid, hits the Alyx database and downloads a standard default set of dataset types; from a local session path (pathlib.Path), loads that same default set to perform analysis: 'clusters.channels', 'clusters.depths', 'clusters.metrics', 'spikes.clusters', 'spikes.times', 'probes.description'

load_iti

The inter-trial interval (ITI) time for each trial, defined as the period of open-loop grey screen commencing at stimulus off and lasting until the quiescent period at the start of the following trial.

load_lfp

From an eid, hits the Alyx database and downloads the standard set of datasets needed for LFP. (TODO: verify this works.)

load_passive_rfmap

For a given eid load in the passive receptive field mapping protocol data

load_spike_sorting

From an eid, loads spikes and clusters for all probes. The following dataset types are loaded: 'clusters.channels', 'clusters.depths', 'clusters.metrics', 'spikes.clusters', 'spikes.times', 'probes.description'

load_spike_sorting_fast

From an eid, loads spikes and clusters for all probes. The following dataset types are loaded: 'clusters.channels', 'clusters.depths', 'clusters.metrics', 'spikes.clusters', 'spikes.times', 'probes.description'

load_spike_sorting_with_channel

For a given eid, gets spikes, clusters and channels information, and merges the clusters and channels information before returning all three variables.

load_wheel_reaction_times

Return the calculated reaction times for a session.

merge_clusters_channels

Takes (default and any extra) values in given keys from channels and assigns them to clusters.

Classes

EphysSessionLoader

Spike-sorting-enhanced version of SessionLoader: loads spike sorting data for all probes in the session into the self.ephys dict. Usage: EphysSessionLoader(eid=eid, one=one); to select a specific probe: EphysSessionLoader(eid=eid, one=one, pid=pid).

SessionLoader

Object to load session data for a given session in the recommended way.

SpikeSortingLoader

Object that will load spike sorting data for a given probe insertion. This class can be instantiated in several manners: with an Alyx database probe id, SpikeSortingLoader(pid=pid, one=one); with an Alyx database eid and probe name, SpikeSortingLoader(eid=eid, pname='probe00', one=one); or from a local session path and probe name, SpikeSortingLoader(session_path=session_path, pname='probe00'). NB: when no ONE instance is passed, any datasets that are loaded will not be recorded.

load_lfp(eid, one=None, dataset_types=None, **kwargs)[source]

From an eid, hits the Alyx database and downloads the standard set of datasets needed for LFP. (TODO: verify this works.)

Parameters:
  • eid

  • dataset_types – additional dataset types to add to the list

  • open – if True, spikeglx readers are opened

Returns:

spikeglx.Reader

channel_locations_interpolation(channels_aligned, channels=None, brain_regions=None)[source]

The channel map may differ between spike sorters, so the alignment is interpolated onto the given channel map. If there is no spike sorting in the base folder, the alignment doesn't have the localCoordinates field, so it is reconstructed from the Neuropixel map. This only happens for early pykilosort sorts.

Parameters:
  • channels_aligned

    Bunch or dictionary of aligned channels containing at least the keys ‘localCoordinates’, ‘mlapdv’ and ‘brainLocationIds_ccf_2017’, OR ‘x’, ‘y’, ‘z’, ‘acronym’, ‘axial_um’; these are the guide for the interpolation

  • channels – Bunch or dictionary of aligned channels containing at least keys ‘localCoordinates’

  • brain_regions

    None (default) or an iblatlas.regions.BrainRegions object. If None, returns a dict with keys ‘localCoordinates’, ‘mlapdv’, ‘brainLocationIds_ccf_2017’; if a brain regions object is provided, outputs a dict with keys ‘x’, ‘y’, ‘z’, ‘acronym’, ‘atlas_id’, ‘axial_um’, ‘lateral_um’

Returns:

Bunch or dictionary of channels with brain coordinates keys

load_channel_locations(eid, probe=None, one=None, aligned=False, brain_atlas=None)[source]

Load the brain locations of each channel for a given session/probe

Parameters:
  • eid ([str, UUID, Path, dict]) – Experiment session identifier; may be a UUID, URL, experiment reference string, details dict or Path

  • probe ([str, list of str]) – The probe label(s), e.g. ‘probe01’

  • one (one.api.OneAlyx) – An instance of ONE (shouldn’t be in ‘local’ mode)

  • aligned (bool) – Whether to get the latest user-aligned channels when the alignment is not resolved, or to use the histology track

  • brain_atlas (iblatlas.BrainAtlas) – Brain atlas object (default: Allen atlas)

Returns:

  • dict of one.alf.io.AlfBunch – A dict with probe labels as keys, contains channel locations with keys (‘acronym’, ‘atlas_id’, ‘x’, ‘y’, ‘z’). Atlas IDs non-lateralized.

  • optional – string, one of ‘resolved’, ‘aligned’, ‘traced’ or ‘’
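
A minimal usage sketch (assuming a connected ONE instance and a valid eid; names here are illustrative):

>>> from one.api import ONE
>>> from brainbox.io.one import load_channel_locations
>>> one = ONE()
>>> channels = load_channel_locations(eid, probe='probe00', one=one)  # dict keyed by probe label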

load_spike_sorting_fast(eid, one=None, probe=None, dataset_types=None, spike_sorter=None, revision=None, brain_regions=None, nested=True, collection=None, return_collection=False)[source]

From an eid, loads spikes and clusters for all probes. The following dataset types are loaded:

‘clusters.channels’, ‘clusters.depths’, ‘clusters.metrics’, ‘spikes.clusters’, ‘spikes.times’, ‘probes.description’

Parameters:
  • eid – experiment UUID or pathlib.Path of the local session

  • one – an instance of OneAlyx

  • probe – name of probe to load in, if not given all probes for session will be loaded

  • dataset_types – additional spikes/clusters objects to add to the standard default list

  • spike_sorter – name of the spike sorting you want to load (None for default)

  • collection – name of the spike sorting collection to load; exclusive with the spike sorter name, e.g. “alf/probe00”

  • brain_regions – iblatlas.regions.BrainRegions object - will label acronyms if provided

  • nested – if a single probe is required, do not output a dictionary with the probe name as key

  • return_collection – (False) if True, will return the collection used to load

Returns:

spikes, clusters, channels (dict of bunch, 1 bunch per probe)

load_spike_sorting(eid, one=None, probe=None, dataset_types=None, spike_sorter=None, revision=None, brain_regions=None, return_collection=False)[source]

From an eid, loads spikes and clusters for all probes. The following dataset types are loaded:

‘clusters.channels’, ‘clusters.depths’, ‘clusters.metrics’, ‘spikes.clusters’, ‘spikes.times’, ‘probes.description’

Parameters:
  • eid – experiment UUID or pathlib.Path of the local session

  • one – an instance of OneAlyx

  • probe – name of probe to load in, if not given all probes for session will be loaded

  • dataset_types – additional spikes/clusters objects to add to the standard default list

  • spike_sorter – name of the spike sorting you want to load (None for default)

  • brain_regions – iblatlas.regions.BrainRegions object - will label acronyms if provided

  • return_collection – (bool, default False) if True, also returns the collection used to load the data

Returns:

spikes, clusters (dict of bunch, 1 bunch per probe)
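
A minimal usage sketch (assuming `eid` and a connected `one` instance, as above):

>>> spikes, clusters = load_spike_sorting(eid, one=one, probe='probe00')
>>> spikes['probe00']['times'][:10]  # spike times (s) for this probe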

load_spike_sorting_with_channel(eid, one=None, probe=None, aligned=False, dataset_types=None, spike_sorter=None, brain_atlas=None, nested=True, return_collection=False)[source]

For a given eid, gets spikes, clusters and channels information, and merges the clusters and channels information before returning all three variables.

Parameters:
  • eid ([str, UUID, Path, dict]) – Experiment session identifier; may be a UUID, URL, experiment reference string, details dict or Path

  • one (one.api.OneAlyx) – An instance of ONE (shouldn’t be in ‘local’ mode)

  • probe ([str, list of str]) – The probe label(s), e.g. ‘probe01’

  • aligned (bool) – Whether to get the latest user-aligned channels when the alignment is not resolved, or to use the histology track

  • dataset_types (list of str) – Optional additional spikes/clusters objects to add to the standard default list

  • spike_sorter (str) – Name of the spike sorting you want to load (None for default which is pykilosort if it’s available otherwise the default MATLAB kilosort)

  • brain_atlas (iblatlas.atlas.BrainAtlas) – Brain atlas object (default: Allen atlas)

  • return_collection (bool) – Returns an extra argument with the collection chosen

Returns:

  • spikes (dict of one.alf.io.AlfBunch) – A dict with probe labels as keys, contains bunch(es) of spike data for the provided session and spike sorter, with keys (‘clusters’, ‘times’)

  • clusters (dict of one.alf.io.AlfBunch) – A dict with probe labels as keys, contains bunch(es) of cluster data, with keys (‘channels’, ‘depths’, ‘metrics’)

  • channels (dict of one.alf.io.AlfBunch) – A dict with probe labels as keys, contains channel locations with keys (‘acronym’, ‘atlas_id’, ‘x’, ‘y’, ‘z’). Atlas IDs non-lateralized.
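
A minimal usage sketch (assuming `eid` and a connected `one` instance); per the description above, the channel information is already merged into the clusters bunches:

>>> spikes, clusters, channels = load_spike_sorting_with_channel(eid, one=one, probe='probe00')
>>> clusters['probe00'].keys()  # cluster data plus channel-derived location keys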

load_ephys_session(eid, one=None)[source]

From an eid, hits the Alyx database and downloads a standard default set of dataset types; from a local session path (pathlib.Path), loads that same default set to perform analysis:

‘clusters.channels’, ‘clusters.depths’, ‘clusters.metrics’, ‘spikes.clusters’, ‘spikes.times’, ‘probes.description’

Parameters:
  • eid ([str, UUID, Path, dict]) – Experiment session identifier; may be a UUID, URL, experiment reference string, details dict or Path

  • one (oneibl.one.OneAlyx, optional) – ONE object to use for loading; if not provided an internal one is instantiated. By default None

Returns:

  • spikes (dict of one.alf.io.AlfBunch) – A dict with probe labels as keys, contains bunch(es) of spike data for the provided session and spike sorter, with keys (‘clusters’, ‘times’)

  • clusters (dict of one.alf.io.AlfBunch) – A dict with probe labels as keys, contains bunch(es) of cluster data, with keys (‘channels’, ‘depths’, ‘metrics’)

  • trials (one.alf.io.AlfBunch of numpy.ndarray) – The session trials data
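
A minimal usage sketch (assuming `eid` and a connected `one` instance):

>>> spikes, clusters, trials = load_ephys_session(eid, one=one)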

merge_clusters_channels(dic_clus, channels, keys_to_add_extra=None)[source]

Takes (default and any extra) values in given keys from channels and assigns them to clusters. If channels does not contain any data, the new keys are added to clusters but left empty.

Parameters:
  • dic_clus (dict of one.alf.io.AlfBunch) – 1 bunch per probe, containing cluster information

  • channels (dict of one.alf.io.AlfBunch) – 1 bunch per probe, containing channels bunch with keys (‘acronym’, ‘atlas_id’, ‘x’, ‘y’, ‘z’, ‘localCoordinates’)

  • keys_to_add_extra (list of str) – Any extra keys to load into channels bunches

Returns:

clusters (1 bunch per probe) with the new key values.

Return type:

dict of one.alf.io.AlfBunch
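
A usage sketch combining the loaders above (assuming `eid` and a connected `one` instance); this mirrors what load_spike_sorting_with_channel does internally:

>>> spikes, clusters = load_spike_sorting(eid, one=one)
>>> channels = load_channel_locations(eid, one=one)
>>> clusters = merge_clusters_channels(clusters, channels)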

load_passive_rfmap(eid, one=None)[source]

For a given eid load in the passive receptive field mapping protocol data

Parameters:
  • eid ([str, UUID, Path, dict]) – Experiment session identifier; may be a UUID, URL, experiment reference string, details dict or Path

  • one (oneibl.one.OneAlyx, optional) – An instance of ONE (may be in ‘local’ - offline - mode)

Returns:

Passive receptive field mapping data

Return type:

one.alf.io.AlfBunch
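
A minimal call sketch (assuming `eid` and a connected `one` instance):

>>> rf_map = load_passive_rfmap(eid, one=one)  # AlfBunch with the passive receptive field mapping data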

load_wheel_reaction_times(eid, one=None)[source]

Return the calculated reaction times for a session. Reaction times are defined as the time between the go cue (onset tone) and the onset of the first substantial wheel movement. A movement is considered sufficiently large if its peak amplitude is at least 1/3rd of the distance to threshold (~0.1 radians).

Negative times mean the onset of the movement occurred before the go cue. NaNs may occur if there was no detected movement within the period, or when the goCue_times or feedback_times are NaN.

Parameters:
  • eid ([str, UUID, Path, dict]) – Experiment session identifier; may be a UUID, URL, experiment reference string, details dict or Path

  • one (one.api.OneAlyx, optional) – ONE object to use for loading; if not provided an internal one is instantiated. By default None

Returns:

reaction times

Return type:

array-like
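
A minimal sketch (assuming `eid` and a connected `one` instance); negative values indicate movements that started before the go cue:

>>> import numpy as np
>>> rts = load_wheel_reaction_times(eid, one=one)
>>> np.nanmedian(rts)  # median reaction time in seconds, ignoring NaN trials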

load_iti(trials)[source]

The inter-trial interval (ITI) time for each trial, defined as the period of open-loop grey screen commencing at stimulus off and lasting until the quiescent period at the start of the following trial. Note that the ITI of a trial is the time between that trial and the next; since the final trial has no successor, the last value is NaN.

Parameters:

trials (one.alf.io.AlfBunch) – An ALF trials object containing the keys {‘intervals’, ‘stimOff_times’}.

Returns:

An array of inter-trial intervals, the last value being NaN.

Return type:

np.array
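
A minimal sketch, assuming a trials ALF object loaded via the ONE API:

>>> trials = one.load_object(eid, 'trials')
>>> iti = load_iti(trials)  # inter-trial intervals in seconds, last value NaN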

load_channels_from_insertion(ins, depths=None, one=None, ba=None)[source]
class SpikeSortingLoader(one: One | None = None, atlas: None = None, pid: str | None = None, eid: str = '', pname: str = '', session_path: Path = '', collections: list | None = None, datasets: list | None = None, files: dict | None = None, raw_data_files: list | None = None, collection: str = '', histology: str = '', spike_sorter: str = 'pykilosort', spike_sorting_path: Path | None = None, _sync: dict | None = None)[source]

Bases: object

Object that will load spike sorting data for a given probe insertion. This class can be instantiated in several manners:

  • With an Alyx database probe id:

    SpikeSortingLoader(pid=pid, one=one)

  • With an Alyx database eid and probe name:

    SpikeSortingLoader(eid=eid, pname='probe00', one=one)

  • From a local session path and probe name:

    SpikeSortingLoader(session_path=session_path, pname='probe00')

NB: When no ONE instance is passed, any datasets that are loaded will not be recorded.
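
A typical loading sketch (assuming a connected ONE instance and a valid probe insertion id pid):

>>> ssl = SpikeSortingLoader(pid=pid, one=one)
>>> spikes, clusters, channels = ssl.load_spike_sorting()
>>> clusters = ssl.merge_clusters(spikes, clusters, channels)  # add metrics and histology to clusters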

one: One = None
atlas: None = None
pid: str = None
eid: str = ''
pname: str = ''
session_path: Path = ''
collections: list = None
datasets: list = None
files: dict = None
raw_data_files: list = None
collection: str = ''
histology: str = ''
spike_sorter: str = 'pykilosort'
spike_sorting_path: Path = None
load_spike_sorting_object(obj, *args, **kwargs)[source]

Loads an ALF object

Parameters:
  • obj – object name, str between ‘spikes’, ‘clusters’ or ‘channels’

  • spike_sorter – (defaults to ‘pykilosort’)

  • dataset_types – list of extra dataset types, for example [‘spikes.samples’]

  • collection – string specifying the collection, for example ‘alf/probe01/pykilosort’

  • kwargs – additional arguments to be passed to one.api.One.load_object

  • missing – ‘raise’ (default) or ‘ignore’

Returns:

get_version(spike_sorter='pykilosort')[source]
download_spike_sorting_object(obj, spike_sorter='pykilosort', dataset_types=None, collection=None, missing='raise', **kwargs)[source]

Downloads an ALF object

Parameters:
  • obj – object name, str between ‘spikes’, ‘clusters’ or ‘channels’

  • spike_sorter – (defaults to ‘pykilosort’)

  • dataset_types – list of extra dataset types, for example [‘spikes.samples’]

  • collection – string specifying the collection, for example ‘alf/probe01/pykilosort’

  • kwargs – additional arguments to be passed to one.api.One.load_object

  • missing – ‘raise’ (default) or ‘ignore’

Returns:

download_spike_sorting(**kwargs)[source]

Downloads spikes, clusters and channels

Parameters:
  • spike_sorter – (defaults to ‘pykilosort’)

  • dataset_types – list of extra dataset types

Returns:

download_raw_electrophysiology(band='ap')[source]

Downloads raw electrophysiology data files to local disk.

Parameters:

band – “ap” (default) or “lf” for LFP band

Returns:

list of raw data files full paths (ch, meta and cbin files)

raw_electrophysiology(stream=True, band='ap', **kwargs)[source]

Returns a reader for the raw electrophysiology data. By default this is a streamer object; if stream is False, it returns a spikeglx.Reader after downloading the raw data file if necessary.

Parameters:
  • stream

  • band

  • kwargs

Returns:
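
A streaming sketch (assuming a constructed SpikeSortingLoader ssl as above, and that the streamer supports array-like slicing of samples x channels, as spikeglx readers do):

>>> sr = ssl.raw_electrophysiology(band='ap', stream=True)
>>> raw = sr[10000:10500, :-1]  # 500 samples, all channels except the last (sync) channel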

load_channels(**kwargs)[source]

Loads channels.

The channel locations can come from several sources; the most advanced version of the histology available is loaded, regardless of the spike sorting version loaded. The steps are (from most advanced to fresh out of the imaging):

  • alf: the final version of channel locations, same as resolved except that the data is on file

  • resolved: channel location alignments have been agreed upon

  • aligned: channel locations have been aligned, but review or other alignments are pending; potentially not accurate

  • traced: the histology track has been recovered from microscopy, however the depths may not match; inaccurate data

Parameters:
  • spike_sorter – (defaults to ‘pykilosort’)

  • dataset_types – list of extra dataset types

Returns:

load_spike_sorting(spike_sorter='pykilosort', **kwargs)[source]

Loads spikes, clusters and channels

There may be several spike sorting collections; by default the loader gets the pykilosort collection.

The channel locations can come from several sources; the most advanced version of the histology available is loaded, regardless of the spike sorting version loaded. The steps are (from most advanced to fresh out of the imaging):

  • alf: the final version of channel locations, same as resolved except that the data is on file

  • resolved: channel location alignments have been agreed upon

  • aligned: channel locations have been aligned, but review or other alignments are pending; potentially not accurate

  • traced: the histology track has been recovered from microscopy, however the depths may not match; inaccurate data

Parameters:
  • spike_sorter – (defaults to ‘pykilosort’)

  • dataset_types – list of extra dataset types

Returns:

static compute_metrics(spikes, clusters=None)[source]
static merge_clusters(spikes, clusters, channels, cache_dir=None, compute_metrics=False)[source]

Merge the metrics and the channel information into the clusters dictionary

Parameters:
  • spikes

  • clusters

  • channels

  • cache_dir – if specified, will look for a cached parquet file to speed up. This is to be used for clusters or analysis applications (defaults to None).

  • compute_metrics – if True, will explicitly recompute metrics (defaults to False)

Returns:

cluster dictionary containing metrics and histology

property url

Gets flatiron URL for the session

timesprobe2times(values, direction='forward')[source]
samples2times(values, direction='forward')[source]

Converts ephys sample values to session main clock seconds

Parameters:
  • values – numpy array of times in seconds or samples to resync

  • direction – ‘forward’ (samples probe time to seconds main time) or ‘reverse’ (seconds main time to samples probe time)

Returns:
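
A conversion sketch (assuming a constructed SpikeSortingLoader ssl and spike samples loaded, e.g. via dataset_types=['spikes.samples']):

>>> spike_times = ssl.samples2times(spikes['samples'], direction='forward')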

property pid2ref
raster(spikes, channels, save_dir=None, br=None, label='raster', time_series=None, **kwargs)[source]
Parameters:
  • spikes – spikes dictionary or Bunch

  • channels – channels dictionary or Bunch.

  • save_dir – if specified save to this directory as “{pid}_{probe}_{label}.png”. Otherwise, plot.

  • br – brain regions object (optional)

  • label – label for saved image (optional, default=”raster”)

  • time_series – timeseries dictionary for behavioral event times (optional)

  • **kwargs

    kwargs passed to driftmap() (optional)

Returns:

class SessionLoader(one: one.api.One | None = None, session_path: pathlib.Path = '', eid: str = '', data_info: pandas.core.frame.DataFrame = <factory>, trials: pandas.core.frame.DataFrame = <factory>, wheel: pandas.core.frame.DataFrame = <factory>, pose: dict = <factory>, motion_energy: dict = <factory>, pupil: pandas.core.frame.DataFrame = <factory>)[source]

Bases: object

Object to load session data for a given session in the recommended way.

Parameters:
  • one (one.api.ONE instance) – Can be in remote or local mode (required)

  • session_path (string or pathlib.Path) – The absolute path to the session (one of session_path or eid is required)

  • eid (string) – database UUID of the session (one of session_path or eid is required)

  • If both are provided, session_path takes precedence over eid.

Examples

  1. Load all available session data for one session:
    >>> from one.api import ONE
    >>> from brainbox.io.one import SessionLoader
    >>> one = ONE()
    >>> sess_loader = SessionLoader(one=one, session_path='/mnt/s0/Data/Subjects/cortexlab/KS022/2019-12-10/001/')
    # Object is initiated, but no data is loaded as you can see in the data_info attribute
    >>> sess_loader.data_info
                name  is_loaded
    0         trials      False
    1          wheel      False
    2           pose      False
    3  motion_energy      False
    4          pupil      False
    

    # Loading all available session data, the data_info attribute now shows which data has been loaded
    >>> sess_loader.load_session_data()
    >>> sess_loader.data_info
                name  is_loaded
    0         trials       True
    1          wheel       True
    2           pose       True
    3  motion_energy       True
    4          pupil      False

    # The data is loaded in pandas dataframes that you can access via the respective attributes, e.g.
    >>> type(sess_loader.trials)
    pandas.core.frame.DataFrame
    >>> sess_loader.trials.shape
    (626, 18)
    # Each data comes with its own timestamps in a column called 'times'
    >>> sess_loader.wheel['times']
    0    0.134286
    1    0.135286
    2    0.136286
    3    0.137286
    4    0.138286

    # For camera data (pose, motionEnergy) the respective functions load the data into one dataframe per camera.
    # The dataframes of all cameras are collected in a dictionary
    >>> type(sess_loader.pose)
    dict
    >>> sess_loader.pose.keys()
    dict_keys(['leftCamera', 'rightCamera', 'bodyCamera'])
    >>> sess_loader.pose['bodyCamera'].columns
    Index(['times', 'tail_start_x', 'tail_start_y', 'tail_start_likelihood'], dtype='object')
    # In order to control the loading of specific data by e.g. specifying parameters, use the individual loading functions:
    >>> sess_loader.load_wheel(fs=100)

one: One = None
session_path: Path = ''
eid: str = ''
data_info: DataFrame
trials: DataFrame
wheel: DataFrame
pose: dict
motion_energy: dict
pupil: DataFrame
load_session_data(trials=True, wheel=True, pose=True, motion_energy=True, pupil=True, reload=False)[source]

Function to load available session data into the SessionLoader object. The input parameters control which data is loaded. Data is loaded into an attribute of the SessionLoader object with the same name as the input parameter (e.g. SessionLoader.trials, SessionLoader.pose). Information about which data is loaded is stored in SessionLoader.data_info.

Parameters:
  • trials (boolean) – Whether to load all trials data into SessionLoader.trials, default is True

  • wheel (boolean) – Whether to load wheel data (position, velocity, acceleration) into SessionLoader.wheel, default is True

  • pose (boolean) – Whether to load pose tracking results (DLC) for each available camera into SessionLoader.pose, default is True

  • motion_energy (boolean) – Whether to load motion energy data (whisker pad for left/right camera, body for body camera) into SessionLoader.motion_energy, default is True

  • pupil (boolean) – Whether to load pupil diameter (raw and smooth) for the left/right camera into SessionLoader.pupil, default is True

  • reload (boolean) – Whether to reload data that has already been loaded into this SessionLoader object, default is False

load_trials()[source]

Function to load trials data into SessionLoader.trials

load_wheel(fs=1000, corner_frequency=20, order=8)[source]

Function to load wheel data (position, velocity, acceleration) into SessionLoader.wheel. The wheel position is first interpolated to a uniform sampling rate. Then velocity and acceleration are computed, during which a Butterworth low-pass filter is applied.

Parameters:
  • fs (int, float) – Sampling frequency for the wheel position, default is 1000 Hz

  • corner_frequency (int, float) – Corner frequency of Butterworth low-pass filter, default is 20

  • order (int, float) – Order of the Butterworth low-pass filter, default is 8

load_pose(likelihood_thr=0.9, views=['left', 'right', 'body'])[source]

Function to load the pose estimation results (DLC) into SessionLoader.pose. SessionLoader.pose is a dictionary where keys are the names of the cameras for which pose data is loaded, and values are pandas Dataframes with the timestamps and pose data, one row for each body part tracked for that camera.

Parameters:
  • likelihood_thr (float) – The position of each tracked body part comes with a likelihood of that estimate for each time point. Estimates for time points with likelihood < likelihood_thr are set to NaN. To skip thresholding set likelihood_thr=1. Default is 0.9

  • views (list) – List of camera views for which to try and load data. Possible options are {‘left’, ‘right’, ‘body’}

load_motion_energy(views=['left', 'right', 'body'])[source]

Function to load the motion energy data into SessionLoader.motion_energy. SessionLoader.motion_energy is a dictionary where keys are the names of the cameras for which motion energy data is loaded, and values are pandas Dataframes with the timestamps and motion energy data. The motion energy for the left and right camera is calculated for a square roughly covering the whisker pad (whiskerMotionEnergy). The motion energy for the body camera is calculated for a square covering much of the body (bodyMotionEnergy).

Parameters:

views (list) – List of camera views for which to try and load data. Possible options are {‘left’, ‘right’, ‘body’}

load_licks()[source]

Not yet implemented

load_pupil(snr_thresh=5.0)[source]

Function to load raw and smoothed pupil diameter data from the left camera into SessionLoader.pupil.

Parameters:

snr_thresh (float) – An SNR is calculated from the raw and smoothed pupil diameter. If this SNR is below snr_thresh, the data is considered unusable and discarded.

class EphysSessionLoader(*args, pname=None, pid=None, **kwargs)[source]

Bases: SessionLoader

Spike-sorting-enhanced version of SessionLoader. Loads spike sorting data for all probes in the session into the self.ephys dict.

>>> EphysSessionLoader(eid=eid, one=one)

To select a specific probe:

>>> EphysSessionLoader(eid=eid, one=one, pid=pid)

load_session_data(*args, **kwargs)[source]

Function to load available session data into the SessionLoader object. The input parameters control which data is loaded. Data is loaded into an attribute of the SessionLoader object with the same name as the input parameter (e.g. SessionLoader.trials, SessionLoader.pose). Information about which data is loaded is stored in SessionLoader.data_info.

Parameters:
  • trials (boolean) – Whether to load all trials data into SessionLoader.trials, default is True

  • wheel (boolean) – Whether to load wheel data (position, velocity, acceleration) into SessionLoader.wheel, default is True

  • pose (boolean) – Whether to load pose tracking results (DLC) for each available camera into SessionLoader.pose, default is True

  • motion_energy (boolean) – Whether to load motion energy data (whisker pad for left/right camera, body for body camera) into SessionLoader.motion_energy, default is True

  • pupil (boolean) – Whether to load pupil diameter (raw and smooth) for the left/right camera into SessionLoader.pupil, default is True

  • reload (boolean) – Whether to reload data that has already been loaded into this SessionLoader object, default is False

load_spike_sorting(pnames=None)[source]
property probes
data_info: DataFrame
trials: DataFrame
wheel: DataFrame
pose: dict
motion_energy: dict
pupil: DataFrame