ibllib.pipes.video_tasks

Classes

DLC

This task relies on a correctly installed DLC environment as per https://docs.google.com/document/d/1g0scP6_3EmaXCU4SsDNZWwDTaD9MG0es_grLA-d0gh0/edit#

EphysPostDLC

The post-DLC task takes DLC traces as input and computes useful quantities, as well as QC metrics.

LightningPose

VideoCompress

Task to compress raw video data from .avi to .mp4 format.

VideoConvert

Task that converts compressed .avi files to .mp4 format and renames the video and camlog files.

VideoRegisterRaw

Task to register raw video data.

VideoSyncQcBpod

Task to sync camera timestamps to main DAQ timestamps. N.B. Signatures only reflect the new DAQ naming convention and are not compatible with ephys when not running on a server.

VideoSyncQcCamlog

Task to sync camera timestamps to main DAQ timestamps when camlog files are used.

VideoSyncQcNidq

Task to sync camera timestamps to main DAQ timestamps. N.B. Signatures only reflect the new DAQ naming convention and are not compatible with ephys when not running on a server.

class VideoRegisterRaw(session_path, cameras, **kwargs)[source]

Bases: VideoTask, RegisterRawDataTask

Task to register raw video data. Builds up the list of files to register from the list of cameras given in the session parameters file.

priority = 100
job_size = 'small'
property signature

The signature of the task specifies inputs and outputs for the given task. For some tasks it is dynamic and calculated. The legacy code specifies those as tuples. The preferred way is to use the ExpectedDataset input and output constructors.

I = ExpectedDataset.input
O = ExpectedDataset.output
signature = {
    'input_files': [
        I(name='extract.me.npy', collection='raw_data', required=True, register=False, unique=False),
    ],
    'output_files': [
        O(name='look.atme.npy', collection='shiny_data', required=True, register=True, unique=False)
    ]
}

is equivalent to:

signature = {
    'input_files': [('extract.me.npy', 'raw_data', True, True)],
    'output_files': [('look.atme.npy', 'shiny_data', True)],
}

Returns:
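To make the tuple/constructor equivalence above concrete, the fields named in the signature can be modelled with a small dataclass. The `ExpectedDataset` below is a hypothetical, simplified stand-in that only mirrors the field names shown in the docstring, not the real ibllib class:

```python
from dataclasses import dataclass

@dataclass
class ExpectedDataset:
    # Hypothetical stand-in mirroring the field names above, not ibllib's class.
    name: str
    collection: str
    required: bool = True
    register: bool = False
    unique: bool = False

# Keyword form, as in the preferred constructor style:
out = ExpectedDataset(name='look.atme.npy', collection='shiny_data',
                      required=True, register=True, unique=False)

# The legacy output tuple ('look.atme.npy', 'shiny_data', True) carries the
# same information positionally: (name, collection, required).
assert (out.name, out.collection, out.required) == ('look.atme.npy', 'shiny_data', True)
```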

assert_expected_outputs(raise_error=True)[source]

frameData replaces the timestamps file. Therefore, if frameData is present, the timestamps file is optional, and vice versa.
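The substitution rule above can be sketched as a simple presence check. This is an illustration only: the helper name and the dataset names are hypothetical, not the actual ibllib implementation:

```python
def camera_outputs_satisfied(present):
    """Return True if the camera timing outputs are satisfied.

    frameData substitutes for the timestamps file, so at least one of
    the two must be present. `present` is a set of dataset names found
    on disk (names here are hypothetical examples).
    """
    has_frame_data = any('frameData' in name for name in present)
    has_timestamps = any('timestamps' in name for name in present)
    return has_frame_data or has_timestamps

camera_outputs_satisfied({'_iblrig_leftCamera.frameData.bin'})  # satisfied
camera_outputs_satisfied({'_iblrig_leftCamera.raw.mp4'})        # not satisfied
```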

class VideoCompress(session_path, cameras, **kwargs)[source]

Bases: VideoTask

Task to compress raw video data from .avi to .mp4 format.
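Compression tasks like this one typically shell out to ffmpeg. The sketch below builds such a command; the codec and CRF settings are illustrative assumptions, not necessarily the flags this task actually uses:

```python
from pathlib import Path

def build_compress_command(avi_file, crf=29):
    """Build an ffmpeg command converting an .avi file to H.264 .mp4.

    The flags are illustrative; the task's real settings may differ.
    """
    avi = Path(avi_file)
    mp4 = avi.with_suffix('.mp4')
    return ['ffmpeg', '-i', str(avi),
            '-c:v', 'libx264', '-crf', str(crf),  # constant-quality H.264
            '-nostats', '-loglevel', 'error',
            str(mp4)]

cmd = build_compress_command('_iblrig_leftCamera.raw.avi')
# cmd can then be executed with subprocess.run(cmd, check=True)
```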

priority = 90
job_size = 'large'
property signature

The signature of the task specifies inputs and outputs for the given task. For some tasks it is dynamic and calculated. The legacy code specifies those as tuples. The preferred way is to use the ExpectedDataset input and output constructors.

I = ExpectedDataset.input
O = ExpectedDataset.output
signature = {
    'input_files': [
        I(name='extract.me.npy', collection='raw_data', required=True, register=False, unique=False),
    ],
    'output_files': [
        O(name='look.atme.npy', collection='shiny_data', required=True, register=True, unique=False)
    ]
}

is equivalent to:

signature = {
    'input_files': [('extract.me.npy', 'raw_data', True, True)],
    'output_files': [('look.atme.npy', 'shiny_data', True)],
}

Returns:

class VideoConvert(session_path, cameras, **kwargs)[source]

Bases: VideoTask

Task that converts compressed .avi files to .mp4 format and renames the video and camlog files. Specific to the UCLA widefield implementation.

priority = 90
job_size = 'small'
property signature

The signature of the task specifies inputs and outputs for the given task. For some tasks it is dynamic and calculated. The legacy code specifies those as tuples. The preferred way is to use the ExpectedDataset input and output constructors.

I = ExpectedDataset.input
O = ExpectedDataset.output
signature = {
    'input_files': [
        I(name='extract.me.npy', collection='raw_data', required=True, register=False, unique=False),
    ],
    'output_files': [
        O(name='look.atme.npy', collection='shiny_data', required=True, register=True, unique=False)
    ]
}

is equivalent to:

signature = {
    'input_files': [('extract.me.npy', 'raw_data', True, True)],
    'output_files': [('look.atme.npy', 'shiny_data', True)],
}

Returns:

class VideoSyncQcCamlog(session_path, cameras, **kwargs)[source]

Bases: VideoTask

Task to sync camera timestamps to main DAQ timestamps when camlog files are used. Specific to the UCLA widefield implementation.

priority = 40
job_size = 'small'
property signature

The signature of the task specifies inputs and outputs for the given task. For some tasks it is dynamic and calculated. The legacy code specifies those as tuples. The preferred way is to use the ExpectedDataset input and output constructors.

I = ExpectedDataset.input
O = ExpectedDataset.output
signature = {
    'input_files': [
        I(name='extract.me.npy', collection='raw_data', required=True, register=False, unique=False),
    ],
    'output_files': [
        O(name='look.atme.npy', collection='shiny_data', required=True, register=True, unique=False)
    ]
}

is equivalent to:

signature = {
    'input_files': [('extract.me.npy', 'raw_data', True, True)],
    'output_files': [('look.atme.npy', 'shiny_data', True)],
}

Returns:

extract_camera(save=True)[source]

Extract trials data.

This is an abstract method called by the _run and run_qc methods. Subclasses should return the extracted trials data and a list of output files. This method should also save the trials extractor object to the :prop:`extractor` property for use by run_qc.

Parameters:

save (bool) – Whether to save the extracted data as ALF datasets.

Returns:

  • dict – A dictionary of trials data.

  • list of pathlib.Path – A list of output file paths if save == True.

run_qc(camera_data=None, update=True)[source]

Run camera QC.

The subclass method should return the QC object; this base implementation simply validates that the trials_data is not None.

Parameters:
  • camera_data (dict) – A dictionary of extracted trials data. The output of extract_camera().

  • update (bool) – If true, update Alyx with the QC outcome.

Returns:

A TaskQC object replete with task data and computed metrics.

Return type:

ibllib.qc.task_metrics.TaskQC
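QC objects of this kind aggregate per-metric outcomes into an overall outcome. Below is a minimal sketch of that aggregation; the outcome labels and their ordering are assumptions based on common QC conventions, not taken from ibllib.qc.task_metrics:

```python
# Assumed outcome labels, ordered from best to worst.
OUTCOMES = ['NOT_SET', 'PASS', 'WARNING', 'FAIL']

def aggregate_outcome(metric_outcomes):
    """Return the worst outcome among the computed metrics."""
    return max(metric_outcomes, key=OUTCOMES.index)

aggregate_outcome(['PASS', 'WARNING', 'PASS'])  # 'WARNING'
```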

class VideoSyncQcBpod(*args, **kwargs)[source]

Bases: VideoTask

Task to sync camera timestamps to main DAQ timestamps. N.B. Signatures only reflect the new DAQ naming convention and are not compatible with ephys when not running on a server.

priority = 40
job_size = 'small'
property signature

The signature of the task specifies inputs and outputs for the given task. For some tasks it is dynamic and calculated. The legacy code specifies those as tuples. The preferred way is to use the ExpectedDataset input and output constructors.

I = ExpectedDataset.input
O = ExpectedDataset.output
signature = {
    'input_files': [
        I(name='extract.me.npy', collection='raw_data', required=True, register=False, unique=False),
    ],
    'output_files': [
        O(name='look.atme.npy', collection='shiny_data', required=True, register=True, unique=False)
    ]
}

is equivalent to:

signature = {
    'input_files': [('extract.me.npy', 'raw_data', True, True)],
    'output_files': [('look.atme.npy', 'shiny_data', True)],
}

Returns:

extract_camera(save=True)[source]

Extract trials data.

This is an abstract method called by the _run and run_qc methods. Subclasses should return the extracted trials data and a list of output files. This method should also save the trials extractor object to the :prop:`extractor` property for use by run_qc.

Parameters:

save (bool) – Whether to save the extracted data as ALF datasets.

Returns:

  • dict – A dictionary of trials data.

  • list of pathlib.Path – A list of output file paths if save == True.

run_qc(camera_data=None, update=True)[source]

Run camera QC.

The subclass method should return the QC object; this base implementation simply validates that the trials_data is not None.

Parameters:
  • camera_data (dict) – A dictionary of extracted trials data. The output of extract_camera().

  • update (bool) – If true, update Alyx with the QC outcome.

Returns:

A TaskQC object replete with task data and computed metrics.

Return type:

ibllib.qc.task_metrics.TaskQC

class VideoSyncQcNidq(session_path, cameras, **kwargs)[source]

Bases: VideoTask

Task to sync camera timestamps to main DAQ timestamps. N.B. Signatures only reflect the new DAQ naming convention and are not compatible with ephys when not running on a server.

priority = 40
job_size = 'small'
property signature

The signature of the task specifies inputs and outputs for the given task. For some tasks it is dynamic and calculated. The legacy code specifies those as tuples. The preferred way is to use the ExpectedDataset input and output constructors.

I = ExpectedDataset.input
O = ExpectedDataset.output
signature = {
    'input_files': [
        I(name='extract.me.npy', collection='raw_data', required=True, register=False, unique=False),
    ],
    'output_files': [
        O(name='look.atme.npy', collection='shiny_data', required=True, register=True, unique=False)
    ]
}

is equivalent to:

signature = {
    'input_files': [('extract.me.npy', 'raw_data', True, True)],
    'output_files': [('look.atme.npy', 'shiny_data', True)],
}

Returns:

extract_camera(save=True)[source]

Extract trials data.

This is an abstract method called by the _run and run_qc methods. Subclasses should return the extracted trials data and a list of output files. This method should also save the trials extractor object to the :prop:`extractor` property for use by run_qc.

Parameters:

save (bool) – Whether to save the extracted data as ALF datasets.

Returns:

  • dict – A dictionary of trials data.

  • list of pathlib.Path – A list of output file paths if save == True.

run_qc(camera_data=None, update=True)[source]

Run camera QC.

The subclass method should return the QC object; this base implementation simply validates that the trials_data is not None.

Parameters:
  • camera_data (dict) – A dictionary of extracted trials data. The output of extract_camera().

  • update (bool) – If true, update Alyx with the QC outcome.

Returns:

A TaskQC object replete with task data and computed metrics.

Return type:

ibllib.qc.task_metrics.TaskQC

class DLC(session_path, cameras, **kwargs)[source]

Bases: VideoTask

This task relies on a correctly installed DLC environment as per https://docs.google.com/document/d/1g0scP6_3EmaXCU4SsDNZWwDTaD9MG0es_grLA-d0gh0/edit#

If your environment is set up otherwise, make sure that you set the respective attributes:

    t = EphysDLC(session_path)
    t.dlcenv = Path('/path/to/your/dlcenv/bin/activate')
    t.scripts = Path('/path/to/your/iblscripts/deploy/serverpc/dlc')

gpu = 1
cpu = 4
io_charge = 100
level = 2
force = True
job_size = 'large'
dlcenv = PosixPath('/home/runner/Documents/PYTHON/envs/dlcenv/bin/activate')
scripts = PosixPath('/home/runner/Documents/PYTHON/iblscripts/deploy/serverpc/dlc')
property signature

The signature of the task specifies inputs and outputs for the given task. For some tasks it is dynamic and calculated. The legacy code specifies those as tuples. The preferred way is to use the ExpectedDataset input and output constructors.

I = ExpectedDataset.input
O = ExpectedDataset.output
signature = {
    'input_files': [
        I(name='extract.me.npy', collection='raw_data', required=True, register=False, unique=False),
    ],
    'output_files': [
        O(name='look.atme.npy', collection='shiny_data', required=True, register=True, unique=False)
    ]
}

is equivalent to:

signature = {
    'input_files': [('extract.me.npy', 'raw_data', True, True)],
    'output_files': [('look.atme.npy', 'shiny_data', True)],
}

Returns:

class EphysPostDLC(*args, **kwargs)[source]

Bases: VideoTask

The post-DLC task takes DLC traces as input and computes useful quantities, as well as QC metrics.

io_charge = 90
level = 3
force = True
property signature

The signature of the task specifies inputs and outputs for the given task. For some tasks it is dynamic and calculated. The legacy code specifies those as tuples. The preferred way is to use the ExpectedDataset input and output constructors.

I = ExpectedDataset.input
O = ExpectedDataset.output
signature = {
    'input_files': [
        I(name='extract.me.npy', collection='raw_data', required=True, register=False, unique=False),
    ],
    'output_files': [
        O(name='look.atme.npy', collection='shiny_data', required=True, register=True, unique=False)
    ]
}

is equivalent to:

signature = {
    'input_files': [('extract.me.npy', 'raw_data', True, True)],
    'output_files': [('look.atme.npy', 'shiny_data', True)],
}

Returns:

class LightningPose(session_path, cameras, **kwargs)[source]

Bases: VideoTask

gpu = 1
io_charge = 100
level = 2
force = True
job_size = 'large'
env = PosixPath('/home/runner/Documents/PYTHON/envs/litpose/bin/activate')
scripts = PosixPath('/home/runner/Documents/PYTHON/iblscripts/deploy/serverpc/litpose')
property signature

The signature of the task specifies inputs and outputs for the given task. For some tasks it is dynamic and calculated. The legacy code specifies those as tuples. The preferred way is to use the ExpectedDataset input and output constructors.

I = ExpectedDataset.input
O = ExpectedDataset.output
signature = {
    'input_files': [
        I(name='extract.me.npy', collection='raw_data', required=True, register=False, unique=False),
    ],
    'output_files': [
        O(name='look.atme.npy', collection='shiny_data', required=True, register=True, unique=False)
    ]
}

is equivalent to:

signature = {
    'input_files': [('extract.me.npy', 'raw_data', True, True)],
    'output_files': [('look.atme.npy', 'shiny_data', True)],
}

Returns: