Reference

Describing an Experiment

Experiment description file

All experiments are described by a file with the name _ibl_experiment.description.yaml. This description file contains details about the experiment, such as information about the devices used to collect data and the behavior tasks run during the experiment. The content of this file is used to copy data from the acquisition computer to the lab server, and it also determines the task pipeline that will be used to extract the data on the lab servers. Its accuracy in fully describing the experiment is therefore very important!

Here is an example of a complete experiment description file for a mesoscope experiment running two consecutive tasks, biasedChoiceWorld followed by passiveChoiceWorld.

devices:
  mesoscope:
    mesoscope:
      collection: raw_imaging_data*
      sync_label: chrono
  cameras:
    belly:
      collection: raw_video_data
      sync_label: audio
      width: 640
      height: 512
      fps: 30
    left:
      collection: raw_video_data
      sync_label: audio
    right:
      collection: raw_video_data
      sync_label: audio
procedures:
- Imaging
projects:
- ibl_mesoscope_active
sync:
  nidq:
    acquisition_software: timeline
    collection: raw_sync_data
    extension: npy
tasks:
- _biasedChoiceWorld:
    collection: raw_task_data_00
    sync_label: bpod
    extractors: [TrialRegisterRaw, ChoiceWorldTrialsTimeline, TrainingStatus]
- passiveChoiceWorld:
    collection: raw_task_data_01
    sync_label: bpod
    extractors: [PassiveRegisterRaw, PassiveTaskTimeline]
version: 1.0.0

Breaking down the components of an experiment description file

Devices

The devices section in the experiment description file lists the set of devices from which data was collected in the experiment. Supported devices are Cameras, Microphone, Mesoscope, Neuropixel, Photometry and Widefield.

The convention for this section is to have the device name followed by a list of sub-devices, e.g.

devices:
  cameras:
    belly:
      collection: raw_video_data
      sync_label: audio
      width: 640
      height: 512
      fps: 30
    left:
      collection: raw_video_data
      sync_label: audio
    right:
      collection: raw_video_data
      sync_label: audio

In the above example, cameras is the device and the sub-devices are belly, left and right.

If there are no sub-devices, the sub-device is given the same name as the device, e.g.

devices:
  mesoscope:
    mesoscope:
      collection: raw_imaging_data*
      sync_label: chrono

Each sub-device must have at least the following two keys:

  • collection - the folder containing the data

  • sync_label - the name of the common TTL pulses in the channel map used to sync the timestamps

Additional keys can also be specified for specific extractors, e.g. for the belly camera, the camera metadata passed into the camera extractor task is defined in this file.
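
Since the description file is plain YAML, it can also be sanity-checked programmatically. Below is a minimal sketch (assuming PyYAML and a description file in the current directory) that verifies every sub-device defines the two required keys:

import yaml

# Load the experiment description file (path assumed for illustration)
with open('_ibl_experiment.description.yaml') as f:
    description = yaml.safe_load(f)

# Every sub-device must define both required keys
for device, subdevices in description.get('devices', {}).items():
    for name, params in subdevices.items():
        for key in ('collection', 'sync_label'):
            assert key in params, f"{device}/{name} is missing '{key}'"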

Procedures

The procedures section lists the set of procedures that apply to this experiment. The list of possible procedures can be found here.

As many procedures as apply to the experiment can be added, e.g.

procedures:
- Fiber photometry
- Optical stimulation
- Ephys recording with acute probe(s)

Projects

The projects section lists the set of projects that apply to this experiment. The list of possible projects can be found here.

As many projects as apply to the experiment can be added, e.g.

projects:
- ibl_neuropixel_brainwide_01
- carandiniharris_midbrain_ibl

Sync

The sync section contains information about the device used to collect the syncing data and the format of the data. Supported sync devices are bpod, nidq, tdms, and timeline. Only one sync device can be specified per experiment description file; it acts as the main clock to which all other timeseries are synced.

An example of an experiment run with bpod as the main syncing device is,

sync:
  bpod:
    collection: raw_behavior_data
    extension: bin

Another example: spikeglx electrophysiology recordings with Neuropixel 1B probes use the nidq device as the main synchronisation clock.

sync:
  nidq:
    collection: raw_ephys_data
    extension: bin
    acquisition_software: spikeglx

Each sync device must have at least the following two keys:

  • collection - the folder containing the data

  • extension - the file extension of the sync data

Optional keys include, for example, acquisition_software, the software used to acquire the sync pulses.

Tasks

The tasks section contains a list of the behavioral protocols run during the experiment. The name of the protocol must be given in the list e.g.

tasks:
- _biasedChoiceWorld:
    collection: raw_task_data_00
    sync_label: bpod
    extractors: [TrialRegisterRaw, ChoiceWorldTrialsTimeline, TrainingStatus]
- passiveChoiceWorld:
    collection: raw_task_data_01
    sync_label: bpod
    extractors: [PassiveRegisterRaw, PassiveTaskTimeline]

Each task must have at least the following two keys:

  • collection - the folder containing the data

  • sync_label - the name of the common TTL pulses in the channel map used to sync the timestamps

The collection must be unique for each task, i.e. data from two tasks cannot be stored in the same folder.

If the Tasks used to extract the data are not the default ones, the extractors to use must be passed in as an additional key. The order of the extractors defines their parent-child relationship in the task architecture.

Version

The version section gives the version number of the experiment description file.

Quality check the task post-usage

Once a session is acquired, you can verify whether the trials data is extracted properly and that the sequence of events matches the expected logic of the task.

Metrics definitions

All the metrics computed as part of the Task logic integrity QC (Task QC) are implemented in ibllib. When run at a behavior rig, they are computed using the Bpod data, without alignment to another DAQ’s clock.
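
The metrics can also be computed programmatically. Here is a sketch, assuming the TaskQC class in ibllib.qc.task_metrics and the session eid used later on this page; see the documentation page referenced in the tip below for the definitive API:

from one.api import ONE
from ibllib.qc.task_metrics import TaskQC

one = ONE()
# Instantiate the QC for a given session eid (the example eid from this page)
qc = TaskQC('baecbddc-2b86-4eaf-a6f2-b30923225609', one=one)
outcome, results = qc.run()  # session-level outcome and per-metric values
print(outcome)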

Tip

The Task QC metrics definitions can be found in this documentation page. See this page on how to write QC checks for a custom task protocol.

Some are essential, i.e. if they fail you should immediately take action and verify your rig, while others are less critical.

Essential taskQCs:

  • check_audio_pre_trial

  • check_correct_trial_event_sequence

  • check_error_trial_event_sequence

  • check_n_trial_events

  • check_response_feedback_delays

  • check_reward_volume_set

  • check_reward_volumes

  • check_stimOn_goCue_delays

  • check_stimulus_move_before_goCue

  • check_wheel_move_before_feedback

  • check_wheel_freeze_during_quiescence

Non-essential taskQCs:

  • check_stimOff_itiIn_delays

  • check_positive_feedback_stimOff_delays

  • check_negative_feedback_stimOff_delays

  • check_wheel_move_during_closed_loop

  • check_response_stimFreeze_delays

  • check_detected_wheel_moves

  • check_trial_length

  • check_goCue_delays

  • check_errorCue_delays

  • check_stimOn_delays

  • check_stimOff_delays

  • check_iti_delays

  • check_stimFreeze_delays

  • check_wheel_integrity

Tip

The value returned by each metric is the proportion of trials that pass the given test. For example, if the value returned by check_errorCue_delays is 0.92, it means 8% of the trials failed this test.

Quantifying the task QC outcome at the session level

The criteria for whether a session passes the Task QC is:

  • NOT_SET: default value (= not run yet)

  • FAIL: if at least one metric is < 95%

  • WARNING: if all metrics are >= 95% and at least one metric is < 99%

  • PASS: if all metrics are >= 99%

This aggregation is done on all metrics, regardless of whether they are essential or not.
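
As an illustration only (the authoritative implementation is at the code line referenced below), the aggregation amounts to:

def session_outcome(metrics: dict) -> str:
    """Aggregate per-metric pass proportions into a session-level outcome."""
    if not metrics:
        return 'NOT_SET'  # QC not run yet
    if any(v < 0.95 for v in metrics.values()):
        return 'FAIL'
    if any(v < 0.99 for v in metrics.values()):
        return 'WARNING'
    return 'PASS'

# One metric between 95% and 99%, none below 95% -> WARNING
print(session_outcome({'check_goCue_delays': 0.98, 'check_trial_length': 1.0}))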

The criteria are defined at this code line.

How to check the task QC outcome

Immediately after acquiring a session

At the behaviour PC, before the data have been copied, use the task_qc command with the session path:

task_qc C:\iblrigv8_data\Subjects\KS022\2019-12-10\001 --local

More information can be found here, or by running task_qc --help.

Once the session is registered on Alyx

  1. Check on the Alyx webpage

    From the session overview page on Alyx, find your session and click on See more session info. The session QC is displayed in one of the right panels.

    To get more information on which tests pass or fail (contributing to this overall session QC), you can click on the QC menu on the left. Bar diagrams will appear, with essential QCs on the left, colored in green if passing.

    Tip

    You can hover over the bars with your mouse to easily know the name of the corresponding metric. This is useful if the value of the metric is 0.

    Warning

    If an essential metric fails, run the Task QC Viewer to investigate why.

  2. Run the taskQC Viewer to investigate

    The Task QC Viewer application enables you to visualise the data streams of problematic trials.

    Tip

    Unlike when run at the behaviour PC, after registration the QC is run on the final time-aligned data (if applicable).

    Run the task QC metrics and viewer

    Select the eid for your session to inspect, and run the following within the iblrig env:

    task_qc baecbddc-2b86-4eaf-a6f2-b30923225609
    

Guide to develop a custom task

iblrigv8 design: inheritance

During the lifetime of the IBL project, we realized that multiple task variants combine with multiple hardware configurations and acquisition modalities, leading to a combinatorial explosion of possible tasks and related hardware.

This left us with only one option: developing a flexible task framework through hierarchical inheritance.

All tasks inherit from the iblrig.base_tasks.BaseSession class, which provides the core session functionality: reading the hardware and task settings, creating the session folder, saving parameters to disk, and communicating with Alyx (see “What Happens When Running an IBL Task?” below).

Additionally, the iblrig.base_tasks module provides “hardware mixins”. These are classes that provide hardware-specific functionality, such as connecting to a Bpod or a rotary encoder. They are composed with the BaseSession class to create a task.
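
As a sketch of what this composition looks like (the mixin names below follow the pattern of iblrig.base_tasks, but check that module for the exact classes available):

from iblrig.base_tasks import BaseSession, BpodMixin, RotaryEncoderMixin

# Hypothetical task composed from hardware mixins and the base session class
class MySession(BpodMixin, RotaryEncoderMixin, BaseSession):
    protocol_name = '_my_customChoiceWorld'  # hypothetical protocol name

    def _run(self):
        ...  # the task-specific logic goes here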

Warning

This sounds complicated? It is! Forecasting all possible tasks, hardware add-ons and modifications is a fool’s errand; however, we can go through specific examples of task implementations.

Guide to Creating Your Own Task

What Happens When Running an IBL Task?

  1. The task constructor is invoked, executing the following steps:

    • Reading of settings: hardware and IBLRIG configurations.

    • Reading of task parameters.

    • Instantiation of hardware mixins.

  2. The task initiates the run() method. Prior to execution, this method:

    • Launches the hardware modules.

    • Establishes a session folder.

    • Saves the parameters to disk.

  3. The experiment unfolds: the run() method triggers the _run() method within the child class:

    • Typically, this involves a loop that generates a Bpod state machine for each trial and runs it (see the sketch after this list).

  4. Upon SIGINT or when the maximum trial count is reached, the experiment concludes. The end of the run() method includes:

    • Saving the final parameter file.

    • Recording administered water and session performance on Alyx.

    • Halting the mixins.

    • Initiating local server transfer.
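
Concretely, step 3 typically looks like the following sketch. The method names (next_trial, get_state_machine_trial, trial_completed) follow the pattern of iblrig.base_choice_world and pybpod, but treat them as illustrative rather than exact:

def _run(self):
    # One Bpod state machine is built, uploaded and run per trial
    for i in range(self.task_params.NTRIALS):  # assumed parameter name
        self.next_trial()  # draw the parameters of the upcoming trial
        sma = self.get_state_machine_trial(i)  # build the state machine
        self.bpod.send_state_machine(sma)  # upload it to the Bpod
        self.bpod.run_state_machine(sma)  # blocks until the trial ends
        self.trial_completed(self.bpod.session.current_trial.export())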

Examples

Where to write your task

After the installation of iblrig, the project_extraction repository is located at the root of the C: drive. New tasks should be added to the C:\project_extraction\iblrig_custom_tasks folder to be visible to the iblrig GUI. By convention, the task name starts with the author identifier, followed by an underscore, followed by the task name, such as olivier_awesomeChoiceWorld.

olivier_awesomeChoiceWorld
├── __init__.py
├── task.py
├── README.md
├── task_parameters.yaml
└── test_olivier_awesomeChoiceWorld.py

Example 1: variation on biased choice world

We will create a choice world task that modifies the random draw policy for the quiescence period duration. In the task.py file, the first step is to create a new task class that inherits from the BiasedChoiceWorldSession class.

Then we want to make sure that the task bears a distinctive protocol name, _iblrig_tasks_imagingChoiceWorld. We also create the command line entry point for the task that will be used by the iblrig GUI.

Also, in this case we can leverage the IBL infrastructure to extract the trials using existing extractors: extractor_tasks = ['TrialRegisterRaw', 'ChoiceWorldTrials'].

import iblrig.misc
from iblrig.base_choice_world import BiasedChoiceWorldSession


class Session(BiasedChoiceWorldSession):
    protocol_name = "_iblrig_tasks_imagingChoiceWorld"

    def __init__(self, *args, **kwargs):
        self.extractor_tasks = ['TrialRegisterRaw', 'ChoiceWorldTrials']
        super().__init__(*args, **kwargs)


if __name__ == "__main__":  # pragma: no cover
    kwargs = iblrig.misc.get_task_arguments(parents=[Session.extra_parser()])
    sess = Session(**kwargs)
    sess.run()

In this case the parent class BiasedChoiceWorldSession has a method that draws the quiescence period. We are going to override this method with our own policy; this means the parent method will be fully replaced by our implementation. The class now looks like this:

class Session(BiasedChoiceWorldSession):
    protocol_name = "_iblrig_tasks_imagingChoiceWorld"

    def draw_quiescent_period(self):
        """
        For this task we double the quiescence period texp draw and remove the absolute
        offset of 200 ms. The result is a truncated exponential distribution between
        400 ms and 1 s.
        """
        return iblrig.misc.texp(factor=0.35 * 2, min_=0.2 * 2, max_=0.5 * 2)

Et voilà: in a few lines, we re-used the whole biased choice world implementation and added a custom parameter. This is the simplest possible example. The full code is available here.

Example 2: re-writing a state-machine for a biased choice world task

In some instances, changes in the task logic require going deeper and re-writing the sequence of task events. In Bpod parlance, this means rewriting the state-machine code.

Documentation for this is coming; for now, here is an example of such a task.
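
To give a flavour of what rewriting a state machine involves, below is a minimal sketch using the pybpod API; the state names, timers and output actions are invented for illustration:

from pybpodapi.state_machine import StateMachine

def get_state_machine_trial(self, i):
    # Build a two-state trial: play a tone, then wait for a timeout
    sma = StateMachine(self.bpod)
    sma.add_state(
        state_name='play_tone',
        state_timer=0.1,
        state_change_conditions={'Tup': 'response'},
        output_actions=[('SoftCode', 1)],  # illustrative output action
    )
    sma.add_state(
        state_name='response',
        state_timer=60,
        state_change_conditions={'Tup': 'exit'},
        output_actions=[],
    )
    return sma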
