ibllib.qc.camera

Video quality control.

This module runs a list of quality control metrics on the camera and extracted video data.

Examples

Run right camera QC, downloading all but the video file

>>> qc = CameraQC(eid, 'right', download_data=True, stream=True)
>>> qc.run()

Run left camera QC with session path, update QC field in Alyx

>>> qc = CameraQC(session_path, 'left')
>>> outcome, extended = qc.run(update=True)  # Returns outcome of videoQC only
>>> print(f'video QC = {outcome}; overall session QC = {qc.outcome}')  # NB: different outcomes

Run only video QC (no timestamp/alignment checks) on 20 frames for the body camera

>>> qc = CameraQC(eid, 'body', n_samples=20)
>>> qc.load_video_data()  # Quicker than loading all data
>>> qc.run()

Run specific video QC check and display the plots

>>> qc = CameraQC(eid, 'left')
>>> qc.load_data(download_data=True)
>>> qc.check_position(display=True)  # NB: Not all checks make plots

Run the QC for all cameras

>>> qcs = run_all_qc(eid)
>>> qcs['left'].metrics  # Dict of checks and outcomes for left camera

Functions

data_for_keys

Check keys exist in 'data' dict and contain values other than None.

get_task_collection

Return the first non-passive task collection.

get_video_collection

Return the collection containing the raw video data for a given camera.

run_all_qc

Run QC for all cameras.

Classes

CameraQC

A class for computing camera QC metrics.

CameraQCCamlog

A class for computing camera QC metrics from camlog data.

class CameraQC(session_path_or_eid, camera, **kwargs)[source]

Bases: QC

A class for computing camera QC metrics.

dstypes = ['_ibl_experiment.description', '_iblrig_Camera.frameData', '_iblrig_Camera.frame_counter', '_iblrig_Camera.GPIO', '_iblrig_Camera.timestamps', '_iblrig_taskData.raw', '_iblrig_taskSettings.raw', '_iblrig_Camera.raw', 'camera.times', 'wheel.position', 'wheel.timestamps']
dstypes_fpga = ['_spikeglx_sync.channels', '_spikeglx_sync.polarities', '_spikeglx_sync.times', 'ephysData.raw.meta']

Recall that for the training rig there is only one side camera at 30 Hz and 1280 x 1024 px. For the recording rig there are two label cameras (left: 60 Hz, 1280 x 1024 px; right: 150 Hz, 640 x 512 px) and one body camera (30 Hz, 640 x 512 px).

video_meta = {'ephys': {'body': {'fps': 30, 'height': 512, 'width': 640}, 'left': {'fps': 60, 'height': 1024, 'width': 1280}, 'right': {'fps': 150, 'height': 512, 'width': 640}}, 'training': {'left': {'fps': 30, 'height': 1024, 'width': 1280}}}
property type

Returns the camera type based on the protocol.

Returns:

Returns either None, ‘ephys’ or ‘training’
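
A small usage sketch (assumes an eid and a valid ONE instance, as in the module examples above; the value depends on the session's protocol):

>>> qc = CameraQC(eid, 'left')
>>> qc.type  # one of None, 'ephys' or 'training'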

load_data(download_data: bool | None = None, extract_times: bool = False, load_video: bool = True) None[source]

Extract the data from raw data files.

Extracts all the required task data from the raw data files.

Data keys:
  • count (int array): the sequential frame number (n, n+1, n+2…)

  • pin_state (): the camera GPIO pin; records the audio TTLs; should be one per frame

  • audio (float array): timestamps of audio TTL fronts

  • fpga_times (float array): timestamps of camera TTLs recorded by FPGA

  • timestamps (float array): extracted video timestamps (the camera.times ALF)

  • bonsai_times (datetime array): system timestamps of video PC; should be one per frame

  • camera_times (float array): camera frame timestamps extracted from frame headers

  • wheel (Bunch): rotary encoder timestamps, position and period used for wheel motion

  • video (Bunch): video meta data, including dimensions and FPS

  • frame_samples (h x w x n array): array of evenly sampled frames (1 colour channel)

Parameters:
  • download_data – If True, any missing raw data is downloaded via ONE. Missing data will raise an AssertionError.

  • extract_times – If True, the camera.times are re-extracted from the raw data.

  • load_video – If True, calls the load_video_data method.
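
A hedged sketch of a typical call (assumes the qc instance from the examples above, constructed with an eid and a valid ONE instance; only keys listed above are used):

>>> qc.load_data(download_data=True, extract_times=True)
>>> qc.data['timestamps']  # extracted video timestamps (camera.times)
>>> qc.data['wheel']       # rotary encoder timestamps and position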

load_video_data()[source]

Get basic properties of video.

Updates the data property with video metadata such as length and frame count, as well as loading some frames to perform QC on.

static get_active_wheel_period(wheel, duration_range=(3.0, 20.0), display=False)[source]

Find period of active wheel movement.

Attempts to find a period of movement where the wheel accelerates and decelerates for the wheel motion alignment QC.

Parameters:
  • wheel – A Bunch of wheel timestamps and position data

  • duration_range – The candidates must be within min/max duration range

  • display – If true, plot the selected wheel movement

Returns:

2-element array comprising the start and end times of the active period
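
A possible usage sketch (assumes the wheel data have already been loaded into qc.data via load_data):

>>> period = CameraQC.get_active_wheel_period(qc.data['wheel'], display=True)
>>> period  # [start, end] times of the selected movement, in seconds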

ensure_required_data()[source]

Ensure the datasets required for QC are local.

If the download_data attribute is True, any missing data are downloaded. If any data are still missing afterwards, an exception is raised. If the stream attribute is True, the video file is not required to be local, however it must be remotely accessible. NB: Requires a valid instance of ONE and a valid session eid.

Raises:

AssertionError – The data required for complete QC are not present.

run(update: bool = False, **kwargs) -> (str, dict)[source]

Run video QC checks and return outcome.

Parameters:
  • update – if True, updates the session QC fields on Alyx

  • download_data – if True, downloads any missing data if required

  • extract_times – if True, re-extracts the camera timestamps from the raw data

Returns:

overall outcome as a str, a dict of checks and their outcomes
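
For example (a sketch assuming the instance from the examples above; keyword arguments such as download_data are forwarded to load_data):

>>> outcome, extended = qc.run(update=False, download_data=True)
>>> outcome   # overall video QC outcome, e.g. 'PASS'
>>> extended  # dict mapping each check name to its outcome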

check_brightness(bounds=(40, 200), max_std=20, roi=True, display=False)[source]

Check that the video brightness is within a given range.

The mean brightness of each frame must be within the bounds provided, and the standard deviation across sample frames should be less than the given value. Assumes that the frame samples are 2D (no colour channels).

Parameters:
  • bounds – For each frame, check that bounds[0] < M < bounds[1], where M = mean(frame). If fewer than 75% of sample frames fall within these bounds, the outcome is WARNING. If fewer than 75% fall within twice the bounds, the outcome is FAIL.

  • max_std – The standard deviation of the frame luminance means must be less than this.

  • roi – If True, check brightness on an ROI of the frame.

  • display – When True, the mean frame luminance is plotted against sample frames. The sample frames with the lowest and highest mean luminance are shown.
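
A hedged example of running this check on its own (assumes the video frame samples are loaded, e.g. via load_video_data):

>>> qc.load_video_data()
>>> qc.check_brightness(bounds=(40, 200), max_std=20, display=True)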

check_file_headers()[source]

Check reported frame rate matches FPGA frame rate.

check_framerate(threshold=1.0)[source]

Check camera times match specified frame rate for camera.

Parameters:

threshold – The maximum absolute difference between the timestamp sample rate and the video frame rate. NB: Does not take into account dropped frames.

check_pin_state(display=False)[source]

Check the pin state reflects Bpod TTLs.

check_dropped_frames(threshold=0.1)[source]

Check how many frames were reported missing.

Parameters:

threshold – The maximum allowable percentage of dropped frames

check_timestamps()[source]

Check that the camera.times array is reasonable.

check_camera_times()[source]

Check that the number of raw camera timestamps matches the number of video frames.

check_resolution()[source]

Check that the timestamps and video file resolution match what we expect.

check_wheel_alignment(tolerance=(1, 2), display=False)[source]

Check wheel motion in video correlates with the rotary encoder signal.

Check is skipped for body camera videos as the wheel is often obstructed

Parameters:
  • tolerance (int, (int, int)) – Maximum absolute offset in frames. If two values, the maximum value is taken as the warning threshold.

  • display (bool) – If true, the wheel motion energy is plotted against the rotary encoder.

Returns:

  • one.alf.spec.QC – The outcome, one of {‘NOT_SET’, ‘FAIL’, ‘WARNING’, ‘PASS’}.

  • int – Frame offset, i.e. by how many frames the video was shifted to match the rotary encoder signal. Negative values mean the video was shifted backwards with respect to the wheel timestamps.

Notes

  • A negative frame offset typically means that there were frame TTLs at the beginning that do not correspond to any video frames (sometimes the first few frames aren’t saved to disk). Since 2021-09-15 the extractor should compensate for this.
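
A usage sketch (assumes the wheel, timestamp and video data have been loaded; the unpacking follows the return values above):

>>> outcome, offset = qc.check_wheel_alignment(tolerance=(1, 2), display=True)
>>> offset  # number of frames the video was shifted to match the encoder signal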

check_position(hist_thresh=(75, 80), pos_thresh=(10, 15), metric=5, display=False, test=False, roi=None, pct_thresh=True)[source]

Check camera is positioned correctly.

For the template matching, zero-normalized cross-correlation (the default) should be more robust to exposure (which we’re not checking here). The L2 norm (TM_SQDIFF) should also work.

If display is True, the template ROI (pink hashed) is plotted over a video frame, along with the threshold regions (green solid). The histogram correlations are plotted, and the full histogram is plotted for one of the sample frames and the reference frame.

Parameters:
  • hist_thresh – The minimum histogram cross-correlation threshold to pass (0-1).

  • pos_thresh – The maximum number of pixels that the template matcher may be off by. If two values are provided, the lower threshold is treated as a warning boundary.

  • metric – The metric to use for template matching.

  • display – If true, the results are plotted

  • test – If true, a reference frame is used instead of the frames in frame_samples.

  • roi – A tuple of indices for the face template in the form ((y1, y2), (x1, x2))

  • pct_thresh – If true, the thresholds are treated as percentages

check_focus(n=20, threshold=(100, 6), roi=False, display=False, test=False, equalize=True)[source]

Check video is in focus.

Two methods are used here: looking at the high frequencies with a DFT, and applying a Laplacian HPF and looking at the variance.

Note

  • Both methods are sensitive to noise (the Laplacian is a 2nd-order filter).

  • The thresholds for the fft may need to be different for the left/right vs body as the distribution of frequencies in the image is different (e.g. the holder comprises mostly very high frequencies).

  • The image may be overall in focus but the places we care about can still be out of focus (namely the face). For this we’ll take an ROI around the face.

  • The focus check can be thrown off by brightness. This may be fixed by equalizing the histogram (set equalize=True)

Parameters:
  • n (int) – Number of frames from frame_samples data to use in check.

  • threshold (tuple of float) – The lower boundary for Laplacian variance and mean FFT filtered brightness, respectively.

  • roi (bool, None, list of slice) – If False, the roi is determined via template matching for the face or body. If None, some set ROIs for face and paws are used. A list of slices may also be passed.

  • display (bool) – If true, the results are displayed.

  • test (bool) – If true, a set of artificially blurred reference frames are used as the input. This can be used to select reasonable thresholds.

  • equalize (bool) – If true, the histograms of the frames are equalized, resulting in increased global contrast and a linear CDF. This makes the check robust to low-light conditions.

Returns:

The QC outcome, either FAIL or PASS.

Return type:

one.alf.spec.QC
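
For instance, to sanity-check the thresholds against artificially blurred reference frames (a sketch; the reference frames are bundled with the package as described under load_reference_frames):

>>> qc.check_focus(n=20, threshold=(100, 6), test=True, display=True)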

find_face(roi=None, test=False, metric=5, refs=None)[source]

Use template matching to find face location in frame.

For the template matching, zero-normalized cross-correlation (the default) should be more robust to exposure (which we’re not checking here). The L2 norm (TM_SQDIFF) should also work. That said, normalizing the histograms works best.

Parameters:
  • roi – A tuple of indices for the face template in the form ((y1, y2), (x1, x2))

  • test – If True the template is matched against frames that come from the same session

  • metric – The metric to use for template matching

  • refs – An array of frames to match the template to

Returns:

(y1, y2), (x1, x2)
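
A short sketch of matching the face template against the loaded frame samples (the unpacking follows the return format above):

>>> (y1, y2), (x1, x2) = qc.find_face()  # requires frame_samples in qc.data
>>> face_roi = ((y1, y2), (x1, x2))      # same format as the roi parameter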

static load_reference_frames(side)[source]

Load some reference frames for a given video.

The reference frames are from sessions where the camera was well positioned. The frames are in qc/reference, one file per camera, only one channel per frame. The session eids can be found in qc/reference/frame_src.json

Parameters:

side – Video label, e.g. ‘left’

Returns:

numpy array of frames with the shape (n, h, w)

static imshow(frame, ax=None, title=None, **kwargs)[source]

plt.imshow with some convenient defaults for greyscale frames.
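
For example, to view one of the bundled reference frames with these defaults (a small sketch using the two static methods above):

>>> frames = CameraQC.load_reference_frames('left')  # shape (n, h, w)
>>> CameraQC.imshow(frames[0], title='left reference frame')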

class CameraQCCamlog(session_path_or_eid, camera, sync_collection='raw_sync_data', sync_type='nidq', **kwargs)[source]

Bases: CameraQC

A class for computing camera QC metrics from camlog data.

For this QC we expect the check_pin_state to be NOT_SET as we are not using the GPIO for timestamp alignment.

dstypes = ['_iblrig_taskData.raw', '_iblrig_taskSettings.raw', '_iblrig_Camera.raw', 'camera.times', 'wheel.position', 'wheel.timestamps']
dstypes_fpga = ['_spikeglx_sync.channels', '_spikeglx_sync.polarities', '_spikeglx_sync.times', 'DAQData.raw.meta', 'DAQData.wiring']

Recall that for the training rig there is only one side camera at 30 Hz and 1280 x 1024 px. For the recording rig there are two label cameras (left: 60 Hz, 1280 x 1024 px; right: 150 Hz, 640 x 512 px) and one body camera (30 Hz, 640 x 512 px).
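
A hedged construction example (the sync_collection and sync_type defaults are written out explicitly; a session recorded with camlog data is assumed):

>>> qc = CameraQCCamlog(session_path, 'left', sync_collection='raw_sync_data', sync_type='nidq')
>>> qc.run(update=False)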

load_data(download_data: bool | None = None, extract_times: bool = False, load_video: bool = True, **kwargs) None[source]

Extract the data from raw data files.

Extracts all the required task data from the raw data files.

Data keys:
  • count (int array): the sequential frame number (n, n+1, n+2…)

  • pin_state (): the camera GPIO pin; records the audio TTLs; should be one per frame

  • audio (float array): timestamps of audio TTL fronts

  • fpga_times (float array): timestamps of camera TTLs recorded by FPGA

  • timestamps (float array): extracted video timestamps (the camera.times ALF)

  • bonsai_times (datetime array): system timestamps of video PC; should be one per frame

  • camera_times (float array): camera frame timestamps extracted from frame headers

  • wheel (Bunch): rotary encoder timestamps, position and period used for wheel motion

  • video (Bunch): video meta data, including dimensions and FPS

  • frame_samples (h x w x n array): array of evenly sampled frames (1 colour channel)

Parameters:
  • download_data – If True, any missing raw data is downloaded via ONE. Missing data will raise an AssertionError.

  • extract_times – If True, the camera.times are re-extracted from the raw data.

  • load_video – If True, calls the load_video_data method.

ensure_required_data()[source]

Ensure the datasets required for QC are local.

If the download_data attribute is True, any missing data are downloaded. If any data are still missing afterwards, an exception is raised. If the stream attribute is True, the video file is not required to be local, however it must be remotely accessible. NB: Requires a valid instance of ONE and a valid session eid.

check_camera_times()[source]

Check that the number of raw camera timestamps matches the number of video frames.

data_for_keys(keys, data)[source]

Check keys exist in ‘data’ dict and contain values other than None.

get_task_collection(sess_params)[source]

Return the first non-passive task collection.

Returns the first task collection from the experiment description whose task name does not contain ‘passive’, otherwise returns ‘raw_behavior_data’.

Parameters:

sess_params (dict) – The loaded experiment description file.

Returns:

The collection presumed to contain wheel data.

Return type:

str

get_video_collection(sess_params, label)[source]

Return the collection containing the raw video data for a given camera.

Parameters:
  • sess_params (dict) – The loaded experiment description file.

  • label (str) – The camera label.

Returns:

The collection presumed to contain the video data.

Return type:

str
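
An illustrative sketch for both helpers (assumes the session includes an experiment description file, here loaded with ibllib.io.session_params.read_params; the returned collection names depend on the session):

>>> from ibllib.io import session_params
>>> sess_params = session_params.read_params(session_path)
>>> get_task_collection(sess_params)           # e.g. 'raw_task_data_00'
>>> get_video_collection(sess_params, 'left')  # e.g. 'raw_video_data'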

run_all_qc(session, cameras=('left', 'right', 'body'), **kwargs)[source]

Run QC for all cameras.

Run the camera QC for left, right and body cameras.

Parameters:
  • session – A session path or eid.

  • update – If True, QC fields are updated on Alyx.

  • cameras – A list of camera names to perform QC on.

  • stream – If true and local video files are not available, the data are streamed from the remote source.

Returns:

Dict of CameraQC objects, keyed by camera label.
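
For example, to run and register the QC for a subset of cameras (a sketch; update and stream are forwarded as described above):

>>> qcs = run_all_qc(session_path, cameras=('left', 'body'), update=True)
>>> {label: qc.outcome for label, qc in qcs.items()}  # overall outcome per camera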