ibllib.pipes.mesoscope_tasks

The mesoscope data extraction pipeline.

The mesoscope pipeline currently comprises raw image registration and timestamp extraction. In the future it will also include compression (and potentially cropping), FOV metadata extraction, and ROI extraction.

Pipeline:
  1. Register reference images and upload snapshots and notes to Alyx

  2. Run ROI cell detection

  3. Calculate the pixel and ROI brain locations and register fields of view to Alyx

  4. Compress the raw imaging data

  5. Extract the imaging times from the main DAQ
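
A minimal sketch of running these tasks for a single session is shown below. It assumes the standard ibllib Task interface (a constructor taking the session path plus an optional ONE instance, and a run() method); the exact keyword arguments may differ between versions.

    from pathlib import Path

    from one.api import ONE
    from ibllib.pipes.mesoscope_tasks import (
        MesoscopeRegisterSnapshots, MesoscopeCompress, MesoscopePreprocess,
        MesoscopeSync, MesoscopeFOV)

    one = ONE()
    session_path = Path('/data/subject/2023-01-01/001')  # hypothetical session

    # Tasks ordered by priority (higher priority runs first; see the class
    # attributes documented below)
    for task_class in (MesoscopeRegisterSnapshots, MesoscopeCompress,
                       MesoscopePreprocess, MesoscopeSync, MesoscopeFOV):
        task = task_class(session_path, one=one)
        status = task.run()  # 0 indicates success
        print(task_class.__name__, status)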

Functions

find_triangle

Find the triangle that contains a given point.

in_triangle

Check whether point lies within triangle.

surface_normal

Calculate the surface normal unit vector of one or more triangles.

Classes

MesoscopeCompress

Tar compress raw 2p tif files, optionally remove uncompressed data.

MesoscopeFOV

Create FOV and FOV location objects in Alyx from metadata.

MesoscopePreprocess

Run suite2p preprocessing on tif files.

MesoscopeRegisterSnapshots

Upload snapshots as Alyx notes and register the 2P reference image(s).

MesoscopeSync

Extract the frame times from the main DAQ.

Provenance

Enumeration of the provenance of a field of view location.

class Provenance(value)

Bases: Enum

Enumeration of the provenance of a field of view location.

ESTIMATE = 1
FUNCTIONAL = 2
LANDMARK = 3
HISTOLOGY = 4
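
Being a standard Enum, provenance values can be compared and looked up by name or value; for example (pure Python, based only on the members listed above):

    from ibllib.pipes.mesoscope_tasks import Provenance

    fov_provenance = Provenance.ESTIMATE
    assert fov_provenance.value == 1
    assert Provenance(4) is Provenance.HISTOLOGY          # look up by value
    assert Provenance['LANDMARK'] is Provenance.LANDMARK  # look up by name

    # Dataset attribute suffixes refer to provenance by its initial letter,
    # e.g. 'E' for Estimate, 'L' for Landmark (see register_fov below); this
    # mapping is illustrative.
    initial = fov_provenance.name[0]  # 'E'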
class MesoscopeRegisterSnapshots(session_path, **kwargs)[source]

Bases: MesoscopeTask, RegisterRawDataTask

Upload snapshots as Alyx notes and register the 2P reference image(s).

priority = 100
job_size = 'small'
property signature


class MesoscopeCompress(session_path, **kwargs)[source]

Bases: MesoscopeTask

Tar compress raw 2p tif files, optionally remove uncompressed data.

priority = 90
job_size = 'large'
property signature


setUp(**kwargs)[source]

Run at a higher log level.

tearDown()[source]

Runs after run(). Does not run if the task encounters a lock (status -2).
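
For illustration only, a tar-compression step of this kind might look like the sketch below; the archive name, compression format and any verification performed by the actual task are assumptions.

    import tarfile
    from pathlib import Path

    def compress_tifs(raw_imaging_path: Path, remove_uncompressed: bool = False) -> Path:
        """Bundle all *.tif files in a folder into one compressed tar archive."""
        tifs = sorted(raw_imaging_path.glob('*.tif'))
        archive = raw_imaging_path / 'imaging.frames.tar.bz2'  # hypothetical name
        with tarfile.open(archive, 'w:bz2') as tar:
            for tif in tifs:
                tar.add(tif, arcname=tif.name)  # store without parent directories
        if remove_uncompressed:
            for tif in tifs:
                tif.unlink()
        return archive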

class MesoscopePreprocess(session_path, **kwargs)[source]

Bases: MesoscopeTask

Run suite2p preprocessing on tif files.

priority = 80
cpu = -1
job_size = 'large'
property signature


get_default_tau()[source]

Determine the tau (fluorescence decay) from the subject’s genotype.

Returns:

The tau value to use.

Return type:

float

See also

https://suite2p.readthedocs.io/en/latest/settings.html
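
The tau setting is the decay timescale of the calcium indicator; the suite2p settings documentation suggests roughly 0.7 s for GCaMP6f, 1.0 s for GCaMP6m and 1.25-1.5 s for GCaMP6s. A genotype-to-tau lookup might therefore resemble the sketch below; the genotype strings, matching logic and fallback value are illustrative assumptions, not the task's actual implementation.

    def default_tau(genotype: str, fallback: float = 1.5) -> float:
        """Return a fluorescence decay time constant (s) for a given genotype."""
        taus = {
            'GCaMP6f': 0.7,  # fast indicator
            'GCaMP6m': 1.0,  # medium indicator
            'GCaMP6s': 1.5,  # slow indicator
        }
        for indicator, tau in taus.items():
            if indicator.lower() in genotype.lower():
                return tau
        return fallback

    default_tau('Vglut1-Cre;Ai95(GCaMP6f)')  # -> 0.7 (hypothetical genotype string)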

class MesoscopeSync(session_path, **kwargs)[source]

Bases: MesoscopeTask

Extract the frame times from the main DAQ.

priority = 40
job_size = 'small'
property signature

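
Frame times are typically recovered by detecting edges on the imaging frame-clock channel recorded by the DAQ. The sketch below illustrates the general idea only; the channel name, edge polarity and any per-plane demultiplexing performed by the actual task are not shown.

    import numpy as np

    def frame_times_from_sync(frame_clock: np.ndarray, sample_times: np.ndarray) -> np.ndarray:
        """Return the times of rising edges on a thresholded frame-clock trace."""
        high = frame_clock > 0.5 * frame_clock.max()  # binarise the analogue trace
        rising = np.flatnonzero(np.diff(high.astype(np.int8)) == 1) + 1
        return sample_times[rising]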

class MesoscopeFOV(session_path, **kwargs)[source]

Bases: MesoscopeTask

Create FOV and FOV location objects in Alyx from metadata.

priority = 40
job_size = 'small'
property signature


update_surgery_json(meta, normal_vector)[source]

Update surgery JSON with surface normal vector.

Adds the key ‘surface_normal_unit_vector’ to the most recent surgery JSON, containing the provided three element vector. The recorded craniotomy center must match the coordinates in the provided meta file.

Parameters:
  • meta (dict) – The imaging meta data file containing the ‘centerMM’ key.

  • normal_vector (array_like) – A three element unit vector normal to the surface of the craniotomy center.

Returns:

The updated surgery record, or None if no surgeries found.

Return type:

dict

roi_mlapdv(nFOV: int, suffix=None)[source]

Extract ROI MLAPDV coordinates and brain location IDs.

MLAPDV coordinates are in um relative to bregma. Location IDs are from the 2017 Allen common coordinate framework atlas.

Parameters:
  • nFOV (int) – The number of fields of view acquired.

  • suffix ({None, 'estimate'}) – The attribute suffix of the mpciMeanImage datasets to load. If generating from estimates, the suffix should be ‘estimate’.

Returns:

  • dict of int to numpy.array – A map of field of view number to ROI MLAPDV coordinates.

  • dict of int to numpy.array – A map of field of view number to ROI brain location IDs.
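
Based on the documented signature and return values, a call might look as follows; the FOV count, the zero-based FOV keys and the way the task instance is obtained are assumptions.

    # Hypothetical usage for a session with six fields of view whose locations
    # were estimated before histology (hence suffix='estimate').
    task = MesoscopeFOV(session_path, one=one)
    mlapdv_by_fov, location_ids_by_fov = task.roi_mlapdv(nFOV=6, suffix='estimate')

    fov0_coords = mlapdv_by_fov[0]         # per-ROI ML/AP/DV in um from bregma
    fov0_regions = location_ids_by_fov[0]  # per-ROI Allen CCF 2017 location IDs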

static get_provenance(filename)[source]

Get the field of view provenance from a mpciMeanImage or mpciROIs dataset.

Parameters:

filename (str, pathlib.Path) – A filename to get the provenance from.

Returns:

The provenance of the file.

Return type:

Provenance

register_fov(meta: dict, suffix: str | None = None) -> (list, list)[source]

Create FOV on Alyx.

Assumes the field of view was recorded perpendicular to the objective. Assumes the field of view is planar (negligible volume).

Required Alyx fixtures:
  • experiments.ImagingType(name='mesoscope')

  • experiments.CoordinateSystem(name='IBL-Allen')

Parameters:
  • meta (dict) – The raw imaging meta data from _ibl_rawImagingData.meta.json.

  • suffix (str) – The file attribute suffixes to load from the mpciMeanImage object. Either ‘estimate’ or None. No suffix means the FOV location provenance will be L (Landmark).

Returns:

  • list of dict – A list of registered field of view entries from Alyx.

  • TODO Determine dual plane ID for JSON field

load_triangulation()[source]

Load the surface triangulation file.

A triangle mesh of the smoothed convex hull of the dorsal surface of the mouse brain, generated from the 2017 Allen 10um annotation volume. This triangulation was generated in MATLAB.

Returns:

  • points (numpy.array) – An N by 3 float array of vertices defining all points of the triangle mesh. These are in um relative to the IBL bregma coordinates.

  • connectivity_list (numpy.array) – An N by 3 integer array of vertex indices; each row defines the three points that form one triangle.

project_mlapdv(meta, atlas=None)[source]

Calculate the mean image pixel locations in MLAPDV coordinates and determine the brain location IDs.

MLAPDV coordinates are in um relative to bregma. Location IDs are from the 2017 Allen common coordinate framework atlas.

Parameters:
  • meta (dict) – The raw imaging data meta file, containing coordinates for the centre of each field of view.

  • atlas (ibllib.atlas.Atlas) – An atlas instance.

Returns:

  • dict – A map of FOV number (int) to mean image MLAPDV coordinates as a 2D numpy array.

  • dict – A map of FOV number (int) to mean image brain location IDs as a 2D numpy int array.
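
A hypothetical call, assuming the raw imaging metadata has been loaded from the session's _ibl_rawImagingData.meta.json file (locating it with rglob here is for illustration):

    import json

    meta_file = next(session_path.rglob('_ibl_rawImagingData.meta.json'))
    with open(meta_file) as fp:
        meta = json.load(fp)

    task = MesoscopeFOV(session_path, one=one)
    mlapdv_by_fov, location_ids_by_fov = task.project_mlapdv(meta)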

surface_normal(triangle)[source]

Calculate the surface normal unit vector of one or more triangles.

Parameters:

triangle (numpy.array) – An array of shape (n_triangles, 3, 3) representing (Px Py Pz).

Returns:

The surface normal unit vector(s).

Return type:

numpy.array
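
The unit normal of a triangle with vertices A, B and C is the normalised cross product of two of its edge vectors. A standalone sketch consistent with the documented (n_triangles, 3, 3) input shape:

    import numpy as np

    def surface_normal_sketch(triangles: np.ndarray) -> np.ndarray:
        """Unit normals for an (n_triangles, 3, 3) array of triangle vertices."""
        a, b, c = triangles[:, 0], triangles[:, 1], triangles[:, 2]
        n = np.cross(b - a, c - a)  # normal via the edge cross product
        return n / np.linalg.norm(n, axis=1, keepdims=True)  # scale to unit length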

in_triangle(triangle, point)[source]

Check whether point lies within triangle.

Parameters:
  • triangle (numpy.array) – A (2 x 3) array of x-y coordinates; A(x1, y1), B(x2, y2) and C(x3, y3).

  • point (numpy.array) – A point, P(x, y).

Returns:

True if coordinate lies within triangle.

Return type:

bool
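
A common way to implement this test is the signed-area (barycentric sign) method. The standalone sketch below takes the triangle as three (x, y) rows, whereas the library function documented above expects a (2 x 3) layout:

    def in_triangle_sketch(triangle, point) -> bool:
        """True if the 2D point lies within (or on the edge of) the triangle."""
        a, b, c = triangle

        def cross2d(p, q, r):
            # Twice the signed area of triangle (p, q, r)
            return (q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])

        d1, d2, d3 = cross2d(point, a, b), cross2d(point, b, c), cross2d(point, c, a)
        has_neg = d1 < 0 or d2 < 0 or d3 < 0
        has_pos = d1 > 0 or d2 > 0 or d3 > 0
        return not (has_neg and has_pos)  # all signs agree => inside or on an edge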

find_triangle(point, vertices, connectivity_list)[source]

Find the triangle that contains a given point.

Currently O(n) but could take advantage of connectivity order to be quicker.

Parameters:
  • point (numpy.array) – The (x, y) coordinate of a point to locate within one of the triangles.

  • vertices (numpy.array) – An N x 3 array of vertices representing a triangle mesh.

  • connectivity_list (numpy.array) – An N x 3 array of indices representing the connectivity of points.

Returns:

The index of the triangle (row of connectivity_list) containing the point, or -1 if the point is not within any triangle.

Return type:

int
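
As noted above the search is O(n); a brute-force sketch that walks the connectivity list and reuses the in_triangle_sketch helper from the previous example (not the library's internal implementation):

    def find_triangle_sketch(point, vertices, connectivity_list) -> int:
        """Index of the first triangle containing the point, or -1 if none does."""
        for i, tri_idx in enumerate(connectivity_list):
            tri_xy = vertices[tri_idx, :2]  # the triangle's three vertices, x-y only
            if in_triangle_sketch(tri_xy, point):
                return i
        return -1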