Loading with ONE
Once a session and datasets of interest have been identified, the ONE load methods can be used to load the relevant data.
To load all datasets for a given object, we can use the load_object method:
[1]:
from one.api import ONE
one = ONE(base_url='https://openalyx.internationalbrainlab.org', silent=True)
eid = 'CSH_ZAD_029/2020-09-19/001'
trials = one.load_object(eid, 'trials')
The attributes of the returned object mirror the datasets:
[2]:
print(trials.keys())
dict_keys(['intervals', 'rewardVolume', 'contrastRight', 'response_times', 'choice', 'stimOn_times', 'probabilityLeft', 'goCueTrigger_times', 'intervals_bpod', 'goCue_times', 'firstMovement_times', 'stimOff_times', 'contrastLeft', 'feedbackType', 'feedback_times'])
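The returned object behaves like a dictionary whose keys are also accessible as attributes. A minimal sketch of this behaviour, using a plain dict subclass and made-up trial values rather than ONE's own container class and the real session data:

```python
# A minimal sketch of the dict-like object returned by load_object:
# keys are also accessible as attributes. This stand-in class is for
# illustration only; ONE returns its own Bunch-style container.
class Bunch(dict):
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError as e:
            raise AttributeError(key) from e

# Hypothetical trial data, not real values from the example session
trials = Bunch(rewardVolume=[1.5, 0.0, 1.5],
               probabilityLeft=[0.5, 0.5, 0.2])

# Both access styles return the same underlying list
assert trials['rewardVolume'] is trials.rewardVolume
```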
If we only want to load certain attributes of an object, we can use the following:
[3]:
trials = one.load_object(eid, 'trials', attribute=['intervals', 'rewardVolume', 'probabilityLeft'])
print(trials.keys())
dict_keys(['intervals', 'rewardVolume', 'probabilityLeft', 'intervals_bpod'])
If an object belongs to more than one collection, for example the clusters object, the collection must be specified:
[4]:
clusters = one.load_object(eid, 'clusters', collection='alf/probe01')
By default, the load_object method downloads the data and loads it into memory. If you only want to download the data, you can set the download_only flag; in this case the returned value is a list of paths to the datasets on your local file system.
[5]:
files = one.load_object(eid, 'clusters', collection='alf/probe01', download_only=True)
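The returned paths are ordinary pathlib.Path objects, so the usual file-handling tools apply. A sketch with hypothetical paths standing in for the list returned above:

```python
from pathlib import Path

# Hypothetical paths standing in for the list returned with download_only=True
files = [Path('alf/probe01/clusters.waveforms.npy'),
         Path('alf/probe01/clusters.channels.npy')]

# The entries are plain pathlib.Path objects, so we can, for example,
# collect the dataset file names in alphabetical order
names = sorted(p.name for p in files)
```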
To load a single dataset, we can use the load_dataset method:
[6]:
reward_volume = one.load_dataset(eid, '_ibl_trials.rewardVolume.npy')
Once again, if the same dataset exists in more than one collection, the collection must be specified:
[7]:
waveforms = one.load_dataset(eid, 'clusters.waveforms.npy', collection='alf/probe01')
We can use the load_datasets method to load multiple datasets at once. This method returns two lists: the first contains the data for each dataset and the second contains meta-information about each dataset.
[8]:
data, info = one.load_datasets(eid, datasets=['_ibl_trials.rewardVolume.npy',
'_ibl_trials.probabilityLeft.npy'])
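The two lists are index-aligned, so each dataset can be paired with its metadata record using zip. A sketch with placeholder values standing in for the load_datasets output (the real info entries are richer records describing each dataset):

```python
# Hypothetical return values standing in for load_datasets output;
# the real info entries are records describing each dataset
data = [[1.5, 0.0, 1.5], [0.5, 0.5, 0.2]]
info = [{'rel_path': 'alf/_ibl_trials.rewardVolume.npy'},
        {'rel_path': 'alf/_ibl_trials.probabilityLeft.npy'}]

# The two lists are index-aligned, so zip pairs each dataset
# with its metadata record
lengths = {i['rel_path']: len(d) for d, i in zip(data, info)}
```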
It is also possible to load datasets from different collections. For example, if we want to simultaneously load a trials dataset and a clusters dataset, we would type:
[9]:
data, info = one.load_datasets(eid, datasets=['_ibl_trials.rewardVolume.npy',
'clusters.waveforms.npy'],
collections=['alf', 'alf/probe01'])
More information about these methods can be found using the help command:
[10]:
help(one.load_dataset)
Help on method load_dataset in module one.api:
load_dataset(eid: Union[str, pathlib.Path, uuid.UUID], dataset: str, collection: Optional[str] = None, revision: Optional[str] = None, query_type: Optional[str] = None, download_only: bool = False, **kwargs) -> Any method of one.api.OneAlyx instance
Load a single dataset for a given session id and dataset name
Parameters
----------
eid : str, UUID, pathlib.Path, dict
Experiment session identifier; may be a UUID, URL, experiment reference string
details dict or Path.
dataset : str, dict
The ALF dataset to load. May be a string or dict of ALF parts. Supports asterisks as
wildcards.
collection : str
The collection to which the object belongs, e.g. 'alf/probe01'.
This is the relative path of the file from the session root.
Supports asterisks as wildcards.
revision : str
The dataset revision (typically an ISO date). If no exact match, the previous
revision (ordered lexicographically) is returned. If None, the default revision is
returned (usually the most recent revision). Regular expressions/wildcards not
permitted.
query_type : str
Query cache ('local') or Alyx database ('remote')
download_only : bool
When true the data are downloaded and the file path is returned.
Returns
-------
Dataset or a Path object if download_only is true.
Examples
--------
intervals = one.load_dataset(eid, '_ibl_trials.intervals.npy')
intervals = one.load_dataset(eid, '*trials.intervals*')
filepath = one.load_dataset(eid, '_ibl_trials.intervals.npy', download_only=True)
spike_times = one.load_dataset(eid, 'spikes.times.npy', collection='alf/probe01')
old_spikes = one.load_dataset(eid, 'spikes.times.npy',
                              collection='alf/probe01', revision='2020-08-31')