Listing Alyx Filenames
An Alyx Filename (ALF) is any file whose path matches a specific pattern. We will use the following file structure as an example:
subject/
├─ 2021-06-01/
│ ├─ 001/
│ │ ├─ alf/
│ │ │ ├─ probe00/
│ │ │ │ ├─ spikes.clusters.npy
│ │ │ │ ├─ spikes.times.npy
│ │ │ ├─ probe01/
│ │ │ │ ├─ #2021-07-05#/
│ │ │ │ │ ├─ spikes.clusters.npy
│ │ │ │ │ ├─ spikes.times.npy
│ │ │ │ ├─ spikes.clusters.npy
│ │ │ │ ├─ spikes.times.npy
│ │ ├─ probes.description.json
Let’s create these files and generate a ONE cache table:
[1]:
from tempfile import TemporaryDirectory
from pathlib import Path
from one.alf.cache import make_parquet_db

files = [
    'subject/2021-06-01/001/probes.description.json',
    'subject/2021-06-01/001/alf/probe00/spikes.times.npy',
    'subject/2021-06-01/001/alf/probe00/spikes.clusters.npy',
    'subject/2021-06-01/001/alf/probe01/spikes.times.npy',
    'subject/2021-06-01/001/alf/probe01/spikes.clusters.npy',
    'subject/2021-06-01/001/alf/probe01/#2021-07-05#/spikes.times.npy',
    'subject/2021-06-01/001/alf/probe01/#2021-07-05#/spikes.clusters.npy'
]

temp_dir = TemporaryDirectory()  # Create a temporary directory in which to place Alyx files
for file in files:
    file_path = Path(temp_dir.name).joinpath(file)  # Create full path
    file_path.parent.mkdir(parents=True, exist_ok=True)  # Create directories
    file_path.touch()  # Create empty file

make_parquet_db(temp_dir.name)  # Generate cache tables
Out[1]:
(WindowsPath('C:/Users/User/AppData/Local/Temp/tmpffnvueh8/sessions.pqt'),
WindowsPath('C:/Users/User/AppData/Local/Temp/tmpffnvueh8/datasets.pqt'))
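The returned paths point to ordinary parquet files, so the generated cache can be inspected with pandas if you wish (a minimal sketch, assuming pandas and a parquet engine are available):
[ ]:
import pandas as pd
# Read back the datasets cache table written by make_parquet_db above
datasets_table = pd.read_parquet(Path(temp_dir.name) / 'datasets.pqt')
datasets_table.head()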
The full spec is available in the one.alf.spec module:
[2]:
import one.alf.spec as alf_spec
from one.api import ONE
one = ONE(cache_dir=temp_dir.name)
A valid ALF path includes the following parts (those in brackets are optional):
[3]:
print(alf_spec.path_pattern())
(lab/Subjects/)subject/date/number/(collection/)(#revision#/)_namespace_object.attribute_timescale.extra.extension
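A filename can be checked against this pattern with one.alf.spec.is_valid; a minimal sketch (the expected results are noted in the comments):
[ ]:
# Check example filenames against the ALF pattern
alf_spec.is_valid('spikes.times.npy')   # expected to return True
alf_spec.is_valid('spikes times.npy')   # expected to return False: spaces are not valid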
Details of each part can be obtained through the one.alf.spec.describe function:
[4]:
alf_spec.describe('collection')
(lab/Subjects/)subject/date/number/(collection/)(#revision#/)_namespace_object.attribute_timescale.extra.extension
                                    ^^^^^^^^^^
                                    COLLECTION
An optional folder to group data by modality, device, etc. This is necessary when a session
contains multiple measurements of the same type, for example spike times from multiple probes.
Label examples include "probe00", "raw_video_data".
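Any other part of the specification can be described in the same way, for example the revision folder (output omitted here):
[ ]:
alf_spec.describe('revision')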
When using One.load_object, the object name (e.g. ‘spikes’) is passed to the method for loading. Other specifiers, such as attributes, collection and revision, may also be passed.
To list all the files in ‘subject/2021-06-01/001’
[9]:
one.list_datasets('subject/2021-06-01/001')
Out[9]:
['alf/probe00/spikes.clusters.npy',
'alf/probe00/spikes.times.npy',
'alf/probe01/#2021-07-05#/spikes.clusters.npy',
'alf/probe01/#2021-07-05#/spikes.times.npy',
'alf/probe01/spikes.clusters.npy',
'alf/probe01/spikes.times.npy',
'probes.description.json']
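The listing can also be filtered by dataset name; a sketch assuming the filename argument of One.list_datasets accepts wildcards:
[ ]:
one.list_datasets('subject/2021-06-01/001', filename='*spikes.times*')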
To list all datasets in the ‘alf/probe01’ collection
[ ]:
one.list_datasets('subject/2021-06-01/001', collection='alf/probe01')
To list all datasets not in a collection
[10]:
one.list_datasets('subject/2021-06-01/001', collection='')
Out[10]:
['probes.description.json']
To list all revisions for a given session
[12]:
revisions = one.list_revisions('subject/2021-06-01/001')
[x or None for x in revisions]
Out[12]:
[None, '2021-07-05']
To list all collections for a given session
[13]:
collections = one.list_collections('subject/2021-06-01/001')
[x or None for x in collections]
Out[13]:
[None, 'alf/probe00', 'alf/probe01']
To load the ‘spikes’ object from the ‘alf/probe00’ collection
spikes = one.load_object('subject/2021-06-01/001', 'spikes', collection='alf/probe00')
To load the ‘spikes’ object from the ‘alf/probe01’ collection, and the last revision before or on July 1st
spikes = one.load_object('subject/2021-06-01/001', 'spikes',
                         collection='alf/probe01', revision='2021-07-01')
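To load only some attributes of an object, an attribute filter can be passed as well (a sketch, assuming the attribute keyword argument of One.load_object):
spikes = one.load_object('subject/2021-06-01/001', 'spikes',
                         attribute='times', collection='alf/probe00')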
To load ‘spikes.times’ from collection ‘alf/probe00’
spike_times = one.load_dataset('subject/2021-06-01/001', 'spikes.times.npy',
                               collection='alf/probe00')
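Several datasets can also be fetched in a single call; a sketch assuming One.load_datasets, which returns the data along with their dataset records:
data, records = one.load_datasets('subject/2021-06-01/001',
                                  ['spikes.times.npy', 'spikes.clusters.npy'],
                                  collections='alf/probe00')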
See the ALF specification documentation for more information, and the datasets guide for details on creating and validating new dataset types.