iblrig.choiceworld.get_subject_training_info

iblrig.choiceworld.get_subject_training_info(subject_name, task_name='_iblrig_tasks_trainingChoiceWorld', stim_gain=None, stim_gain_on_error=None, default_reward=3.0, mode='silent', **kwargs)[source]

Goes through a subject’s history and gets the latest training phase and adaptive reward volume.

Parameters:
  • subject_name (str) – Name of the subject.

  • task_name (str, optional) – Name of the protocol to look for in experiment description, defaults to ‘_iblrig_tasks_trainingChoiceWorld’.

  • stim_gain (float, optional) – Default stimulus gain if no previous session is available; defaults to None.

  • stim_gain_on_error (float, optional) – Default stimulus gain if an exception occurred whilst obtaining the previous sessions’ info; defaults to None.

  • default_reward (float, optional) – Default reward volume in uL if no previous session is available.

  • mode (Literal['silent', 'raise'], optional) – If ‘silent’, returns default values when no history is found; if ‘raise’, raises a ValueError instead.

  • **kwargs – Optional keyword arguments passed on to get_local_and_remote_paths.

Returns:

  • training_info (dict) – Dictionary with keys: training_phase, adaptive_reward, adaptive_gain

  • session_info (dict or None) – Dictionary with keys: session_path, experiment_description, task_settings, file_task_data

Return type:

tuple[dict, dict | None]
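The silent-versus-raise fallback behaviour described above can be sketched with a toy stand-in. This is not iblrig’s implementation: the training_phase default of 0 is a placeholder assumption, and only the documented return shape (a training_info dict plus a session_info dict or None) is reproduced.

```python
def training_info_fallback(stim_gain, default_reward, mode='silent'):
    """Toy sketch of the documented no-history behaviour (not iblrig code)."""
    history = None  # stands in for "no previous session found on disk"
    if history is None:
        if mode == 'raise':
            raise ValueError('No session history found for subject')
        # mode == 'silent': fall back to the caller-supplied defaults;
        # training_phase of 0 is a placeholder assumption
        training_info = {'training_phase': 0,
                         'adaptive_reward': default_reward,
                         'adaptive_gain': stim_gain}
        return training_info, None


training_info, session_info = training_info_fallback(stim_gain=8.0,
                                                     default_reward=3.0)
print(training_info)   # defaults returned because no history exists
print(session_info)    # None when no previous session is available
```

With mode='raise' the same call would raise a ValueError instead of returning the defaults.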