
kell-et-al-marmoset-benchmarking

This repository contains and describes data from work that benchmarks the high-level visual abilities of marmosets in comparison with humans, rhesus macaques, and rats. It also includes images used as stimuli for the comparisons, as well as images for animal training. It accompanies the preprint:

Conserved core visual object recognition across simian primates:
Marmoset image-by-image behavior mirrors that of humans and macaques

Alexander J.E. Kell, Sophie L. Bokor, You-Nah Jeon, Tahereh Toosi, Elias B. Issa

The data and images can be downloaded here.

The structure of the data and image directories is briefly described below. For further questions, please email: [email protected] (where first and last are "alex" and "kell").


Images and data for benchmarking marmosets with macaques and humans

Images originally employed in Rajalingham et al., 2018.

Images

Images are in the directory images-marmosetMacaqueHuman, with a subdirectory for each of the four objects (wrench, rhino, camel, leg). The 400 images on which we compared marmosets with macaques and humans are in the evaluation subdirectories (e.g., images-marmosetMacaqueHuman/wrench/evaluation). Also included are the images used for animal training (token, training-low-variation, and training-high-variation). The images used in the decision stage of the task (i.e., the ones subjects touch to indicate their choice) are the token images.
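
As a quick sanity check on the directory layout, the evaluation images can be enumerated from disk. A minimal sketch, assuming the evaluation images are .png files (as in the example filename below):

from pathlib import Path

root = Path('images-marmosetMacaqueHuman')
for obj in ['wrench', 'rhino', 'camel', 'leg']:
    n_eval = len(list((root / obj / 'evaluation').glob('*.png')))
    print(obj, n_eval)  # expect 100 evaluation images per object, 400 total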

Data

The data are pickled in Python. To load:

import pickle

with open('marmoset_macaque_and_human.pkl', 'rb') as f:
    tmp = pickle.load(f)
arr_n_trials, arr_n_correct, all_fnames = tmp['arr_n_trials'], tmp['arr_n_correct'], tmp['all_fns']

all_fnames is a dictionary in which each key is an object name and each value is a list of image filenames, in the order in which they are indexed in the data arrays below. For example:

[(k,len(all_fnames[k])) for k in all_fnames.keys()] == [('camel', 100), ('leg', 100), ('wrench', 100), ('rhino', 100)]

all_fnames['camel'][0] == 'objectome_camel_01914fb75f1180b0b1d98adc04c617caa2f387d1_ty0.039663_tz0.32309_rxy-64.131_rxz-62.8524_ryz8.6478_s1.0404.png'
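
The filenames appear to encode each image's rendering parameters. A parsing sketch follows; the interpretation of the fields (ty/tz as translations, rxy/rxz/ryz as rotations, s as scale) is an assumption read off the filename pattern rather than something documented here:

import re

fname = all_fnames['camel'][0]
pairs = re.findall(r'_(ty|tz|rxy|rxz|ryz|s)(-?\d+(?:\.\d+)?)', fname)
params = {k: float(v) for k, v in pairs}  # field meanings assumed, per the note above
# e.g., {'ty': 0.039663, 'tz': 0.32309, 'rxy': -64.131, 'rxz': -62.8524, 'ryz': 8.6478, 's': 1.0404}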

arr_n_trials and arr_n_correct are dictionaries whose values are numpy arrays of, respectively, the number of trials and the number of correct trials. Each key identifies a dataset; the keys are:

['human_sr2', 'marmoset_sr2_22dva', 'marmoset_sr2_11dva', 'marmoset_sr2_pooledOverSize', 'macaque_mts24_from_rajalinghamEtAl']

"dva" denotes degrees of visual angle, and marmoset_sr2_pooledOverSize is simply the sum of the data from the two image sizes, marmoset_sr2_22dva and marmoset_sr2_11dva.

The data are arranged in 5-D arrays, with axes as follows: (subjects, days, target_object, distractor_object, image_index). E.g.,

arr_n_trials['marmoset_sr2_22dva'].shape == (5, 42, 4, 4, 100)

The data for marmoset_sr2_pooledOverSize and macaque_mts24_from_rajalinghamEtAl are pooled over subjects. Along the target and distractor axes, objects are in the order: ['camel', 'rhino', 'leg', 'wrench'].
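
To turn these counts into image-by-image performance, one can pool over subjects and days and then divide. A minimal sketch; this particular pooling is an illustration, not necessarily the analysis used in the paper:

import numpy as np

key = 'marmoset_sr2_22dva'
n_trials = arr_n_trials[key].sum(axis=(0, 1))   # pool over subjects and days -> (4, 4, 100)
n_correct = arr_n_correct[key].sum(axis=(0, 1))
with np.errstate(divide='ignore', invalid='ignore'):
    prop_correct = n_correct / n_trials         # per (target, distractor, image); NaN where untested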


Images and data for benchmarking marmosets with rats

Images originally employed in Zoccolan et al., 2009.

Images

Images are in subdirectories for each of the two stimuli, one-blob and two-blob. The original 14 training images for each stimulus are in the original-training subdirectory; all 54 images are in the all-images subdirectory.

Data

To load:

import pickle

with open('marmoset_and_rat.pkl', 'rb') as f:
    tmp = pickle.load(f)

The variable tmp is simply a dictionary holding the matrices of rat and marmoset data plotted in Figure 3 of Kell et al., under the keys rat_proportion_correct and marmoset_proportion_correct.
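
Continuing from the load above, the image-by-image consistency between the two species can be sketched as a correlation over matched entries. This assumes the two matrices index the same images in the same order; untested entries are masked out:

import numpy as np

rat = np.asarray(tmp['rat_proportion_correct'], dtype=float).ravel()
marmoset = np.asarray(tmp['marmoset_proportion_correct'], dtype=float).ravel()
mask = np.isfinite(rat) & np.isfinite(marmoset)   # drop entries missing in either dataset
r = np.corrcoef(rat[mask], marmoset[mask])[0, 1]  # correlation across matched images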
