THINGS Initiative

A global research initiative built on a shared image database

How do we recognize objects, make sense of them, and act on them meaningfully? These fundamental questions require collaboration across disciplines - psychology, neuroscience, and AI.

THINGS provides a foundation: 1,854 systematically sampled object concepts with 26,107 naturalistic images, plus millions of behavioral judgments and neural recordings spanning multiple species and recording techniques.

Labs worldwide contribute data openly. By working from the same objects, we can finally bridge the gap between brain and behavior. Anyone can join the initiative.

Datasets

Browse behavioral ratings, neural recordings, and computational tools - all built on the THINGS image set and freely available for research.

THINGS concepts and images

Martin Hebart, Adam Dickter, Alexis Kidder +4 more
National Institute of Mental Health, Bethesda, USA

A freely available database of 26,107 high-quality, manually curated images of 1,854 diverse object concepts, sampled systematically from everyday American English and gathered through a large-scale web search. Includes 27 high-level categories, semantic embeddings for all concepts, and further metadata.
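For orientation, here is a minimal sketch of how one might inventory a local copy of the image set, assuming the usual one-folder-per-concept layout; the root path is a placeholder for wherever your download lives:

```python
from pathlib import Path

# Inventory a local copy of the THINGS images, assuming one folder per
# concept. The root path is a placeholder; point it at your download.
root = Path("THINGS/object_images")

concepts = sorted(p for p in root.iterdir() if p.is_dir())
n_images = sum(len(list(c.glob("*.jpg"))) for c in concepts)

print(f"{len(concepts)} concepts")  # expected: 1854
print(f"{n_images} images")         # expected: 26107
```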

THINGS similarity

Martin Hebart, Oliver Contier, Lina Teichmann +7 more
National Institute of Mental Health, Bethesda, USA +2 more

More than 4.7 million triplet odd-one-out similarity judgments for 1,854 object images, plus a 66-dimensional interpretable embedding. An earlier set of 1.46 million triplets was used to identify 49 interpretable object dimensions predictive of behavior and similarity (Hebart et al., 2020, Nat Hum Behav).
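To make the odd-one-out task concrete, here is a minimal sketch of how an embedding of this kind can predict a triplet judgment: the pair with the highest dot-product similarity "stays together", and the remaining item is the predicted odd one out. The random matrix below is only a stand-in for the released 66-dimensional embedding:

```python
import numpy as np

# Stand-in for the released embedding: one non-negative 66-dimensional
# vector per object concept (rows = concepts, columns = dimensions).
rng = np.random.default_rng(0)
X = rng.random((1854, 66))

def predict_odd_one_out(i, j, k, X):
    """Predict the odd one out in a triplet: the most similar pair
    (by dot product) stays together; the third item is the choice."""
    sims = {
        (i, j): X[i] @ X[j],
        (i, k): X[i] @ X[k],
        (j, k): X[j] @ X[k],
    }
    most_similar_pair = max(sims, key=sims.get)
    return ({i, j, k} - set(most_similar_pair)).pop()

print(predict_odd_one_out(0, 1, 2, X))
```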

THINGSplus

Laura Stoinski, Jonas Perkuhn, Martin Hebart
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

New THINGS metadata, with 53 high-level categories, typicality ratings, nameability scores for all images, size ratings, and ratings along several dimensions (e.g. animacy, manipulability, valence, arousal, and preciousness). In addition, 1,854 license-free images were collected that can be used and reproduced (e.g. in publications) without restriction.

THINGS fMRI1

Oliver Contier, Martin Hebart, Lina Teichmann +6 more
National Institute of Mental Health, Bethesda, USA +2 more

Event-related functional MRI data in 3 subjects for 8,640 images (720 categories, 12 images per category), collected over the course of 12 sessions. Includes extensive anatomical scans, population receptive field mapping, and functional localizers. Optimized for studying object recognition with a broad and systematic range of object categories.

THINGS MEG1

Lina Teichmann, Martin Hebart, Oliver Contier +6 more
National Institute of Mental Health, Bethesda, USA +2 more

Magnetoencephalography (MEG) data in 4 subjects for 22,248 images (1,854 categories, 12 images per category), collected over the course of 12 sessions. Optimized for studying object recognition with a broad and systematic range of object categories.

THINGS EEG1

Tijl Grootswagers, Ivy Zhou, Amanda Robinson +2 more
MARCS Institute, Western Sydney University, Australia +2 more

Electroencephalography (EEG) responses in 50 subjects for 22,248 images (1,854 concepts, 12 images per concept), collected in a single session per participant using a rapid serial visual presentation (RSVP) paradigm.

THINGS EEG2

Alessandro Gifford, Kshitij Dwivedi, Gemma Roig +1 more
Freie Universität Berlin +1 more

Raw and preprocessed EEG recordings from 10 participants, each contributing 82,160 trials spanning 16,740 image conditions drawn from the THINGS database.
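For readers unfamiliar with EEG pipelines, a generic epoching sketch with MNE-Python is shown below. The filename, filter band, and epoch window are placeholders, and the dataset's actual file format and event coding may differ, so treat this as an illustration rather than a recipe for this dataset:

```python
import mne

# Generic epoching workflow; filename, filter band, and epoch window
# are placeholders, not this dataset's documented settings.
raw = mne.io.read_raw_brainvision("sub-01_eeg.vhdr", preload=True)
raw.filter(l_freq=0.1, h_freq=40.0)

events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, tmin=-0.1, tmax=0.6,
                    baseline=(None, 0), preload=True)
print(epochs)
```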

THINGS Ventral-stream Spiking Dataset (TVSD)

Paolo Papale, Pieter Roelfsema
Dept of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands

High-channel-count electrophysiological recordings in response to 22,248 images (1,854 categories, 12 images per category) from macaque visual cortex across areas V1, V4, and IT, in two animals.

THINGSvision

Lukas Muttenthaler, Martin Hebart
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Streamlines the extraction of neural network activations by providing a simple wrapper around a wide range of commonly used deep convolutional neural network architectures.
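The snippet below illustrates the kind of boilerplate THINGSvision removes: manually registering a forward hook on one layer of a pretrained torchvision model to capture its activations for a single image. The layer choice and image path are arbitrary placeholders; see the THINGSvision documentation for its actual interface:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Load a pretrained AlexNet and capture activations from one layer
# via a forward hook (the manual approach that THINGSvision wraps).
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

activations = {}
def hook(module, inputs, output):
    activations["features.10"] = output.detach().flatten(1)

model.features[10].register_forward_hook(hook)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("some_image.jpg").convert("RGB"))
with torch.no_grad():
    model(img.unsqueeze(0))
print(activations["features.10"].shape)
```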

THINGS memorability

Max Kramer, Martin Hebart, Chris Baker +1 more
Department of Psychology, University of Chicago, USA +2 more

Memorability scores for all 26,107 object images, collected in a large sample of >13,000 participants. Offers a systematic evaluation of memorability across a wide range of natural object images, object concepts, and high-level categories.

THINGS semantic feature norm

Hannes Hansen, Martin Hebart
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

Semantic feature production norms for all 1,854 object concepts in THINGS, generated with OpenAI's large language model GPT-3.

THINGS-constellations

Jaan Aru, Kadi Tulver, Tarun Khajuria
University of Tartu

A dataset of "constellation" images for studying inference in human vision and AI. Each image is stripped of local detail, leaving a dotted outline from which the object must be inferred. The dataset includes 3,533 image sets covering 1,215 common objects from the THINGS dataset, a selected set of the 481 best constellation images, and code to generate more constellation images from photos.
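As a rough illustration of the idea (and not the authors' actual generation code), a constellation-like image can be approximated by detecting edges in a photo and keeping only a sparse random sample of edge pixels as dots; the path, threshold, and dot count below are placeholders:

```python
import numpy as np
from PIL import Image, ImageFilter

rng = np.random.default_rng(0)

# Detect edges in a grayscale photo, then keep a sparse sample of
# edge pixels as black dots on a white canvas.
img = Image.open("object_photo.jpg").convert("L")  # placeholder path
edges = np.array(img.filter(ImageFilter.FIND_EDGES))

ys, xs = np.nonzero(edges > 64)
keep = rng.choice(len(xs), size=min(150, len(xs)), replace=False)

canvas = np.full(edges.shape, 255, dtype=np.uint8)
canvas[ys[keep], xs[keep]] = 0
Image.fromarray(canvas).save("constellation.png")
```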

STUFF database and dimensions

Filipp Schmidt, Martin Hebart, Alex Schmid +1 more
Psychology Department, University Gießen, Germany +2 more

600 images of 200 materials, sampled systematically and representatively from everyday American English. Includes material dimensions and similarity matrices identified from >1.8 million similarity judgments.

THINGS fMRI2

Marie St-Laurent, CNeuromod
University of Montréal, Canada

Event-related functional MRI data at 3T in 8 subjects for 4,320 images (720 categories, 6 images per category, 3 repeats per image), using a memory paradigm. Optimized for studying object recognition with hypothesis-driven analyses, data-driven analyses, and representational similarity analysis.

THINGS electrophysiology1

Thomas Reber, Florian Mormann
Dept of Epileptology, University of Bonn

Direct recordings from human entorhinal cortex, hippocampus, amygdala, and parahippocampal cortex in 23 patients for 1,200 images (150 categories, 8 images each).

THINGS macaque EEG

Siegel Lab
University of Tübingen, Germany

Scalp EEG recordings in response to 8,640 THINGS images (720 categories, 12 images per category) in 2 macaque monkeys.

THINGS macaque V4

Ratan Murty, Sachi Sanghavi
Massachusetts Institute of Technology, Cambridge MA, USA

Electrophysiological recordings in response to 14,832 THINGS images (1,854 categories, 8 images per category) in area V4 of a macaque monkey.

THINGS iEEG

Avniel Ghuman, LCND team
Laboratory of Cognitive Neurodynamics, University of Pittsburgh, PA, USA

Intracranial EEG from ventral temporal cortex in human patients.

THINGS fMRI3

PRISME team
Institut Universitaire en Santé Mentale de Montréal (IUSMM), Canada

Event-related functional MRI measurements at 3T in a large sample of patients with psychosis, presenting 5,568 images from 720 object categories per patient.

Join the initiative

The THINGS initiative is an open science project. We invite researchers to use these datasets, contribute new modalities, or expand the database.

Contact us to get involved.