THINGS concepts and images
Martin Hebart, Adam Dickter, Alexis Kidder, Wan Kwok, Anna Corriveau, Caitlin Van Wicklin, Chris Baker
National Institute of Mental Health, Bethesda, USA
A freely available database of 26,107 high-quality, manually curated images of 1,854 diverse object concepts, sampled systematically from everyday American English and collected through a large-scale web search. Includes 27 high-level categories, semantic embeddings for all concepts, and further metadata.
@article{Hebart2019THINGS,
author = {Hebart, Martin N. and Dickter, Adam H. and Kidder, Alexis and Kwok, Wan Y. and Corriveau, Anna and Van Wicklin, Caitlin and Baker, Chris I.},
title = {{THINGS}: A database of 1,854 object concepts and more than 26,000 naturalistic object images},
journal = {PLOS ONE},
volume = {14},
number = {10},
pages = {e0223792},
year = {2019},
doi = {10.1371/journal.pone.0223792},
url = {https://doi.org/10.1371/journal.pone.0223792}
}
@article{Stoinski2024THINGSplus,
author = {Stoinski, Laura M. and Perkuhn, Jonas and Hebart, Martin N.},
title = {{THINGSplus}: New norms and metadata for the {THINGS} database of 1854 object concepts and 26,107 natural object images},
journal = {Behavior Research Methods},
volume = {56},
number = {3},
pages = {1583--1603},
year = {2024},
doi = {10.3758/s13428-023-02110-8},
url = {https://doi.org/10.3758/s13428-023-02110-8}
}
About
Repository: OSF · DOI: 10.17605/OSF.IO/jum2f
26,107 naturalistic images of 1,854 object concepts systematically sampled from American English. JPEG format, up to 1600×1600 pixels.
What's Included
- Full image database (~5 GB)
- 53 superordinate categories
- Typicality & nameability ratings
- Concept dimensions (animacy, size, etc.)
- 1,854 license-free images for publications
Download
Install the OSF command-line tool and download:
pip install osfclient
osf -p jum2f clone THINGS-database
Images are password-protected. After downloading, check description.txt for the password, then extract:
for fn in object_images_*.zip; do
  unzip -P YOUR_PASSWORD "$fn"
done
⚠️ License: Original images are for academic use only. Use THINGSplus license-free images for publications.
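Once extracted, the images are organized one folder per concept (this layout is an assumption inferred from the archive names; verify against your download). A minimal Python sketch for building a concept-to-image-paths index:

```python
from pathlib import Path

def index_images(root):
    """Map each concept (subfolder name) to a sorted list of its JPEG paths.

    Assumes one subfolder per concept, as in the extracted THINGS archives
    (layout assumed, not guaranteed).
    """
    root = Path(root)
    return {
        folder.name: sorted(p for p in folder.iterdir() if p.suffix == ".jpg")
        for folder in sorted(root.iterdir())
        if folder.is_dir()
    }

# Example: index = index_images("object_images")
```

Keeping the index keyed by folder name makes it easy to line images up with the concept-level metadata tables later.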
THINGS similarity
Martin Hebart, Oliver Contier, Lina Teichmann, Adam Rockter, Charles Zheng, Alexis Kidder, Anna Corriveau, Maryam Vaziri-Pashkam, Francisco Pereira, Chris Baker
National Institute of Mental Health, Bethesda, USA; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Justus Liebig University Giessen, Germany
More than 4.70 million triplet odd-one-out similarity judgments for 1,854 object images, plus a 66-dimensional interpretable embedding. An earlier set of 1.46 million triplets was used to identify 49 interpretable object dimensions predictive of behavior and similarity (Hebart et al., 2020, Nat Hum Behav).
@article{Hebart2020SPoSE,
author = {Hebart, Martin N. and Zheng, Charles Y. and Pereira, Francisco and Baker, Chris I.},
title = {Revealing the multidimensional mental representations of natural objects underlying human similarity judgements},
journal = {Nature Human Behaviour},
volume = {4},
number = {11},
pages = {1173--1185},
year = {2020},
doi = {10.1038/s41562-020-00951-3},
url = {https://doi.org/10.1038/s41562-020-00951-3}
}
About
Repository: OSF · DOI: 10.17605/OSF.IO/F5RN6
4.70 million triplet judgments from 14,025 participants. "Which object is the odd one out?" responses that reveal similarity structure across all 1,854 concepts.
What's Included
- Training set (90%) & test set (10%)
- Triplet indices (concept1, concept2, odd-one-out)
- Participant demographics
- 37,000 repeated triplets for reliability
Related: SPoSE embedding model
Download
Install the OSF command-line tool and download:
pip install osfclient
osf -p f5rn6 clone THINGS-behavior
Or download directly from the OSF web interface.
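Each triplet row lists three concept indices, with the first two columns the pair left as most similar and the third the odd one out (column convention as described under "What's Included"; the exact on-disk file format is not assumed here). A minimal sketch turning such triplets into a pairwise similarity-count matrix:

```python
import numpy as np

def similarity_counts(triplets, n_concepts):
    """Count how often each pair of concepts was left as the 'similar' pair.

    `triplets` is an (N, 3) integer array: columns 0-1 are the chosen pair,
    column 2 the odd one out (column order assumed from the dataset docs).
    """
    sim = np.zeros((n_concepts, n_concepts), dtype=np.int64)
    for a, b, _odd in triplets:
        sim[a, b] += 1
        sim[b, a] += 1
    return sim

# Tiny illustrative input, not real data:
demo = np.array([[0, 1, 2], [0, 1, 3], [1, 2, 0]])
S = similarity_counts(demo, 4)
```

Normalizing each cell by the number of triplets in which that pair appeared together gives an empirical choice-probability estimate of similarity.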
THINGSplus
Laura Stoinski, Jonas Perkuhn, Martin Hebart
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
New THINGS metadata, with 53 high-level categories, typicality ratings, nameability scores for all images, size ratings, and ratings along several dimensions (e.g. animacy, manipulability, valence, arousal, preciousness, etc.). In addition, 1,854 license-free images were collected that can be used and reproduced (e.g. in publications) without any restriction.
@article{Stoinski2024THINGSplus,
author = {Stoinski, Laura M. and Perkuhn, Jonas and Hebart, Martin N.},
title = {{THINGSplus}: New norms and metadata for the {THINGS} database of 1854 object concepts and 26,107 natural object images},
journal = {Behavior Research Methods},
volume = {56},
number = {3},
pages = {1583--1603},
year = {2024},
doi = {10.3758/s13428-023-02110-8},
url = {https://doi.org/10.3758/s13428-023-02110-8}
}
About
THINGSplus extends the original THINGS database with comprehensive new norms and metadata for all 1,854 object concepts and 26,107 images.
New metadata includes:
- 53 high-level categories (expanded from 27)
- Typicality ratings for all concepts
- Nameability scores for all images
- Size ratings
- Dimension ratings: animacy, manipulability, valence, arousal, preciousness, and more
THINGSplus also provides license-free alternative images for use in publications.
Download
THINGSplus data is available from the same OSF repository as the original THINGS database:
pip install osfclient
osf -p jum2f clone THINGS-database
Or browse the OSF web interface directly.
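The norms ship as separate tables keyed by concept, so a typical first step is joining them. A minimal stdlib sketch of a left join between two tab-separated rating files (the file names and column headers here are placeholders, not the actual THINGSplus ones):

```python
import csv

def join_on_concept(path_a, path_b, key="concept"):
    """Left-join two TSV files on a shared concept column.

    Yields one dict per row of `path_a`, augmented with any matching
    columns from `path_b`. Column names are illustrative only.
    """
    with open(path_b, newline="") as f:
        extra = {row[key]: row for row in csv.DictReader(f, delimiter="\t")}
    with open(path_a, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            row.update(extra.get(row[key], {}))
            yield row
```

For larger analyses the same join is a one-liner with pandas `merge`, but the stdlib version avoids extra dependencies.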
THINGS fMRI1
Oliver Contier, Martin Hebart, Lina Teichmann, Adam Rockter, Charles Zheng, Alexis Kidder, Anna Corriveau, Chris Baker, Maryam Vaziri-Pashkam
National Institute of Mental Health, Bethesda, USA; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Justus Liebig University Giessen, Germany
Event-related functional MRI data in 3 subjects for 8,640 images (720 categories, 12 images per category), collected over the course of 12 sessions. Includes extensive anatomical scans, population receptive field mapping, and functional localizers. Optimized for studying object recognition with a broad and systematic range of object categories.
@article{Hebart2023THINGSdata,
author = {Hebart, Martin N. and Contier, Oliver and Teichmann, Lina and Rockter, Adam H. and Zheng, Charles Y. and Kidder, Alexis and Corriveau, Anna and Vaziri-Pashkam, Maryam and Baker, Chris I.},
title = {{THINGS-data}, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior},
journal = {eLife},
volume = {12},
pages = {e82580},
year = {2023},
doi = {10.7554/eLife.82580},
url = {https://doi.org/10.7554/eLife.82580}
}
About
Repository: OpenNeuro ds004192 · DOI: 10.18112/openneuro.ds004192
Dense 7T fMRI sampling: 3 subjects × 12 sessions each, viewing 8,740 unique images from 720 object concepts (4.5s per image).
What's Included
- Raw BOLD data (BIDS format)
- T1w/T2w anatomical scans
- Population receptive field mapping
- Functional localizers
- Resting-state runs
Preprocessed Derivatives
Download
Option 1: DataLad (recommended — download specific subjects)
pip install datalad
datalad clone https://github.com/OpenNeuroDatasets/ds004192.git
cd ds004192
# Download one subject
datalad get sub-01/
# Or download everything
datalad get .
Option 2: AWS S3 (fastest for full dataset)
aws s3 sync --no-sign-request \
s3://openneuro.org/ds004192 \
./ds004192/
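Because the raw data follow BIDS, every functional run comes with an events.tsv containing at least onset, duration, and trial_type columns (these three columns are required by the BIDS specification). A minimal sketch for collecting stimulus onsets per condition:

```python
import csv
from collections import defaultdict

def onsets_by_condition(events_tsv):
    """Group event onsets (in seconds) by trial_type from a BIDS events.tsv."""
    onsets = defaultdict(list)
    with open(events_tsv, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            onsets[row["trial_type"]].append(float(row["onset"]))
    return dict(onsets)
```

These per-condition onsets are exactly what a GLM design matrix or an event-related averaging pipeline needs as input.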
THINGS MEG1
Lina Teichmann, Martin Hebart, Oliver Contier, Adam Rockter, Charles Zheng, Alexis Kidder, Anna Corriveau, Maryam Vaziri-Pashkam, Chris Baker
National Institute of Mental Health, Bethesda, USA; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Justus Liebig University Giessen, Germany
Magnetoencephalography (MEG) data in 4 subjects for 22,248 images (1,854 categories, 12 images per category), collected over the course of 12 sessions. Optimized for studying object recognition with a broad and systematic range of object categories.
@article{Hebart2023THINGSdata,
author = {Hebart, Martin N. and Contier, Oliver and Teichmann, Lina and Rockter, Adam H. and Zheng, Charles Y. and Kidder, Alexis and Corriveau, Anna and Vaziri-Pashkam, Maryam and Baker, Chris I.},
title = {{THINGS-data}, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior},
journal = {eLife},
volume = {12},
pages = {e82580},
year = {2023},
doi = {10.7554/eLife.82580},
url = {https://doi.org/10.7554/eLife.82580}
}
About
Repository: OpenNeuro ds004212 · Size: 377 GB
4 subjects × 12 sessions, viewing all 22,248 THINGS images (1,854 concepts × 12 exemplars). 272-channel CTF MEG with concurrent eye-tracking.
What's Included
- Raw MEG (CTF format, BIDS)
- Eye-tracking data
- T1w anatomical MRI
Preprocessed Derivatives
Download
Option 1: DataLad
pip install datalad
datalad clone https://github.com/OpenNeuroDatasets/ds004212.git
cd ds004212
datalad get .
Option 2: AWS S3
aws s3 sync --no-sign-request \
s3://openneuro.org/ds004212 \
./ds004212/
Option 3: openneuro-py
pip install openneuro-py
openneuro-py download \
--dataset=ds004212 \
--target_dir=./things-meg
THINGS EEG1
Tijl Grootswagers, Ivy Zhou, Amanda Robinson, Martin Hebart, Thomas Carlson
MARCS Institute, Western Sydney University, Australia; School of Psychology, University of Sydney, Australia; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Electroencephalography responses in 50 subjects for 22,248 images (1,854 concepts, 12 images per category), collected in a single session per participant using an RSVP paradigm.
@article{Grootswagers2022THINGSEEG1,
author = {Grootswagers, Tijl and Zhou, Ivy and Robinson, Amanda K. and Hebart, Martin N. and Carlson, Thomas A.},
title = {Human {EEG} recordings for 1,854 concepts presented in rapid serial visual presentation streams},
journal = {Scientific Data},
volume = {9},
number = {1},
pages = {3},
year = {2022},
doi = {10.1038/s41597-021-01102-7},
url = {https://doi.org/10.1038/s41597-021-01102-7}
}
About
Repository: OSF · OpenNeuro ds003825
50 subjects viewing 22,248 images in rapid serial visual presentation (10 Hz: 50ms on, 50ms off). Enables millisecond-resolution tracking of object representations.
What's Included
- Raw EEG (BIDS format)
- Stimulus presentation logs
- Analysis code
Subject RDMs: Time-resolved 1854×1854 similarity matrices at Figshare
Download
Raw EEG from OpenNeuro:
pip install datalad
datalad clone https://github.com/OpenNeuroDatasets/ds003825.git
cd ds003825
datalad get .
Or via AWS:
aws s3 sync --no-sign-request \
s3://openneuro.org/ds003825 \
./ds003825/
Analysis code from OSF:
pip install osfclient
osf -p hd6zk clone
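The time-resolved subject RDMs can be compared against model or behavioral RDMs by correlating their upper triangles. A minimal numpy sketch (Pearson correlation here for simplicity; the published analyses may use other measures such as Spearman):

```python
import numpy as np

def rdm_correlation(rdm_a, rdm_b):
    """Pearson correlation between the upper triangles of two square RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)  # off-diagonal upper triangle only
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])
```

Running this at every timepoint of a time-resolved RDM stack yields a correlation time course of representational similarity.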
THINGS EEG2
Alessandro Gifford, Kshitij Dwivedi, Gemma Roig, Radoslaw Cichy
Freie Universität Berlin; Goethe Universität, Frankfurt am Main
Raw and preprocessed EEG recordings from 10 participants, each with 82,160 trials spanning 16,740 image conditions drawn from the THINGS database.
@article{Gifford2022THINGSEEG2,
author = {Gifford, Alessandro T. and Dwivedi, Kshitij and Roig, Gemma and Cichy, Radoslaw M.},
title = {A large and rich {EEG} dataset for modeling human visual object recognition},
journal = {NeuroImage},
volume = {264},
pages = {119754},
year = {2022},
doi = {10.1016/j.neuroimage.2022.119754},
url = {https://doi.org/10.1016/j.neuroimage.2022.119754}
}
About
Repository: OSF · Paper: NeuroImage 2022
10 subjects × 82,160 trials across 16,740 image conditions. Designed specifically for training encoding and decoding models.
What's Included
raw_data/ — 64-channel BrainVision
preprocessed_data/ — 17 occipital channels, 100 Hz
image_set/ — All stimulus images
DNN_feature_maps/ — Pre-extracted network activations
resting_state/ — Eyes open/closed baselines
Code: github.com/gifale95/eeg_encoding
Download
Download the full dataset:
pip install osfclient
osf -p 3jk45 clone THINGS-EEG2
Or browse and download specific files from the OSF web interface.
THINGS Ventral-stream Spiking Dataset (TVSD)
Paolo Papale, Pieter Roelfsema
Dept of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
High-channel-count electrophysiological recordings in response to 22,248 images (1,854 categories, 12 images per category) from macaque visual cortex across V1, V4, and IT, in two animals.
@article{Papale2025TVSD,
author = {Papale, Paolo and Wang, Feng and Self, Matthew W. and Roelfsema, Pieter R.},
title = {An extensive dataset of spiking activity to reveal the syntax of the ventral stream},
journal = {Neuron},
volume = {113},
number = {4},
pages = {539--553.e5},
year = {2025},
doi = {10.1016/j.neuron.2024.12.003},
url = {https://doi.org/10.1016/j.neuron.2024.12.003}
}
About
Repository: G-Node GIN · DOI: 10.12751/g-node.hc7zlv · License: CC-BY
1,024 electrodes recording across V1, V4, and IT in two macaques viewing all 22,248 THINGS images. 30 kHz sampling with 31 Utah arrays.
Coverage
| Area | Monkey N | Monkey F |
|------|----------|----------|
| V1 | 7 arrays | 8 arrays |
| V4 | 4 arrays | 3 arrays |
| IT | 4 arrays | 5 arrays |
Download
This dataset uses Git-Annex. Install DataLad and clone:
pip install datalad
datalad clone \
https://gin.g-node.org/INT/things_visual_stream_spiking
cd things_visual_stream_spiking
datalad get .
Or browse files at gin.g-node.org
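A common first look at such spiking data is a peri-stimulus time histogram (PSTH) around image onsets. A generic numpy sketch for one channel, given spike times and stimulus onsets in seconds (the dataset's actual file format and channel layout are not assumed here):

```python
import numpy as np

def psth(spike_times, onsets, window=(-0.1, 0.4), bin_width=0.01):
    """Mean firing rate (Hz) in bins around stimulus onsets.

    `spike_times` is a 1-D numpy array of spike times (s);
    `onsets` is a sequence of stimulus onset times (s).
    """
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for t0 in onsets:
        rel = spike_times - t0  # spike times relative to this onset
        counts += np.histogram(rel, bins=edges)[0]
    # convert summed counts to trial-averaged rate in Hz
    return counts / (len(onsets) * bin_width), edges
```

Averaging such PSTHs within a THINGS category is one way to probe category selectivity across V1, V4, and IT.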
THINGSvision
Lukas Muttenthaler, Martin Hebart
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Streamlines the extraction of neural network activations by providing a simple, unified wrapper around a wide range of commonly used deep neural network architectures.
@article{Muttenthaler2021THINGSvision,
author = {Muttenthaler, Lukas and Hebart, Martin N.},
title = {{THINGSvision}: A {Python} Toolbox for Streamlining the Extraction of Activations From Deep Neural Networks},
journal = {Frontiers in Neuroinformatics},
volume = {15},
pages = {679838},
year = {2021},
doi = {10.3389/fninf.2021.679838},
url = {https://doi.org/10.3389/fninf.2021.679838}
}
About
GitHub: ViCCo-Group/thingsvision · PyPI: thingsvision
Python package to extract activations from 100+ neural network models including torchvision, timm, CLIP, OpenCLIP, DINO, and more.
Features
- Unified API across model families
- Batch processing for large datasets
- Built-in RSA and CKA analysis
- GPU acceleration
Download
Install from PyPI:
pip install thingsvision
Extract features from your images:
thingsvision extract-features \
--image-root "./images" \
--model-name "clip_ViT-B/32" \
--module-name "visual" \
--batch-size 32 \
--device cuda \
--out-path "./features"
See the documentation for Python API usage.
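Once features are extracted, activations from two models (or a model and a brain region) are often compared with representational similarity measures such as CKA. A minimal numpy sketch of linear CKA between two feature matrices — a generic implementation for illustration, not thingsvision's own built-in:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X (n, p) and Y (n, q).

    Rows are stimuli, columns are features; both matrices are centered
    per feature before comparison.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)
```

Linear CKA is invariant to isotropic scaling and orthogonal rotations of the feature space, which makes it convenient for comparing layers of different widths.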
THINGS memorability
Max Kramer, Martin Hebart, Chris Baker, Wilma Bainbridge
Department of Psychology, University of Chicago, USA; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; National Institute of Mental Health, Bethesda, USA
Memorability scores for all 26,107 object images, collected from a large sample of more than 13,000 participants. Offers a systematic evaluation of memorability across a wide range of natural object images, object concepts, and high-level categories.
@article{Kramer2023Memorability,
author = {Kramer, Max A. and Hebart, Martin N. and Baker, Chris I. and Bainbridge, Wilma A.},
title = {The features underlying the memorability of objects},
journal = {Science Advances},
volume = {9},
number = {17},
pages = {eadd2981},
year = {2023},
doi = {10.1126/sciadv.add2981},
url = {https://doi.org/10.1126/sciadv.add2981}
}
About
Repository: OSF
Memorability scores for all 26,107 THINGS images from 13,946 participants in a continuous recognition task.
What's Included
- Corrected recognition scores per image
- Hit rates and false alarm rates
- Participant-level data
Download
Download the dataset:
pip install osfclient
osf -p 5a7z6 clone THINGS-memorability
Or download from the OSF web interface.
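Corrected recognition is conventionally computed as hit rate minus false-alarm rate; a minimal sketch under that convention (the per-image counts and variable names here are illustrative, not the dataset's actual column names):

```python
def corrected_recognition(hits, misses, false_alarms, correct_rejections):
    """Corrected recognition score: hit rate minus false-alarm rate."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate - fa_rate
```

Subtracting the false-alarm rate corrects each image's score for response bias, so images are comparable even when participants differ in overall willingness to respond "old".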
THINGS semantic feature norm
Hannes Hansen, Martin Hebart
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Semantic feature production norms for all 1,854 object concepts in THINGS, generated with GPT-3, a large language model developed by OpenAI.
@article{Hansen2022SemanticFeatures,
author = {Hansen, Hannes and Hebart, Martin N.},
title = {Semantic features of object concepts generated with {GPT-3}},
journal = {arXiv preprint},
year = {2022},
eprint = {2202.03753},
archivePrefix = {arXiv},
url = {https://arxiv.org/abs/2202.03753}
}
About
This dataset provides semantic feature norms for the 1,854 object concepts in the THINGS database, automatically generated using GPT-3.
Key features:
- Semantic features for all 1,854 THINGS concepts
- Generated using GPT-3 language model
- Comparable to human-generated feature norms
- Useful for studying conceptual representations
The generated features rival human norms in predicting similarity, relatedness, and category membership.
Download
Feature norms are available from the OSF repository:
pip install osfclient
osf -p jum2f clone
Look for the semantic feature files in the downloaded repository.
THINGS-constellations
Jaan Aru, Kadi Tulver, Tarun Khajuria
University of Tartu
A dataset of "constellation" images for studying inference in human vision and AI. Each image is stripped of local detail, leaving a dotted outline from which the object must be inferred from the global arrangement of dots. The dataset includes 3,533 image sets covering a total of 1,215 common objects from the THINGS dataset, plus a selected set of 481 top constellation images and the code to generate more constellation images from photos.
@inproceedings{Khajuria2022Constellations,
author = {Khajuria, Tarun and Hebart, Martin N. and Battleday, Ruairidh M.},
title = {Constellations: A Novel Dataset for Studying Iterative Inference in Humans and {AI}},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
year = {2022},
pages = {4392--4400},
url = {https://openaccess.thecvf.com/content/CVPR2022W/SketchDL/papers/Khajuria_Constellations_A_Novel_Dataset_for_Studying_Iterative_Inference_in_Humans_CVPRW_2022_paper.pdf}
}
About
Repository: OSF · Code: GitHub
3,533 constellation images from 1,215 objects — dotted outline representations for studying visual inference and object recognition from minimal information.
What's Included
- Full constellation set
- 481 curated high-quality examples
- Python generation code
Download
Download the images:
pip install osfclient
osf -p qf5tz clone constellation-images
Get the generation code:
git clone https://github.com/tarunkhajuria42/Constellations-Dataset
STUFF database and dimensions
Filipp Schmidt, Martin Hebart, Alex Schmid, Roland Fleming
Psychology Department, University Gießen, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; National Institute of Mental Health, Bethesda, USA
600 images of 200 materials sampled systematically and representatively from the American English language. Includes material dimensions and material similarity matrices identified from more than 1.8 million similarity judgments.
@article{Schmidt2025STUFF,
author = {Schmidt, Filipp and Hebart, Martin N.},
title = {Core dimensions of human material perception},
journal = {Proceedings of the National Academy of Sciences},
volume = {122},
number = {10},
pages = {e2417202122},
year = {2025},
doi = {10.1073/pnas.2417202122},
url = {https://doi.org/10.1073/pnas.2417202122}
}
About
Images: OSF (myutc) · Triplets: OSF (5gr73)
200 material concepts with 600+ images and 1.87 million similarity judgments. Companion to THINGS for material/texture perception research.
What's Included
- Material images (wood, metal, fabric, etc.)
- Triplet odd-one-out judgments
- 36 derived material dimensions
Download
Download material images:
pip install osfclient
osf -p myutc clone STUFF-materials
Download triplet judgments:
osf -p 5gr73 clone STUFF-triplets
THINGS fMRI2
Marie St-Laurent, CNeuromod
University of Montréal, Canada
Event-related functional MRI data at 3T in 8 subjects for 4,320 images (720 categories, 6 images per category, 3 repeats per image), using a memory paradigm. Optimized for studying object recognition, supporting hypothesis-driven analyses, data-driven analyses, and representational similarity analysis.
About
A large-scale fMRI dataset extending the THINGS-data collection with additional subjects and paradigms.
Download
THINGS electrophysiology1
Thomas Reber, Florian Mormann
Dept of Epileptology, University of Bonn
Direct recordings from human entorhinal cortex, hippocampus, amygdala, and parahippocampal cortex in 23 patients for 1,200 images (150 categories, 8 images each).
About
Human electrophysiology recordings in response to THINGS images, providing high temporal resolution neural data.
Download
THINGS macaque EEG
Siegel Lab
University of Tübingen, Germany
Scalp EEG recordings of 8,640 THINGS images (720 categories, 12 images per category) in 2 macaque monkeys.
About
EEG recordings from macaque monkeys viewing THINGS images, enabling cross-species comparisons of object representations.
Download
THINGS macaque V4
Ratan Murty, Sachi Sanghavi
Massachusetts Institute of Technology, Cambridge MA, USA
Electrophysiological recordings of 14,832 THINGS images (1,854 categories, 8 images per category) in area V4 of a macaque monkey.
About
Neural recordings from macaque area V4 in response to THINGS images, probing mid-level visual representations.
Download
THINGS iEEG
Avniel Ghuman, LCND team
Laboratory of Cognitive Neurodynamics, University of Pittsburgh, PA, USA
Intracranial EEG from ventral temporal cortex in human patients.
About
Intracranial EEG recordings from human patients viewing THINGS images, providing unique spatiotemporal resolution of object processing.
Download
THINGS fMRI3
PRISME team
Institut Universitaire en Santé Mentale de Montréal (IUSMM), Canada
Event-related functional MRI measurements at 3T in a large sample of patients with psychosis, presenting 5,568 images from 720 object categories per patient.
About
An fMRI dataset collected as part of an international collaboration extending THINGS neural data collection.
Download