Vision-language models learn the geometry of human perceptual space

arXiv:2510.20859v1 Announce Type: new
Abstract: In cognitive science and AI, a longstanding question is whether machines learn representations that align with those of the human mind. While current models show promise, it remains an open question whether this alignment is superficial or reflects a deeper correspondence in the underlying dimensions of representation. Here we introduce a methodology to probe the internal geometry of vision-language models (VLMs) by having them generate pairwise similarity judgments for a complex set of natural objects. Using multidimensional scaling, we recover low-dimensional psychological spaces and find that their axes show a strong correspondence with the principal axes of human perceptual space. Critically, when this AI-derived representational geometry is used as the input to a classic exemplar model of categorization, it predicts human classification behavior more accurately than a space constructed from human judgments themselves. This suggests that VLMs can capture an idealized or 'denoised' form of human perceptual structure. Our work provides a scalable method to overcome a measurement bottleneck in cognitive science and demonstrates that foundation models can learn a representational geometry that is functionally relevant for modeling key aspects of human cognition, such as categorization.
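
The pipeline the abstract describes (VLM-elicited pairwise similarities, then multidimensional scaling, then an exemplar model of categorization) can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the paper's code: the similarity matrix is a random placeholder standing in for VLM judgments, the category labels and the GCM sensitivity parameter `c` are hypothetical, and the `gcm_probability` helper is an illustrative implementation of a generalized-context-model-style choice rule.

```python
import numpy as np
from sklearn.manifold import MDS

# --- Step 1 (placeholder): a pairwise similarity matrix ---
# In the paper this would come from prompting a VLM for similarity
# judgments over object pairs; here we fabricate a symmetric matrix.
rng = np.random.default_rng(0)
n = 12
sim = rng.uniform(0.1, 1.0, size=(n, n))
sim = (sim + sim.T) / 2          # symmetrize
np.fill_diagonal(sim, 1.0)       # an object is maximally similar to itself

# --- Step 2: recover a low-dimensional psychological space via MDS ---
dissim = 1.0 - sim               # convert similarities to dissimilarities
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)   # (n, 2) coordinates per object

# --- Step 3: exemplar (GCM-style) categorization in that space ---
def gcm_probability(probe, exemplars, labels, c=2.0, target=0):
    """P(category == target) for a probe point: similarity to each
    stored exemplar decays exponentially with distance in the space,
    and choice probability is the usual ratio of summed similarities.
    The sensitivity parameter c is a hypothetical, unfitted value."""
    d = np.linalg.norm(exemplars - probe, axis=1)
    s = np.exp(-c * d)
    return s[labels == target].sum() / s.sum()

labels = np.array([0] * 6 + [1] * 6)        # hypothetical category split
probe = coords[:6].mean(axis=0)             # held-out probe location
p = gcm_probability(probe, coords, labels, target=0)
print(f"P(category 0 | probe) = {p:.3f}")
```

In the reported analysis, the interesting comparison is between this AI-derived space and a space built from human similarity judgments as inputs to the same exemplar model; the sketch above only shows the mechanical form of that pipeline.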

