Our ability to recognize sound sources in the world is critical to daily life, but is not well documented or understood in computational terms. We developed a large-scale behavioral benchmark of human environmental sound recognition, built stimulus-computable models of sound recognition, and used the benchmark to compare models to humans. The behavioral benchmark measured how sound recognition varied across source categories, audio distortions, and concurrent sound sources, all of which influenced recognition performance in humans. Artificial neural network models trained to recognize sounds in multi-source scenes reached near-human accuracy and qualitatively matched human patterns of performance in many conditions. By contrast, traditional models of the cochlea and auditory cortex that were trained to recognize sounds produced worse matches to human performance. Models trained on larger datasets exhibited stronger alignment with both human behavior and brain responses. The results suggest that many aspects of human sound recognition emerge in systems optimized for the problem of real-world recognition. The benchmark results set the stage for future explorations of auditory scene perception involving salience and attention.



