Silent visual reading is accompanied by the phenomenological experience of an inner voice. However, the temporal dynamics and functional role of the underlying neural representations remain unclear. Here, we recorded electroencephalography (EEG) data while humans read naturalistic narratives, and applied computational modelling to isolate time-resolved auditory representations from visual and semantic ones. Our results revealed robust auditory representations during silent reading that were not explained by visual or semantic features, emerging even before word onset. These auditory representations mimicked the sound sequences of the corresponding words in a fine-grained manner, revealing a candidate basis for the phenomenological experience of hearing an inner voice while reading. Finally, we show that auditory word representations exhibit a key signature of predictive processing: they are stronger for unexpected than for expected words. More specifically, early auditory features contribute to predictions before word onset, whereas later features contribute only after word onset, suggesting distinct prediction stages. Together, our results reveal the temporal dynamics and functional role of auditory representations in silent reading.
Generative AI Mental Health Chatbots as Therapeutic Tools: Systematic Review and Meta-Analysis of Their Role in Reducing Mental Health Issues
Background: To date, no comprehensive paper has systematically synthesized the impact of generative AI chatbots on mental health. Can generative AI chatbots




