arXiv:2603.12279v1 Announce Type: new
Abstract: Intracranial language brain-computer interfaces (BCIs) are a promising route for restoring communication in people with severe motor and speech impairments, but clinical translation remains limited by fragmented evidence and unresolved design trade-offs spanning neuroscience, hardware, algorithms, evaluation, and clinical deployment. This review synthesizes progress in the neural mechanisms of overt, mimed, and imagined speech; decision-oriented hardware comparisons of microelectrode array (MEA), electrocorticography (ECoG), and stereotactic electroencephalography (SEEG) recording modalities; experiment design for cross-subject and multilingual generalization; and neural decoding advances spanning sequence models, transformers, articulatory intermediates, and language-prior-assisted frameworks. We highlight persistent bottlenecks, including weak cross-subject transfer, long-term non-stationarity and recalibration burden, heterogeneous and non-comparable evaluation practices, limited naturalistic expressivity (especially for tonal and logosyllabic languages), and the low signal-to-noise ratio (SNR) of neural activity in covert speech decoding. Our contributions are threefold: (1) an end-to-end, decision-oriented synthesis linking neural representations to recording choices, experimental design, decoding model architectures, and translational constraints; (2) a structured framework organized around five coupled design questions, together with a unified evaluation framework and a cross-language/cross-task benchmark template integrating objective, perceptual, expressive, conversational, and longitudinal metrics; and (3) user-centered translational guidance covering agency-preserving shared control, verifiable performance priorities, and scenario-specific minimum viable product (MVP) profiles for reliability-first home communication versus fidelity-first conversational speech restoration.