arXiv:2502.18535v2 Announce Type: replace-cross
Abstract: Machine learning is increasingly deployed through outsourced and cloud-based pipelines, which improve accessibility but also raise concerns about computational integrity, data privacy, and model confidentiality. Zero-knowledge proofs (ZKPs) provide a compelling foundation for verifiable machine learning because they allow one party to certify that a training, testing, or inference result was produced by the claimed computation without revealing sensitive data or proprietary model parameters. Despite rapid progress in zero-knowledge machine learning (ZKML), the literature remains fragmented across different cryptographic settings, ML tasks, and system objectives. This survey presents a comprehensive review of ZKML research published from June 2017 to August 2025. We first introduce the basic ZKP formulations underlying ZKML and organize existing studies into three core tasks: verifiable training, verifiable testing, and verifiable inference. We then synthesize representative systems, compare their design choices, and analyze the main implementation bottlenecks, including limited circuit expressiveness, high proving cost, and deployment complexity. In addition, we summarize major techniques for improving generality and efficiency, review emerging commercial efforts, and discuss promising future directions. By consolidating the design space of ZKML, this survey aims to provide a structured reference for researchers and practitioners working on trustworthy and privacy-preserving machine learning.
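The ZKP property described above, proving knowledge of a secret consistent with a public claim without revealing the secret, can be illustrated with a toy Schnorr proof of knowledge of a discrete logarithm, made non-interactive via the Fiat-Shamir heuristic. This is a generic illustration, not a system from the survey; the parameters are deliberately tiny and insecure.

```python
import hashlib
import secrets

# Toy Schnorr zero-knowledge proof of knowledge of x such that y = g^x mod p.
# Non-interactive via Fiat-Shamir (challenge = hash of the commitment).
# Parameters are illustrative only -- far too small to be secure.
p = 467   # safe prime: p = 2*q + 1
q = 233   # prime order of the subgroup generated by g
g = 4     # generator of the order-q subgroup of Z_p*

def keygen():
    x = secrets.randbelow(q - 1) + 1   # secret witness (never revealed)
    y = pow(g, x, p)                   # public statement: y = g^x mod p
    return x, y

def prove(x):
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                                               # commitment
    c = int(hashlib.sha256(str(t).encode()).hexdigest(), 16) % q   # challenge
    s = (r + c * x) % q                                            # response
    return t, s

def verify(y, t, s):
    c = int(hashlib.sha256(str(t).encode()).hexdigest(), 16) % q
    # Accept iff g^s == t * y^c (mod p); holds because s = r + c*x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
t, s = prove(x)
assert verify(y, t, s)   # verifier is convinced without learning x
```

ZKML systems replace this simple algebraic statement with an arithmetized circuit encoding a training, testing, or inference computation, but the verifier-side guarantee is analogous: the proof certifies the claimed computation without exposing the data or model parameters.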
Assessing nurses’ attitudes toward artificial intelligence in Kazakhstan: psychometric validation of a nine-item scale
Background: Artificial intelligence (AI) is increasingly integrated into healthcare, yet the attitudes and knowledge of nurses, who are the key mediators of AI implementation, remain underexplored.


