arXiv:2603.27414v1 Announce Type: cross
Abstract: Statistical estimation often involves tradeoffs between expensive, high-quality measurements and a variety of lower-quality proxies. We introduce Multiple-Prediction-Powered Inference (MultiPPI): a general framework for constructing statistically efficient estimates by optimally allocating resources across these diverse data sources. This work provides theoretical guarantees about the minimax optimality, finite-sample performance, and asymptotic normality of the MultiPPI estimator. Through experiments across three diverse large language model (LLM) evaluation scenarios, we show that MultiPPI consistently achieves lower estimation error than existing baselines. This advantage stems from its budget-adaptive allocation strategy, which strategically combines subsets of models by learning their complex cost and correlation structures.
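To make the setting concrete, here is a minimal sketch of classic single-proxy prediction-powered inference, the precursor that MultiPPI generalizes to multiple proxies with budget-adaptive allocation. The function name `ppi_mean` and the toy data are illustrative assumptions, not from the paper: the estimator takes the proxy's mean over a large unlabeled pool and debiases it with the average proxy error measured on a small expensively-labeled set.

```python
import numpy as np

def ppi_mean(y_labeled, yhat_labeled, yhat_unlabeled):
    """Classic prediction-powered estimate of a population mean:
    the proxy mean on a large unlabeled pool, debiased by the
    average proxy error observed on the small labeled set."""
    rectifier = np.mean(y_labeled - yhat_labeled)  # measured proxy bias
    return np.mean(yhat_unlabeled) + rectifier

# Toy illustration: a cheap proxy with a constant +0.5 bias.
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=100)            # expensive labels
yhat = y + 0.5                                          # proxy on labeled set
yhat_big = rng.normal(loc=2.5, scale=1.0, size=10_000)  # proxy on unlabeled pool

est = ppi_mean(y, yhat, yhat_big)  # close to the true mean 2.0
```

MultiPPI's contribution, per the abstract, is choosing how to split a fixed budget across several such proxies with different costs and correlations; this sketch shows only the single-proxy debiasing step that the framework builds on.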
A Systematic Taxonomy of Security Vulnerabilities in the OpenClaw AI Agent Framework
arXiv:2603.27517v1 Announce Type: cross
Abstract: AI agent frameworks connecting large language model (LLM) reasoning to host execution surfaces (shell, filesystem, containers, and messaging) introduce security challenges structurally


