arXiv:2503.07341v2 Announce Type: replace-cross
Abstract: Recent advances in artificial intelligence (AI) have led to a wide range of predictions about its long-term impact on humanity. A central focus is the potential emergence of transformative AI (TAI), eventually capable of outperforming humans in all economically valuable tasks and fully automating labor. Discussed scenarios range from unprecedented economic growth and abundance (“post-scarcity” or “cornucopia”) to human extinction after a misaligned TAI takes over (“AI doom”). However, the probabilities and implications of these scenarios remain highly uncertain. We contribute by organizing the various scenarios and evaluating their associated existential risks and economic outcomes in terms of aggregate welfare. Our results imply that even low-probability catastrophic outcomes justify substantial investments in AI safety and alignment research. This finding suggests that current global efforts in AI safety and alignment research are insufficient relative to the scale and urgency of the risks posed by TAI.
Evaluating LLM-Based Goal Extraction in Requirements Engineering: Prompting Strategies and Their Limitations
arXiv:2604.22207v1 Announce Type: cross
Abstract: Due to the textual and repetitive nature of many Requirements Engineering (RE) artefacts, Large Language Models (LLMs) have proven useful

