arXiv:2602.12873v4 Announce Type: replace-cross
Abstract: Generative social robots (GSRs) powered by large language models enable adaptive, conversational tutoring but also introduce risks such as misinformation, overreliance, and privacy violations. Existing frameworks for educational technologies and responsible AI primarily define desired behaviors, yet they rarely specify the knowledge prerequisites that enable generative agents to express these behaviors reliably. To address this gap, we adopt a knowledge-based design perspective and investigate what information tutoring-oriented GSRs require to function responsibly and effectively in higher education. Based on twelve semi-structured interviews with university students and lecturers, we identified twelve design requirements across three knowledge types: self-knowledge (assertive, conscientious, and friendly personality with a customizable role), user-knowledge (personalized information about student learning goals, learning progress, motivation type, emotional state, and background), and context-knowledge (learning materials, educational strategies, course-related information, and the physical learning environment). Drawing from these results, this work provides a structured foundation for the design of tutoring GSRs, aligning generative AI capabilities with pedagogical and ethical expectations.


