Beyond Instrumental and Substitutive Paradigms: Introducing Machine Culture as an Emergent Phenomenon in Large Language Models

arXiv:2601.17096v1 Announce Type: cross
Abstract: Recent scholarship typically characterizes Large Language Models (LLMs) through either an Instrumental Paradigm (viewing models as reflections of their developers’ culture) or a Substitutive Paradigm (viewing models as bilingual proxies that switch cultural frames based on language). This study challenges these anthropomorphic frameworks by proposing Machine Culture as an emergent, distinct phenomenon. We employed a 2 (Model Origin: US vs. China) × 2 (Prompt Language: English vs. Chinese) factorial design across eight multimodal tasks, uniquely incorporating image generation and interpretation to extend the analysis beyond textual boundaries. Results revealed inconsistencies with both dominant paradigms: model origin did not predict cultural alignment, with US models frequently exhibiting “holistic” traits typically associated with East Asian data. Similarly, prompt language did not trigger stable cultural frame-switching; instead, we observed Cultural Reversal, where English prompts paradoxically elicited higher contextual attention than Chinese prompts. Crucially, we identified a novel phenomenon termed Service Persona Camouflage: Reinforcement Learning from Human Feedback (RLHF) collapsed cultural variance in affective tasks into a hyper-positive, zero-variance “helpful assistant” persona. We conclude that LLMs do not simulate human culture but exhibit an emergent Machine Culture, a probabilistic phenomenon shaped by superposition in high-dimensional space and mode collapse from safety alignment.
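To make the study design concrete, the sketch below enumerates the cells of the 2 (Model Origin) × 2 (Prompt Language) factorial design crossed with eight tasks, as described in the abstract. It is a minimal illustration only: the model labels, task names, and the run_task stub are hypothetical placeholders, not the authors' actual pipeline or scoring method.

```python
import random
from itertools import product

# Hypothetical placeholders for the two design factors and the eight tasks.
ORIGINS = ["US", "China"]                      # Model Origin factor
LANGUAGES = ["English", "Chinese"]             # Prompt Language factor
TASKS = [f"task_{i}" for i in range(1, 9)]     # eight multimodal tasks (placeholders)

def run_task(origin: str, language: str, task: str) -> dict:
    """Hypothetical stub: prompt a model of the given origin in the given
    language on one task and return a cultural-alignment score.
    The random score here is purely illustrative."""
    return {"origin": origin, "language": language, "task": task,
            "alignment": random.random()}

# Enumerate every cell of the design: 2 origins x 2 languages x 8 tasks = 32 runs.
results = [run_task(o, lang, t) for o, lang, t in product(ORIGINS, LANGUAGES, TASKS)]
```

Under the Instrumental Paradigm, alignment scores would cluster by origin; under the Substitutive Paradigm, they would cluster by prompt language. The abstract reports that neither pattern held.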
