arXiv:2601.17096v1 Announce Type: cross
Abstract: Recent scholarship typically characterizes Large Language Models (LLMs) through either an Instrumental Paradigm (viewing models as reflections of their developers’ culture) or a Substitutive Paradigm (viewing models as bilingual proxies that switch cultural frames based on language). This study challenges these anthropomorphic frameworks by proposing Machine Culture as an emergent, distinct phenomenon. We employed a 2 (Model Origin: US vs. China) × 2 (Prompt Language: English vs. Chinese) factorial design across eight multimodal tasks, uniquely incorporating image generation and interpretation to extend analysis beyond textual boundaries. Results revealed inconsistencies with both dominant paradigms: model origin did not predict cultural alignment, with US models frequently exhibiting “holistic” traits typically associated with East Asian data. Similarly, prompt language did not trigger stable cultural frame-switching; instead, we observed Cultural Reversal, where English prompts paradoxically elicited higher contextual attention than Chinese prompts. Crucially, we identified a novel phenomenon termed Service Persona Camouflage: Reinforcement Learning from Human Feedback (RLHF) collapsed cultural variance in affective tasks into a hyper-positive, zero-variance “helpful assistant” persona. We conclude that LLMs do not simulate human culture but exhibit an emergent Machine Culture: a probabilistic phenomenon shaped by superposition in high-dimensional space and mode collapse from safety alignment.
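To make the 2 × 2 factorial design concrete, the sketch below enumerates the four Model Origin × Prompt Language cells over eight placeholder task ids and fits a two-way ANOVA on a stand-in score. The task labels, the score_response placeholder, and the choice of ANOVA are illustrative assumptions, not the paper's actual pipeline or metric.

```python
# Minimal sketch of the 2 (Model Origin) x 2 (Prompt Language) factorial
# analysis described in the abstract. The task ids, the score_response
# placeholder, and the random scores are hypothetical stand-ins for the
# paper's actual multimodal tasks and cultural-alignment measures.
import itertools
import random

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

ORIGINS = ["US", "China"]                   # model origin factor
LANGUAGES = ["English", "Chinese"]          # prompt language factor
TASKS = [f"task_{i}" for i in range(1, 9)]  # eight multimodal tasks (placeholder ids)

def score_response(origin: str, language: str, task: str) -> float:
    """Placeholder scoring function: returns a random value in [0, 1)
    standing in for a cultural-alignment score (e.g., contextual attention)."""
    return random.random()

rows = [
    {"origin": o, "language": lang, "task": t, "score": score_response(o, lang, t)}
    for o, lang, t in itertools.product(ORIGINS, LANGUAGES, TASKS)
]
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of model origin and prompt language,
# plus their interaction (the frame-switching question).
model = smf.ols("score ~ C(origin) * C(language)", data=df).fit()
print(anova_lm(model, typ=2))
```

Under this framing, a significant origin × language interaction would correspond to the language-dependent frame-switching predicted by the Substitutive Paradigm, while a strong origin main effect would support the Instrumental Paradigm.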
Infectious disease burden and surveillance challenges in Jordan and Palestine: a systematic review and meta-analysis
Background: Jordan and Palestine face public health challenges due to infectious diseases, compounded by long-term conflict, forced relocation, and limited resources.

