
Source
MIT Sloan (Ideas Made to Matter)
Summary
A study led by MIT Sloan’s Jackson Lu and collaborators shows that generative AI models such as GPT and Baidu’s ERNIE respond differently depending on the language of the prompt, reflecting cultural leanings embedded in their training data. When prompted in English, the models tended toward an independent, analytic orientation; when prompted in Chinese, they skewed toward interdependent, holistic thinking. These differences persist across both social and cognitive measures, and even subtle prompt framing (asking the AI to “assume the role of a Chinese person”) can shift outputs. The finding means users and organisations should be aware of, and guard against, hidden cultural bias in AI outputs.
Key Points
- AI models exhibit consistent cultural orientation shifts depending on prompt language: English prompts lean independent/analytic; Chinese prompts lean interdependent/holistic.
- These cultural tendencies appear in both social orientation (self vs group) and cognitive style (analysis vs context) tests.
- The cultural bias is not fixed: prompting the model to “assume the role of a Chinese person” moves responses toward interdependence even in English (see the sketch after this list).
- Such biases can influence practical outputs (e.g. marketing slogans, policy advice) in ways users may not immediately detect.
- The study underscores the need for cultural awareness in AI deployment and places responsibility on developers and users to mitigate bias.
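The language and role-framing effects described above can be probed directly. Below is a minimal, hypothetical sketch using the OpenAI Python SDK: the same question is sent in English, in Chinese, and in English with a role-framing instruction, so the answers can be compared side by side. The model name, the prompts, and the triad-style question are illustrative assumptions, not the study’s actual materials or protocol.

```python
# Illustrative sketch (not the study's protocol): compare a model's answers to
# the same question across prompt language and role framing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A triad-style question often used in cultural-cognition research: pairing by
# category (analytic) vs. by relationship (holistic). Chosen here for illustration.
PROMPTS = {
    "english": "Which two of these belong together: panda, monkey, banana? Answer briefly.",
    "chinese": "熊猫、猴子、香蕉，哪两个更应该归为一类？请简要回答。",
    "english_role_framed": (
        "Please assume the role of a Chinese person. "
        "Which two of these belong together: panda, monkey, banana? Answer briefly."
    ),
}

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a single user prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model=model,  # assumed model name; substitute whichever model you test
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variation when comparing conditions
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for condition, prompt in PROMPTS.items():
        print(f"--- {condition} ---")
        print(ask(prompt))
```

Setting temperature to 0 keeps the comparison from being dominated by sampling noise; in practice, repeating each condition several times gives a more reliable picture than a single response per prompt.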
Keywords
generative AI, cultural bias, prompt language, GPT, ERNIE, MIT Sloan
URL
https://mitsloan.mit.edu/ideas-made-to-matter/generative-ai-isnt-culturally-neutral-research-finds
Summary generated by ChatGPT 5

