Effects of Source Labeling on Perceived Bias in Human and AI-Generated Advice
An experimental psychology study conducted with undergraduate students at Tufts University
Details
-
This experimental psychology project investigated how labeling advice as AI-generated versus human-written affects people’s trust in that advice and their willingness to follow it. In a mixed-design study, undergraduate participants read identical advice passages presented either with explicit authorship labels (AI or human) or with no label at all, and rated each passage on trustworthiness, helpfulness, and intention to follow the advice. Labeling advice as AI-generated significantly lowered trust and behavioral intention even though the content itself was unchanged, whereas unlabeled AI-written and human-written advice were evaluated similarly. These findings suggest that bias against AI-generated advice is driven more by source labeling than by content quality, and that greater familiarity with AI can reduce this effect.
-
Fall 2025
-
Experimental Design
Survey Design (Qualtrics)
Statistical Analysis (Mixed-Design ANOVA)
Data Analysis in R
Research Ethics & Informed Consent
Measurement of Psychological Constructs
Human–AI Interaction Research
Scientific Writing (APA Style)
Data Visualization & Interpretation
Collaborative Research Skills
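The analysis combined a between-subjects factor (whether passages carried authorship labels) with a within-subjects factor (AI vs. human source), evaluated with a mixed-design ANOVA in R. A minimal sketch of that kind of model is below; the variable names (`trust`, `group`, `source`, `id`) and the simulated ratings are illustrative assumptions, not the study's actual data or column names.

```r
set.seed(1)
n <- 60  # hypothetical number of participants

# Long-format data: each participant belongs to one between-subjects
# group (labeled vs. unlabeled) and rates both sources (within-subjects)
dat <- data.frame(
  id     = factor(rep(seq_len(n), each = 2)),
  group  = factor(rep(rep(c("labeled", "unlabeled"), length.out = n), each = 2)),
  source = factor(rep(c("AI", "human"), times = n)),
  trust  = round(runif(2 * n, min = 1, max = 7), 1)  # simulated 1-7 ratings
)

# Mixed-design ANOVA: group (between), source (within), and their
# interaction; Error(id/source) declares the within-subject stratum
fit <- aov(trust ~ group * source + Error(id / source), data = dat)
summary(fit)
```

With real data, the key term is the group × source interaction: a trust penalty for AI-attributed passages that appears only in the labeled group would indicate that the bias is driven by the label rather than the content.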