Online Community Moderation
“This is a study of the human implications of online moderation systems that address disruptive behaviors, such as offensive language and hate speech, by issuing penalties like content removal or account suspension to users they deem disruptive. These systems usually fail to give penalized users adequate support, such as explaining why they were penalized or suggesting how they can improve. These limitations in fairness, accountability, and transparency pose significant challenges to online moderation and community wellbeing.”
For more details, please see Dr. Yubo Kou’s NSF funding page.