YouTube Moderation


Being a “YouTuber,” i.e., a content creator, can now be a profitable job for people who aim to earn money (primarily advertising income from the platform) by creating and sharing content. But not all video content is advertiser-friendly or acceptable to the community. When YouTube deems a YouTuber’s video unacceptable, it demonetizes 💵 💲 the video or the entire channel, placing limited or no ads on the videos and thereby denying the creator future ad revenue.

Content moderation originally referred to mechanisms for governing abuse and facilitating cooperation on online platforms; it has thus become a source of socioeconomic punishment, beyond the suppression of expression online.

Although social media platforms use various algorithms (e.g., machine learning) to implement content moderation, little attention has been paid to how users interact with algorithmic moderation and what their post-hoc experiences are. In particular, for users who create content for profit, few studies have examined how they receive, perceive, and react to algorithmic content moderation. This project aims to address this research gap by studying the socioeconomic implications of YouTube moderation.