YouTube Moderation

Paper 1 is accepted at CSCW 2021:
“How advertiser-friendly is my video?”: YouTuber’s Socioeconomic Interactions with Algorithmic Content Moderation
| My role:
- All procedures including data collection and analysis
- Theoretical development, writing, and presenting
| Guided by my advisor:
- Editing, revision and supervision
* Please see the video presentation here:
1. Problem Statement
Being a “YouTuber,” i.e., a content creator, can nowadays be a profitable job for people who aim to earn money (primarily advertising income from the platform) from creating and sharing content. YouTube determines this income based on videos’ watch time and viewer engagement rates, which constitutes a form of socioeconomic content moderation.

But not all video content is advertiser-friendly or acceptable to communities. When YouTube deems a YouTuber’s video unacceptable, it demonetizes 💵 💲 the video or even the whole channel, placing limited or no ads on the content and thereby denying the YouTuber future ad revenue.
While social media platforms use various algorithms (e.g., machine learning) to implement content moderation, little attention has been paid to how users interact with algorithmic moderation and what their post-hoc experiences look like. Especially for users who perform profitable content creation, few studies have examined how they receive, perceive, and react to algorithmic content moderation. This study addresses this research gap in the context of the socioeconomic implications of YouTube moderation.
2. Research Question
How do YouTubers interact with the socioeconomic side of algorithmic moderation?
3. Methods
- We scraped online discussion data from the subreddit r/youtube using a bag of keywords related to YouTube’s content moderation (see the sketch below). The final dataset contained 2,779 threads with 60,310 individual comments.
- We inductively ran a thematic analysis on the online discussion data.
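For context, here is a minimal sketch of what such a keyword-based scraping step could look like, assuming the PRAW library for the Reddit API; the credentials, keyword list, and search limits below are illustrative placeholders, not the study’s actual pipeline:

```python
# Minimal sketch of keyword-based scraping from r/youtube using PRAW.
# Credentials and keywords are hypothetical placeholders.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="yt-moderation-study",
)

# Illustrative bag of moderation-related keywords (the study's actual list differs).
KEYWORDS = ["demonetized", "demonetization", "yellow icon",
            "advertiser-friendly", "limited ads", "strike"]

threads = {}
for kw in KEYWORDS:
    # Search r/youtube for threads mentioning each keyword.
    for submission in reddit.subreddit("youtube").search(kw, limit=500):
        threads[submission.id] = submission  # de-duplicate across keywords

comments = []
for submission in threads.values():
    submission.comments.replace_more(limit=0)  # expand "load more comments"
    comments.extend(c.body for c in submission.comments.list())

print(f"{len(threads)} threads, {len(comments)} comments collected")
```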
4. Findings
a. Being Confused about Algorithmic Opacity: YouTubers who experienced algorithmic punishments felt confused and had no clue how the algorithms made decisions. Even when the adjudication of a moderation case was clear-cut to most people, YouTubers who received the penalty could experience it differently and struggle to gain transparency into the algorithmic decision.
b. Managing Algorithmic Precarity: Algorithmic moderation engenders uncertainty in the work of video content creation, and YouTubers actively manage this precarity.
c. Learning and Applying Algorithmic Know-How: YouTubers collectively made sense of algorithmic punishments, developing and disseminating algorithmic knowledge of YouTube moderation. Specifically, they shared and analyzed their punishment experiences to speculate about moderation algorithms, which in turn informed their efforts to repair past moderation decisions and avoid future ones.

5. Discussion and Design Considerations
We shed light on the labor of video content creators, i.e., the moderated, in coping with their punishments. We showed how moderation algorithms intersect with YouTubers’ content creation work, engendering a necessary form of algorithmic labor to comply with moderation algorithms on YouTube and make their videos “advertiser-friendly.”
5.1 Design Considerations for Algorithmic Moderation Systems
- Transparency of the algorithmic decision-making process: provide detailed moderation explanations.
- Potential reimbursement for false-positive moderation decisions, especially for those that are later reversed.
- More advanced channel-moderation tools to prevent unacceptable viewer comments from incurring moderation decisions against YouTubers.
Paper 2 is under review:
How do YouTubers perceive the fairness of YouTube moderation? (tentative title)
| My role:
- All procedures including data collection and analysis
- Theoretical development, writing, and presenting
| Guided by my advisor:
- Editing, revision and supervision
1. Problem Statement
Researchers have started to investigate how moderation is carried out through a sociotechnical structure of humans and algorithms, as well as how users interact with algorithmic moderation, from questioning its clarity and transparency to drawing on community support to repair or avoid moderation.
What is less discussed in the literature is users’ fairness perceptions of algorithmic content moderation. Such perceptions germinate in realistic scenarios where algorithms are applied to end users: users invoke the notion of fairness to describe their experiences with (algorithmic) decision-making procedures and outcomes. This paper thus aims to address that research gap.

2. Research Question
How do YouTubers perceive the fairness of YouTube moderation?
3. Methods
- We interviewed 21 YouTubers who were in the YouTube Partner Program and had experienced YouTube moderation, using a semi-structured interview protocol.
- We ran an inductive thematic analysis on the interview data.

4. Findings
a. YouTubers perceive unequal moderation treatment through cross-comparisons with other creators.
b. YouTubers observe that the moderation system makes internally inconsistent decisions or decisions that are inconsistent with content policies.
c. YouTubers have little voice or control in multiple algorithmic decisions.
5. Discussion and Design Considerations
Moderation fairness on YouTube presents as a multi-dimensional notion:
- it has a temporal dimension, i.e., when YouTubers invoke the fairness of YouTube moderation;
- it concerns the social context in which YouTubers situate their fairness perceptions;
- value-based dimensions of moderation fairness surface in the social, economic, and technical parts of algorithmic content moderation.
What moderates YouTubers is not a single moderation algorithm.
Instead, it is an algorithmic assemblage composed of different classes of algorithms that moderate content creation. An algorithmic assemblage refers to a mixed “infrastructure that supports implementation, maintenance, use, and evolution of algorithms, data, and platforms.”
5.1 Design Considerations for Algorithmic Moderation Systems
- To address perceived unfairness, we suggest that YouTube disclose whether a YouTuber’s specific videos are hidden under ‘restricted mode’; this status should be listed in the Studio dashboard.
- Living within an algorithmic assemblage of moderation, YouTubers are entitled to know whenever moderation decisions affect their visibility or monetization.
- YouTubers’ voice and input should be valued more in the algorithmic assemblage’s moderation decision-making processes.