Content moderation can have a disproportionate effect on already-marginalized groups. It often results in mass takedowns of speech from marginalized communities, while dominant and well-funded parties benefit from more nuanced treatment of their communications on major social media platforms. When a platform decides to enforce certain rules, those rules are too often applied mainly to marginalized groups, which should not be the case.
Content moderation decisions can also cause real-world harm to general audiences and other stakeholders when platforms try to sort desirable from undesirable content, and these decisions all too often fall hardest on marginalized groups. Content moderation is supposed to reduce or eradicate hatred and social bias, but sometimes it amplifies them instead. While it is important to protect certain audiences, such as minors, from harmful content, it is equally important to ensure that moderation decisions do not cause real-world harm to viewers or to marginalized groups.
Content moderation is a complex issue that has been debated for years. Social media platforms are largely self-regulated and employ a combination of algorithmic and human intervention to decide what content should be delisted or banned. The wide discretion platforms exercise over content moderation, shaped by the people behind the scenes, can be dangerous regardless of whether it is applied by a machine or a human. Milder moderation tools have been available to online platforms for quite some time, such as labeling, algorithmic sorting and forbidden-word filtering.
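To make the simplest of those tools concrete, the sketch below shows what a basic forbidden-word filter might look like. It is only an illustration, not any platform's actual system; the word list and labels are hypothetical.

```python
# Minimal sketch of a forbidden-word filter, one of the "mild" moderation
# tools mentioned above. The word list and labels are purely illustrative.
FORBIDDEN_WORDS = {"slur1", "slur2", "scamlink"}  # hypothetical entries


def label_post(text: str) -> str:
    """Return a coarse label for a post based on a static word list."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & FORBIDDEN_WORDS:
        return "flagged"   # held back or down-ranked for review
    return "allowed"       # shown normally


if __name__ == "__main__":
    print(label_post("Check out this scamlink now!"))  # -> flagged
    print(label_post("Have a nice day"))               # -> allowed
```

Filters this crude are exactly why such tools are considered "mild": they are easy to evade and easy to trip by accident.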
Leading social platforms like YouTube, Twitter and Facebook set a dangerous precedent by giving ethical content moderation a low priority. These companies often filter activists' content in the name of community safety. It is important to note that content moderation is not an easy task, and it comes with trade-offs. Doing less moderation, or none at all, might seem simpler, but ignoring the trade-offs does not make them go away. Content moderation is therefore a delicate balancing act for social media platforms trying to grow their user bases and revenue sources; they cannot afford to lose eyeballs or engagement on their sites.
Social media platforms typically employ a combination of manual and automated processes to moderate posted content. YouTube, for example, flags content through machine learning, which surfaces videos its models deem likely to violate policy; hashing, which catches reuploads by tracking unique identifiers of previously removed content; and human review, which verifies what the algorithms have flagged. Yet there is tremendous public and political pressure on the major platforms to stop disinformation and remove harmful content. Meanwhile, smaller platforms are often forced to rely on automated, mechanical moderation, which can be less effective.
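The general pattern described above, hash-matching of removed content, a machine-learning score and a human-review queue, can be sketched roughly as follows. This is not YouTube's real pipeline; the scoring function and thresholds are stand-ins.

```python
# Simplified sketch of the moderation pipeline described above:
# hash-matching of previously removed content, a machine-learning risk
# score, and a human-review queue. The scoring function is a placeholder,
# not any platform's real model.
import hashlib

REMOVED_HASHES = set()    # fingerprints of content already taken down
HUMAN_REVIEW_QUEUE = []   # items awaiting a human decision


def fingerprint(content: bytes) -> str:
    """Exact-match fingerprint; real systems use perceptual hashes."""
    return hashlib.sha256(content).hexdigest()


def ml_risk_score(content: bytes) -> float:
    """Stand-in for a trained classifier returning a 0-1 risk score."""
    return 0.9 if b"prohibited" in content else 0.1


def moderate(content: bytes) -> str:
    h = fingerprint(content)
    if h in REMOVED_HASHES:
        return "removed (reupload of taken-down content)"
    if ml_risk_score(content) > 0.8:
        HUMAN_REVIEW_QUEUE.append(h)   # the algorithm flags, a human verifies
        return "flagged for human review"
    return "published"


if __name__ == "__main__":
    REMOVED_HASHES.add(fingerprint(b"banned clip"))
    print(moderate(b"banned clip"))          # removed as a reupload
    print(moderate(b"prohibited material"))  # flagged for human review
    print(moderate(b"cat video"))            # published
```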
There are arguments for and against content moderation. Some argue that it should be abandoned altogether because it risks silencing protected speech and can be used to suppress dissent. Others argue that it is a necessary evil to prevent the spread of harmful messages. Content moderation is not a black-and-white issue, however. There are many nuances to consider, such as the role of technology in moderation, the challenge of moderating content at scale and the ethical considerations involved.
OpenAI and ChatGPT logos are seen in this illustration taken Feb. 3. Reuters-Yonhap
AI will play a significant role in content moderation. It can automatically analyze and classify potentially harmful content, increasing the speed and effectiveness of the overall moderation process. It can also make human labor more productive, helping people handle content faster, more consistently and with fewer errors. But AI is not perfect and makes mistakes, so human moderation remains necessary to ensure that content is moderated effectively and accurately.
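One common way to combine the two is to let the AI act only when it is confident and to route uncertain cases to people. The sketch below illustrates that human-in-the-loop pattern under assumed thresholds; the classifier is a placeholder, not a real model.

```python
# Sketch of the human-in-the-loop pattern described above: the AI decides
# only when it is confident, and ambiguous cases go to human moderators.
# The classifier and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Decision:
    action: str      # "remove", "allow" or "escalate"
    decided_by: str  # "ai" or "human"


def classify(text: str) -> float:
    """Placeholder: probability that the text is harmful."""
    t = text.lower()
    if "threat" in t:
        return 0.95
    if "insult" in t:
        return 0.5
    return 0.05


def moderate(text: str, high: float = 0.9, low: float = 0.1) -> Decision:
    p = classify(text)
    if p >= high:
        return Decision("remove", "ai")    # confident enough to act alone
    if p <= low:
        return Decision("allow", "ai")
    return Decision("escalate", "human")   # uncertain: a person decides


if __name__ == "__main__":
    print(moderate("a direct threat"))     # remove, decided by AI
    print(moderate("a mild insult"))       # escalate to a human
    print(moderate("have a nice day"))     # allow, decided by AI
```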
The value of content should be judged by factors such as its relevance, accuracy and usefulness. The goal of moderation is to remove harmful content, not differing opinions, while preserving room for free expression and the sharing of ideas. Determining what constitutes harmful content can be complex, though, and it is often left to a platform's small content moderation committee to set its own policies and guidelines, which are neither widely publicized nor transparent.
Reviews and ratings can support content moderation by providing feedback on the quality and relevance of content. They can help identify material that is inaccurate, misleading or harmful, inform decisions about what should be removed or flagged for further review, and reveal patterns of behavior that may indicate fraudulent or malicious activity.
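A rating-driven flag might work roughly as sketched below: posts with consistently low ratings or repeated reports are queued for review. The thresholds and field names are assumptions for illustration only.

```python
# Sketch of using reader ratings and reports to surface content for review,
# as described above. Thresholds and field names are illustrative only.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Post:
    post_id: str
    ratings: list = field(default_factory=list)  # 1-5 stars from readers
    reports: int = 0                             # "misleading/harmful" reports


def needs_review(post: Post, min_ratings: int = 5,
                 rating_floor: float = 2.0, report_limit: int = 3) -> bool:
    """Flag posts with repeated reports or consistently low ratings."""
    if post.reports >= report_limit:
        return True
    if len(post.ratings) >= min_ratings and mean(post.ratings) <= rating_floor:
        return True
    return False


if __name__ == "__main__":
    suspect = Post("p1", ratings=[1, 1, 2, 1, 2], reports=1)
    fine = Post("p2", ratings=[5, 4, 5], reports=0)
    print(needs_review(suspect))  # True: low average across enough readers
    print(needs_review(fine))     # False
```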
There is a relationship between the concentration of power and content moderation. Toothless oversight boards and the general public often end up legitimizing the concentration of power in the hands of a few infrastructure providers and tech platforms. That vests the discretion to shape public opinion in those companies rather than in democratic processes and structures. Such concentration of power in content moderation can put freedom of speech at risk.
Participatory journalism could point to a solution. It is a form of journalism that allows the general public to take part in news gathering and editorial processes, opening up the traditional practice to every stakeholder in society who contributes to gathering, analyzing, reporting and sharing information. Building on that idea, participatory content moderation could resolve many potential moderation issues: when tech giants act like enormous media platforms, audiences can block and filter content themselves by leveraging their collective intelligence.
This approach is itself a long-debated topic, but a moderation process that involves the community could relieve many of the worries content moderation raises, as long as it is carried out under clear guiding principles, open policies and systematic procedures.
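One way audiences might "block and filter content by themselves" is through community-maintained filter lists that individual readers subscribe to and apply on their own side. The sketch below is speculative; the list names and rules are hypothetical.

```python
# Sketch of participatory filtering: readers subscribe to community-maintained
# filter lists and apply them client-side. All names and rules are hypothetical.

COMMUNITY_LISTS = {
    "anti-spam-volunteers": {"buy followers", "crypto giveaway"},
    "fact-check-collective": {"miracle cure"},
}


def build_filter(subscriptions: list) -> set:
    """Union of phrases from every list a reader chose to trust."""
    phrases = set()
    for name in subscriptions:
        phrases |= COMMUNITY_LISTS.get(name, set())
    return phrases


def visible(post: str, phrases: set) -> bool:
    """A post is hidden for this reader if it matches a subscribed phrase."""
    lowered = post.lower()
    return not any(p in lowered for p in phrases)


if __name__ == "__main__":
    my_filter = build_filter(["anti-spam-volunteers", "fact-check-collective"])
    print(visible("New miracle cure doctors hate!", my_filter))  # False: hidden
    print(visible("Community meetup this weekend", my_filter))   # True: shown
```

The design choice here is that filtering happens per reader, under rules the community publishes openly, rather than behind a platform's closed guidelines.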
Daniel Shin is a venture capitalist and senior luxury fashion executive, overseeing corporate development at MCM, a German luxury brand. He also teaches at Korea University.