YouTube’s AI Dilemma in Content Moderation
Content moderation is an integral part of any social networking site. In recent months, calls for higher-quality content have sent technology firms racing to rebuild their moderation systems.
Since the start of the pandemic, YouTube has relied on artificial intelligence as a temporary alternative while its human moderators cannot come into the office.
Although many assume that human moderators could simply take their work home, they cannot. The job is sensitive by nature and requires a tightly controlled corporate environment.
The right technical infrastructure is likewise needed to carry out the work efficiently.
After the sudden shift in March, the video-sharing giant warned the public that it would resort to artificial intelligence for content moderation.
Instead of humans, machine-learning systems would be responsible for filtering what should and should not be on the platform, and some unintentional mistakes would inevitably follow.
Nevertheless, humans would remain the last line of defense, confirming whether flagged content should stay up or be taken down.
Uploads slated for removal are those that violate the platform’s community guidelines, among other considerations.
However, the tech firm’s representatives reiterated that content creators should expect machines to accidentally remove some videos that do not violate guidelines.
Should that happen, the creator may appeal for reconsideration, though the process may take longer than usual given the reduced human workforce handling such requests.
Now, YouTube has confirmed in its second-quarter Community Guidelines Enforcement Report that it removed a significant number of videos that did not violate community standards.
Its representatives reiterated the company’s earlier warning that heavy reliance on machine learning would lead to unwanted mistakes.
Second-Quarter Removals
YouTube said that the number of videos removed from the platform in the quarter was roughly double the figure recorded in the first quarter.
Over 11.4 million videos were taken down in the period covering April to June 2020, compared with about 9 million in the same period last year.
The number of video reinstatements likewise doubled compared with the previous quarter.
So far, appeals cover only about 3% of actual removals. The share of appealed videos that were reinstated, however, ballooned from 25% to 50%.
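To put those percentages in perspective, here is a rough back-of-the-envelope calculation in Python. It is an illustration derived only from the figures reported above; the resulting appeal and reinstatement counts are estimates, not numbers disclosed in YouTube’s report.

```python
# Illustrative arithmetic only: derived from the percentages reported above,
# not from YouTube's officially disclosed appeal and reinstatement counts.
removals = 11_400_000       # videos removed in Q2 2020, per the report
appeal_rate = 0.03          # roughly 3% of removals were appealed
reinstatement_rate = 0.50   # about 50% of appealed videos were reinstated

appeals = removals * appeal_rate
reinstated = appeals * reinstatement_rate

print(f"Estimated appeals: {appeals:,.0f}")            # ~342,000
print(f"Estimated reinstatements: {reinstated:,.0f}")  # ~171,000
```

On those assumptions, even a 3% appeal rate would translate into well over a hundred thousand videos wrongly removed and later restored in a single quarter.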
In response to the concern, YouTube said that it had resorted to “over-enforcement,” deliberately erring on the side of removal while human review capacity was limited.
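In machine-learning terms, over-enforcement roughly amounts to lowering the confidence threshold at which an automated classifier removes a video, accepting more false positives in exchange for fewer misses. The sketch below is a hypothetical illustration of that trade-off; the function, threshold values, and scores are invented for the example and do not describe YouTube’s actual system.

```python
# Hypothetical illustration (not YouTube's actual system): "over-enforcement"
# modeled as lowering the confidence threshold at which an automated
# classifier removes a video, trading precision for recall.

def moderate(violation_score: float, threshold: float) -> str:
    """Remove a video when its violation score meets or exceeds the threshold."""
    return "remove" if violation_score >= threshold else "keep"

NORMAL_THRESHOLD = 0.9            # assumed: act only on high-confidence violations
OVER_ENFORCEMENT_THRESHOLD = 0.6  # assumed: err on the side of removal

scores = [0.95, 0.70, 0.30]       # made-up classifier scores for three uploads
for score in scores:
    print(score, moderate(score, NORMAL_THRESHOLD),
          moderate(score, OVER_ENFORCEMENT_THRESHOLD))
# The 0.70 upload is kept under the normal threshold but removed under
# over-enforcement: a likely false positive that an appeal could restore.
```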
In 2017, the platform announced plans to hire more than 10,000 content moderators to weed out violent videos targeted at young users.
It acknowledged the pivotal role humans play in making contextualized moderation decisions, a capability artificial intelligence still lacks.
With more people spending more time on their handheld devices, striking the right balance between human oversight and machine intervention is imperative for YouTube’s sustained growth.
Thanks to its broad utility, the platform continues to grow in popularity worldwide, with services ranging from education and entertainment to content production.
Today, there is constant pressure to deliver quality content under any circumstances.
Thus, YouTube finds itself in a tug-of-war: continue its heavy reliance on AI, with all its blind spots, or scale back its over-enforcement scheme and risk eroding the platform’s credibility in the long term.