

AI proves it’s a poor substitute for human content checkers during lockdown

Erin Fox

The spread of the novel coronavirus around the world has been unprecedented and rapid. In response, tech companies have scrambled to ensure their services are still available to their users, while also transitioning thousands of their employees to teleworking. However, due to privacy and security concerns, social media companies have been unable to transition all of their content moderators to remote work. As a result, they have become more reliant on artificial intelligence to make content moderation decisions. Facebook and YouTube admitted as much in their public announcements over the last couple of months, and Twitter appears to be taking a similar tack. This new sustained reliance on AI due to the coronavirus crisis is concerning as it has significant and ongoing consequences for the free expression rights of online users.

The broad use of AI for content moderation is troubling because these automated tools have often been found to be inaccurate. That is partly because the datasets such models are trained on lack diversity. In addition, human speech is fluid and intention matters, which makes it difficult to train an algorithm to detect nuance the way a human reviewer can. Context matters as well: researchers have documented instances in which automated moderation tools on platforms such as YouTube mistakenly categorized videos posted by NGOs documenting ISIS human rights abuses in Syria as extremist content and removed them. As was well documented even before the current pandemic, without a human in the loop these tools are often unable to accurately understand and decide speech-related cases across different languages, communities, regions, contexts, and cultures. Relying on AI alone compounds the problem.

Internet platforms have recognized the risks that reliance on AI poses to online speech during this period and have warned users to expect more content moderation mistakes, particularly “false positives”: content that is removed or prevented from being shared despite not actually violating a platform’s policy. These statements, however, conflict with some platforms’ defenses of their automated tools, which they have argued only remove content when the systems are highly confident it violates the platform’s policies. For example, Facebook’s automated system threatened to ban the organizers of a group working to hand-sew masks from commenting or posting on the platform, and flagged that the group could be deleted altogether. More problematic yet, YouTube’s automated system has failed to detect and remove a significant number of videos advertising overpriced face masks and fraudulent vaccines and cures. These AI-driven errors underscore the importance of keeping a human in the loop when making content-related decisions.
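To make the trade-off concrete, here is a minimal, hypothetical sketch of the kind of confidence-threshold pipeline platforms describe. Everything in it (the thresholds, the function names, the toy scoring heuristic) is an illustrative assumption rather than any platform’s actual system; the point is only to show where the human tier sits and what happens when it disappears.

```python
# Hypothetical sketch of a confidence-threshold moderation pipeline with a
# human-in-the-loop tier. All names, thresholds, and the scoring heuristic
# are illustrative assumptions, not any platform's actual system.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # act automatically only when highly confident
HUMAN_REVIEW_THRESHOLD = 0.60  # borderline scores normally go to a person


@dataclass
class Post:
    post_id: str
    text: str


def classifier_score(post: Post) -> float:
    """Stand-in for an ML model's 'policy violation' probability."""
    flagged_terms = ("miracle cure", "guaranteed protection")
    return 0.9 if any(t in post.text.lower() for t in flagged_terms) else 0.1


def moderate(post: Post, reviewers_available: bool) -> str:
    score = classifier_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"                      # high confidence: automatic takedown
    if score >= HUMAN_REVIEW_THRESHOLD:
        if reviewers_available:
            return "queue_for_human_review"  # borderline: a person decides
        # With reviewers unavailable (as during lockdown), the platform must
        # pick a failure mode for borderline content:
        return "remove"                      # err toward removal -> false positives
    return "keep"
```

The borderline band is precisely where human judgment earns its keep: removing it forces the system to choose between over-removal (the mask-sewing group) and under-removal (the fake-cure videos), the two failure modes described above.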

During the current shift toward increased automated moderation, platforms like Twitter and Facebook have also shared that they will be triaging and prioritizing takedowns of certain categories of content, including COVID-19-related misinformation and disinformation. Facebook has specifically said it will prioritize takedowns of content that could pose an imminent threat or harm to users, such as content related to child safety, suicide and self-injury, and terrorism, and that human review of these high-priority categories has been transitioned to some full-time employees. However, Facebook also shared that under this prioritization approach, reports in other categories that are not reviewed within 48 hours of being filed are automatically closed, meaning the content is left up. This could result in a significant amount of harmful content remaining on the platform.
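Facebook’s description suggests a simple triage queue. The sketch below is an assumption-heavy illustration of how such a rule could behave; the high-priority category names and the 48-hour window come from the company’s statements, while the queue mechanics are guesses made purely for illustration.

```python
# Hypothetical sketch of report triage with a 48-hour auto-close rule.
# The high-priority categories and the 48-hour figure come from Facebook's
# public statements; the data structures are illustrative assumptions.

import heapq
from datetime import datetime, timedelta

HIGH_PRIORITY = {"child_safety", "suicide_and_self_injury", "terrorism"}
AUTO_CLOSE_AFTER = timedelta(hours=48)


def build_queue(reports):
    """reports: iterable of (category, reported_at, report_id) tuples."""
    queue = []
    for category, reported_at, report_id in reports:
        rank = 0 if category in HIGH_PRIORITY else 1  # reviewers pull rank 0 first
        heapq.heappush(queue, (rank, reported_at, report_id, category))
    return queue


def close_stale(queue, now: datetime):
    """Low-priority reports past the window are auto-closed: content stays up."""
    still_open, auto_closed = [], []
    for rank, reported_at, report_id, category in queue:
        if rank == 1 and now - reported_at > AUTO_CLOSE_AFTER:
            auto_closed.append(report_id)  # never seen by a human reviewer
        else:
            still_open.append((rank, reported_at, report_id, category))
    heapq.heapify(still_open)
    return still_open, auto_closed
```

The consequence the article flags falls out directly: every report the auto-close step discards was flagged as potentially harmful but never reviewed, so the content stays up by default.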


In addition to expanding the use of AI for moderating content, some companies have also responded to strains on capacity by rolling back their appeals processes, compounding the threat to free expression. Facebook, for example, no longer enables users to appeal moderation decisions. Rather, users can now indicate that they disagree with a decision, and Facebook merely collects this data for future analysis. YouTube and Twitter still offer appeals processes, although YouTube shared that given resource constraints, users will see delays. Timely appeals processes serve as a vital mechanism for users to gain redress when their content is erroneously removed, and given that users have been told to expect more mistakes during this period, the lack of a meaningful remedy process is a significant blow to users’ free expression rights.

Further, during this period, companies such as Facebook have decided to rely more heavily on automated tools to screen and review advertisements, which has proven challenging as companies introduce policies to prevent advertisers and sellers from profiting off public fears related to the pandemic or selling bogus items. For example, CNBC found fraudulent ads for face masks on Google that promised protection against the virus and claimed they were “government approved to block up to 95% of airborne viruses and bacteria. Limited Stock.” This raises concerns about whether these automated tools are robust enough to catch harmful content, and about the consequences when harmful ads slip through the cracks.

Issues of online content governance and online free expression have never been more important. Billions of individuals are now confined to their homes and are relying on the internet to connect with others and access vital information. Errors in moderation caused by automated tools could result in the removal of non-violating, authoritative, or important information, thus preventing users from expressing themselves and accessing legitimate information during a crisis. In addition, as the volume of information available online has grown during this time period, so has the amount of misinformation and disinformation. This has magnified the need for responsible and effective moderation that can identify and remove harmful content.

The proliferation of COVID-19 has sparked a crisis, and tech companies, like the rest of us, have had to adjust and respond quickly without advance notice. But there are lessons we can extract from what is happening right now. Policymakers and companies have continuously touted automated tools as a silver-bullet solution to online content governance problems, despite pushback from civil society groups. As companies rely more on algorithmic decision-making during this time, those groups should document specific examples of these automated tools’ limitations in order to demonstrate the need for greater human involvement in the future.

In addition, companies should use this time to identify best practices and failures in the content governance space and to devise a rights-respecting crisis response plan for future crises. It is understandable that there will be some unfortunate lapses in remedies and resources available to users during this unprecedented time. But companies should ensure these emergency responses are limited to the duration of this public health crisis and do not become the norm.

Spandana Singh is a policy analyst focusing on AI and platform issues at New America’s Open Technology Institute.

Erin made the switch from television to digital media with News Brig. She previously served as a journalist for popular news channels and now draws on that experience to cover the tech domain for News Brig.



EA allows ‘Madden NFL 21’ Xbox Series X upgrades until ‘Madden NFL 22’ arrives

Erin Fox

When EA first introduced an option to upgrade the Xbox One version of Madden NFL 21 to the Xbox Series X release for free, it gave players until March 31st, 2021 to make the decision. That’s not much time if you aren’t dead set on buying the Series X right away, is it? You won’t have to stress quite so much going forward, however. EA has quietly updated its “Next Level” teaser page to extend the upgrade period until the arrival of Madden NFL 22. You can buy the Xbox One edition a few months after the fact and still have some time to decide if you want Microsoft’s next-gen console.

The site was updated “earlier this week,” an EA spokesperson told Polygon (via Operation Sports). The publisher had intended to formally announce the upgrade extension, but pushed it back as part of efforts to give the spotlight to anti-racism protests. EA delayed its Play event and the Madden NFL 21 introduction to June 11th.



Facebook issues new recommendations for discussing racism in groups

Erin Fox

Facebook’s efforts to address a surge in social turmoil will extend beyond reviewing its policies. The company has posted recommendations (via The Verge) for group operators to help them address “race and social issues,” such as the Black Lives Matter protests gripping the US and numerous other countries. The suggestions are logical, but they’re clearly built in direct response to group discussions becoming increasingly political amid anti-racism activism and the looming US presidential election.

The social network suggests that groups should have administrators and moderators from “impacted communities,” for a start. They should also review rules and outline them for group members, even if it means forbidding certain topics or requiring post approval. Facebook also wants group leaders to be open to member input, however, and they may have to accept that the nature of a group might evolve or even prompt the creation of another group.



Some iPhone 11 models display a green tint after unlocking

Erin Fox

A number of iPhone users are seeing a strange green tint on their devices’ displays for a few seconds after unlocking, and it’s still unclear what’s causing the phenomenon. Based on the complaints posted on Reddit and the MacRumors forum, the most affected devices are the ‌iPhone 11‌, 11 Pro and 11 Pro Max. However, some iPhone X and XS users seem to be experiencing the issue, as well.

Several users say the green tint only shows up when they have Dark Mode and Night Shift on, or when they’re in a dark room. Affected users also report that the issue popped up after iOS 13.4 came out, though some only noticed it after upgrading to iOS 13.5. At least one user says the green tint disappeared upon installing iOS 13.5.5, which is currently in beta, so it’s looking more likely that this is a software issue. We’ve reached out to Apple for a statement and will update you when we hear back; if it is a software issue, the tech giant is bound to roll out an update that fixes it.

