Facebook's Artificial Intelligence Doesn't Eliminate Objectionable Content, Report Finds
By David Uberti
An audit commissioned by Facebook Inc. urged it to improve
artificial intelligence-based tools it uses to help identify
problematic content such as hate speech, showcasing the current
limits of technology in policing the world's largest social media
network.
The report, made public Wednesday, examined Facebook's approach
to civil rights and criticized it as "too reactive and piecemeal,"
despite much-publicized investments in AI-based censors and human
analysts trained to track down and remove harmful content.
Facebook says that as of March those tools helped detect 89% of the
hate speech it removed from the platform before users reported it,
up from about 65% a year earlier, according to the report. But outside
researchers argue it is still impossible to gauge just how many
posts escape the dragnets on a platform so large.
"I could just hop on [Facebook] right now and go to particular
pages and find tons," said Caitlin Carlson, a communications
professor at Seattle University who has studied hate speech
on Facebook. "If the tech is getting so much better, why isn't
Facebook getting so much better?"
As powerful as Facebook's AI-based tools are, removing
objectionable posts isn't as easy as hitting a delete button.
Training machine-learning tools to review content as human
moderators would takes time, expertise and reams of data to
identify new words and imagery. Hate groups have also grown more
adept at avoiding the platform's automated censors. Then there is
Facebook's scale -- 2.6 billion users split among numerous
languages and cultures -- and an advertising business that relies
on that scale.
Facebook Chief AI Scientist Yann LeCun said in a March interview
that he is working to develop self-supervised AI that can mimic
how humans make sense of it all.
"Current machines don't have common sense," he said. "They have
very limited and narrow function."
He said this research "is very important for Facebook [so it]
can detect hate speech in hundreds of languages."
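The self-supervised idea LeCun describes can be illustrated with a
minimal sketch: a model teaches itself language structure by
predicting a hidden word from its surrounding context, so no
human-labeled data is needed. The toy corpus and function below are
hypothetical stand-ins, not Facebook's code; production systems use
far larger neural models.

    # Toy self-supervised learning on text: learn which word fills
    # each (left, right) context, using the text itself as supervision.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Count, for each pair of neighboring words, what appears between them.
    context_counts = defaultdict(Counter)
    for left, target, right in zip(corpus, corpus[1:], corpus[2:]):
        context_counts[(left, right)][target] += 1

    def predict_masked(left, right):
        """Guess the hidden word between two context words."""
        return context_counts[(left, right)].most_common(1)[0][0]

    print(predict_masked("cat", "on"))  # prints "sat", learned without labels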
Facebook's Dangerous Organizations team, which focuses on
terrorists and other organized hate groups, illustrates the hybrid
approach the company has taken in response to the challenges.
The 350-person unit, spearheaded by counterterrorism experts,
has used a combination of manual review and automated tools to
curb the reach of jihadist groups like Islamic State. It uses
"hashes," or digital fingerprints of content, to identify potential
propaganda in real time, and has trained machine-learning
"classifiers" to review posts the way human analysts would.
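The hash-matching step can be sketched in a few lines of Python.
This is a simplified illustration, not Facebook's implementation:
real systems use perceptual hashes such as PDQ that also catch
near-duplicates, while the exact-match SHA-256 fingerprint and the
sample database below are stand-ins.

    import hashlib

    # Hypothetical database of fingerprints of known propaganda, of the
    # kind shared across platforms through GIFCT's hash-sharing program.
    KNOWN_BAD_HASHES = {
        # SHA-256 of the sample upload below, standing in for a real entry.
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def fingerprint(content: bytes) -> str:
        """Compute a digital fingerprint of an upload (exact-match stand-in)."""
        return hashlib.sha256(content).hexdigest()

    def flag_upload(content: bytes) -> bool:
        """Return True if the upload matches a known-bad fingerprint."""
        return fingerprint(content) in KNOWN_BAD_HASHES

    print(flag_upload(b"test"))  # True: matches the sample entry above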
But counterterrorism experts say it has proven more difficult to
reorient those tools toward white-supremacist groups, which tend to
be more fragmented and whose irony-laced content often overlaps
with right-wing political speech. Western governments also don't
identify many of these groups as terrorist organizations, removing
a key cue for tech companies to take action.
Those dynamics make judgments about takedowns "much harder to
reach and much harder to reach in real time," said Nicholas
Rasmussen, executive director of the Global Internet Forum to
Counter Terrorism, a partnership between governments and tech
companies including Facebook, Twitter Inc. and Microsoft Corp.
"That's the challenge that the companies face," Mr. Rasmussen
said in an interview last month.
Researchers say the white supremacist terrorist attack in
Christchurch, New Zealand, last year -- livestreamed and shared
widely across Facebook -- illustrates the danger of one piece of
content slipping through the cracks.
Facebook has since redoubled its focus on far-right groups and
increasingly turned to targeted investigations by human analysts
instead of AI-based tools, company officials say. That tactic
resulted in the March takedown of the Northwest Front, a group that
advocated for a white ethnostate in the Pacific Northwest, and the
June removal of a network of accounts affiliated with the loosely
knit boogaloo movement.
Executives have pointed to Facebook's growing ability to remove
such content proactively as evidence of improvement, and a company
spokeswoman said Wednesday the Dangerous Organizations team has
increasingly focused on this area. Still, activists and advertisers
have renewed their criticisms of the company's approach to content
moderation amid a national dialogue about race following the
killing of George Floyd in Minneapolis police custody in May.
"We have made real progress over the years," Chief Operating
Officer Sheryl Sandberg said in a blog post responding to the civil
rights audit on Wednesday. "But this work is never finished and we
know what a big responsibility Facebook has to get better at
finding and removing hateful content."
--Steven Rosenbush contributed to this article.
Write to David Uberti at firstname.lastname@example.org
(END) Dow Jones Newswires
July 09, 2020 05:44 ET (09:44 GMT)
Copyright (c) 2020 Dow Jones & Company, Inc.