By Sam Schechner
Under intense political pressure to better block terrorist
propaganda on the internet, Facebook Inc. is leaning more on
artificial intelligence.
The social-media firm said Thursday that it has expanded its use
of A.I. in recent months to identify potential terrorist postings
and accounts on its platform -- and at times to delete or block
them without review by a human. In the past, Facebook and other
tech giants relied mostly on users and human moderators to identify
offensive content. Even when algorithms flagged content for
removal, these firms generally turned to humans to make a final
call.
Companies have sharply boosted the volume of content they have
removed in the past two years, but these efforts haven't proven
effective enough to tamp down a groundswell of criticism from
governments and advertisers. Those critics have accused Facebook, Google
parent Alphabet Inc. and others of complacency over the proliferation of
inappropriate content -- in particular, posts or videos deemed extremist
propaganda or communication -- on their social networks.
British Prime Minister Theresa May ratcheted up complaints this
month in the wake of a series of deadly terror attacks in the U.K.,
and sought new international agreements to regulate the internet
and force technology companies to preemptively filter content.
In response, Facebook disclosed new software that it says it is
using to better police its content. One tool, in use for several
months now, combs the site, including live videos, for known
terrorist imagery, like beheading videos, to stop them from being
reposted, executives said Thursday. The tool, however, doesn't identify
new violent videos, such as the Cleveland murder video posted on Facebook
in April.
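Facebook hasn't detailed how the matching works. A common industry
approach, and one plausible reading of such a tool, is to reduce each
image or video frame to a compact fingerprint and compare it against
fingerprints of content already removed. The sketch below, in Python,
uses a simple average hash and an invented threshold; the real system's
hashing scheme and database are not public.

    # Illustrative sketch of fingerprint matching against known extremist
    # imagery. The 8x8 average hash and the distance threshold are
    # assumptions for demonstration, not Facebook's actual system.
    from PIL import Image

    def average_hash(path, size=8):
        # Grayscale, shrink, and threshold each pixel against the mean,
        # yielding a 64-bit fingerprint that tolerates re-encoding and
        # resizing of the same underlying image.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > avg else 0)
        return bits

    def hamming_distance(a, b):
        # Number of differing bits between two fingerprints.
        return bin(a ^ b).count("1")

    # Hypothetical store of fingerprints from previously removed content.
    KNOWN_HASHES = set()

    def matches_known_imagery(path, max_distance=5):
        h = average_hash(path)
        return any(hamming_distance(h, k) <= max_distance
                   for k in KNOWN_HASHES)

Because matching works only against fingerprints already in the store, a
sketch like this also illustrates the limitation executives described: a
never-before-seen video produces a fingerprint that matches nothing.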
Another set of algorithms attempts to identify -- and sometimes
autonomously block -- propagandists trying to open new accounts after
they have already been kicked off the platform. A third, experimental
tool uses A.I. trained to identify language used by terrorist
propagandists.
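Facebook hasn't said how that language model is built. A minimal sketch
of the general technique -- training a text classifier on labeled
examples -- might look like the following; the scikit-learn pipeline and
the toy training set are assumptions for illustration only.

    # Minimal text-classification sketch, assuming scikit-learn and a
    # labeled corpus. Facebook's actual model, features and training
    # data are not public.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled examples: 1 = propagandist language, 0 = benign.
    texts = [
        "join the fight and glorify martyrdom",
        "a news report on yesterday's attack",
        "pledge allegiance and spread the caliphate's message",
        "recipe thread: how to bake sourdough",
    ]
    labels = [1, 0, 1, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(texts, labels)

    # Score a new post; downstream logic decides whether to block,
    # escalate to a human, or leave it alone.
    score = model.predict_proba(["spread the caliphate's message"])[0][1]
    print(round(score, 2))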
Facebook declined to say what portion of extremist material it
removes is being blocked or removed automatically, and what
percentage is reviewed by humans. The firm's moves reflect a growing
willingness to trust machines to help, even in part, with thorny tasks
like distinguishing inappropriate content from satire or news coverage
-- a step firms resisted as a potential threat to free speech after a
spate of attacks just two years ago.
One factor in the changed approach, Facebook executives say, has
been the improved ability of algorithms to identify unambiguously
terrorist content in some cases, while referring other content for
human review.
While an ISIS propaganda photo posted without a caption may be an easy
removal for an algorithm, the same image with a caption might, for
instance, require human review, said Monika Bickert, Facebook's head of
global policy management. Similarly, a beheading video that has
previously been removed is easy to block. Short clips of the same video,
or a never-before-seen but similar-looking video, might need a reviewer
to check whether they are part of a news report or other commentary.
"Our A.I. can know when it can make a definitive choice, and
when it can't make a definitive choice," said Brian Fishman, lead
policy manager for counterterrorism at Facebook. "That's something
new."
Another factor in the fresh A.I. push: a spate of recent
terrorist attacks and scandals involving ads being shown before
jihadist videos.
Just days before a general election in the U.K., for instance,
the campaigns for the country's two main parties pulled political
ads from Alphabet's YouTube video-sharing site after being alerted
those ads were appearing before extremist content.
Germany earlier this year proposed a bill that could fine firms
up to €50 million ($56 million) for failing to remove fake news
or hate speech -- including terrorist content. The U.K. and France
published a counterterrorism action plan this week that calls on
technology companies to go beyond deleting content that is flagged,
and instead identify it beforehand to prevent publication.
"There have been promises made. They are insufficient," said
French President Emmanuel Macron on Tuesday.
Facebook has expanded its use of human reviewers to look at what
executives say are difficult cases. In May, the company said it
would add about 3,000 new moderators to its community operations
team that takes down content that violates Facebook policies,
expanding the team by two thirds. Across the company, Facebook says
it has 150 people focused on counterterrorism as their core
job.
Facebook already has rolled out software to identify other
questionable content such as child pornography and fake news
stories. Ahead of French and German elections this year, the
company began tagging "disputed" stories when outside news
organizations ruled them false.
The issue of content removal remains at times fraught for
Silicon Valley companies, whose values often place a premium on
permitting debate. At times, firms have also acknowledged that
algorithms have gone too far. Last July, Facebook was criticized
for removing live video from Minnesota woman Diamond Reynolds, who
showed her boyfriend, Philando Castile, dying after being shot by a
police officer during a traffic stop. Facebook blamed the removal
on a technical glitch and restored the video.
Social-media firms including Facebook, Yahoo Inc. and Twitter
Inc. are adamant that they want to stamp out terrorism on their
platforms -- and already do a lot to remove such content. Twitter
says it is expanding its use of automated technology to combat
terrorist content, too. From July through December last year,
Twitter said internal tools flagged 74% of the 376,890 accounts it
removed.
YouTube said Thursday that it uses automated software to block
users from uploading videos that have already been flagged and
removed from the site, adding that more than half of the content
removed for terrorism in the last six months was removed at least
in part using such technology.
Along with Facebook, YouTube is collaborating with other social-media
firms on a shared database of previously identified terrorist imagery,
first announced in December, which allows the companies to more quickly
identify posts that use such images. But the company doesn't
use technology to screen new content for policy violations, saying
computers lack the nuance to determine the difference between
propaganda and newsworthy or religious speech in a previously
uncategorized video.
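The companies haven't published the shared database's design. One way to
picture the arrangement is as a common set of fingerprints that every
participant both contributes to and queries; the structure and values
below are assumptions for illustration.

    # Sketch of cross-company hash sharing: each firm contributes
    # fingerprints of content it removed, and all firms check uploads
    # against the union. Names and values are hypothetical.
    shared_db = set()

    def contribute(fingerprints):
        # A participating firm publishes fingerprints of removed content.
        shared_db.update(fingerprints)

    def seen_before(fingerprint):
        # Any firm can check a new upload against everyone's removals.
        return fingerprint in shared_db

    contribute({0xA1B2, 0xC3D4})   # e.g., from one platform's takedowns
    contribute({0xE5F6})           # e.g., from another's
    print(seen_before(0xC3D4))     # True: already flagged by a participant

As YouTube's stance illustrates, such a shared set speeds the blocking of
reposts but says nothing about content no participant has seen before.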
"These are complicated and challenging problems, but we are
committed to doing better and being part of a lasting solution," a
YouTube spokesman said.
--Jack Nicas contributed to this article.
Write to Sam Schechner at sam.schechner@wsj.com
(END) Dow Jones Newswires
June 15, 2017 16:12 ET (20:12 GMT)
Copyright (c) 2017 Dow Jones & Company, Inc.