By Nat Ives
Major marketers, social-media giants and advertising-agency groups have formed a coalition to tackle hate speech, bullying and divisive fake content online.
The companies said the effort, called the Global Alliance for Responsible Media and announced during the ad industry's annual Cannes Lions festival, will develop specific steps to protect both people and brands from what marketers call "unsafe" content.
"We wanted to go from a position of chasing down breaches in a reactive way to a much more proactive dialogue and concrete steps that are going to drive industry change," said Rob Rakowitz, the head of global media at Mars Inc., one of the members.
Other participating companies include Procter & Gamble Co., General Mills Inc., Diageo PLC, Mastercard Inc., Facebook Inc., Twitter Inc., Alphabet Inc.'s Google, Omnicom Group Inc.'s Omnicom Media Group and WPP PLC's GroupM, as well as several trade associations. The alliance plans to hold its first official meeting in Cannes, France, on Wednesday.
Working together will be more efficient than the usual method of holding a series of uncoordinated meetings, said Carolyn Everson, vice president of global marketing solutions at Facebook.
"We're in different businesses but we have similar objectives," Ms. Everson said of the alliance members. "We want to create an ecosystem for advertisers that is healthy, that consumers feel really positive about -- that they feel safe and secure on the platforms, and feel good about the brands that support them."
Digital advertising will make up more than half of global ad sales for the first time this year, according to the latest forecast by ad-buying group Magna Global USA Inc., part of the Interpublic Group of Cos.
But social-media platforms have been tarred by repeated revelations that they are hosting political disinformation and malicious content. In one of the most recent examples, AT&T Inc., Clorox Co., Nestlé SA, McDonald's Corp. and "Fortnite" publisher Epic Games Inc. paused or halted their YouTube advertising in February following reports that viewers were making inappropriate comments on videos of young girls. YouTube later suspended comments on most videos that feature minors.
The platforms sometimes struggle to police the content users post on their sites, as was the case with video from a gun massacre earlier this year in New Zealand.
At other times, the platforms hesitate to police content, saying they worry about stifling free expression. And the policies vary from platform to platform.
That is part of what the alliance aims to address, said John Montgomery, global executive vice president of brand safety at GroupM. Facebook, Twitter and YouTube aren't likely to adopt identical policies on permissible speech, but the group could help develop a framework that makes it easier for all of them to act quickly in certain cases, he said.
Other steps could include developing systems that recognize policy violations more quickly and across platforms, and publishing a new metric to gauge progress in the effort, participants said.
But it is unclear whether a large, disparate group will make more progress than the players have made individually.
"We'll need to be judged by our actions and not our words," Ms. Everson said.
Write to Nat Ives at firstname.lastname@example.org
(END) Dow Jones Newswires
June 18, 2019 00:15 ET (04:15 GMT)
Copyright (c) 2019 Dow Jones & Company, Inc.