Magazine article: Editor & Publisher

How 'Platforms as Publishers' Could Threaten Journalistic Ethics

Article excerpt

News organizations have gotten themselves into a tough spot. After years of neglecting page load times, they now face social platforms that either host articles directly (Facebook Instant Articles) or dictate technical standards for how story elements are structured (Google AMP). The result is largely the same: news organizations that adopt these platforms see improvements in load times, but they lose control over how they distribute and present their journalism.

Although performance is an area that the news industry should dedicate more resources to, relying on these platforms to solve that problem raises ethical questions.

This issue is most visible with interactive stories: articles that ask readers questions about themselves and use that data to personalize the resulting narrative. If platforms host these articles, they could capture reader responses and add that data to an advertising profile or sell it to a third party. If a news organization knows, or even suspects, that this is possible, it could chill the newsroom's output of these types of stories, which raises questions of press freedom.

For example, The New York Times ran an interesting article on jury selection that asked readers questions about their personal values and then showed how those views might affect whether they'd be stricken from a jury. The story specifically states at the top, "Your responses will not be stored." But if this story ran within, for instance, a future Facebook Instant Articles quiz component, what guarantees would news organizations and readers have that Facebook would honor that promise?

I've worked on stories that ask about readers' experiences with abortion and seen others that ask about trust levels with law enforcement. These are worthwhile pieces of journalism and a format that we should keep experimenting with. But with platforms hosting the story code, could answers such as "I am pro-life" or "I don't trust the police" be sold to a political campaign, or collected by law enforcement and added to an individual's predictive "threat score" in a jurisdiction that uses them?

Red flags should go up any time one creates structured data around what people believe or value.

Reader answers in these stories are not guaranteed to be truthful, either. Readers could click a button that doesn't reflect their views simply because they're curious to see how the interactive reacts. It would be hard, if not impossible, for a computer or a law enforcement agent to tell the difference, however. …
