Amazon Web Services to proactively remove more content that violates its rules

Amazon plans to take a more proactive approach to determining which types of content violate its cloud service policies, such as rules against promoting violence, and to enforce their removal, according to two sources, a move likely to renew debate over how much power technology companies should have to restrict free speech.

In the coming months, Amazon will hire a small group of people in its Amazon Web Services (AWS) division to build expertise and work with outside researchers to monitor future threats, said one of the sources familiar with the matter.

That could turn Amazon, the world’s leading provider of cloud services with a 40% market share according to research firm Gartner, into one of the world’s most powerful arbiters of what content is allowed on the Internet, experts say.

Amazon made headlines in the Washington Post last week for shutting down an AWS-hosted website that featured Islamic State propaganda celebrating the suicide bombing that killed about 170 Afghans and 13 American soldiers in Kabul last Thursday. Amazon took the site down only after the news organization contacted the company, according to the Post.

The proactive approach to content comes after Amazon kicked social media app Parler off its cloud service in the wake of the Jan. 6 Capitol riot for allowing content that promoted violence.

“AWS Trust & Safety works to protect AWS customers, partners and Internet users from malicious individuals who attempt to use our services for abusive or illegal purposes. When AWS Trust & Safety is made aware of abusive or illegal behavior on AWS services, they act quickly to investigate and engage customers to take appropriate action,” AWS said in a statement.

“AWS Trust & Safety does not pre-review the content hosted by our customers. As AWS continues to expand, we expect this team to continue to grow,” the company added.

Activists and human rights groups are increasingly holding not just websites and apps responsible for harmful content, but also the underlying technology infrastructure that allows those sites to operate, while political conservatives condemn such moves as restrictions on free speech.

Under its acceptable use policy, AWS already prohibits its services from being used in various ways, such as for illegal or fraudulent activity, to incite or threaten violence, or to promote child sexual exploitation and abuse.

Amazon first asks customers to remove content that violates its policies or to put a system in place to moderate it. If Amazon cannot reach an acceptable agreement with the customer, it can take the site down.

Amazon plans to develop an approach to the content issues it and other cloud providers face most often, such as determining when misinformation on a company’s website reaches a scale that requires action by AWS, the source said.

The new AWS team doesn’t plan to sift through the vast amounts of content companies host in the cloud, but will aim to anticipate future threats, such as emerging extremist groups whose content could make its way onto AWS’s cloud service, the source added.

Amazon is currently hiring a global head of policy for the AWS Trust & Safety team, a role responsible for “protecting AWS from a wide variety of abuses,” according to a job posting on its website.

AWS offerings include cloud storage and virtual servers, and it counts large companies like Netflix, Coca-Cola and Capital One as customers, according to its website.

Proactive moves

Better preparation against certain types of content can help Amazon avoid legal and public relations risks.

“If (Amazon) can proactively get some of this stuff before it’s discovered and it becomes big news, there’s value in avoiding this reputational damage,” said Melissa Ryan, founder of CARD Strategies, a consulting firm that helps organizations understand extremism and online toxicity threats.

Cloud services such as AWS and other entities such as domain registrars are considered the “backbone of the Internet,” but they have traditionally been politically neutral services, according to a 2019 report by Joan Donovan, a Harvard researcher who studies online extremism and misinformation campaigns.

But cloud service providers have removed content before, as in the wake of the 2017 alt-right rally in Charlottesville, Va., helping to slow down the alt-right groups’ ability to organize, Donovan wrote.

“Most of these companies, understandably, didn’t want to get into content and didn’t want to be the arbiter of thought,” Ryan said. “But when you’re talking about hate and extremism, you have to take a stand.”

© Thomson Reuters 2021

