Parler’s return shows that the conversation about content moderation needs to change.

On Tuesday, the House committee investigating the Jan. 6 attack on the United States Capitol, an attack that was broadcast live, held its first hearing.

As the country scrambled to figure out what was going on on Capitol Hill, those who stormed the building uploaded selfies, live videos, and real-time status updates to a handful of popular social media platforms, including the relatively new but growing platform Parler.

The rally and march that led to crowds fighting the Capitol Police, scaling the walls of the Capitol, and ultimately entering the halls of Congress was the culmination of a months-long #StopTheSteal movement that unfolded across the Internet, from social media platforms to event planning and ticketing websites, crowdfunding campaigns, and independent message boards. And while it was clear to many journalists, researchers, and community organizers that major platforms like Facebook and Twitter were the primary social media hubs for #StopTheSteal content and organization, the sheer volume of evidence uploaded to Parler on Jan. 6 shone a spotlight on the lesser-known platform in the hours and days immediately following the attack. In less than 48 hours, the Parler app had been suspended by both the Google Play Store and the Apple App Store. It wasn’t long before Amazon Web Services, Parler’s web host, followed suit, forcing the platform’s operational backend offline and making it inaccessible to users.

Major social media platforms have faced increasing pressure over their content moderation policies and practices from users, activists, and, increasingly, members of Congress. In 2020, a wave of election-related and pandemic-related disinformation supercharged calls for more transparency and accountability from popular platforms like Facebook, Twitter, and YouTube. But in debates about moderating internet content, policymakers continue to exclude, or at least relegate to the margins, the rest of the online ecosystem, from payment processors to website hosts to mobile app stores, that is involved in the creation, storage, and delivery of content. Parler’s brief exile highlights how businesses across the internet ecosystem not only can, but frequently do, make critical content moderation decisions, often risking collateral and disproportionate effects while failing to meaningfully stem the spread of harmful content online.

Under the hood of every website, every online article, every viral video is an entire ecosystem of companies and organizations that allow us to access that content through an Internet-connected device. As we detail in a new report, written for the Tech, Law, & Security program at the Washington College of Law, these actors all play necessary functional roles in the distribution of content online. Their functions may overlap, with some businesses falling into more than one functional category, and some functions are frequently bundled and sold as a single service. Some of these players are household names, like Amazon and Google. Many others are not. Yet without them, the Internet as we know it would not work.

There are companies that provide access, including Internet service providers like AT&T and Comcast, and virtual private networks, connecting devices to the online world. Other players route users’ Internet traffic to the content they are looking for once these connections are established; this includes the registries and registrars that operate the Internet’s domain name system (something akin to a phone book), as well as content delivery networks like Cloudflare and Akamai that allow websites and platforms to operate at scale as their user bases grow. Web hosts and content delivery networks provide a place where content can sit securely. Meanwhile, there are plenty of other companies providing or enabling features like browsing, financial facilitation (think PayPal or Alipay), and search (like Google or DuckDuckGo).
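To make the “phone book” role concrete, here is a minimal sketch in Python (standard library only) of the lookup step that happens before any content is fetched: translating a domain name into the network addresses of the servers, often a content delivery network’s edge nodes, that actually serve the content. The domain example.com is just a placeholder reserved for documentation, not a claim about any particular site.

```python
import socket

def resolve(domain: str) -> list[str]:
    """Translate a domain name into the IP addresses it currently points to.

    This is the "phone book" step: before a browser can fetch any content,
    the domain name system has to map a human-readable name to the network
    addresses of the servers (often a CDN's edge nodes) that serve it.
    """
    # getaddrinfo asks the system resolver, which in turn queries the DNS
    # hierarchy operated by registries and registrars.
    results = socket.getaddrinfo(domain, 443, proto=socket.IPPROTO_TCP)
    return sorted({entry[4][0] for entry in results})

if __name__ == "__main__":
    # example.com is a placeholder domain reserved for documentation.
    for address in resolve("example.com"):
        print(address)
```

If a registry or registrar stops serving a domain, this lookup simply fails, which is why decisions made at that layer can render content unreachable no matter where it is hosted.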

These categories are important because many of these players are already involved in moderating online content, even if their history of doing so is only sporadically made public. On August 3, 2019, a gunman entered a Walmart in El Paso, Texas, and opened fire, killing 22 people and injuring two dozen others. The suspect was quickly linked to a “long and hateful rant” posted on the notorious 8chan message board, prompting one of 8chan’s major service providers, Cloudflare, to effectively boot it off the Internet. A few months earlier, following the Christchurch terror attack in New Zealand, internet service providers in Australia and New Zealand had temporarily blocked access to 8chan nationwide to prevent viewing of the attack video streamed live by the shooter (although larger platforms like Facebook were allowed to stay online despite hosting the initial livestream and hundreds of thousands of copies).

PayPal froze accounts affiliated with WikiLeaks in 2010; self-proclaimed men’s rights activists have pushed PayPal to freeze the accounts of sex workers and other creators of adult content, who also face continued censorship on social platforms. These are hardly the only money-related content moderation incidents: In April 2020, registries and registrars worked with the US Department of Justice to disrupt hundreds of COVID-19 scam websites, which tried to steal visitors’ money with false information; UK domain registry Nominet did the same on its own, filtering COVID-19-related websites by default when they were registered with the company and refusing to serve those it deemed illegitimate. In December, Mastercard and Visa cut payment services to Pornhub after a New York Times article alleged that the website hosted volumes of child pornography.

Internet service providers may block or throttle (slow down) access to particular websites; browsers can filter certain types of content. Domain registrars may stop serving particular domains, app stores may remove applications from their marketplaces, and hosting services may decline to support particular websites, content, or individual actors. All of these actors can exercise a range of controls over the creation, storage, and delivery of online content. But they rarely have clear policies and frameworks in place to make those decisions, in large part because their ability to exert that influence is often overlooked in discussions about moderating online content.

Overall, Facebook, Twitter, and the other major social media platforms have a disproportionate influence on the distribution and amplification of harmful content online, but they don’t represent the whole Internet. Shaping the conversation about content moderation around the biggest players in the ecosystem ignores the range of other players, sometimes smaller, sometimes dominant, who also hold levers of control over the availability of online content, whether they make headlines or not. The continued failure to include these actors in conversations about when and how content can and should be regulated online leaves the private companies that control large parts of the Internet free to keep making opaque, ad hoc decisions about what content should or shouldn’t stay online: decisions that typically leave users with limited remedies while making efforts to effectively tackle harmful online content incomplete by default.

Parler, meanwhile, has already found a new host.

You can also find it in the Apple App Store.

Future Tense is a partnership between Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.

