Let’s start cleaning the Internet

In January, Meta will take over as chair of the board of the Global Internet Forum to Counter Terrorism (GIFCT), a cross-industry counterterrorism organization. The upcoming appointment has pushed Meta’s content moderation efforts into overdrive, as the company works to live up to the position.

As one of GIFCT’s founding members, Meta will share data with other companies to help keep the internet free of violent imagery, terrorist content and human trafficking.


Meta content moderation to counter terrorism

Recently, Meta’s growth has been weighed down by inflation and lawsuits, as governments question its content moderation and data policies.

As part of Meta’s effort to keep people safe from harmful content, the company is launching a new free tool to help platforms identify and remove violent content.

Meta’s Hasher-Matcher-Actioner (HMA) will be a free and open-source content moderation tool “that will help platforms identify copies of images or videos and take mass action against them,” said Nick Clegg, Meta’s president of global affairs, in a release.
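In broad strokes, a hash-and-match pipeline works in three stages: fingerprint incoming content, compare the fingerprint against a shared bank of hashes of known violating material, and act on any match. The sketch below is illustrative only; the function names and the plain MD5 digest are assumptions for the example, not HMA’s actual API (in practice, tools like HMA also use perceptual hashes such as PDQ so that altered copies still match).

```python
# Illustrative hasher -> matcher -> actioner flow. The names and the use
# of MD5 are assumptions for this sketch, not Meta's actual HMA API.
import hashlib

# Shared bank of digests of content previously labeled as violating
# (in a GIFCT-style deployment, hashes contributed by member companies).
KNOWN_HARMFUL_HASHES: set[str] = set()

def hasher(content: bytes) -> str:
    """Hash stage: fingerprint the uploaded bytes."""
    return hashlib.md5(content).hexdigest()

def matcher(digest: str) -> bool:
    """Match stage: check the fingerprint against the shared bank."""
    return digest in KNOWN_HARMFUL_HASHES

def actioner(content_id: str) -> None:
    """Action stage: e.g. take the post down and queue it for review."""
    print(f"removing {content_id} and queuing it for human review")

def moderate(content_id: str, content: bytes) -> None:
    """Run one piece of uploaded content through all three stages."""
    if matcher(hasher(content)):
        actioner(content_id)
```

A key design point of hash sharing is that only the fingerprints travel between companies, never the underlying media, so members can pool signals without redistributing the harmful content itself.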

HMA will be adopted by various companies to stop the spread of terrorist content on their platforms. It will be especially useful for small organizations that lack the resources of larger companies.

It is a valuable tool for companies that lack the in-house capability to moderate content at high volume. GIFCT member companies will use HMA to monitor their networks and keep their platforms free of harmful and exploitative content.
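Exact digests such as MD5 only catch byte-identical files; a re-encoded or lightly cropped copy changes every byte. That is why image-matching systems of this kind typically rely on perceptual hashes (PDQ, the algorithm Meta open-sourced, produces 256-bit hashes) compared by Hamming distance. A minimal sketch, with an illustrative threshold that is not an official value:

```python
# Minimal near-duplicate matching over perceptual hashes, assuming
# 256-bit hashes like PDQ's. The threshold of 31 bits is illustrative
# only, not an official setting. Requires Python 3.10+ for bit_count().
def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return (a ^ b).bit_count()

def is_near_duplicate(candidate: int, bank: list[int], threshold: int = 31) -> bool:
    """Visually similar images hash to nearby bit strings, so a small
    Hamming distance signals a likely copy even after resizing or
    re-encoding."""
    return any(hamming_distance(candidate, h) <= threshold for h in bank)
```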

Meta is estimated to have spent over $5 billion globally on safety and security in 2021, and it has more than 40,000 people working in that area.

Meta’s content moderation push targets terrorist content as part of a larger plan to protect users from harmful material. The California-based tech giant also uses artificial intelligence to flag and remove harmful content.
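The article doesn’t detail how that AI is applied, but automated moderation commonly scores content with a classifier and routes it by confidence: clear-cut cases are removed automatically, borderline ones go to human reviewers. The function and thresholds below are hypothetical, not Meta’s actual models or values:

```python
# Hypothetical confidence-based routing for an AI moderation classifier.
# The thresholds are illustrative, not Meta's actual policy values.
def route_by_score(score: float) -> str:
    """Map a classifier's harm score in [0.0, 1.0] to a moderation action."""
    if score >= 0.95:   # high confidence: remove automatically
        return "auto-remove"
    if score >= 0.60:   # uncertain: escalate to a human reviewer
        return "human-review"
    return "allow"      # low score: leave the content up
```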

The company also revealed that its content moderation tools have significantly reduced the visibility of hate speech, and it routinely blocks fake accounts to contain the spread of misinformation.

Matthew Schmidt, associate professor of national security, international affairs and political science at the University of New Haven, told ABC News that most of the planning of terrorist events or human trafficking happens on the dark web.

Schmidt said open-source software is key to stopping these bad actors from wreaking havoc on society because it limits their reach. He also noted that most content moderation efforts come from private companies rather than the government.

Content moderation rules

On September 13, 2022, California enacted a sweeping Social Media Transparency Act (AB 587) that requires social media companies to post their terms of service and submit semi-annual reports to the California Attorney General’s office.

The legislation applies to social media companies with gross revenues exceeding $100 million in the preceding year. The law does not prescribe whether or how social media companies should moderate content.

For now, it simply requires social media companies to submit their current terms of service and semi-annual content moderation reports to the AG’s office.

Content moderation and data privacy issues have been hotly debated in recent years. Both federal and state agencies have attempted to introduce policies that safeguard users while curbing hate speech.

Earlier, Florida and Texas passed content moderation laws of their own, hoping to bring some order to what is shared on the internet. The Florida law limited the ability of internet services to moderate content and imposed mandatory disclosure requirements.

The Texas law, for its part, prohibits social media platforms from “censor[ing]” users or content based on the user’s viewpoint or geographic location in the state. It does not prevent companies from moderating illegal expression or specific discriminatory threats of violence.

As nations wake up to the power of online platforms, social media companies are finding themselves under growing pressure to toughen their own rules so they don’t indirectly enable illegal activity.