
Elon Musk disbands Twitter’s Trust and Safety advisory group formed to deal with hate speech


Elon Musk has dissolved a key advisory group at Twitter consisting of about 100 independent organisations that the company formed in 2016 to deal with hate speech, child abuse, and other harmful content on the platform.

Twitter’s Trust and Safety Council was expected to convene on Monday, but shortly before the scheduled meeting members were instead sent an email informing them that the council had been disbanded, the Associated Press reported.

In the email, Twitter reportedly said it was “reevaluating how best to bring external insights”, adding that the council is “not the most effective structure to do that”.

“Our work to make Twitter a safe, informative place will be moving faster and more aggressively than ever before and we will continue to welcome your ideas going forward about how to achieve this goal,” the email, signed “Twitter”, reportedly noted.

The council consisted of over 100 independent civil, human rights, and other organisations, and was formed in 2016 to help Twitter tackle harmful content on the microblogging platform such as hate speech, suicide, self-harm, and child exploitation.

After buying Twitter in October for $44bn and taking over as the company’s new boss, Mr Musk said he would be forming a content moderation council with “widely diverse viewpoints”, adding that major decisions and account reinstatements would not occur before this council convened.

However, the multibillionaire changed his mind, reinstating the accounts of several individuals who were previously banned from the platform, including that of former US president Donald Trump.

Several research groups have found that hate speech on Twitter has surged since Mr Musk’s takeover of the platform.

Earlier this month, the Center for Countering Digital Hate (CCDH) noted that “Mr Musk’s Twitter has become a safe space for hate”, observing that racial slurs and antisemitic and misogynistic tweets have increased since he took over the company.

The Network Contagion Research Institute (NCRI) also found that use of the N-word increased by nearly 500 per cent in the 12 hours immediately after Mr Musk’s deal to purchase Twitter was finalised.

Twitter’s new boss has also made several sweeping changes to the platform’s content moderation approach.

Following layoffs in which Twitter slashed its workforce from 7,500 to roughly 2,000 — cutting its entire human rights and machine learning ethics teams, as well as outsourced contract staff working on the platform’s safety — the company said it would rely more on artificial intelligence to moderate its content.

The team responsible for removing child sexual abuse content from Twitter has reportedly been cut in half since Mr Musk’s takeover, although he has claimed that removing child exploitation from the platform is his “priority 1”.

Another report, by Wired, noted that just one person remained on a “key team dedicated to removing child sexual abuse content from the site” in the entire Asia Pacific region, one of Twitter’s busiest markets.

Three members of the Trust and Safety Council – Eirliani Abdul Rahman, Anne Collier, and Lesley Podesta – resigned last week, saying that “contrary to claims by Elon Musk, the safety and wellbeing of Twitter’s users are on the decline”.

“The establishment of the Council represented Twitter’s commitment to move away from a US-centric approach to user safety, stronger collaboration across regions, and the importance of having deeply experienced people on the safety team,” they said in a joint statement.

“That last commitment is no longer evident, given Twitter’s recent statement that it will rely more heavily on automated content moderation,” the trio said.

The members added that Twitter’s new approach, relying on algorithmic systems, can only protect users from “ever-evolving abuse and hate speech” after significant detectable patterns have emerged.

“We fear a two-tiered Twitter: one for those who can pay and reap the benefits, and another for those who cannot. This, we fear, will take away the credibility of the system and the beauty of Twitter, the platform where anyone could be heard, regardless of the number of their followers,” they said.
