Twitter failed to pick up on 40 child porn images that had already been flagged as harmful over a two-month period, Stanford research group claims

  • The Stanford Internet Observatory analyzed 100,000 tweets over two months
  • They claim to have found 40 images that all appear on PhotoDNA – a database that identifies harmful content 

Twitter failed to pick up on 40 images of child sexual abuse over a two-month period this year, even though the images had already been flagged as harmful before appearing on the social media site, according to a new report by the Stanford Internet Observatory. 

The group, which monitors safety on the internet and specifically social media, found the images between March and May.

The photos were discovered among a trove of 100,000 tweets that were analyzed, and all of them already appeared in databases like PhotoDNA, which companies and organizations use to screen for such content, according to The Wall Street Journal. 

Twitter CEO Elon Musk has not commented on the claims in the new Stanford Internet Observatory report

‘This is one of the most basic things you can do to prevent CSAM online, and it did not seem to be working,’ David Thiel, of the Observatory, said.

Twitter has not yet commented on the claims. 

The Observatory and its researchers fear oversight of Twitter’s practices will become limited this year when the price of its application programming interface (API) is increased to $42,000 per month. Previously, access was free. 

Musk has taken aim at the group in the past, calling it a left-wing ‘propaganda machine’. 

The group had been involved in flagging what it considered to be disinformation to Twitter during the 2020 election. 

Its involvement in removing some tweets was exposed in the release of the Twitter Files, documents that were made public by Musk in an effort to show the public how biased the site was before he took over, and how intrinsically linked it was with government. 

The SIO report claims part of the problem is that Twitter allows some adult nudity, which makes it more difficult to identify and outlaw child sexual abuse materials. 

According to the report, researchers told Twitter bosses in April that there were problems with its systems for identifying harmful content, but nothing was done until late May. 

The Twitter findings were part of a larger project that the group says will become public later this week in The Wall Street Journal.  

In February, Twitter announced it had suspended 404,000 accounts that were harmful – an increase of 112 percent – in the month of January alone.  

DailyMail
