
Twitter Needs To Do Something About Its Child Porn Problem


Last Thursday, Canadian officials announced the bust of an enormous child pornography operation that has led to the arrest of 300 people, 76 of them in the United States. Among those arrested in the investigation, nicknamed “Project Spade,” were doctors, schoolteachers, priests, and actual parents. It’s terrifying. The very idea of child pornography makes your soul hurt, and I think the idea of any child being victimized and hurt in this way makes almost every human feel a combination of extreme anger, intense sadness, and revulsion. I know it does me.

When cases like this happen, we like to think that maybe the world is just a little bit safer for kids, and according to an article on RT.com, 386 children around the world who were identified as being at risk were rescued. That’s good news. But it’s not nearly enough, because child pornography doesn’t just exist in the shadowy recesses of the internet — it’s also available in plain sight to anyone with a computer and a Twitter account.

Twitter has come under attack recently because it contributes very little money to combating child pornography. According to The Independent:

As Twitter prepares for an estimated $1.1 billion flotation on the stock market, the company is dragging its feet over raising its contribution to combatting online child abuse and pornography – from just £5,000.

The company is a member of the Internet Watch Foundation (IWF), an organisation backed by Prime Minister David Cameron to work in collaboration with internet service providers to clear up the murky and chaotic world of online abuse.

And while some other tech companies have pledged up to £1 million in support to the industry-funded charity, Twitter insists it shouldn’t have to pay any more than the tiny sum it already does because it isn’t turning a profit.

According to an article on NPR, Twitter uses a Microsoft system called PhotoDNA to scan every photograph uploaded to its site. Previously identified images of child abuse receive a unique signature before being entered into a database; if one of those photographs then appears on the internet, it is instantly flagged and removed.

Before a photograph can be matched in the first place, it has to be reviewed by an actual human. The people who perform this sort of work are content moderators employed by tech companies, and I’m pretty positive it must be one of the most horrific jobs anyone can go into. But it’s not enough.
