Artificial Intelligence Combines with Human Intelligence to Stop Child Sexual Abuse
Written by Chris Priebe   

Technology companies work with law enforcement to raise the bar for safety in social communities and prevent abuse

THE ORIGINAL IDEA FOR THE INTERNET was to create a universal connection between people, offering a way to access and share information. Though its inception was partially driven by scientific needs, it ultimately became a way for everyday people to participate in global conversations. Unfortunately, it hasn’t worked out only as intended; exposure to vast networks comes with pitfalls. The plethora of connections in our lives is also a set of potentially invasive threats. The same connectivity and curiosity that opened new worlds to people is now routinely used to hurt them. Children are especially vulnerable.

 

Mobile devices and social media are often the tools of choice for online predators who create and share child sexual abuse material (CSAM). These technologies were not built for nefarious purposes; nor were Polaroid cameras, 8mm film, or the internet itself. Yet each was co-opted by predators who saw an opportunity to use technology to exchange images of child sexual abuse more easily while evading detection. The impact of this shift onto the internet has been massive.

According to the CyberTipline, America’s centralized reporting system for suspected child sexual exploitation, the most common goal (60%) of online child sexual offenders is to find sexually explicit images of children. To this breed of criminal, CSAM is at once a prized possession, an exchangeable commodity, and a means to abuse and control victims.

“The numbers are only going up, so we need to be handling these cases in a much smarter way.”
—Sgt. Arnold Guerin, Canadian Police Centre for Missing and Exploited Children

The RCMP-led National Child Exploitation Coordination Centre (NCECC) received 27,000 cases in 2016. That’s up from 14,000 cases in 2015 and 10,000 cases in 2014, nearly tripling in just two years. Within six weeks of its launch in 2017, Project Arachnid—a digital web-crawler launched by the Canadian Centre for Child Protection—had detected more than five million instances of CSAM. After a year, Arachnid had processed over 1.1 billion webpages and sent more than 238,000 notices requesting removal of CSAM.

With the rapidly increasing numbers of platforms being created, along with the growing technical savvy of online abusers, the task of monitoring and shielding users from CSAM seems at best daunting and at worst impossible. The pressure on those in law enforcement who investigate cases involving CSAM is immense, and the urgency of trying to identify and catch predators before they can cause harm grows each moment.


Rampant online harassment, an alarming rise in child sexual abuse imagery, urgent user reports that go unheard—it’s all adding up. Now that more than half of Earth’s population is online (over 4 billion people as of 2018), we’re finally starting to see an appetite to clean up the internet and create safe spaces for all users.

For those tasked with this difficult job, there is a simultaneous and growing threat to their own mental health. Investigators are, after all, human beings. Hardwired for empathy and compassion, we aren’t built for processing endless images of horror and abuse. The volume of content is simply overwhelming; individual cases can contain thousands or even millions of images. There is not enough time to vet all of it, and the inefficient process involved takes time away from identifying and saving victims. But we still have to stop the bad guys. So, what can be done?

The answer is to augment current police systems with artificial intelligence (AI) tools that empower investigators to become more efficient.

Simply put: new software sifts online imagery, quickly flagging the dangerous content and filtering out the rest, leaving police free to focus their investigative resources on stopping crime and catching more predators. The technology, called CEASE.ai, is the product of collaboration between law enforcement, university researchers, and private industry, including the operators of global online communities.

“Removing child abuse material from the internet and protecting kids is a responsibility we all share, regardless of public or private sector. Collaboration, multi-stakeholder engagement, and investment in technological solutions and prevention strategies are central to expediting identification of victims, erasing material and eradicating CSAM online.”
—Julie Inman-Grant, eSafety Commissioner in Australia

With the advent of CEASE.ai, this sense of shared responsibility is effectively converted into the ability to deliver what we have all been seeking: greater, more reliable protection than we have been able to achieve in the past.

Moving Out of the Rearview Mirror

Canada’s National Strategy for the Protection of Children from Sexual Exploitation on the Internet was launched in 2004 as a response to growth in online child sexual exploitation. The National Strategy is a horizontal initiative led by Public Safety Canada that brings together the RCMP, the Department of Justice and the Canadian Centre for Child Protection as partners with technology innovators from the private sector.

Until recently, the technology that police used to fight back against online predators had them mostly looking in the rearview mirror. By converting known images of abuse into a hash code, which acts like a unique fingerprint for each image, software can identify and flag that image anywhere in the world, whether it is stored online or on a hard drive. This helps both online communities and investigators prevent known images of abuse from being shared, which is an excellent start.
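To make the idea concrete, here is a minimal sketch of hash-based matching in Python. The hash list, file paths, and function names are illustrative assumptions, not part of any police system; production tools also rely on robust perceptual hashes such as PhotoDNA, which tolerate resizing and re-encoding in a way that plain cryptographic hashes do not.

```python
import hashlib
from pathlib import Path

# Hypothetical list of digests for known abuse images, as distributed
# through hash-sharing programs. The entry below is a placeholder.
KNOWN_HASHES = {
    "<digest of a known image>",
}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def flag_known_images(image_dir: Path) -> list[Path]:
    """Return every file under image_dir whose digest is on the known list."""
    return [
        p for p in image_dir.rglob("*")
        if p.is_file() and sha256_of_file(p) in KNOWN_HASHES
    ]
```

Matching a digest against a shared list is fast and exact, which is precisely why it works so well for known material and not at all for material no one has catalogued yet.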

But it is only a start.

If a new image of abuse is uploaded or shared, existing tools can’t recognize it. If a predator takes an offensive picture with their phone and immediately shares it online, neither the police nor the social network operator knows there is a problem. If someone decides to live stream the abuse of a child, no alarm may sound. In short, image hashing alone cannot prevent immediate or imminent threats to safety because it cannot recognize CSAM it has never seen before.

To bridge the gap, the RCMP are turning to AI to help investigators identify and rescue victims of child abuse more quickly, and reduce investigators’ workloads.

An Antivirus for CSAM

Built by Two Hat Security in conjunction with university researchers, CEASE.ai was created to function as the internet’s version of antivirus software for CSAM—heading off images of child sexual abuse before they ever make their way online.

An artificial intelligence model, CEASE.ai mimics human vision. For investigators, this computer “vision” can act as a shield, sifting through material by using algorithms to scan unknown photos and pick out those that have a high probability of being associated with child exploitation.

Because CEASE.ai is trained on real child sexual abuse material, the job of filtering, sorting, and removing non-CSAM can be offloaded to machines, allowing investigators to focus their efforts. Functionally, investigators simply upload case images, run hash lists to eliminate known material, then let the AI identify, suggest a label for, and prioritize images that contain previously uncatalogued CSAM. These improved tools help investigators reach victims faster.
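A rough sketch of that workflow, in Python, might look like the following. The classifier call, threshold, and label names are assumptions for illustration only; they are not the actual CEASE.ai interface.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class TriagedImage:
    path: Path
    score: float          # model's estimated probability that the image is CSAM
    suggested_label: str

def triage_case_images(image_paths, known_hashes, model, threshold=0.8):
    """Hypothetical triage loop mirroring the workflow described above:
    remove known material via hash lists first, then let a classifier
    score, label, and prioritize whatever remains."""
    results = []
    for path in image_paths:
        if sha256_of_file(path) in known_hashes:   # known material: handled by the hash workflow
            continue
        score = model.predict_proba(path)          # placeholder call to an image classifier
        label = "suspected new CSAM" if score >= threshold else "likely benign"
        results.append(TriagedImage(path, score, label))
    # Highest-risk, previously uncatalogued images surface first for review.
    return sorted(results, key=lambda item: item.score, reverse=True)
```

The point of the sketch is the ordering of steps: exact hash matching disposes of the known material cheaply, and the model's scores decide what a human looks at first.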

“If we seize a hard drive that has 28 million photos, investigators need to go through all of them, but how many are related to children? Can we narrow it down? That's where this project comes in. We can train the algorithm to recognize child exploitation.”
—Sgt. Arnold Guerin, Canadian Police Centre for Missing and Exploited Children

The AI tools can also help to prioritize cases, according to Sgt. Dawn Morris-Little, an investigator at the RCMP-led National Child Exploitation Coordination Centre (NCECC). "For every single one of our files, there's a child at the end of it," she said. “The algorithms analyze new images of abuse for investigators and prioritize anything that looks homemade or anything that indicates a child could be in immediate risk. Before, this was impossible.”

What started as a partnership with law enforcement has expanded to working alongside social networks to identify and label uploaded images containing child abuse. This cooperation is an essential component of the master strategy behind CEASE.ai. Why? In social-networking communities, where anonymity reigns, where social consequences are few, and where there may be many millions of daily visitors, the impact of those with bad intentions is greatly amplified.

A Pervasive Social Media Disease

Most social networks and online communities would love it if CEASE.ai or any other solution could prevent even one more image of child sexual abuse from ever reaching their platform. Whether it be a professional or student network, or a messaging app, they all want to continually improve the user’s experience of their product and provide a secure, pleasant environment. While our motivations for solving the problem may be somewhat different, police and social networks are of one mind when it comes to the mission of wiping CSAM off the internet.

In an ideal world, law enforcement and social networks would work together, leverage the CEASE.ai platform and protect vulnerable users in all environments and communities.

It is worth noting that we are not only discussing a small handful of well-known social platforms and online communities. Every online community can be vulnerable, from major social media to student and professional networks, user communities of popular games and brands, and anywhere else people gather online. Each of these communities wants to provide an enjoyable and harassment-free experience to its users. Unfortunately, child sexual predators have become adept at navigating and preying on online communities to achieve their goals.

According to a 2017 report from the National Center for Missing and Exploited Children, offenders’ goals strongly determine the type and number of social platforms and online tools they use to achieve them: “When offenders were trying to acquire sexually explicit images of children, they commonly tried to move the communication to platforms where they could more easily evade detection, such as anonymous messaging apps, text messaging or livestream sites/apps.”

This makes Facebook, LinkedIn, universities, game makers, and thousands of other digital communities potential partners with the police in detecting and removing images of child sexual abuse. So, too, are the dozens—potentially hundreds—of private business process outsourcing (BPO) companies, a.k.a. call centers, where Facebook and others outsource much of their content moderation work.


Social networks today are undergoing a massive paradigm shift when it comes to content moderation.

How the Private Sector Wants to Deal with CSAM

The civilian equivalent of an investigator who reviews potentially objectionable content is called a content moderator. Just like investigators, moderators may review hundreds or thousands of images of CSAM each day.

As of this writing, Facebook relies on various BPO providers for some 15,000 contract staff in the U.S. alone to act as content moderators. These content moderators are not necessarily well-trained or qualified for the work, and make little more than minimum wage, but the fundamentals of their jobs are similar to those of investigators called on to review CSAM. For online communities, managing CSAM is a pure cost center: it’s wildly expensive, an operational headache, and a PR nightmare.

Content moderators spend their work time combing through text, images and video that users have uploaded to Facebook to see if they meet community guidelines, accepting or rejecting submissions based on a large set of criteria. It’s not unlike law enforcement’s approach to categorizing objectionable material. Images of violence, murder, bestiality and child sexual abuse are all too common. Job-related stress is rampant, burnout is high and staff turnover is a huge problem.

Why Should This Matter to Police?

Because Facebook spends millions of dollars every day on content moderation. Perhaps the work is happening domestically now, but the nature of the BPO industry is to move volume operations to areas with lower-cost labor.

It’s not just the expense of content moderation that online communities deplore, though; it’s also the risk to which CSAM exposes them. In countries like Germany, social networks that fail to remove hate speech or CSAM within 24 hours can face fines of up to €50 million.

However, the threat of enormous punitive measures doesn’t really do anything to solve the problem if there isn’t a way for these networks to actually handle the growing river of CSAM running through their content moderation organizations. Even the most advanced networks need technology to deal with the sheer amount of content they oversee, and police have needed breakthroughs in this area for quite some time.

CSAM and abusive content are parasites that social platforms and online communities wish to—and are often compelled to—eradicate. New technology is the pathway to an effective solution that lets all parties achieve their shared goals.

Police aren’t motivated by profit and loss, but they do share a common objective with these communities: preventing images of child sexual abuse from ever getting online. There are many potential allies who are available to assist police in this fight. They come from all across society: parents who don’t want their kids exposed to CSAM, schools that don’t want it near their computer systems, and governments that want to protect and build society. Everyone is invested in this issue whether they’re aware of it or not.

The challenge for investigators is to find innovative ways to work with AI and other technology across traditional boundaries and in recognition of their own human limitations.

Artificial Intelligence Needs Human Guidance


Artificial intelligence has advanced to a point where it can handle incredibly high volumes of content while maintaining precision and accuracy. Combined with human eyes to make the difficult, nuanced decisions that machines alone can’t yet make, we’re better positioned to protect online communities.

From a technological standpoint, the way to address the problem of online CSAM is to build and train computers to do what they do well—process billions of units of information and look for patterns—and let human investigators do what they do well—provide empathy, think in non-linear ways and make critical and timely decisions.
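One way to picture that division of labor is a simple routing rule, sketched below. The thresholds and field names are assumptions carried over from the earlier triage sketch, not anything CEASE.ai or the RCMP publishes.

```python
def route_for_review(scored_images, clear_below=0.2, escalate_above=0.9):
    """Illustrative human-in-the-loop routing: the machine absorbs the volume,
    people make the judgment calls. Confidently benign images are cleared
    automatically, confident hits are escalated to an investigator right away,
    and everything uncertain lands in a human review queue ordered by risk."""
    cleared, escalated, review_queue = [], [], []
    for item in scored_images:                 # e.g., TriagedImage objects from the earlier sketch
        if item.score < clear_below:
            cleared.append(item)
        elif item.score >= escalate_above:
            escalated.append(item)
        else:
            review_queue.append(item)
    review_queue.sort(key=lambda item: item.score, reverse=True)
    return cleared, escalated, review_queue
```

The thresholds are where human judgment re-enters the loop: set them too loosely and investigators drown in volume again; set them too tightly and the machine makes calls it is not qualified to make.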

The future of AI in policing CSAM depends on humans training AI models to sniff out CSAM the same way we train dogs to sniff out drugs. AI must learn from us how to do the job better than we can, faster than we ever could, and alert us to evidence we might otherwise miss. AI needs us to guide it; in fact, it thrives on interaction with us. It’s a symbiotic relationship.

“Anything that takes the human element out of cases is going to reduce the risk of mental health injury to an investigator.”
—Sgt. Dawn Morris-Little, Investigator at the RCMP-led NCECC

There will always be some of us called upon to examine the worst of what humanity has to offer.

That’s the job that needs to be done, but damaging the mental health of investigators, content moderators, or anyone else involved in the fight against online CSAM cannot be an acceptable cost of doing it.

It is essential that the technology law enforcement uses to fight CSAM be built to include the perspectives of all the allies who share this common mission. This issue affects everyone, from the children who are the victims, to the agents who investigate these horrific cases, to the social media platforms and online communities where the images are posted.

CEASE.ai was created as a tool for law enforcement, but it is also an instrument of human compassion. The vision is of a day when the technology to protect people from CSAM simply exists, quietly humming along in the background, detecting hate speech, quarantining offensive images, helping us catch the bad guys, and protecting both victims and investigators.


About the Author

Chris Priebe has over 20 years of experience fostering positive online communities. He was the lead developer of the safety and security elements for Club Penguin, which was eventually acquired by Disney. In 2012, Priebe started Two Hat Security, driven by a desire to tackle the major issues of cyberbullying, harassment and abuse across the entire internet. Today, Two Hat Security’s products protect some of the world's leading social platforms, and the company works globally with law enforcement on the interdiction of CSAM.

This article appeared in the Summer 2019 issue of Evidence Technology Magazine.