WeProtect Global Alliance (https://www.weprotect.org)
Working for a digital world designed to protect children from sexual exploitation and abuse online

Dangerous shifts in the rapidly evolving world of online child sexual abuse
https://www.weprotect.org/blog/urgent-global-action-needed-to-stop-increasing-threats-to-children-online
Mon, 16 Sep 2024

As new threats continue to emerge, the rapidly evolving world of online child sexual abuse demands more urgent and global action

In the digital age, technology has revolutionised the way we connect and communicate. Yet, alongside its benefits, it has also fuelled a deeply disturbing crisis: the rise of online child sexual abuse and exploitation. This problem is no longer confined to shadowy corners of the internet. It is growing rapidly, evolving in both scale and severity.  

If you’re not already alarmed, you should be. The threats facing children online are increasing and becoming more extreme. Take the recent warning from the Australian Federal Police: sadistic online groups are now targeting children as young as 12 on social media, coercing them into producing explicit content. Once the material is created, offenders extort their victims by threatening to share the images with family or friends unless more content is provided. This vicious cycle often escalates into ever more degrading and violent demands, including specific live sex acts, animal cruelty, serious self-harm, and live online suicide.

Or consider recent research from the International Policing and Public Protection Research Institute (IPPPRI), which uncovered a worrying trend: offenders are increasingly turning to artificial intelligence (AI) to create child sexual abuse material.

These online predators are teaching themselves how to use AI to generate child sexual abuse material using freely available resources shared in dark web forums.  As the technology advances, they are moving to more graphic and extreme content.  

Why this matters now more than ever 

Some might argue that AI-generated abuse material is not “real,” but this is dangerously simplistic. AI makes it possible to create child sexual abuse material on an industrial scale, and as it proliferates, so does the risk of normalising and escalating abuse.  

Research from the Internet Watch Foundation (IWF) suggests most AI CSAM found is now realistic enough to be treated as ‘real’ CSAM. The most convincing AI CSAM is visually indistinguishable from real CSAM, even for trained IWF analysts.  

The evolution of this technology is happening at such a pace that regulatory frameworks and law enforcement are struggling to keep up. Encryption, designed to protect user privacy, is often misused to shield criminal activity, making it harder for authorities to track down offenders. Right now, the balance between privacy and child protection is skewed, and it’s children who are paying the price. 

This abuse leaves lifelong scars on its victims. Beyond the immediate trauma, survivors often struggle with depression, PTSD, and long-term emotional damage. The continued circulation of their abuse online means they are re-victimised again and again. For many survivors, knowing that these images or videos will exist indefinitely is a source of ongoing distress. 

What must be done: a call to action 

The escalating severity of online child sexual abuse and exploitation demands an urgent, coordinated response from governments, tech companies, law enforcement, and civil society. Here’s what needs to happen: 

  1. Strengthen legislation, regulation and international cooperation: Governments must pass and enforce stronger laws to hold offenders to account. This is a global crisis which cuts across borders – international partnerships between governments and law enforcement agencies can help dismantle these online criminal networks. Independent oversight of technology platforms by regulators is critical to ensure they are doing enough to protect children and deter offenders.
  2. Increase accountability for tech companies: Social media platforms and messaging apps must take greater responsibility for the content shared on their services. This means investing in safety by design and in AI-driven tools to detect and remove CSAM before it spreads, and cooperating with law enforcement in cases of criminal activity.
  3. Improve reporting mechanisms: Many victims or witnesses don’t know how to report abuse or get images taken down, or fear their reports won’t be taken seriously. Governments, NGOs and tech companies must create user-friendly reporting tools and ensure that victims receive the support they need.
  4. Expand support services for survivors: More resources are needed to provide long-term support for survivors of child sexual abuse, including counselling, legal assistance and other social services.
  5. Raise public awareness and education: Educating parents, teachers and children about the risks of online exploitation can help stop abuse before it happens. Schools should also teach digital literacy, focusing on safe internet practices and online risks.

As we prepare for the first-ever Global Ministerial on Ending Violence Against Children in Colombia this November, the worldwide growth of online violence is a key area which governments must discuss.

Online child sexual abuse and exploitation is an issue we have the power to address—if we move swiftly and decisively. Governments hold a critical key to bringing together tech and civil society solutions to protect the most vulnerable among us.  

The future depends on what we do next, and the time for action is now. Children deserve nothing less.  

AI-produced child sexual abuse material: Insights from Dark Web forum discussions
https://www.weprotect.org/blog/ai-produced-child-sexual-abuse-material-insights-from-dark-web-forum-discussions
Wed, 11 Sep 2024

By Dr Deanna Davy, Senior Research Fellow with the International Policing and Public Protection Research Institute.
September 2024

Newspaper headlines and social media feeds are abuzz with stories about the potential positive effects of Artificial Intelligence (AI) on society. But scant attention is paid to its detrimental effects. A core area of concern, which requires much more scholarly attention, awareness-raising, and deterrence measures, is the use of AI to create child sexual abuse material (CSAM).

Agencies such as WeProtect Global Alliance and the Internet Watch Foundation (IWF) sounded the alarm regarding AI CSAM in 2023, highlighting it as an area of concern for governments, civil society organizations, private sector agencies, and parents and children. IWF’s 2023 research found that offenders are taking images of children, often famous children, and applying deep learning models to create AI CSAM.

There are currently two major categories of AI CSAM: (1) AI-manipulated CSAM, where images and videos of real children are altered into sexually explicit content, and (2) AI-generated CSAM in which entirely new sexual images of children are manufactured (Krishna et al., 2024). IWF reported in 2023 that both types of AI CSAM are prevalent.  

Researchers at the International Policing and Public Protection Research Institute (IPPPRI) wanted to understand more about what Dark Web forum members were saying and doing with regard to AI CSAM. To do this, we examined forum members’ posts and discussions about AI CSAM. We collected this data through Voyager, an open-source intelligence (OSINT) platform that collects, stores and structures content from publicly accessible online sources, including Dark Web forums where CSAM-related discussions take place. Data collection was conducted in January 2024 using a keyword search in the ‘Atlas’ dataset of Dark Web child sexual exploitation (CSE) forums in Voyager. Our search, using the terms “AI” OR “Artificial intelligence” over a 12-month date range in 2023, returned 19,951 results (9,675 hyperlinks; 9,238 posts; 1,021 threads; and 17 forum profiles). We then examined a sample to conduct a preliminary analysis of what forum members were saying and doing regarding AI CSAM.

What we discovered is very worrying. First, there is a real appreciation and appetite for AI CSAM. Forum members refer to those who are creating AI CSAM as ‘artists’. What forum members appreciate is that those creating AI CSAM can take an image of, for example, a favourite child film character, and create a plethora of AI CSAM of that child. At present, forum members are particularly interested in AI CSAM of famous children such as child actors and child sports stars.  

We discovered that forum members who are creating AI CSAM are not IT, AI or machine learning experts. They are teaching themselves how to create AI CSAM. They easily access online tutorials and manuals on how to create AI CSAM; these resources are widely shared in Dark Web forums. They then reach out to other Dark Web forum members who already have experience in creating AI CSAM to ask questions about how to ‘train’ the software and overcome challenges that they are experiencing (such as the child in the image having too many limbs or digits). As part of this effort to improve their AI production skills, forum members actively request others to share CSAM so that they can use this material for ‘practice’.  

We also found evidence that, as forum members develop their skills in the production of AI CSAM, they actively encourage other forum members to do the same. This is of particular concern as it can feed demand and lead to a continual upskilling loop whereby as more forum members view AI CSAM and become interested in creating AI CSAM themselves, they then hone their AI skills, share their AI produced CSAM, and encourage others to create and share AI CSAM.  

We also discovered that some forum members are already moving from the creation of what they describe as ‘softcore’ AI CSAM to more ‘hardcore’ material. This pattern may be driven by normalisation of and desensitisation to the material, together with the search for more explicit and violent content.

It was also clear that forum members are hopeful that AI will continue to quickly develop so that in the near future they won’t be able to tell if a sexual image of a child is real or not. They’re also hopeful that AI will develop to a point that they can create increasingly hardcore and interactive material (such as interactive videos where they can instruct a video character to perform sexual acts).  

On the very day we published these findings, a man was convicted in a landmark UK case for creating more than 1,000 sexual images of children using AI. Our analysis of Dark Web discussions on AI CSAM suggests that this convicted individual is just one of many committing such crimes.

This is not a niche area – on the contrary, the creation of AI CSAM is heading towards the mainstream. That’s why we need a rapid and unwavering response. The cat is already out of the bag, so to speak, with regard to AI CSAM. Offenders are adopting this tool into their toolkit.

Our task now is to limit the expansion of the phenomenon through legal reform, robust deterrence measures, as well as further evidence generation, and awareness-raising of the issue.  

Dr Deanna Davy is a Dawes Senior Research Fellow with the International Policing and Public Protection Research Institute.  Deanna has worked in research on trafficking and child sexual exploitation for a number of government agencies, international organisations, and non-government organisations, including the United Nations Office on Drugs and Crime, the International Organization for Migration, the United Nations Children’s Fund, and ECPAT International. Prior to joining the IPPPRI team, Deanna was employed as a Research Fellow (modern slavery) at the University of Nottingham. 

Navigating AI regulation: mitigating the risks of generative AI in producing child sexual abuse material
https://www.weprotect.org/blog/navigating-ai-regulation-mitigating-the-risks-of-generative-ai-in-producing-child-sexual-abuse-material
Fri, 14 Jun 2024

By Shailey Hingorani, Head of Policy, Advocacy and Research, WeProtect Global Alliance
June 2024

Last month, the Federal Bureau of Investigation (FBI) charged a US man with creating more than 10,000 AI-generated sexually explicit and abusive images of children. And it’s not just adult perpetrators who are using AI. Cases of teenage boys non-consensually creating and sharing nude photos of their female and male classmates and/or teachers have been reported in the United States, Australia and Spain.  

In 2023, the National Center for Missing & Exploited Children (NCMEC) received 4,700 reports concerning AI-generated child sexual abuse material. While this is currently a relatively small number, last year researchers found that popular AI image-generators had been trained on datasets that contained child sexual abuse imagery. These images are likely to have made it easier for AI systems to produce ‘new’ child sexual abuse material. The ease with which AI can be used means that child sexual abuse material can be produced on an industrial scale, with very little technical expertise.   

To address this, countries worldwide are adopting different regulatory approaches. This blog explores three common approaches to AI regulation, examining their principles, examples and potential effectiveness in mitigating the use of AI for harmful purposes.   

Risk-based regulation 

A prominent approach to AI regulation is risk-based, where regulatory measures are tailored to the perceived risks associated with different AI applications. This model ensures that higher-risk AI systems are subject to stricter oversight, while lower-risk systems face fewer restrictions.  

The European Union (EU) exemplifies this approach with its proposed AI Act, which categorises AI applications into different risk levels. High-risk AI systems, such as those that negatively affect safety, must comply with stringent baseline requirements, including robust data protection, transparency, and accountability measures such as risk assessments.  

Another example of a jurisdiction that may adopt this approach is Brazil. Its proposed AI regulation also categorises AI systems according to different levels of risk (for example, excessive or high risk), and requires every AI system to be risk-assessed before being released to the market.

For generative AI that could create harmful content, including child sexual abuse material, these regulations mandate strict oversight to prevent abuse, with safety measures that can be implemented both at the developer and deployer levels. 

The risk-based approach has the potential to be effective in addressing the misuse of generative AI for harmful purposes by ensuring clear and enforceable liability obligations and safety measures. However, its success depends on effective implementation, the development of common standards (on transparency measures, risk assessment and the watermarking of generated content), and the ability to adapt to emerging risks.

Principle-based frameworks 

Another common regulatory approach involves establishing comprehensive ethical frameworks that guide the development and deployment of AI technologies. These frameworks emphasise core principles such as human rights, transparency, and sustainability.  

The United Kingdom has developed a principles-based, non-statutory, and cross-sector AI framework. The UK’s approach integrates broad ethical standards with sector-specific regulations to address unique risks in different areas. The framework outlines five principles: safety, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress. These principles guide existing regulators in the responsible design, development, and use of AI.  

Regulators are expected to adopt a proportionate, context-based approach, leveraging existing laws and regulations. Ofcom, the UK’s communications regulator, in its strategic approach to AI published in March 2024, highlights its regulatory powers by noting that the Online Safety Act (OSA), which already mandates that in-scope services take proportionate measures to prevent exposure to illegal or harmful content, could also encompass AI-generated child sexual abuse material.  

Another country using this approach is Singapore, whose draft Model AI Governance Framework for Generative AI seeks to provide guidance on suggested practices for safety evaluations of generative AI models.

Comprehensive ethical frameworks provide a solid foundation for responsible AI use. By promoting ethical principles, these frameworks help prevent the misuse of AI for creating harmful content. However, their effectiveness relies on widespread adherence and the integration of these principles into enforceable regulations.  

Sector-specific regulations 

Given the diverse applications of AI, some jurisdictions implement sector-specific regulations alongside general AI guidelines. This dual approach addresses the unique challenges and risks of AI use in different sectors.  

The United States combines broad AI guidelines (e.g. the White House Executive Order on AI) with sector-specific regulations, including multiple efforts at the state level (e.g. Bills in California, Texas and New York among others).  

While there is no comprehensive federal legislation or regulations in the US on the development, deployment and use of AI, there are existing federal laws that address specific uses of AI. The PROTECT Act, for instance, specifically targets the production and distribution of child sexual abuse material, including AI-generated content.  

Sector-specific regulations effectively address the distinct risks associated with AI applications in various sectors. By focusing on targeted measures, these regulations can mitigate the misuse of generative AI for creating harmful content. However, the success of this approach depends on effective coordination and enforcement across sectors.  

Voluntary collaboration 

Several leading AI companies, including Adobe, Amazon, IBM, Google, Meta, Microsoft, OpenAI, and Salesforce, have voluntarily pledged to promote the safe, secure, and transparent development of AI technology. These companies have committed to conducting internal and external security testing of AI systems before release, sharing information on managing AI risks, and investing in various safeguards. Additionally, several of these companies have signed up to Thorn’s Safety by Design principles on generative AI.

As highlighted in our latest Global Threat Assessment, cross-sector voluntary collaboration remains critical to enable responsiveness, drive innovation and centre the voices of children and survivors. This should be as transparent as possible to enable greater accountability and user confidence. We see this as a vital complement to regulation. Initiatives like the Global Online Safety Regulators’ Network will encourage growing alignment of regulation and improvement of inter-institutional coordination. Innovation and profits should never come at the expense of the safety and security of children using these platforms, tools and services.  

While it is encouraging to see these companies taking proactive steps, the effectiveness of voluntary regulation remains uncertain. Voluntary action and collaboration will remain a critical complement to legislation and their success heavily depends on the companies’ willingness to adhere to their commitments. Critics argue that without mandatory regulations, there is a risk that some companies may prioritise innovation and profitability over safety and security. Therefore, it remains to be seen how effective these voluntary efforts will be in mitigating the risks associated with AI technology. 

Conclusion 

The regulation of AI, particularly in preventing its misuse for creating child sexual abuse material, requires a multifaceted approach. Risk-based regulation, comprehensive ethical frameworks, and sector-specific regulations each offer valuable strategies to address these challenges. The EU, UK, and US provide examples of these approaches in action, prioritising principles such as human rights, transparency and accountability.  

While each approach has its strengths, their effectiveness ultimately hinges on rigorous implementation, adaptability to new risks and international cooperation. As AI technology continues to evolve, so too must the regulatory frameworks that set minimum standards, ensuring the benefits of AI are realised without compromising safety and ethical standards. 

World’s first estimate of the scale of online child sexual exploitation and abuse
https://www.weprotect.org/blog/worlds-first-estimate-of-the-scale-of-online-child-sexual-exploitation-and-abuse
Mon, 27 May 2024

New report finds 300 million+ children under the age of 18 have been affected by online child sexual exploitation and abuse in the last 12 months

Childlight has produced the world’s first estimate of the scale of online child sexual exploitation and abuse. Whilst many gaps and inconsistencies remain, its CEO Paul Stanfield says it provides a baseline to help combat a crisis that should be treated like a global pandemic.

Child sexual exploitation and abuse is a global health pandemic: it occurs in every country. Yet it is a hidden pandemic, one that has been ignored and pushed aside for far too long because the reality is often too difficult to contemplate.

Having worked globally as a police officer for over 30 years, I have witnessed the true horror and growth of child sexual exploitation. Enabled by technology and a lack of regulation, it has been allowed to pervade every part of our communities, both online and offline. Those working across the sector know this from anecdotal information and experience. But whilst this insight is helpful, it has not been sufficient to drive systematic change across the child protection sector. We need evidence that is indisputable so the problem can no longer be ignored, denied or unhelpfully conflated with issues such as privacy and freedom of speech.

Childlight has been established to take a data-driven, evidence-based approach to understanding the true prevalence and nature of child sexual exploitation and to use that data and evidence to drive transformational and sustainable change to safeguard children globally.

We do not underestimate this task. Data on child sexual exploitation and abuse differs in quality around the world; data foundations are inconsistent, definitions differ, and efforts are hampered by a lack of transparency.

I am, therefore, indebted to the support provided by the Human Dignity Foundation in establishing Childlight at the University of Edinburgh. This has allowed us to move at pace and benefit from the support of world-leading researchers and experts across the field to undertake this complex challenge.

Into the Light Index, the world’s first estimate of the scale of this haunting problem, is a preliminary attempt to produce a global picture based on what Childlight researchers have been able to discover in partnership with others leading the fields of data, law enforcement and safeguarding. It takes the form of an interactive microsite that allows users to navigate maps, charts and tables, plus a companion report containing more detailed statistics and analysis to help inform policymakers, researchers and others.


It has been audited by Sir Bernard Silverman, chair of the Childlight Technical Sub-Committee, Emeritus Professor at the Universities of Oxford and Bristol and former chief scientific adviser to the UK Home Office. And whilst many gaps and inconsistencies remain, it provides a baseline by which we can measure the sector’s progress in understanding the true scale and nature of child sexual exploitation and abuse. As the data improves and we build our knowledge, we expect to provide more reliable country-by-country estimates and expand into other areas of child sexual exploitation and abuse, both online and offline.

This Index is intended to drive research that enhances our knowledge and understanding of the problem. More importantly, it is intended to have impact by raising awareness and providing frontline workers, policymakers and governments with information to make better-informed decisions on safeguarding children globally from sexual exploitation and abuse.

Our assessment that at least 300 million children per year are subjected to sexual exploitation and abuse must serve as a wake-up call. So too, our evidence that as many as one in nine men in parts of the world have sexually offended online against children – and that many would also go on to commit sexual contact offences with children if they believed it could be kept secret.

And yet, paradoxically, this coincides with the rollout of end-to-end encryption on major file-sharing platforms that are increasingly used to secretly share sexual images of children. Just when we need more than ever to shine a light to protect our children – with reports of child sexual exploitation and abuse material filed once every second – lights are being turned off. If encrypted file sharing is to be the norm, a balance must clearly be struck between the desire for privacy for all users and adequate proactive detection of child sexual abuse material online.

We are in the grip of a crisis that we believe should be treated like a global pandemic. We see the change that can be made quickly, and how countries and organisations can come together, when there is a worldwide health emergency such as AIDS or COVID-19. A public health approach to not just responding to but preventing CSEA is required; we owe that to our children.

We know that you are as appalled as we are at the scale of these initial findings, which are just a piece of the data puzzle. If you work with governments, frontline practitioners and others who can support data collection, we ask you to help us by advocating to enhance the index and fill the gaps in data. If you have data, knowledge or insight that can improve our collective understanding, please share it. If you are in a position to set or influence legislation, safeguarding, prevention, policy or funding, please use this data to inform your plans and recommendations.

If you have questions on the scale or nature of CSEA and need data to shift the dial, get in touch with us. And finally, if you work with children to keep them safe and secure, let us know how we can help you and your amazing work around the world.

Online child sexual exploitation and abuse exists because it is allowed to exist. Together, with sufficient will, we can prevent it.

No Escape Room: building sextortion awareness
https://www.weprotect.org/blog/no-escape-room-building-awareness-of-sextortion
Fri, 26 Apr 2024

The National Center for Missing & Exploited Children (NCMEC) has unveiled “No Escape Room,” a new interactive experience that plunges parents and caregivers into the reality of financial sextortion, coinciding with the release of new data on child sexual exploitation.

Based on dozens of real-life CyberTipline reports, the interactive film follows the story of a 15-year-old boy’s exploitation online. Throughout the experience, users are prompted to engage in a conversation with someone who appears to be another teenager.

Before they know it, what started out as a friendly, flirtatious chat has them trapped in a blackmail scenario. At key points, “No Escape Room” challenges parents to try and navigate the situation for themselves, as they find out it’s more difficult than it seems.

“Despite the increasing occurrence of sextortion, few parents actually know what it looks like when a child is exploited online,” said Gavin Portnoy, Vice President of Communications & Brand at NCMEC. “We created ‘No Escape Room’ to allow parents to see how quickly and easily their own child could fall victim to online exploitation, even if they are just down the hall in your own home.”

Poll shows parents less aware of threats online

The National Center recently conducted a Harris poll to test what parents are most concerned about when it comes to protecting their children. Of the 5,000 people surveyed, 88% cited child abduction as their biggest concern. Online safety barely registered, shining a light on how unaware most of the public is of how frequent and close to home cases of child exploitation can be.

The alarming reality is that online enticement and exploitation is growing at an exponential rate. In 2023 alone, NCMEC’s CyberTipline received 36.2 million reports of suspected child sexual exploitation. Between 2021 and 2023, online enticement reports increased more than 300%. Read the full CyberTipline 2023 Report and the Impact Report.  


No Escape Room highlights online sextortion

As users navigate “No Escape Room”, they’ll quickly learn that teens don’t always have options to get out of certain situations. At the end of the experience, users will be given the opportunity to connect with NCMEC for resources on sextortion.

“We hope that parents and caregivers will take the time to really pay attention to what a child going through online enticement is experiencing and feeling,” Portnoy said. “Then, they can use that knowledge to better inform conversations with the kids in their lives.”

“No Escape Room” was funded by a grant provided to NCMEC by the Department of Justice. To create the interactive experience, NCMEC partnered with Grow, a digital agency that creates immersive and innovative digital “activations” and “destinations”.

The company was proud to partner with NCMEC to shed light on the pressing issue of sextortion.

“When we took on this brief, our eyes were opened to just how serious and widespread child sextortion is online — and how little parents know about it,” said Drew Ungvarsky, Grow’s Founder and CEO. “In creating ‘No Escape Room,’ we wanted to go beyond education and bring parents directly into the experience of an unsuspecting child. The interactive film gives parents an immersive viewpoint into the crime, showing them firsthand how a child would struggle to navigate such perilous circumstances. It’s been an honor to put our skills in digital experience innovation to work in helping NCMEC address the rising threat of sextortion.”

Take it down

NCMEC also offers a service called Take It Down, which helps remove nude, partially nude or sexually explicit photos and videos of underage people by assigning a unique digital fingerprint, called a hash value, to the images or videos. Online platforms can use those hash values to detect these images or videos on their public or unencrypted services and remove this content. 

NCMEC’s new interactive experience, “No Escape Room,” can be viewed here: https://noescaperoom.org/.

For more resources on how to talk to children about sextortion, visit NCMEC’s website: https://www.missingkids.org/sextortion.

We should all be concerned about the use of predatory AI
https://www.weprotect.org/blog/we-should-all-be-concerned-about-the-use-of-predatory-ai
Thu, 04 Apr 2024

Recent news coverage has reported that more than 200 artists, including Jon Bon Jovi, Billie Eilish, Stevie Wonder, Pearl Jam and Mumford & Sons, have signed an open letter protesting the potential harm artificial intelligence (AI) poses to artists.

The letter, put out by the organisation Artist Rights Alliance, warns that “when used irresponsibly, AI poses enormous threats to our ability to protect our privacy, our identities and our livelihoods”. “Unchecked, AI will set in motion a race to the bottom that will degrade the value of our work and prevent us from being fairly compensated for it,” the letter continues. “This assault on human creativity must be stopped.”


Yet the risks posed by the misuse of AI go far beyond this. Last month, Channel 4 News found that 4,000 celebrities, including female actors, TV stars, musicians and YouTubers, have had their faces superimposed onto pornographic material using artificial intelligence.

Recently, the pop star Taylor Swift became a target of deepfakes. In response, a coalition of US senators introduced a bill criminalising the dissemination of AI-generated explicit content without consent. Under this proposed legislation, individuals depicted in digitally manipulated nude or sexually explicit content, referred to as “digital forgeries”, would have the right to pursue a civil penalty. This penalty could be enforced against those who intentionally created or possessed the forged content with the intent to distribute it, as well as those who knowingly received such material without the subject’s consent.


Last year, our Global Threat Assessment highlighted that children are already being put at increased risk due to AI intensifying offending and extending the time and resources required by law enforcement to identify and prosecute offenders and safeguard children. This trend is set to worsen – unless we take collective action now.

Yes, AI offers many exciting possibilities to revolutionise our lives. But it must not come at any cost. Just as AI models can generate deepfake non-consensual pornographic images or steal from artists, they can also generate photorealistic child sexual abuse material – synthetic imagery featuring fictitious children, avatars of children, and imagery that includes real children.

At the simplest level, AI allows perpetrators to generate hundreds of child sexual abuse images at an industrial scale in seconds with the click of a button.

This explosion of content will make it increasingly difficult for law enforcement to identify whether or not a real child is in danger, with significant implications for investigations and victim identification.

Offenders have the potential to use AI tools to groom children at scale. We also know that AI-generated child sexual abuse material plays a significant role in the normalisation of offending behaviour and will potentially create a more permissive environment for perpetrators, putting more children at risk.

There is also evidence that AI-generated child sexual abuse material has increased the potential for the re-victimisation of known child sexual abuse victims as their images are used over and over again.

Strengthening global responses across law enforcement cooperation, legislative change and regulatory approaches is critical. Industry and tech platforms also have a responsibility to get ahead of this rapidly evolving threat.

Now is the time for safety by design, ensuring children are protected as generative AI technology is built. From removing harmful content from training data, through AI classifiers and manual review, to watermarking content, solutions are available.

No matter who you are – an artist, a child, a celebrity, a government minister or a tech platform – the internet and digital platforms should be a safe place for everyone. Now is the time to support global, united action to make AI a force for good rather than a tool for exploitation by criminals.

Otherwise we will soon reach a tipping point from which there is no return.

Childlight: using data and insights to shine a light on the road ahead
https://www.weprotect.org/blog/childlight-using-data-and-insights-to-shine-a-light-on-the-road-ahead
Wed, 21 Feb 2024

In this opinion piece, Paul Stanfield, CEO of Childlight Global Child Safety Institute, reflects on how the fight to keep our young people safe and secure from harm has been hampered by a data disconnect between research and practice, and how Childlight is working in partnership with many others to use its data insight to help join up the system and close the gaps.

Navigating without data is like driving in the dark without headlamps. Childlight’s vision, as a global child safety institute launched last year, is to use the illuminating power of data and insight to shine a light on the road ahead – and help children trapped in the darkness of sexual exploitation and abuse.

The fight to keep our young people safe and secure from harm has been hampered by a data disconnect between research and practice, and our aim is to work in partnership with many others to use our data insight to help join up the system and close the gaps.

Soon we will reach a key point in this journey when we produce the first global report on the extent of child sexual exploitation and abuse, following on from our recent in-depth look at the nature of this hidden pandemic.

It is an imperfect start because data quality differs around the world; data foundations are inconsistent, definitions differ and, frankly, transparency isn’t what it should be, so we can’t pretend to have all the numbers, let alone all the answers. To begin with, most of our figures will be at global and UNICEF-regional level rather than country by country. But as our partnerships grow, our annual global index – drawing upon ever more government-held data, administrative data and data held by tech platforms, among others – will become an increasingly valuable tool. Essentially, we firmly believe that by helping decision makers better understand the scale of this growing crisis, the index will better equip them to tackle it, because with sufficient will this problem is preventable.

In the short time since our launch last year, we’ve harnessed the expertise and energy of leading researchers, not only at our University of Edinburgh headquarters, where our data team is led by Professor Debi Fry, but also across the world, from Australia to Malaysia, and from Canada to Colombia. Research led by our director-designate Professor Michael Salter, based at the University of New South Wales, served as a wake-up call, with its finding that around one in ten men have sexually offended online against children and that many would also commit contact offences if they believed it could be kept secret.

Uniquely, we also have decades of law enforcement experience at a senior level to draw upon to help ensure the data insights we produce are highly practical as we tackle this growing crisis. For my part, I previously served as director for Interpol’s global organised crime programme and as regional director for the UK’s National Crime Agency (NCA) in Africa. I’m also very fortunate to count among our team Kelvin Lay who, while with the NCA in Kenya, set up Africa’s first dedicated child exploitation and human trafficking units, and Doug Marshall, the former deputy national co-ordinator at Operation Hydrant, the UK response to non-recent child abuse.

Our multi-disciplinary approach means we not only produce high-quality data insights but can also rapidly help turn them into action – working with authorities all over the world through our technical advisory programme. Acting on data intelligence, we can in this way help law enforcers pinpoint and arrest perpetrators and safeguard the children they have been abusing.

All of this has been made possible by the generosity of the Human Dignity Foundation whose vision grew into Childlight. We have also benefited very considerably from working in partnership with WeProtect Global Alliance since the start of our journey, and we are excited by the prospect of how much more we can now achieve as a new member of the alliance, working collaboratively with many others.

When the world is chaotic and changing rapidly, and there are mounting concerns about the lights being turned off on child sexual exploitation and abuse, we believe it needs to be treated as a global health emergency. And we would like to work urgently with you and others to continue to shine a light on some of the world’s darkest crimes. Because, as our mantra goes, children can’t wait.

Paul Stanfield is CEO of Childlight Global Child Safety Institute and he would be delighted to discuss the work of the data institute with other alliance members. His email is ku.ca.deobfsctd@dleifnats.luap

Combating abusive AI-generated content: a comprehensive approach
https://www.weprotect.org/blog/combating-abusive-ai-generated-content-a-comprehensive-approach
Mon, 19 Feb 2024

In this opinion piece, Brad Smith, Vice Chair and President of Microsoft, presents Microsoft’s work to identify and prevent risks related to generative AI.

Each day, millions of people use powerful generative AI tools to supercharge their creative expression. In so many ways, AI will create exciting opportunities for all of us to bring new ideas to life. But, as these new tools come to market from Microsoft and across the tech sector, we must take new steps to ensure these new technologies are resistant to abuse.

The history of technology has long demonstrated that creativity is not confined to people with good intentions. Tools unfortunately also become weapons, and this pattern is repeating itself. We’re currently witnessing a rapid expansion in the abuse of these new AI tools by bad actors, including through deepfakes based on AI-generated video, audio, and images. This trend poses new threats for elections, financial fraud, harassment through nonconsensual pornography, and the next generation of cyber bullying.

We need to act with urgency to combat all these problems.

Encouragingly, there is a lot we can learn from our experience as an industry in adjacent spaces – in advancing cybersecurity, promoting election security, combating violent extremist content, and protecting children. We are committed as a company to a robust and comprehensive approach that protects people and our communities, based on six focus areas:

1. A strong safety architecture. We are committed to a comprehensive technical approach grounded in safety by design. Depending on the scenario, a strong safety architecture needs to be applied at the AI platform, model, and applications levels. It includes aspects such as ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system. It needs to be based on strong and broad-based data analysis. Microsoft has established a sound architecture and shared our learning via our Responsible AI and Digital Safety Standards, but it’s clear that we will need to continue to innovate in these spaces as technology evolves.
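Two of the mechanisms named above – the blocking of abusive prompts and rapid bans of users who abuse the system – can be sketched at their simplest. This is not Microsoft’s implementation: the pattern list, threshold and class names below are illustrative stand-ins for the trained classifiers and red-team-derived rules a real platform would use.

```python
import re

# Illustrative stand-in for a trained abuse classifier: real systems combine
# machine-learned classifiers, red-team-derived rules and human review.
BLOCKED_PATTERNS = [
    re.compile(r"\bdeepfake\b.*\bnude\b", re.IGNORECASE),
]
ABUSE_THRESHOLD = 3  # blocked prompts tolerated before a rapid ban

class PromptGate:
    """Layered check applied before a prompt ever reaches the model."""

    def __init__(self) -> None:
        self.strikes: dict[str, int] = {}
        self.banned: set[str] = set()

    def allow(self, user: str, prompt: str) -> bool:
        """Return True only if the prompt may proceed to generation."""
        if user in self.banned:
            return False
        if any(p.search(prompt) for p in BLOCKED_PATTERNS):
            self.strikes[user] = self.strikes.get(user, 0) + 1
            if self.strikes[user] >= ABUSE_THRESHOLD:
                self.banned.add(user)  # rapid ban of users who abuse the system
            return False
        return True

gate = PromptGate()
print(gate.allow("user-1", "a watercolour of a lighthouse"))    # True
print(gate.allow("user-2", "make a deepfake nude of someone"))  # False
```

The point of the layering is that checks run at the application level before the model is ever invoked, so abusive requests are refused preemptively rather than filtered after generation.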

2. Durable media provenance and watermarking. This is essential to combat deepfakes in video, images, or audio. Last year at our Build 2023 conference, we announced media provenance capabilities that use cryptographic methods to mark and sign AI-generated content with metadata about its source and history. Together with other leading companies, Microsoft has been a leader in R&D on methods for authenticating provenance, including as a co-founder of Project Origin and the Coalition for Content Provenance and Authenticity (C2PA) standards body. Just last week, Google and Meta took important steps forward in supporting C2PA, steps that we appreciate and applaud.

We are already using provenance technology in the Microsoft Designer image creation tools in Bing and in Copilot, and we are in the process of extending media provenance to all our tools that create or manipulate images. We are also actively exploring watermarking and fingerprinting techniques that help to reinforce provenance techniques. We’re committed to ongoing innovation that will help users quickly determine if an image or video is AI generated or manipulated.
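The provenance principle described above – cryptographically binding metadata about a file’s source and history to the content so that tampering is detectable – can be sketched as follows. C2PA manifests actually use certificate-based signatures and a standardised manifest structure; the HMAC, demo key and field names here are simplified illustrations of the idea only.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in: C2PA uses certificate-based signing

def attach_provenance(content: bytes, source: str) -> dict:
    """Build a provenance record bound to the content via its hash."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "source": source,  # e.g. which tool generated the image
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Accept only if the record is untampered and matches the content."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"]) and (
        unsigned["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

image = b"generated-image-bytes"
manifest = attach_provenance(image, source="image generator (example)")
print(verify_provenance(image, manifest))              # True
print(verify_provenance(image + b"edited", manifest))  # False
```

Verification fails both when the content is altered after signing and when the metadata itself is rewritten, which is what makes such records useful for telling authentic media apart from manipulated media.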

3. Safeguarding our services from abusive content and conduct. We’re committed to protecting freedom of expression. But this should not protect individuals who seek to fake a person’s voice to defraud a senior citizen of their money. It should not extend to deepfakes that alter the actions or statements of political candidates to deceive the public. Nor should it shield a cyber bully or distributor of nonconsensual pornography. We are committed to identifying and removing deceptive and abusive content like this when it is on our hosted consumer services such as LinkedIn, our Gaming network, and other relevant services.

4. Robust collaboration across industry and with governments and civil society. While each company has accountability for its own products and services, experience suggests that we often do our best work when we work together for a safer digital ecosystem. We are committed to working collaboratively with others in the tech sector, including in the generative AI and social media spaces. We are also committed to proactive efforts with civil society groups and in appropriate collaboration with governments.

As we move forward, we will draw on our experience combating violent extremism under the Christchurch Call, our collaboration with law enforcement through our Digital Crimes Unit, and our efforts to better protect children through the WeProtect Global Alliance and more broadly. We are committed to taking new initiatives across the tech sector and with other stakeholder groups.

5. Modernized legislation to protect people from the abuse of technology. It is already apparent that some of these new threats will require the development of new laws and new efforts by law enforcement. We look forward to contributing ideas and supporting new initiatives by governments around the world, so we can better protect people online while honoring timeless values like the protection of free expression and personal privacy.

6. Public awareness and education. Finally, a strong defense will require a well-informed public. As we approach the second quarter of the 21st century, most people have learned that you can’t believe everything you read on the internet (or anywhere else). A well-informed combination of curiosity and skepticism is a critical life skill for everyone.

In a similar way, we need to help people recognize that you can’t believe every video you see or audio you hear. We need to help people learn how to spot the differences between legitimate and fake content, including with watermarking. This will require new public education tools and programs, including in close collaboration with civil society and leaders across society.

Ultimately, none of this will be easy. It will require hard but indispensable efforts every day. But with a common commitment to innovation and collaboration, we believe that we can all work together to ensure that technology stays ahead in its ability to protect the public. Perhaps more than ever, this must be our collective goal.

The importance of a truly global Alliance – reflections from Brazil
https://www.weprotect.org/blog/the-importance-of-a-truly-global-alliance-reflections-from-brazil
Tue, 13 Feb 2024

Our Executive Director Iain Drennan and Membership Engagement Manager Stephanie Quintao recently travelled to Brazil to mark Safer Internet Day (6 February) at a conference hosted by our friends at SaferNet Brasil in São Paulo. In this blog they share their reflections from the conference and the power of the Alliance’s global membership.

Brazilian NGO SaferNet Brasil has been monitoring the internet for almost two decades, with a focus on the exploitation and abuse of children online. In a report published this week, they revealed that in 2023, there were 71,867 new reports of child sexual abuse images in Brazil, an increase of 77% compared to the previous year and the highest number in their data series, which began in 2005.

The report was announced on Safer Internet Day at their conference in São Paulo, which brought together hundreds of participants from all corners of this huge and diverse country to talk about key issues. WeProtect Global Alliance had the honour of giving the keynote speech, sharing emerging threats, trends and responses.

Brazil is a Global Task Force member and an active participant in the Alliance, especially through government and civil society engagement, and it was fantastic to hear and understand the perspectives of our Brazilian members and other organisations working in the sector in person.  

During our very first visit to Brazil, we were able to build relationships with new government ministries, civil society organisations and the tech industry. We learnt an enormous amount about the challenges and opportunities facing Brazil.

For example, in Brazil, 92% of 7–13-year-olds have internet access. But access is not homogenous and can vary from a child in an indigenous community in rural Amazonas state who’s just getting access on a shared mobile, to a child with fluent English in São Paulo gaming and communicating on multiple devices with adults and children globally. Finding the best responses to keep children safe online across such diverse groups of children is a complex challenge.


It may surprise many people to know that Brazil is TikTok’s third-largest market in the world. And regional engagement is powerful – what happens in Brazil has a big influence in other countries in Latin America and beyond.

Brazil is open to multi-stakeholder partnerships and there is a good tradition of dialogue. There was diverse representation from the tech sector at the conference (TikTok, Meta, Google) and they were open in sharing their challenges and underlined the need for dialogue. As we plan for our upcoming Global Summit, we would welcome more interactive dialogue with the tech sector.  

As in many countries, government funding and investment in the civil society sector is limited, but Brazil has put in place strong legislation, with a proposal under consideration in the Senate to criminalise those who create and disseminate images (photo and video) of nudity and sexual content of a person using artificial intelligence. Online safety also has a strong advocate in Estela Aranha, Secretary for Digital Rights at the Ministry of Justice and Public Security, who supported a stronger focus on prevention when she spoke at the conference.

Michael Sheath from INHOPE reflected that the internet was created by “utopians” – focusing on benefits and positives – but is being exploited by “dystopians” seeking to leverage vulnerabilities to groom, manipulate and abuse children. He also advocated for safety by design and effective offender deterrence and management.

Virgilio Almeida, Professor Emeritus of Computer Science at UFMG (Federal University of Minas Gerais) and Faculty Associate at Harvard’s Berkman Klein Center, gave a challenging and inspiring talk on artificial intelligence (AI) and reinforced the urgent need for human review and safety by design. He cited guidance from eSafety Australia on AI product development, and the need to build partnerships with industry.

It was also wonderful to see a strong example of participation. SaferNet, in partnership with the UK government, awarded the Prêmio Cidadania Digital em Ação (Digital Citizenship in Action Award) to a school for its work keeping children safe online. More than 150 schools across 27 different regions in Brazil participated, showcasing student projects on digital safety, including podcasts, social media and other content highlighting hate speech and cyberbullying. These projects featured genuine youth engagement and empowered young people and teachers to address and participate in digital safety. The projects truly reflected the responses that children and young people want to see and that are relevant to them. It was clear that both teachers and students were incredibly proud of what they had achieved.

Our visit also brought home the importance of language skills in understanding culture and nuance. It was very useful that Stephanie has Brazilian heritage and speaks Portuguese. Our future ambition for the Alliance is to be able to communicate directly with even more stakeholders in more languages. This was also highlighted in our recent Global Threat Assessment, where CRISP data identified the most prevalent languages for terms signifying risk to children; from January to June 2023, Portuguese grew by 29%. Accurate risk detection requires not only translation but cultural understanding to capture colloquialisms, intentional evasion, associated slang and veiled language.

While not specific to Brazil, it was also timely that on Safer Internet Day Meta announced via a blog post on its website that it was working to detect and label AI-generated images on Facebook, Instagram and Threads, as the company pushes to call out “people and organisations that actively want to deceive people”. It is a welcome announcement, and we hope to do more proactive work with our private sector members like Meta this year.

Safer Internet Day is a genuine global moment of action and a powerful tool to bring together work on all forms of online harms and share learning. SaferNet did a brilliant job in bringing together the Brazilian community and providing a forum to engage with the local and international community.


We look forward to continuing to work with our members in Brazil and across Latin America – together we can create a digital world designed to protect children from sexual exploitation and abuse.

About Safernet Brasil

SaferNet is the first NGO in Brazil to establish a multistakeholder approach to protecting human rights in the digital environment. Since 2005 it has created and coordinated the National Human Rights Cybertipline, the National Helpline and the Brazilian awareness node, and it has more than 17 years of experience in capacity-building programmes with educators, young people, legislators, policy makers and social workers.

Safer Internet Day Brazil has been coordinated by SaferNet since 2009 in close partnership with the Brazilian Internet Steering Committee (CGI.br), the Brazilian Network Information Center (NIC.br), UNICEF and the Federal Prosecutors Office, and has support from Google, YouTube, Meta, TikTok and Vivo.

 

Virtual reality risks to children will only worsen without coordinated action
https://www.weprotect.org/blog/virtual-reality-risks-to-children
Wed, 03 Jan 2024

In this article, our Executive Director Iain Drennan reflects on how, without coordinated action, virtual reality risks to children will only worsen.

Only a few days into 2024, we woke to an alarming Daily Mail news report of UK police investigating an alleged rape in the metaverse, after a child’s avatar was attacked in a virtual reality game by several adult men. 

Sadly, this news is indicative of the new and emerging child sexual abuse and exploitation threats that we are struggling to curtail as a global community.

Less than 12 months ago, WeProtect Global Alliance published a briefing on Extended Reality (XR) technologies and child sexual exploitation and abuse. The paper, developed together with Professor Emma Barrett OBE from The University of Manchester, highlighted how XR technologies provide new ways of producing child sexual abuse material, building on existing methods (like live webcam abuse) and extending tools and techniques that are already common in adult XR pornography.  

It outlined how child sexual abuse offenders may take advantage of the same technology to live-stream sexual abuse in virtual or augmented reality, influenced by chat room visitors and including the use of haptic devices. In this way, physical sexual abuse of children could be perpetrated by offenders who are not physically co-located with their victims.

The paper called for regulators and lawmakers to ensure that XR harms are covered by existing or new legislation and policy. It also called for meaningful consultation across a wide range of stakeholders, for XR tech platforms to implement safety by design measures to prevent harm from occurring in the first place, and for further investment in research and development to support criminal investigation and prosecutions.  

In particular, the paper pre-empted the need for “innovative thinking and new tools for digital investigation and digital forensics, and consideration of how evidence of child abuse activity in XR could be laid before a jury.” These are the same issues now being discussed in the UK case that emerged this week.

Our concerns about XR were also reinforced in our 2023 Global Threat Assessment report last October, which highlighted that, in a world-first, UK police forces recorded eight instances of VR use in child sexual abuse-related crime reports in 2022.  

According to the Threat Assessment, the global market for XR is forecast to surpass $1.1 trillion by 2030. It is likely offenders will increasingly exploit XR technologies as they become more accessible and affordable. While there are signs of slowing enthusiasm and investment in the metaverse, the general upward trajectory remains undeniable.

There is some encouraging proactive work being done to protect children. For example, the XR Safety Initiative is working on a child safety framework that will identify ways of safeguarding children in XR, and the IEEE Global Initiative on Ethics of Extended Reality brings together interdisciplinary, cross-sector groups to identify standards and practice recommendations to increase the ethical and safe development of XR technologies. Pockets of good practice exist within industry, and safety by design principles continue to be at the forefront of regulation in countries like Australia and the UK.

However, regulation and safeguards are still lagging behind the development of new technologies. We are living in an age of steadily expanding borderless crime, where child sexual abuse and exploitation online continues to grow at an alarming rate, becoming ever more complex as technologies like AI and XR develop, converge and are exploited by offenders. Urgent, coordinated and global action is needed if we are to get ahead of, and prevent, a growing tidal wave.

Read the full reports: 

Extended Reality technologies and child sexual exploitation and abuse – WeProtect Global Alliance 

Global Threat Assessment 2023: Assessing the scale and scope of child sexual abuse online – WeProtect Global Alliance 
