
How predators are using AI in gaming chatrooms to target children


By Nick Francis, Co-founder of Digital Safety Squad.


Artificial Intelligence is considered one of humanity’s greatest technological achievements - as world-advancing as the advent of electricity, the printing press and the World Wide Web. 

And deservedly so, I would argue. Free-to-use AI platforms like ChatGPT and Google’s Gemini allow us to perform at levels we never could - we can create professional-standard images for work presentations, summarise 50-page legal documents in seconds, or help dyslexic children write a story. I used AI the other day to diagnose a strange noise my car was making (I’m not in the slightest mechanical).


But just as the internet’s ability to connect us with strangers was manipulated by groomers to stalk victims in new virtual environments like gaming chatrooms, AI’s capabilities are being twisted by those same bad actors. 


The internet allows predators access to their prey. Artificial Intelligence has given those same predators a frightening new toolkit. 


Different AI-powered grooming methods

Sadly, we’re no longer speculating about the dangers AI poses to our kids; the Breck Foundation’s 2023 AI grooming warnings have materialised. In the first half of 2025 alone, according to the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline data, reports relating to AI and child sexual exploitation jumped from 6,835 to 440,419 - a rise the organisation declared a "wake-up call."


By the end of 2025, the NCMEC had received more than 1.5 million AI-related reports, including over 30,000 involving predators uploading real images of children to generate explicit material with AI. A further 3,000 specifically involved AI being used to groom children through chatrooms, such as those found on Roblox and Fortnite.


Deepfake sextortion

So-called ‘deepfakes’ aren’t a new phenomenon, but the recent arrival of AI has taken them to frightening extremes. Where once a predator needed high-level Photoshop skills to create bogus pictures by combining a victim’s face with a pornographic image, misusing AI image tools makes it as easy as a few clicks of a button. Even scarier is the emergence of AI-powered ‘nudification tools’ - apps developed to digitally undress innocent images of people.

What we’re seeing today is a rise in deepfake porn and blackmail: attackers using AI to manipulate images of kids found on gaming platforms and social media into explicit material, then sending the images to the victim with the threat that they will distribute them unless demands are met. In some cases the attacker demands money; often, they demand real explicit images from the child.


Research in 2025 by child safety nonprofit Thorn found that 13% of sextortion victims reported being extorted using AI-generated deepfake nudes, and that one in seven victims were driven to self-harm as a result of their experience. In the UK, the Internet Watch Foundation (IWF) processed 245 reports in 2024 containing AI-generated child sexual abuse images - a 380% increase on 2023. Many of the reports came directly from children describing 'fake' or 'AI-edited' images of themselves.

In 2025 in the US, 16-year-old Elijah Heacock took his own life after being sent AI-generated sexual images of himself - his attackers were demanding $3,000 to prevent the images being circulated. The quote from his mother is heartbreaking:  “This person was asking for $3,000 from a child, and now we're looking at $30,000 to bury our son and medical bills."


AI-generated grooming scripts and personas

Online grooming is the abhorrent and predatory behaviour we are sadly all too familiar with. It’s the very reason the Breck Foundation came to be; if you haven’t read Breck’s harrowing and tragic story, I urge you to do so. But just like deepfake pornographic images, AI has ferociously accelerated the sophistication and scale of grooming attacks.

Before AI, online grooming took time - Breck himself was groomed over the course of a year by his murderer. What’s more - as disgusting as it is - grooming required a degree of psychological skill on the part of the attacker. Groomers learn over time what language their victim responds to, what triggers their emotions, where their psychological weaknesses lie. The longer a child is being groomed, the more chance parents or teachers have of spotting the signs.


But today, an attacker can feed a child’s gaming chat log, social media profile or YouTube comment history into an AI platform and in seconds be delivered a tailored script that surfaces the language the child speaks, their interests, plus any emotions or experiences the groomer can mirror back to earn the child’s trust. 

Then there’s the scale. AI chatbots can be deployed to hold authentic-seeming online conversations in chatrooms via fake personas. The chatbots mimic the speech patterns of the child while collating information about them with chilling accuracy, and because the groomer does not need to engage directly, they can use chatbots to groom dozens of kids at the same time.  


The scale of AI-assisted chat grooming is already being tracked by authorities. In 2025, NCMEC recorded 3,000 reports specifically involving AI being used to groom children through chat, with the organisation declaring that AI technology can now "simulate the experience of an explicit chat with a child." 


Identifying vulnerable children


AI has also expanded the scale and scope of predators’ targeting: it is being used to instantly find kids who are more susceptible and vulnerable to grooming than others. Earlier in 2026, the United Nations issued a warning that “predators can use AI to analyse a child’s online behaviour, emotional state, and interests to tailor their grooming strategy.”

Any digital footprint left by a child - whether in a gaming chatroom, a social media account or a forum - can be scanned by AI tools for weaknesses.

Predators can effectively build a psychological profile in seconds based on what sort of things a kid comments on online, which platforms they’re using and how long they spend using certain apps.


This means a seemingly innocent rant about having a bad day at school or an argument with their best friend can be immediately identified and used as a way in for the groomer - catching the child when they’re in an emotionally raw state. 

Gaming platforms are specifically targeted in this way by groomers. Kids feel safer in these environments than they do on social media platforms. They believe they’re only speaking with like-minded peers, rather than posting to the world, and let their guard down accordingly. 


Alarmingly, this isn't always the work of lone predators operating from their bedrooms anymore. The National Crime Agency warns that many of these operations are run by sophisticated, organised criminal networks - groups that use AI to process thousands of potential child victims at scale, identifying the most vulnerable and passing them down the line for targeted grooming. 


AI voice cloning

An emerging threat comes in the form of voice cloning. AI-powered tools such as ElevenLabs can clone a person’s voice with as little as a few seconds of audio, enabling a groomer to perfectly impersonate someone their victim trusts, such as their gaming friends.

Voice-enabled chatrooms on games like Fortnite and Call of Duty give more than enough access for one of these tools to create a fake voice. 

Predators can also use AI to create authentic-sounding kids' voices, without raising suspicion when making contact in chat rooms. A few years ago, a 40-year-old man would struggle to impersonate a 14-year-old girl, but today it’s easy.   

Adults can also be vulnerable to AI voice cloning attacks. Scammers are recording people’s voices during innocent-seeming ‘lifestyle surveys’ over the phone, then using the information and cloned voice audio to simulate consent for things like direct debits.  


The warning signs

The general signs your child is being groomed online are well documented; the Breck Foundation’s guidance covers them clearly. But there are a number of signs that your kid could be a victim of AI-enabled grooming, specifically.


They believe they have a special relationship with someone they’ve only met online

AI is enabling predators to create artificial bonds with kids in ways not possible before. Language mirroring, remembering key details from conversations, matching their interests and experiences - it’s incredibly easy for kids to feel they have met someone who uniquely understands them, when in fact it’s an AI chatbot on the other end. If your child starts speaking about a gaming friend in unusually intimate terms or describes them as their best friend despite never having met, you should investigate.


They receive unsolicited images or files from gaming friends

This is a red flag for sextortion. Once a predator has created a fake image of a child using AI, their next step is to send it to them, usually on the platform they took the image from, such as a gaming chatroom or social media account. If your child becomes visibly distressed over a file they just opened, or immediately closes the screen down, it’s time for a chat.


They receive in-game gifts from strangers

AI is enabling predators to find kids who would respond to being contacted online, and one of the most common opening gambits is sending the child an in-game gift: Robux, V-Bucks, new character skins. If you notice your child suddenly has in-game currency or items you didn’t pay for, you need to find out who gave it to them.


They seem distressed or unsettled after voice or video calls with gaming friends

If your child seems distressed, confused or withdrawn after speaking to someone verbally online, especially someone you’ve not heard of before, it could be a sign they are being groomed via a fake AI voice persona. Predators gain access to chatrooms by faking a child’s voice, but often move to discussing unsettling things very quickly.  


How can parents safeguard against AI-powered grooming?

The Breck Foundation recently surveyed 1,000 parents and found that four out of five are concerned about their child being groomed online - fears that are heightened by the AI era. My co-founder Jade Artry and I launched Digital Safety Squad because we saw how the emergence of AI is making the online world even less safe than it already was, especially for young people. We cover topics like the rise of AI girlfriends (yes, really), AI-enabled cyber bullying and how deepfake and voice cloning attacks work.


But there’s a common thread to all the advice we give, no matter which aspect of AI we are discussing: education and conversation.


Understand AI and talk about it with your kids

We should really feel sympathy for our kids. It’s always been difficult to know what’s real online - the internet is a place where anyone can post anything, there are no guardrails. But AI has made it a thousand times harder. 


The key is to discuss the various tools - chatbots, image platforms, generative AI apps - and make sure your child understands how they can be misused by the wrong people. You don't need to be a tech expert to do this - you just need to ask questions, stay curious yourself and make it clear they can come to you without fear of judgement if something feels wrong. Which leads me to the next, very important point.


Foster a judgement-free home environment

When kids don’t report encountering something dangerous or frightening online, it’s usually because they are scared they will get in trouble, or they feel shame over it. If your kid feels they won’t be judged or punished for something like seeing a pornographic image, or being conned into sending someone money, they are much more likely to tell you when they’re in danger. Be open and non-judgmental, and make sure they know they won’t get in trouble for telling you something. Having a kid who is confident to talk to their parents is a far more effective safeguard than any parental control tool.


Check and tighten privacy settings on gaming platforms

Gaming platforms and apps like Roblox and Discord have in-built privacy settings, and it’s important you take full advantage of them. You can limit who is able to contact your child. In particular, most platforms allow you to disable voice chat with strangers. And you can limit who can see or find your kid’s profile. These platforms publish simple-to-follow guides to their privacy settings; you can find some of them here:



Of course, privacy settings are not foolproof, and some kids actively revert them when their parents aren’t looking, so regular check-ins with your kid are just as important as the practical steps.


Get familiar with the reporting tools


If you discover your child is being targeted online, it’s key that you take action immediately, and there are a number of tools to help you do so.


CEOP is the National Crime Agency’s dedicated child protection unit and a partner of the Breck Foundation. Head here to report grooming or online attacks to their team of Child Protection Advisors.


Report Remove is a service run by Childline and the Internet Watch Foundation, and helps remove sexually explicit images and videos of under-18s from the internet.


Childline (0800 1111) is a phenomenal resource for kids themselves, offering guidance and counselling 24/7.


Artificial Intelligence is here to stay, and it’s moving fast. Knowledge is the strongest tool to safeguard against those abusing AI’s capabilities - for both you as a parent and your children. This is why the work of the Breck Foundation is so critical, and also the reason we started Digital Safety Squad. If this piece has alarmed you, that’s good, because it should. But an alarm without action is useless. Use the reporting tools above, have the conversations and make sure your child knows your door is always open.

Take action today and help end online grooming crimes

Only by working together can we help young people reclaim the internet 

