
Is artificial intelligence putting children at risk?

Updated: Apr 3

Advances in artificial intelligence (AI) technology can often feel like a double-edged sword, promising exciting digital progress while evoking terrifying science-fiction scenarios. ChatGPT (Chat Generative Pre-trained Transformer), a language-generation AI model, is no exception.

In November 2022, OpenAI launched ChatGPT, a website featuring a text-generating AI programme. Trained to produce human-like text and answer complex questions, ChatGPT generates a response to any prompt it is given. It is the most user-friendly and sophisticated text-generating AI the internet has seen so far.

Microsoft’s chief executive has called this type of AI the next major wave of computing, and the company is reportedly investing $10 billion in OpenAI. However, there is cause for concern. We must be prepared for how ChatGPT’s abilities can be harnessed by online groomers, scammers, and others out to cause harm.
It is still too early to fully understand the impact that this uncharted technology could have on the safety of children. However, waiting until after crimes occur to regulate AI, as is happening now with social media regulation in the Online Safety Bill, would be unacceptable. We must do everything possible at this early stage to minimise the risks.

Groomers can use ChatGPT to generate content quickly and easily for fake news stories, fake profiles and plausible conversations with young people. To test it, we asked ChatGPT to pretend to be a young girl asking to be friends with another child online, and in less than ten seconds it produced the following:

"Username: HorseLover10

Hi there! I saw your profile and I thought we might have similar interests. Would you like to be friends on this platform? I love playing games, reading books, and learning new things :)"

The Breck Foundation was formed to honour the memory of Breck Bednar, a 14-year-old who was groomed and murdered by an online predator in 2014. Over the course of a year, Breck was groomed online: the groomer used lies, manipulation and false promises to gain Breck’s trust, and eventually Breck was lured to the groomer’s flat.

On February 17, 2014, Breck was murdered by his online groomer. As a result of this tragedy, Breck’s mother, Lorin, founded the Breck Foundation. The charity empowers children to recognise the signs of online grooming, exploitation and other online harms.

The advanced text-generation powers of ChatGPT, combined with existing free text-to-image AI, will make it easier than ever for groomers to create fake profiles and target children online. Recently, a series of AI-generated images of young partygoers went viral on Twitter because of the concern caused by their shocking realism. The potential for AI to be used for more sinister motives is alarming, especially given the rise in online grooming crimes (up 80% in the past four years). Whilst ChatGPT in itself won’t encourage people to become online groomers, it does allow anyone to run the conversations they are having with children online through AI technology, making themselves more persuasive and credible to their victims and aiding manipulation. ChatGPT could therefore contribute to a rise in online grooming cases.
OpenAI, the creators of ChatGPT, claim to have strict policies in place to prevent the misuse of their technology and to regularly monitor the platform for potentially illegal activity. However, AI technology with this capability is new and will be used in ways that are not anticipated by any laws, or by the systems creators may have put in place for self-regulation. Because of this, research must be conducted to help us all understand the ways in which AI technology can be used to assist online crimes. Investment is also needed in the development of technologies and systems that can detect AI-generated content. With this knowledge, we will then know what we must teach children to keep them safe from this new danger. Tech companies and the government must lead in these areas and in prioritising education to protect children from AI dangers; we know the most effective safety barrier in protecting children is their own education.

The full scale of the coming danger is unknown and potentially limitless. AI with this level of power and accessibility will undoubtedly reshape our digital landscape, and it is essential that we ensure its safe and responsible use before it becomes another vehicle for predators online.


Written by Michael Buraimoh, Chief Executive of the Breck Foundation.

Take action today and help end online grooming crimes

Only by working together can we help young people reclaim the internet.

