Algorithms can prevent online abuse

THIS ARTICLE/PRESS RELEASE IS PAID FOR AND PRESENTED BY NTNU Norwegian University of Science and Technology

According to the Norwegian Broadcasting Corporation, the number of cases of child abuse via the Internet has increased by almost 50% in five years. NTNU researchers in Gjøvik have developed algorithms that can help detect planned online grooming by analyzing conversations.

Millions of kids log on to chat rooms every day to talk with other kids. One of those “kids” might just be a man posing as a 12-year-old girl with far more sinister intentions than discussing My Little Pony episodes.

Patrick Bours, an NTNU professor and the inventor behind AiBA, strives to prevent this type of predatory behavior. AiBA, a digital AI moderator that Bours helped found, offers a tool based on behavioral biometrics and algorithms to detect sexual abusers in online chats with children.

As recently reported by Dagens Næringsliv, a Norwegian financial newspaper, the company has raised NOK 7.5 million in capital, with investors including Firdah and Ski Capital, two companies based in Norway.

In its most recent efforts, the company is working with 50 million chat lines to develop a tool that will find high-risk conversations where abusers are trying to get in touch with children. The goal is to identify the distinguishing characteristics of what abusers leave behind on gaming platforms and social media.

“We are targeting major game producers and hope to get a few hundred games onto the platform,” Hege Tokerud, co-founder and managing director, told Dagens Næringsliv.

Cybergrooming, a growing problem

Patrick Bours is Professor of Information Security at NTNU in Gjøvik. He came up with the idea for a tool capable of detecting online predators, which is marketed through a start-up called AiBA.

Cybergrooming is when adults befriend children online, often using a fake profile.

However, “some sexual predators come right up and ask if the kid is interested in chatting with an older person, so there’s no need for a fake ID,” Bours says.

The abuser's goal is often to lure the kids onto a private channel, get them to send in photos of themselves, with and without clothes, and possibly even set up a meeting with the youngster.

Abusers don’t care so much about sending in photos of themselves, Bours says.

“Exhibitionism is only a small part of their motivation. Getting images is much more interesting for them, and not just still images, but live images via a webcam,” he says. “Monitoring all these conversations to prevent abuse is impossible for moderators who monitor the system manually. What’s needed is automation that flags ongoing conversations for the moderators.”

AiBA has developed a system using multiple algorithms that provides large chat companies with a tool that can discern whether adults or children are chatting. This is where behavioral biometrics comes in.

A grown man can pretend to be a 14-year-old boy online. But the way he writes, such as his typing rhythm or his choice of words, can reveal that he is a grown man.
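
As a rough sketch of the idea (not AiBA's actual method), typing rhythm can be boiled down to simple timing features, such as the pauses between keystrokes. The function and numbers below are hypothetical:

```python
# Minimal sketch of keystroke-timing features (illustrative only, not AiBA's method).
# Input: a list of (key, timestamp_in_ms) events captured while a user types.
from statistics import mean, stdev

def rhythm_features(events):
    """Reduce raw keystrokes to simple timing statistics."""
    times = [t for _, t in events]
    # Inter-key intervals: the pause between consecutive keystrokes.
    intervals = [b - a for a, b in zip(times, times[1:])]
    return {
        "mean_interval_ms": mean(intervals),
        "interval_stdev_ms": stdev(intervals) if len(intervals) > 1 else 0.0,
        "keys_per_second": 1000 * len(times) / (times[-1] - times[0]),
    }

# Example: a fast, even rhythm, such as an experienced adult typist might produce.
fast = [("h", 0), ("e", 90), ("l", 180), ("l", 265), ("o", 350)]
print(rhythm_features(fast))
```

Features like these, aggregated over a whole conversation, can then be fed to a classifier alongside word-choice features.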

Machine learning key

The AiBA tool uses machine learning methods to analyze all chats and assess the risk based on certain criteria. The risk level may rise and fall during the conversation as the system evaluates each message. If the risk level gets too high, a red warning symbol lights up in the chat, alerting the moderator, who can then watch the conversation and assess it further.

This way, algorithms can detect conversations that need to be checked while they’re in progress, rather than after, when the damage or abuse might have already happened. The algorithms thus serve as a warning signal.
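
A minimal sketch of how such a running risk score could work, assuming a hypothetical `score_message` stand-in for the trained model; the threshold, smoothing weight, and keyword heuristic are invented for illustration and are not AiBA's:

```python
# Illustrative sketch of a per-message running risk score (not AiBA's actual system).
ALERT_THRESHOLD = 0.5   # hypothetical level at which the moderator is alerted
SMOOTHING = 0.5         # how much weight the previous risk level keeps

def score_message(text: str) -> float:
    """Stand-in for a trained classifier; here just a keyword heuristic."""
    red_flags = ("secret", "don't tell", "private channel", "send a photo")
    return min(1.0, 0.4 * sum(flag in text.lower() for flag in red_flags))

def monitor(messages):
    risk = 0.0
    for msg in messages:
        # The risk level drifts up or down a little with each new message.
        risk = SMOOTHING * risk + (1 - SMOOTHING) * score_message(msg)
        status = "ALERT moderator" if risk >= ALERT_THRESHOLD else "ok"
        print(f"risk={risk:.2f} [{status}] {msg!r}")

monitor([
    "hi! which server do you play on?",
    "let's keep this a secret, come to a private channel",
    "can you send a photo? don't tell anyone",
])
```

In this toy run, the first message scores low, and the risk only crosses the alert threshold once several suspicious messages accumulate, mirroring the idea that the level moves up and down as the conversation unfolds.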

Cold and cynical

Bours analyzed many chat conversations from old logs to develop the algorithm.

“By analyzing these conversations, we learn how such men ‘groom’ the recipients with compliments, gifts and other flattery, so that they reveal more and more. It’s cold, cynical and carefully planned,” he says. “Reviewing chats is also part of the learning process, so we can improve the AI and make it respond better in the future.”

Some adults contact children and young people online using a fake profile. Their goal is often to lure kids onto a private channel so they can send in pictures of themselves, with and without clothes, and maybe eventually meet the youngster.

“The danger of this type of contact ending in assault is high, especially if the abuser moves the conversation to other platforms with video, for example. In a live situation, the algorithm would mark this chat as one that needs to be monitored,” he said.

Real-time analysis

“The objective is to flag an abuser as quickly as possible,” explains Bours. “If we wait for the whole conversation to end, and the two parties have already made arrangements, it might be too late. The monitor can also tell the child in the chat that they are talking to an adult and not to another child.”

AiBA has worked with gaming companies to install the algorithm, and is collaborating with MoviestarPlanet, a Danish game and chat platform aimed at children with 100 million players.

While developing the algorithms, the researcher discovered that users write differently on different platforms such as Snapchat and TikTok.

“We need to consider these distinctions when we train the algorithm. Same with language. The service must be developed for all types of languages,” explains Bours.

Examining chat patterns

More recently, Bours and his colleagues have looked at chat patterns to see which ones deviate from what would be considered normal.

“We analyzed chat patterns, rather than the texts themselves, from 2.5 million chats, and were able to find several instances of grooming that would otherwise have gone undetected,” Bours said.

“This initial research looked at the data retrospectively, but we are now investigating how we can use it in a system that directly tracks these chat patterns and can make immediate decisions to report a user to a moderator,” he says.
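
A minimal sketch of what pattern-based, rather than text-based, screening could look like: compute simple metadata features per conversation and flag the ones that sit far from the average. The features and z-score cutoff here are assumptions for illustration, not the researchers' actual method:

```python
# Illustrative sketch: screening chats by metadata patterns, not message text.
# Assumes at least two conversations, each with at least two messages.
from statistics import mean, stdev

def pattern_features(conv):
    """conv: list of (sender, message_length, timestamp_in_seconds) tuples."""
    lengths = [length for _, length, _ in conv]
    gaps = [b[2] - a[2] for a, b in zip(conv, conv[1:])]
    senders = [sender for sender, _, _ in conv]
    return (
        mean(lengths),                          # typical message length
        mean(gaps),                             # typical pause between messages
        senders.count(senders[0]) / len(conv),  # how one-sided the chat is
    )

def flag_unusual(conversations, cutoff=2.0):
    """Flag conversations whose features deviate strongly from the average (z-score)."""
    feats = [pattern_features(c) for c in conversations]
    flagged = set()
    for dim in range(3):
        col = [f[dim] for f in feats]
        mu, sd = mean(col), stdev(col)
        for i, value in enumerate(col):
            if sd > 0 and abs(value - mu) / sd > cutoff:
                flagged.add(i)
    return sorted(flagged)
```

A retrospective analysis could run something like `flag_unusual` over stored logs; a live system would instead update the features incrementally and report a user to a moderator as soon as a conversation crosses the cutoff.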

AiBA developed its algorithm with the financial support of NTNU Discovery, NFR FORNY and NTNU Information Security and Communications Technology Department. NTNU Technology Transfer handled the IPR and commercialization work.

What is behavioral biometrics?

Behavioral biometrics relates to how you do things, such as how you type on your cell phone or computer. If researchers understand how you use the keyboard, including your rhythm and how hard you press the keys, they can determine with 80% accuracy whether you are male or female, young or old.

And if the analysis also includes the words you use, it can recognize your gender and age more than 90% of the time.
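
As a toy illustration of how such traits could feed a classifier (the feature values, labels, and library choice are invented for this example, not the researchers' model):

```python
# Toy sketch: classifying adult vs. child typists from two features (illustrative only).
# Features per user: [mean inter-key interval in ms, average word length].
from sklearn.linear_model import LogisticRegression

X = [
    [120, 4.8], [135, 5.1], [110, 5.0],   # hypothetical adult typists
    [260, 3.5], [240, 3.9], [280, 3.2],   # hypothetical child typists
]
y = ["adult", "adult", "adult", "child", "child", "child"]

model = LogisticRegression().fit(X, y)
print(model.predict([[125, 4.9], [250, 3.6]]))  # -> ['adult' 'child']
```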

Sharon D. Cole