Grace Tame urges government to criminalise AI tools used for child abuse

Former Australian of the Year Grace Tame says Australia urgently needs to act to prevent AI tools from being used to create child abuse material, including by criminalising the possession of freely available child exploitation apps.

Child safety advocates including Ms Tame will meet at Parliament House today ahead of a new term of parliament to address the rise of AI being used to sexually exploit children, as well as opportunities to use AI to detect grooming behaviour and child sexual abuse material.

The meeting comes as a spotlight has turned on what governments are doing to protect children in the week of another horrific alleged case of abuse at a Melbourne childcare centre.

Ms Tame, who rose to prominence campaigning for the right to speak under her own name about her abuse, says the government is moving too slowly.

"I don't think previous governments and, unfortunately, the current government, have acted swiftly enough when it comes to child safety online," Ms Tame said.

"We're familiar with the landscape — we're waiting on effective, urgent government action."

The International Centre for Missing and Exploited Children, which is convening the parliament round table, has advocated for Australia to make it an offence to possess or distribute custom-built AI tools designed to produce child sexual abuse material (CSAM).

Similar legislation has been introduced in the United Kingdom, a development the Australian government has said it is following closely.

Intelligence company Graphika reported late in 2023 that generative AI tools for creating non-consensual explicit imagery had moved from niche internet forums to a "scaled" and monetised online business.

It found there had been more than 24 million unique visits to the websites of 34 of these tools, and that links to access them had risen sharply across platforms like Reddit, X and Telegram.

Worse still, the spread of AI-generated exploitation material is diverting police resources from investigations involving real victims.

'Outdated' child safety plan makes no mention of AI

While possession of CSAM is a criminal offence, advocates say Australia should be following other nations, including the United Kingdom and European Union, in outlawing the AI tools themselves.

"The reason why this round table is really important … is because when we look at the national framework for child protection that was drafted in 2021, it's a 10-year framework and the presence of AI and the harms being caused by AI are actually not mentioned in that framework," ICMEC Australia chief executive Colm Gannon said.

"There has to be regulations put in place to say you need to prevent this from happening, or your platform being used as a gateway to these areas."

"This software [has] no societal benefit, they should be regulated and made illegal, and it should be an offence to actually have these models that are generating child sexual abuse material.

"It is urgent."

Colm Gannon says the government must do more to prevent AI being used to abuse children, but there are also opportunities for AI to be used to aid law enforcement. (Supplied)

Ms Tame said perpetrators were currently able to purchase AI tools and download them for offline use, where their creation of offending material could not be detected.

"It is a wild west, and it doesn't require much sophistication at all," she said.

An independent review of the Online Safety Act handed to the government in October last year also recommended "nudify" AI apps used to create non-consensual explicit material should be banned.

The government has promised to adopt the review's recommendation to impose a "duty of care" on platforms to keep children safe, though that duty is yet to be legislated, and the government has not responded to the review's 66 other recommendations.

In a statement, Attorney-General Michelle Rowland said the use of AI to facilitate the creation of child sexual abuse material was sickening "and cannot continue".

"I am committed to working across government to further consider how we can strengthen responses to evolving harms. This includes considering regulatory approaches to AI in high-risk settings," Ms Rowland said.

"Australia has a range of laws that regulate AI. These include economy-wide laws on online safety."

Michelle Rowland says the government is closely following overseas developments. (ABC News: Matt Roberts)

Chance to use AI to catch more offenders

Advocates are also calling for the government to remove barriers limiting law enforcement's use of AI tools to detect and fight perpetrators of child abuse.

Police have limited their use of facial recognition tools in online child abuse investigations since 2021, when the Privacy Commissioner determined Clearview AI had breached Australians' privacy by scraping biometric data from the web without consent, and ordered that Australian data be deleted and the app banned.

Mr Gannon, a former specialist investigator who has worked on national and international child sexual exploitation cases, said there were, however, existing tools that could be used by law enforcement while protecting the privacy of Australians.

"That's something the government need to actually start looking at: how do we actually provide tools for law enforcement in the identification of victims of child sexual abuse [that are] compliant with privacy laws in Australia?

"We shouldn't disregard the idea of using AI to help us identify victims of child sexual abuse.

"There are solutions out there that would also have good oversight by government allowing investigators to access those tools."

Clearview AI continues to be used by law enforcement overseas to identify child abuse victims and offenders.

He added that Australia should be working with international partners to harmonise its approach to AI safety so that expectations for developers could be clearly set.

Law enforcement in arms race with abusers using AI to evade justice

Advocates have also warned that the spread of unregulated AI tools has enabled child sex offenders to scale up their offending.

Ms Tame said the need for a framework to regulate AI tools extended beyond obviously harmful apps, with offenders using even mainstream AI chatbots to automate grooming behaviour and to seek advice on evading justice or dealing with law enforcement.

"In my own experience, the man who offended against me, as soon as he was notified that he was suspended from my high school, he checked himself into a psych ward," she said.

"We are seeing offenders not only advancing their methods … we're also seeing their sophistication in evading justice."

The government acknowledged last year that current regulations did not sufficiently address the risks posed by AI and that it would consider "mandatory safeguards".

Last month, the eSafety commissioner said technology platforms had an obligation to protect children.

"While responsibility must primarily sit with those who choose to perpetrate abuse, we cannot ignore how technology is weaponised," the commissioner wrote.

"The tech industry must take responsibility to address the weaponisation of their products and platforms."
