Concerns about the use of artificial intelligence on the internet are growing, given its potential to design powerful toxins, control autonomous missiles, perpetrate online scams, spread misinformation and lies, and create deepfake imagery and pornography.
Children exploiting deepfake imagery for bullying
Australia’s online safety regulator reports that AI-generated sexually explicit imagery and deepfake pornography are being used by children to bully others.
eSafety Commissioner Julie Inman Grant recently said the regulator had received reports of students using AI-generated sexually explicit content to bully other students. (Please see First reports of children using AI to bully their peers using sexually explicit generated images, eSafety commissioner says, ABC News, 16 August 2023.)
“AI-generated child sexual abuse material is already filling the queues of our partner hotlines, NGOs and law enforcement agencies. The inability to distinguish between children who need to be rescued and synthetic versions of this horrific material could complicate child abuse investigations by making it impossible for victim identification experts to distinguish real from fake,” Ms Inman Grant said.
As it becomes harder to tell a real image from a deepfake online, realistic AI-generated images and videos used to bully children and vulnerable people can cause immense harm.
Lagging laws and regulatory efforts to combat deepfake imagery
While existing laws compel legitimate websites to remove sexually explicit material depicting children, overseas platforms beyond the reach of those laws continue to spread it. And even when platforms do remove the images, there is no guarantee they are deleted from personal devices.
As the use of AI grows around the world, so does its potential for harm, and laws aimed at preventing that harm are struggling to keep up. Australia is moving to tighten its laws to ensure AI is used responsibly and safely.
Existing laws on privacy, cybersecurity, data protection, child protection, health, safety and defamation could be used against harmful AI products.
Perpetrators of image-based abuse can be fined up to $110,000 under the Online Safety Act 2021. (For more information, please see Tough new laws against image-based abuse.)
The Australian government plans to introduce legislation regulating AI, including a potential ban on deepfakes and realistic-looking fake content, and measures to close gaps in laws covering copyright, privacy and consumer protection.
But is it too little, too late? AI-generated social media posts have already been used to spread misinformation and disinformation about climate change, vaccination, politics, global conflicts and many other subjects.
AI regulation is a global challenge
Governments around the world are struggling to enact laws regulating the use of AI, underlining the need for greater transparency about what is AI-generated material and what is real. (Please see Can I claim copyright if I write a novel or research paper using generative AI?)
Some want AI-generated content to be watermarked, so users know the material was produced by an AI system. Others argue there must be human oversight of AI models, as AI could create serious risks not just to individuals, but to society as a whole.
In June 2023 the European Parliament became the first legislature in the world to set rules on how companies can use AI when it passed the Artificial Intelligence Act.
The Act makes AI companies accountable for regulatory breaches under threat of heavy fines, and allows governments to bar machines from making highly sensitive decisions. (Please see Europe is leading the race to regulate AI. Here’s what you need to know, CNN Business, 15 June 2023.)
But the European Act will take several years to come into force, and may be watered down as it passes through various committees and member nation parliaments.
In the US, President Biden asked tech companies to voluntarily restrict harmful use of AI, a move that has been criticised as ineffective. (Please see White House promises on AI regulation called ‘vague’ and ‘disappointing’, ComputerWorld Australia, 26 July 2023.)
But laws to curtail the harmful use of AI must have global reach to be effective, and so far there is little sign that authorities in many nations have the will, or the ability, to control its use.