UK-Nigeria Policy Dialogue Calls for Urgent Collaboration to Combat AI-Driven Scams and Protect Youths
Stakeholders in the field of Artificial Intelligence (AI) have convened in Abuja for a critical policy dialogue, emphasizing the urgent need to regulate digital tools so that they enhance productivity rather than being deployed for misinformation, scams, and other social vices.
The event, titled “UK-Nigeria Dialogue on AI, Scams and the Future of the Youths”, was organized under the auspices of the International Science Partnership Fund (ISPF) by researchers from Bangor University, UK. It is part of an ongoing international research and policy initiative focusing on AI-enabled cybercrime and its societal implications.
Participants at the dialogue expressed deep concern over the rapid advancement of AI, noting that it poses significant and multifaceted risks to young people, including challenges to cognitive and emotional development as well as broader societal harms. They highlighted that AI is becoming increasingly embedded in education, social interaction, and daily life, where it could disrupt traditional learning methods, affect mental health, and open new avenues for scams that threaten the future of Nigerian youths.
Experts warned that AI-enhanced scams are reaching unprecedented levels of sophistication. With generative tools such as GPT-4, fraudsters can now create convincing photorealistic images, fake documents, and realistic deepfake voices, enabling them to deceive at a scale never seen before.
“Fraudsters are always looking for new ways to trick people, and generative artificial intelligence technology is giving them powerful new tools to do so at a larger scale than ever before,” participants noted, underscoring the urgent need for collaborative solutions.
Despite these warnings, facilitators and tech experts stressed that AI is not inherently a threat but a tool for progress. They called for governments, civil society, regulators, and private organizations to work together to implement training programs and digital literacy campaigns designed to equip African youths with the skills needed to thrive in a technology-driven world.
They advocated for comprehensive, intentional, and technology-driven interventions to ensure young people are well-informed about AI-related risks and the potential for technological distortion.
One of the facilitators, Prof. Vian Bakir of Bangor University (UK), highlighted the vulnerability of countries such as Nigeria, where illiteracy levels are high.
She said:
“It is a similar situation because people are people; I would imagine how people cope. The issue is getting people to understand that there is a problem and that what they are seeing is an AI deepfake.
“I think it is time that the government, civil society and other stakeholders became concerned, by first accepting that there is a problem, then sharing knowledge and making concerted efforts to solve it.”
Another facilitator, Ugochukwu Chimezie, a doctoral researcher at Bangor University, called on the Nigerian government to develop a systematic strategy to ensure that youths are well-informed about the dangers of AI. He emphasized the importance of education tailored to Nigeria’s diverse linguistic landscape, stating:
“We have different languages in Nigeria. The government can start by teaching people about AI in their different languages; it will help.”
At the conclusion of the workshop, participants advised the public to remain vigilant against emerging AI-driven scams, including:
1. Voice cloning scams – where scammers mimic the voices of family members or other trusted individuals.
2. Fake news scams – AI-generated news articles designed to appear as if from legitimate sources.
3. Other fraudulent activities carried out using AI technology.
The dialogue ended with a call for continued collaboration, awareness, and proactive intervention, highlighting that the fight against AI-enabled scams requires the concerted efforts of governments, civil society, and technology experts alike to safeguard the future of Nigeria’s youth.
