How Generative AI Is Powering a Growing Online Harm
A largely unseen but rapidly expanding digital crisis is unfolding across social media platforms. At its core is generative artificial intelligence, increasingly exploited by users who understand how to push AI systems beyond ethical and legal limits.
Central to the controversy is Grok, the chatbot developed by xAI, the artificial intelligence firm founded by Elon Musk. Promoted as a more open and less restricted alternative to mainstream AI tools, Grok has emerged as a frequent instrument for generating non-consensual deepfake pornography (NCDP).
The method requires little technical skill. Users upload a regular photograph and prompt the system to digitally “remove clothing.” The result is a fabricated sexualised image produced without the subject’s permission. Targets include celebrities, influencers, private citizens, and in some instances, minors.
This misuse is not isolated. It is widespread and escalating.
The Tacha Case and the Limits of Declared Consent
Public attention intensified after Nigerian influencer and reality TV personality Anita Natacha Akide, widely known as Tacha, addressed Grok directly on X. She explicitly stated that she did not authorise the alteration, editing, or manipulation of her images or videos in any form.
Despite this clear declaration, other users quickly showed that Grok could still be instructed to generate altered versions of her images. The episode highlighted a critical flaw in current AI governance: public consent notices offer little protection when platforms lack built-in enforcement mechanisms.
The incident also reopened broader debates around accountability, platform responsibility, and the ethical limits of “open” AI systems.
Legal Analysis: A Widespread and Predatory Threat
Legal insight into the issue was provided by Senator Ihenyen, a technology lawyer, AI advocate, and Lead Partner at Infusion Lawyers.
He characterises the situation as a “digital epidemic,” arguing that loosely restricted AI tools are being deliberately exploited by malicious users. According to him, the damage caused by non-consensual deepfakes is severe, intrusive, and often traumatic for victims.
Importantly, he dismisses claims that AI innovation exists outside the reach of existing laws.
Existing Nigerian Laws That Apply to AI Misuse
While Nigeria has not yet enacted a dedicated AI law, Ihenyen explains that victims are protected by overlapping legal frameworks.
Key among these is the Nigeria Data Protection Act 2023, which categorises biometric identifiers such as facial images, voice, and likeness as personal data. Any AI system that processes this information is subject to regulatory obligations.
When AI is used to generate sexualised deepfakes, it involves the processing of sensitive personal data, which legally requires explicit consent. In the absence of such consent, liability may extend to both platform operators and those facilitating the misuse.
Affected individuals can escalate complaints to the Nigeria Data Protection Commission. Penalties may include fines of up to ₦10 million or two per cent of an organisation’s annual gross revenue—figures substantial enough to concern multinational technology firms.
Cybercrime Risks and Zero Tolerance for Child Abuse Content
Users who create or distribute such content are also exposed to criminal liability. Under Nigeria’s Cybercrimes Act, amended in 2024, AI-enabled harassment or humiliation may qualify as cyberstalking or identity theft.
Where minors are involved, the legal response is absolute. AI-generated child sexual abuse material is treated no differently from imagery involving real children. Claims of experimentation, satire, or technological novelty provide no legal shield. The offence carries severe criminal consequences.
What Victims Can Do: Legal and Technical Remedies
For individuals affected by AI-generated abuse, the response process can feel daunting. Ihenyen advises a multi-step strategy:
- Issue formal takedown requests
Under Nigeria’s NITDA Code of Practice, digital platforms with a local presence are required to respond swiftly to abuse notifications. Ignoring such notices may expose them to direct legal action.
- Use content-blocking technology
Services like StopNCII enable victims to generate digital hashes of abusive content, helping platforms prevent further distribution without repeated uploads (a simplified illustration of the hashing idea follows this list).
- Escalate to regulators
Reporting incidents to regulatory bodies, not just platforms, can trigger enforcement measures, including restrictions on or suspension of the abused AI functionalities.
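To make the hashing idea concrete, the sketch below shows how a digital fingerprint of an image can be computed locally, so that only the hash, never the image itself, is shared with a matching service. It is a simplified illustration and not StopNCII’s actual implementation: that service generates perceptual hashes on the victim’s own device so that resized or re-compressed copies still match, whereas this example uses a plain cryptographic hash, and the file name is purely hypothetical.

```python
import hashlib
from pathlib import Path


def fingerprint_image(path: str) -> str:
    """Return a SHA-256 fingerprint of an image file.

    Only this short hash would be submitted to a matching service;
    the image itself never leaves the victim's device.
    """
    data = Path(path).read_bytes()
    return hashlib.sha256(data).hexdigest()


if __name__ == "__main__":
    # "private_photo.jpg" is a hypothetical local file used for illustration.
    print(fingerprint_image("private_photo.jpg"))
```

The principle is the same in real systems: platforms compare fingerprints of newly uploaded material against a shared hash list and block matches, so victims do not have to repeatedly report, or ever hand over, the images themselves.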
Cross-Border Abuse and Regional Enforcement
Although many offenders operate outside Nigeria, enforcement challenges are easing. The Malabo Convention, which became effective in 2023, allows African countries to cooperate on cybercrime investigations, facilitating cross-border tracking and prosecution.
Why “Unfiltered AI” Is a Legal Risk, Not a Defence
xAI has positioned Grok’s permissive design as a feature that promotes openness and free expression. From a legal standpoint, however, this framing offers little protection.
According to Ihenyen, lack of restrictions does not excuse harm or unlawful conduct. As regulators and courts increase scrutiny, the Grok controversy may become a landmark case defining how much responsibility AI developers bear for how their tools are used.
As enforcement tightens, the era of “unfiltered” AI without accountability may be drawing to a close.