In the span of just 24 hours this week, a viral video featuring Senator Adams Oshiomhole became the center of a digital tug-of-war. But the real story isn’t just about the video’s content; it’s about the alarming unreliability of the tools we are using to seek the truth.
When a 30-second clip surfaced showing the former Edo State Governor in a private jet with a South African model, the public did what has now become standard practice: they turned to Artificial Intelligence for a verdict.
The result was a masterclass in digital contradiction.
The Flip-Flop: 90% Confident, 100% Uncertain
On February 4, 2026, an analysis by Claude AI labeled the video with “STRONG INDICATORS” of being AI-generated, citing technical anomalies like “unnatural skin texture” and “edge artifacts.” It assigned a 90% confidence level to the claim that the video was a deepfake.
Less than a day later, presented with the same or similar footage, the same AI retracted its stance. It declared the video “authentic,” praising its “realistic motion blur” and “proper aircraft geometry.”
This 180-degree turn exposes a terrifying reality for the public: AI “hallucinations” are now masquerading as forensic science.
The “Linguistic Forensic” Trap
The danger lies in how LLMs (Large Language Models) work. Unlike dedicated forensic software that analyzes file metadata and lighting consistency, conversational AIs like Claude or ChatGPT are designed to be helpful and persuasive. When asked “Is this a deepfake?”, the AI tends to produce an answer that fits the framing of the question. It generates a report that sounds scientific, using terms like “morphological changes” and “chromatic aberration”, even though it is essentially guessing from low-resolution, heavily compressed social media footage.
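For the technically curious, here is a minimal sketch of what “looking at the file itself” actually means. It uses ffprobe, the inspection tool that ships with FFmpeg, to dump a clip’s container metadata: the encoder, creation time, and stream parameters that platforms tend to strip or rewrite on every re-upload. The filename is a placeholder, and this is not a deepfake detector; it only surfaces file-level evidence that a chatbot answering from a screenshot never sees.

```python
# Minimal sketch: dump a video's container metadata with ffprobe (ships with FFmpeg).
# NOT a deepfake detector; it only surfaces file-level clues (encoder, creation time,
# stream parameters) that re-uploads tend to strip or rewrite.
import json
import subprocess

def probe_metadata(path: str) -> dict:
    """Return ffprobe's JSON description of the file's format and streams."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    info = probe_metadata("suspect_clip.mp4")  # placeholder filename
    fmt = info.get("format", {})
    print("Container:", fmt.get("format_name"))
    print("Tags (encoder, creation_time, etc.):", fmt.get("tags", {}))
    for stream in info.get("streams", []):
        print(stream.get("codec_type"), stream.get("codec_name"),
              stream.get("width"), stream.get("height"))
```

If the tags come back empty, that often means the clip has already been laundered through at least one re-encode, which is exactly the context a chatbot’s confident verdict ignores.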
Why This Matters for Nigeria
In a politically charged environment, the stakes are existential:
The “Liar’s Dividend”: When AI tools contradict themselves, it becomes easier for public figures to dismiss real evidence as “just another AI glitch.”
Automated Defamation: A “90% confidence” report from a widely used AI can destroy a reputation in minutes, even if that report is retracted hours later.
The Death of Evidence: If the public cannot trust the eyes of the camera OR the “brain” of the AI, objective truth effectively dies.
Expert Warning: “Seeing is No Longer Believing”
Digital forensic experts warn that the public must stop using conversational chatbots as “truth machines.”
“Asking a chatbot to authenticate a deepfake is like asking a poet to perform heart surgery,” says one digital security analyst. “It knows the vocabulary of the field, but it doesn’t have the tools to do the work.”
The Verdict for the Public
Until standardized, regulated, and specialized deepfake detection tools are accessible to the masses, the rule of thumb remains: If an AI tells you a video is fake, and then tells you it’s real, the only thing that is “fake” is the AI’s certainty.
Here is a practical guide. These “Digital Hygiene” tips are designed to help you navigate the confusing middle ground where AI tools may fail or contradict themselves.
DIGITAL HYGIENE: 5 Rules for Navigating the Deepfake Era
In a world where even the most advanced AI can flip-flop on the truth, your best defense is a “Skeptic-First” mindset. Use these rules before you hit Share.
1. Stop “Chatbot Checking”
Never rely on conversational AI (like Claude, ChatGPT, or Gemini) to authenticate a video. These tools are built for language, not forensic pixel analysis. They can “hallucinate” technical reasons why a video is fake just as easily as they can hallucinate reasons why it is real.
The Rule: A chatbot’s verdict is an opinion, not a forensic fact.
2. Seek the “Original Source” Footprint
Deepfakes often thrive in the “re-upload cycle.” If you see a video on WhatsApp or X (formerly Twitter), look for where it first appeared.
The Rule: Use Google Reverse Image Search on a screenshot of the video. If the video originated from a known parody account or a South African influencer’s TikTok (as seen in the Oshiomhole case), the context tells you more than the pixels ever will.
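If you want a cleaner image to search than a phone screenshot, the sketch below pulls a few evenly spaced frames out of a clip so you can upload them to Google Reverse Image Search by hand. It assumes Python with OpenCV installed; the filename is a placeholder.

```python
# Minimal sketch: save a few evenly spaced frames from a clip for reverse image search.
# Assumes OpenCV (pip install opencv-python); "suspect_clip.mp4" is a placeholder.
import cv2

def export_frames(path: str, count: int = 4) -> None:
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    for i in range(count):
        # Jump to evenly spaced positions through the video.
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(total * i / count))
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(f"frame_{i}.jpg", frame)
    cap.release()

export_frames("suspect_clip.mp4")
```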
3. The “Emotion” Red Flag
Deepfakes are almost always designed to trigger a massive emotional response: outrage, mockery, or shock.
The Rule: If a video makes you want to immediately scream or share it in a group chat, wait 20 minutes. Malicious actors rely on your adrenaline to bypass your critical thinking.
4. Look for “Physics Failures” (The Basics)
While AI is getting better at faces, it still struggles with the physical world. Watch the background and the edges.
The Rule: Look for clipping (objects passing through each other), flickering around the jawline, or gravity errors (how clothes move or how hair reacts to wind). In the Oshiomhole video, the controversy centered on whether the lighting on the face matched the lighting in the cabin.
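No script will catch these failures for you, but a crude frame-difference pass can tell you where to slow down and look. The sketch below (again assuming OpenCV; the cutoff is an arbitrary assumption, not a calibrated detector) flags frames that change far more than their neighbours, which is often where flicker or warping hides at full playback speed.

```python
# Crude heuristic, NOT a deepfake detector: flag frames whose change from the
# previous frame is unusually large, so you know where to pause and inspect
# edges, jawlines, and backgrounds by eye. The cutoff is an arbitrary assumption.
import cv2
import numpy as np

def flag_jumpy_frames(path: str, sigma: float = 3.0) -> list[int]:
    cap = cv2.VideoCapture(path)
    diffs, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diffs.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    if not diffs:
        return []
    diffs = np.array(diffs)
    cutoff = diffs.mean() + sigma * diffs.std()
    # Frame index i+1 changed sharply relative to frame i.
    return [i + 1 for i, d in enumerate(diffs) if d > cutoff]

print(flag_jumpy_frames("suspect_clip.mp4"))
```

Ordinary scene cuts will trigger it too; the point is only to give your eyes a shortlist of moments worth pausing on.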
5. Triangulate the Truth
One source is never enough. If a video shows a major public figure in a compromising position, check if reputable, verified news outlets are reporting it as a fact.
The Rule: If the only people sharing it are anonymous accounts and “viral” pages, treat it as a fabrication until proven otherwise by professional journalists.
Pro-Tip: Use dedicated forensic tools like InVid WeVerify or Deepware Scanner if you want a technical second opinion, but remember: even they aren’t 100% foolproof yet.

