
AI Fakes Flood TikTok: How Ghana Faces New Era of Digital Disinformation


Introduction

AI continues to evolve and improve by the day, with both positive and negative implications. Social media, particularly TikTok, where ideas and resources are exchanged in seconds, has become a volatile environment in which reality can be manipulated at scale.

Tools like Google’s Veo 3, OpenAI’s Sora, Synthesia, Runway, Luma, Pika Labs, Veed.io, Steve AI, LTX Studio, Colossyan Creator, and Animaker have made it possible for anyone to craft media that rivals professional productions.

These platforms offer user-friendly interfaces, enabling users to generate videos from text prompts, create AI-generated avatars with lifelike speech, or seamlessly edit existing footage. For instance, Synthesia can produce a news anchor delivering a scripted report in multiple languages, while Sora can create dynamic scenes from simple descriptions.

As noted by Wistia, a video hosting and marketing platform, such tools are designed for businesses and marketers but are increasingly accessible to the public. They enable rapid content creation with advanced customisation and analytics.

However, these tools come with risks. AI-generated content often exhibits subtle artefacts, unnatural facial movements, distorted background text, or inconsistent lighting, but these diminish as technology improves.

For the average TikTok user, scrolling through a fast-paced feed, such imperfections are easily overlooked, making the content appear authentic at first glance. The accessibility of these tools has allowed hobbyists, influencers, and malicious actors to produce convincing fakes that can deceive millions.

The democratisation of content creation

The democratisation of AI tools has empowered creators worldwide, from small-scale content producers to large media houses. Platforms like Invideo AI and Animaker offer affordable subscriptions or free tiers, making them accessible to individuals in developing nations like Ghana.

However, this same accessibility fuels misinformation, as bad actors exploit these tools to craft deceptive content with minimal cost or technical know-how.

Challenges in detection

Detecting AI-generated content is a cat-and-mouse game. While tools like Hive Moderation can flag synthetic media with 93–99% accuracy by analysing visual and audio cues, the rapid evolution of AI-generated content outpaces detection algorithms. For instance, Google’s Veo 3 embeds watermarks to identify its output, but many creators remove or obscure these markers. This poses a significant challenge for platforms like TikTok, which must balance user freedom with the need to curb harmful content.

TikTok as a disinformation hub

TikTok’s global dominance, boasting over 1 billion monthly active users, many of whom are young and highly engaged, makes it an ideal platform for AI-generated content to thrive.

The platform’s algorithm prioritises videos that elicit strong emotional reactions, whether through awe, outrage, or amusement. It often amplifies unverified content before moderators can intervene.

Unlike X, which employs community notes, or Facebook, with its fact-checking partnerships, TikTok’s moderation lags, with only thousands of accounts removed for misinformation in Q2 2024, a fraction of the millions of flagged posts. 

The viral engine

TikTok’s algorithm is a viral engine, designed to maximise watch time by pushing content that keeps users scrolling. Hashtags like #AIContent and #GenTok have garnered billions of views, reflecting the platform’s appetite for AI-generated media.

Moderation gaps

TikTok’s moderation efforts, while improving, remain insufficient in the face of the growing tide of AI-generated content. The platform relies on a combination of automated detection and human reviewers, but these systems struggle to keep pace with the rapid evolution of AI tools. For instance, while TikTok removed over 140,000 livestreams spreading misinformation about the Israel-Palestine conflict, similar efforts for AI-generated videos are less robust, leaving gaps that bad actors exploit.

The Israel-Iran conflict context

The conflict between Israel and Iran, which escalated in June 2025, provides a fertile backdrop for AI-driven disinformation.

On June 13, 2025, Israel launched Operation Rising Lion, a series of airstrikes targeting Iranian nuclear and military facilities to thwart alleged nuclear weapon development. The attacks killed several Iranian military leaders and damaged key sites, prompting Iran to retaliate with Operation True Promise 3 on June 14, launching ballistic missiles and drones at Israeli targets. 

Rooted in decades-long tensions since the 1979 Islamic Revolution, this conflict has drawn global scrutiny from powers like the United States, Russia, China, and Europe, who are monitoring for potential peace talks or further escalation. 

Amid real footage of missile strikes and military manoeuvres, AI-generated videos have flooded TikTok, falsely depicting events like Tel Aviv’s destruction or Iranian strikes on Israeli airports, exploiting the lack of verified visuals and public interest in the crisis.

Disinformation surge

The scarcity of real-time, verified footage from the June 2025 escalations created a vacuum filled by AI-generated content. For example, a TikTok video falsely depicting the destruction of Tel Aviv, shared across multiple platforms, amassed millions of views, illustrating the viral potential of such misinformation.

Ghana’s alleged involvement in the Israel-Iran conflict

On June 18, 2025, Gai.news (@gai.news), a TikTok account describing itself as a “24hr real News station” with 31.7K followers, posted an AI-generated video claiming “breaking news from the Flagstaff House.” It amassed 401K views and 16K likes.

The video featured a male presenter in a shirt, standing before a building misidentified as Ghana’s Flagstaff House, announcing, “The president has officially agreed to send Ghanaian soldiers and military equipment to support efforts in the escalating conflict between Iran and Israel. The president says that Ghana will support Israel to fight against Iran. If dem born Iran well, make dem try Ghana.”

Fabricated visuals of media OB (outside broadcast) vans branded “Ghana Pulse24”, a non-existent outlet, suggested a live report, lending an air of legitimacy.

Despite a caption labelling it as a “parody” with hashtags #ai #fakenews #flagstaffhouse #ghanatiktok, the video spread rapidly, with approximately 20 accounts from Ghana, South Africa, and Nigeria reposting or reacting to it.

Multiple accounts share fabricated AI content claiming Ghana will fight alongside Israel – TikTok.

Another account, realbloggerroo, reposted the video with a sceptical caption: “How can Ghana support Israel against Iran, please, I think this is false news, Ghana we’re not ready for any war oo.” 

This post garnered 1,687 comments, 25.5K likes, 22,286 bookmarks, and 3,474 shares, indicating significant engagement. Comments on the post reflected confusion, belief, and scepticism, with many users overlooking the parody disclaimer. 

Some users asked, “Is this real?” while others expressed fear, such as “Ghana, we’re not ready for war.” The video’s provocative tone and realistic visuals amplified its impact, highlighting the dangers of AI-generated content when shared out of context.

The TikTok ban hoax

To illustrate the growing danger of AI-generated content, another TikTok account, thisai24, with 39.3K followers, exemplifies the trend of engagement farming through AI-generated fake content. 

Previously averaging 6,000 views per post, thisai24 saw its viewership soar to millions after sharing AI-generated content. On June 20, 2025, the account posted a video falsely claiming that Ghana would ban TikTok, narrated by an AI-generated presenter, citing a supposed TV3 announcement. 

Comments on the post included reactions like “Who else is watching with an empty stomach on 21st of June?” (pretty), “My boyfriend will now get time for me” (Efya), and “Your name is even Ai so how can I believe” (38 BABY), reflecting a mix of humour, belief, and scepticism.

Comparative case: Burkina Faso

Burkina Faso has faced a similar wave of AI-driven disinformation, particularly since President Ibrahim Traoré’s rise to prominence following reported assassination attempts and his partnerships with Russia.

Videos falsely depicting military advancements, such as an aircraft manufacturing industry, have gone viral, with one garnering 142,500 likes and 9,000 bookmarks. 

Actors behind AI-generated content

The creators and amplifiers of AI-generated content on TikTok form a complex ecosystem, driven by diverse motives and methods.

Individual opportunists: Individual creators, often seeking fame or profit, are the primary drivers of AI-generated content. Gai.news, with its self-description as a “24hr real News station,” exemplifies this group, posting the Ghana-Israel-Iran video to capitalise on the conflict’s global attention. The account’s use of a parody disclaimer suggests awareness of the content’s falsity, yet its realistic visuals and provocative narrative were designed to maximise engagement. 

Similarly, thisai24 transitioned from modest viewership to millions by sharing AI fakes, such as the TikTok ban claim, demonstrating how creators exploit trending topics for clout. Other accounts, such as @evrythingai97 and @3amelyonn, have been linked to AI-generated videos about the Israel-Iran conflict, often labelling themselves as AI content creators to attract niche audiences.


The Role of parody labels: Labelling content as “parody” is a common tactic to evade accountability, as seen with Gai.news’ caption: “Breaking news, Ghana pledges soldiers and weapons to join the Iranian and Israeli Battle front This is a parody #ai #fakenews #flagstaffhouse #ghanatiktok.” Yet, the Gai.news video’s realistic visuals and professional presentation undermined this intent, making it easy for viewers to misinterpret. This loophole allows creators to profit from engagement while minimising responsibility for the consequences.

Regional Amplifiers: Local accounts, particularly from Ghana, South Africa, and Nigeria, play a critical role in amplifying AI-generated content, often unintentionally. Realbloggerroo, for instance, reposted the Gai.news video with a sceptical caption but still drove significant engagement, with 25.5K likes and 3,474 shares.

Potential Coordinated Networks: Although no direct evidence links the Ghana-Israel-Iran video to coordinated efforts, the geopolitical context raises concerns about organized actors. The Israel-Iran conflict has been a target for propaganda, with Iranian-linked accounts using AI to push anti-Israel narratives and vice versa.

Similar tactics could be at play in the Ghana case, where a false claim about a neutral African nation’s involvement could inflame tensions or test disinformation strategies. The lack of transparency in TikTok’s user base makes it difficult to rule out such possibilities, warranting further investigation.

Commercial Entities: Some creators monetise AI-generated content through commercial ventures, such as promoting AI agencies or selling services. For example, @zimba7906, which linked to a fake Burkina Faso aircraft manufacturing advertisement, advertised itself as an “AI Agency and SaaS Builder,” suggesting a business model tied to AI content creation. While not directly implicated in the Ghana case, this trend highlights how financial incentives drive the production and spread of synthetic media.

Narratives Pushed by AI-Generated Content

AI-generated videos on TikTok are crafted to resonate emotionally, leveraging trending topics to maximise engagement. The narratives fall into several categories, each designed to provoke reactions and drive virality.

Sensationalism: Sensationalist videos depict exaggerated or fictional events to shock viewers, such as alien invasions, celebrity meltdowns, or natural disasters. These clips thrive on TikTok’s fast-moving feed, where users are conditioned to react before reflecting. For example, a video claiming a Rolls-Royce climbed an impossible hill went viral due to its absurd visuals, drawing millions of views despite its obvious artificiality.

Political Manipulation: False geopolitical claims, such as Ghana’s alleged support for Israel against Iran, aim to mislead or provoke. These videos exploit global tensions, sowing confusion and mistrust. Similar narratives have targeted other nations, such as an AI video claiming a Nigerian missile parade in Abuja, designed to inflame regional sentiments.

Hope Exploitation: Some AI videos prey on public aspirations, promising unrealistic achievements to garner engagement. In Burkina Faso, a video falsely claiming the country had launched an aircraft manufacturing industry went viral, with 142,500 likes and 9,000 bookmarks, by appealing to national pride. Similar narratives in Ghana, such as fabricated economic booms, exploit optimism to drive shares.

Parody Misuse: Videos made as parody lose their humorous intent when shared out of context, morphing into believable falsehoods. Despite its disclaimer, the Ghana-Israel-Iran video was taken seriously by many, illustrating the limitations of parody labels in a viral ecosystem.

Engagement Analysis

AI-generated videos achieve outsized engagement on TikTok, driven by the platform’s algorithm and the content’s emotional appeal.

Metrics of the Ghana-Israel-Iran Video: Realbloggerroo’s repost provides concrete metrics: 25.5K likes, 1,687 comments, 22,286 bookmarks, and 3,474 shares, indicating high virality. Comments like “Is this real?” and “Ghana, we’re not ready for war” reflect the video’s ability to spark confusion and fear, driving engagement through debate and shares.

Platform Limitations: TikTok’s lack of robust moderation tools, such as X’s community notes, exacerbates the spread of AI-generated fakes. On TikTok, where moderation relies on automated systems and user reports, fakes like the Gai.news video often go unchecked for weeks, allowing them to dominate feeds.

Moderation Shortfalls: The platform’s removal of 14,350 accounts for misinformation in Q2 2024 is a small fraction of the millions of daily uploads.

Verification and Indicators of AI Content

Visual Inconsistencies: The Gai.news video contained several visual clues of its artificial nature:

  • The building depicted as Flagstaff House bore no resemblance to the real structure, a modern, low-profile administrative complex.
  • The OB vans branded “Ghana Pulse” were fictitious; no Ghanaian media outlet, such as GTV, JoyNews, or Citi TV, uses this name or logo.
  • Background elements, such as skyscrapers and vehicles, appeared non-Ghanaian, resembling settings common in Nigeria or India, a frequent artefact in AI-generated videos.

Contextual Discrepancies: The video’s narrative was inherently implausible:

Military Capacity: Ghana has no history of military involvement in Middle Eastern conflicts and lacks the advanced weaponry to engage in a war between Israel and Iran, known for their sophisticated arsenals.

Foreign Policy: Ghana maintains a neutral stance in global conflicts, prioritising diplomacy through organisations like the African Union and the United Nations.

Media Silence: No credible Ghanaian outlet reported the claim, starkly contrasting the coverage major policy shifts would receive.

Absence of Corroboration: A critical red flag was the lack of corroboration from reputable sources. Major Ghanaian media outlets, like JoyNews or Citi FM, would extensively cover any shift in foreign policy.
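Taken together, the indicators above amount to a checklist that could be sketched as a simple rule-based triage script. The flag names and weights below are illustrative assumptions for this article, not part of any actual DUBAWA or TikTok tool:

```python
# Illustrative triage checklist: scores a suspicious video against the
# red flags discussed above. Flag names and weights are hypothetical.

RED_FLAGS = {
    "unknown_outlet_branding": 3,    # e.g. a "Ghana Pulse24" van no broadcaster uses
    "no_corroborating_coverage": 3,  # no credible outlet reports the claim
    "implausible_claim": 2,          # e.g. a neutral state joining a foreign war
    "visual_inconsistencies": 2,     # wrong architecture, non-local vehicles
    "parody_or_ai_hashtags": 1,      # #ai, #fakenews in the caption
}

def triage_score(observed_flags):
    """Sum the weights of the red flags observed for one video."""
    return sum(RED_FLAGS[f] for f in observed_flags if f in RED_FLAGS)

def verdict(observed_flags, threshold=5):
    """Label a video for escalation based on its triage score."""
    score = triage_score(observed_flags)
    return "likely fake - escalate" if score >= threshold else "needs human review"

# The Gai.news video, as described in this report, trips every flag:
gai_news_flags = [
    "unknown_outlet_branding",
    "no_corroborating_coverage",
    "implausible_claim",
    "visual_inconsistencies",
    "parody_or_ai_hashtags",
]
print(verdict(gai_news_flags))  # likely fake - escalate
```

Real fact-checking still needs the human judgment each indicator encodes; a script like this only helps prioritise which videos to examine first.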

AI Detection Tools:

Tools like Hive Moderation and Deepware AI scanner flagged similar videos with 93–99% likelihood of being AI-generated, citing unnatural facial movements, distorted text, or abrupt transitions. 

The Gai.news video likely exhibited such artefacts, including inconsistent lighting and unnatural speech patterns, which advanced algorithms can detect. Google’s Veo 3, a plausible source of the video, embeds watermarks in its output, but creators can remove these, complicating detection.

Linguistic and Geographical Clues: The video’s language and geographical markers further betrayed its falsity:

  • The presenter’s provocative tone—“If dem born Iran well, make dem try Ghana”—was uncharacteristic of official Ghanaian media, which uses formal language for government announcements.
  • The “Ghana Pulse” microphone tag was a fictional creation and was not associated with any known broadcaster.
  • Geographical inconsistencies, like non-Ghanaian vehicles or architecture, were evident to those familiar with Accra’s landscape.

Societal Impact

The spread of AI-generated misinformation on TikTok has profound implications for society, particularly in Ghana, where digital vulnerabilities amplify the harm.

Diplomatic Tensions: False claims of Ghana’s military support for Israel could strain diplomatic relations with Iran or other Middle Eastern nations, undermining Ghana’s neutral foreign policy. Such narratives risk real-world consequences, as misinterpretations by foreign actors could lead to diplomatic protests or economic repercussions.

International Perceptions: Misinformation about Ghana’s involvement could shape international perceptions, portraying it as a belligerent actor. This could deter foreign investment or aid, particularly from Middle Eastern countries, at a time when Ghana seeks to bolster its economy.

Erosion of Public Trust: AI-generated fakes erode trust in media and institutions, as noted by UC Berkeley’s Hany Farid, who warns of a “liar’s dividend” where pervasive scepticism undermines even credible sources. In Ghana, where social media is a primary source of news for many, such content fosters distrust, complicating efforts to disseminate accurate information.

Social Polarisation: The emotional appeal of AI-generated videos can polarise communities, as seen in the realbloggerroo post’s comments, which ranged from fear (“Ghana we’re not ready for war”) to confusion (“Is this real?”). In Ghana, where nationalist sentiments can clash with calls for neutrality, such content risks fueling division and unrest.

Risk of Unrest: In polarised environments, misinformation can escalate tensions, potentially leading to protests or violence. In Ghana, where economic challenges already fuel discontent, a fake war narrative could spark unrest if believed, highlighting the need for rapid debunking.

Economic and Social Risks: Misinformation about conflict or instability can deter economic activity, as seen in Burkina Faso, where AI fakes about leadership changes created uncertainty. In Ghana, false war claims could discourage tourism or foreign investment, exacerbating economic challenges.

Vulnerability in Ghana

Ghana’s lower digital literacy, compared with more digitally advanced nations, heightens its susceptibility to AI-driven misinformation. A Lavnch study notes that 90% of viewers globally are concerned about AI-generated content, a sentiment echoed in Ghana, where many rely on social media for news without critical scrutiny.

Mitigation Strategies

Combatting AI-generated misinformation requires a multi-faceted approach, combining platform reforms, public education, individual responsibility, regulatory standards, community fact-checking, and counter-narratives.

TikTok Community Guidelines

When DUBAWA reached out to TikTok, the platform maintained that its Community Guidelines promote integrity and authenticity by regulating AI-generated content (AIGC) and edited media to prevent deception.

The guidelines mandate clear labelling through an AIGC label, caption, watermark, or sticker for content depicting realistic scenes or people, such as the Gai.news video, which falsely claimed Ghana’s president pledged military support for Israel against Iran.

The guidelines further prohibit misleading AIGC that misrepresents authoritative sources, depicts fake crisis events, or falsely portrays public figures in contexts like political endorsements.

The platform also bans realistic depictions of minors or private adults without consent, to protect privacy.

Despite these rules, the Gai.news video’s viral spread, amplified by accounts like realbloggerroo, reveals enforcement gaps, as its “parody” caption (#ai #fakenews) was often ignored.

TikTok’s guidelines, as published on its website, state:

“We welcome the creativity that new artificial intelligence (AI) and other digital technologies may unlock. However, AI and other digital editing technologies can make it difficult to tell the difference between fact and fiction, which may mislead individuals or harm society. We require you to label AIGC or edited media that shows realistic-appearing scenes or people. This can be done using the AIGC label, or by adding a clear caption, watermark, or sticker of your own.

Even when appropriately labelled, AIGC or edited media may still be harmful. We do not allow content that shares or shows fake authoritative sources or crisis events, or falsely shows public figures in certain contexts. This includes being bullied, making an endorsement, or being endorsed.

We are committed to protecting people’s privacy. We do not allow content that contains the likeness of young people, or the likeness of adult private figures, used without their permission.”

Platform-Level Interventions: TikTok must strengthen its moderation to curb AI-driven disinformation, building on its existing commitments.

  • Enhanced AI Detection: Deploy advanced tools like Hive Moderation or Cantilux to analyse visual and audio cues, such as unnatural eye movements or voice anomalies, flagging synthetic content with high accuracy.
  • Mandatory Labelling: Require prominent, unremovable labels for AI-generated videos, beyond captions, to alert viewers, similar to Meta’s “Altered or Synthetic Media” disclaimers.
  • Emergency Flagging Tools: Develop “rapid response” features that enable users to flag suspected AI-generated content, facilitating swift takedowns of harmful material.
  • Stricter Enforcement: Suspend accounts that repeatedly post misleading AI content without disclaimers, per TikTok’s Community Guidelines.

Public awareness campaigns: Educating users is critical to building resilience against AI fakes, particularly in Ghana.

  • Media Literacy Training: Launch TikTok campaigns that teach users to recognise AI cues, such as distorted faces, unnatural transitions, or fake branding (e.g., Ghana Pulse).
  • Influencer Partnerships: Collaborate with Ghanaian influencers to promote media literacy, reaching young audiences through relatable content.
  • Verification Tools: Encourage use of reverse image searches (e.g., Google Lens, TinEye) and fact-checking services like Dubawa Ghana (ghana.dubawa.org).
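Reverse image search works by comparing compact fingerprints of images. A minimal sketch of the idea, using a stdlib-only “average hash” over stand-in 8×8 grayscale grids (real tools such as TinEye decode actual video frames and use more robust hashes), might look like this:

```python
# Minimal average-hash (aHash) comparison, standard library only.
# In practice a frame would be decoded and downscaled to 8x8 grayscale
# first; here two small synthetic "frames" stand in for real footage.

def average_hash(pixels):
    """Turn an 8x8 grayscale grid into a 64-bit hash: each bit records
    whether that pixel is brighter than the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; small distances suggest the same image."""
    return bin(h1 ^ h2).count("1")

# A "known" frame, a near-duplicate with a slight brightness shift,
# and an unrelated checkerboard frame:
known = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
near_copy = [[v + 2 for v in row] for row in known]
unrelated = [[255 if (r + c) % 2 else 0 for c in range(8)] for r in range(8)]

d_same = hamming_distance(average_hash(known), average_hash(near_copy))
d_diff = hamming_distance(average_hash(known), average_hash(unrelated))
print(d_same, d_diff)  # the near-copy distance is far smaller than the unrelated one
```

Because the hash compares each pixel only to the frame’s own mean, a uniform brightness change leaves the fingerprint unchanged, which is why recycled footage can often be matched to its original even after light re-editing.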

Individual Responsibility: Users must take a proactive role in combating misinformation by verifying content before sharing it.

  • Pause and Check: Question sensational claims, such as Ghana’s war involvement, and verify with fact-checkers or credible media sources.
  • Report Suspicious Content: Utilise TikTok’s reporting tools to flag AI-generated content, supporting moderation efforts.
  • Contact Fact-Checkers: Reach out to DUBAWA Ghana via email or WhatsApp for free verification of doubtful videos.

Regulatory and industry standards: Global and local regulations are needed to curb AI-driven disinformation.

  • Global Standards: Advocate for AI content labelling norms, building on EU and US efforts, such as the Digital Services Act.
  • Local Laws: Ghana could mandate that platforms remove harmful deepfakes within hours, particularly those that incite political tensions.
  • Media Ethics: Urge news outlets to avoid using unlabelled AI visuals, as seen with Iranian media sharing fake Tel Aviv footage.

Crowdsourcing verification: Community reporting, combined with DUBAWA’s expertise, can facilitate crowdsourced verification, thereby reducing reliance on automated systems. A Ghanaian volunteer network could monitor TikTok trends, flagging fakes for rapid debunking.

Official responses: Official statements from Ghana’s government, delivered via TikTok, can help counter false narratives, such as the one presented in the Gai.news video. Short, engaging videos from verified accounts would resonate with youth, reinforcing Ghana’s neutrality and debunking war claims.

Conclusion

The surge of AI-generated content on TikTok, epitomised by the Gai.news video falsely claiming Ghana’s involvement in the Israel-Iran conflict, represents a critical challenge to information integrity.

Driven by opportunistic creators, regional amplifiers, and potentially coordinated networks, these hyperrealistic fakes exploit TikTok’s viral ecosystem, achieving massive engagement through sensationalist, manipulative, or hopeful narratives. Ghana can mitigate this threat by implementing platform reforms, public education, individual vigilance, establishing regulatory standards, fostering community fact-checking, and promoting counter-narratives.

As AI continues to evolve, decisive action is essential to ensure it serves as a force for truth, not deception, in Ghana and beyond.
