Author
Ghazala Yousafzai
On March 16, 2025, a video surfaced on social media in which a self-proclaimed soldier claimed to have resigned from the military after witnessing alleged atrocities against civilians. In his video statement, he claimed to have killed 12 civilians on the orders of his superiors. The video was widely circulated by pro-Pashtun Tahafuz Movement (PTM) and Baloch separatist social media activists (SMAs), who later alleged that the soldier had been assassinated by intelligence agencies for exposing military actions.
On March 17, 2025, other SMAs began countering this narrative, revealing that the original video had first been uploaded to TikTok by the handle ‘Parachinar News’. Forensic analysis and further investigation indicated that the video was AI-generated and that the individual depicted was not a real person. This fact-check report provides a detailed forensic analysis of the video, exposing the fabricated nature of the content.


After the video went viral, multiple social media accounts falsely claimed that the individual in the footage had been killed, allegedly as retribution for his confession. This narrative was aggressively pushed by accounts linked to anti-state propaganda networks, aiming to amplify distrust and resentment toward state institutions. The fabricated claim, lacking any credible evidence, further fueled tensions and intensified anti-state sentiments, particularly among targeted linguistic and regional communities. This deliberate misinformation campaign highlights the strategic use of AI-generated content to manipulate public perception and incite unrest.


Analysis of the Video
Misidentification of the ‘Martyred Soldier’
Falsely Linked Image of Police Constable
- Following the viral spread of the video, a picture of an injured police constable was widely circulated with claims that he was the soldier from the video and had been ‘eliminated’ by intelligence agencies.
- The injured individual was later identified as a police constable who was wounded in an unrelated incident and was under medical treatment at the time his picture was taken.
- The constable later released a video confirming that he was alive and had no connection to the viral video.
- This deliberate misidentification was an attempt to add credibility to the AI-generated video and push an anti-state narrative.

Another clear proof of manipulation is the misused image of the injured police constable, whose picture was taken when he was brought to the hospital. For propaganda purposes, however, the image was cropped to remove key identifying details. Notably, in the original picture a police cap is visible at the top right of his head, clearly indicating that he is a policeman, not a soldier. To fabricate his death and falsely link him to the army, the cap was deliberately cropped out, stripping the image of its original context.


A deep examination of the viral video exposed multiple technical and logical inconsistencies, proving that it was generated using AI-based morphing and deepfake technology. The following observations substantiate this claim:
Facial Morphing and AI-Based Image Manipulation
- The beard structure of the individual changes inconsistently throughout the video. In some frames, as the subject moves, the chin appears lightly shaven, while in others the beard appears to grow back, an anomaly that is not naturally possible in a continuous recording.
- This fluctuation in facial features suggests the use of AI-based morphing tools, in which a pre-existing image is manipulated to create an artificial facial representation.


Inconsistencies in Facial Expressions and Eye Movements
- The video features a crying filter, enhancing the emotional appeal of the fabricated message. However, no tears actually flow, and the facial expressions do not match the natural body language of distress.
- The lack of muscle movement in static facial areas (such as the cheeks and forehead) further supports the hypothesis that this video was AI-generated.

The sharp edges of his nose and the unnaturally shiny appearance of his lips are major inconsistencies commonly found in AI-generated content, highlighting the artificial nature of the video.

In the initial frames of the video, the subject’s eyebrows appear thick and well-filled, but as the video progresses and the subject moves, they become noticeably thinner and spaced farther apart. This inconsistency is a common artifact in AI-generated content, where facial features fail to maintain structural coherence due to imperfect rendering and morphing techniques.

[Figures: eyebrows in the 1st frame compared with later frames as the subject moves]
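Feature drift of this kind can be checked programmatically. The sketch below is a minimal, hypothetical illustration, not the analysis actually performed for this report: it assumes eyebrow landmark coordinates have already been extracted per frame (in practice with a face-landmark detector such as MediaPipe or dlib) and flags frames where the inter-brow distance jumps by more than 10%, a change real faces do not exhibit between adjacent frames.

```python
# Hypothetical per-frame eyebrow landmarks (x, y); in a real pipeline these
# would come from a face-landmark detector such as MediaPipe or dlib.
frames = [
    {"brow_left": (100, 80), "brow_right": (160, 80)},  # frame 1
    {"brow_left": (101, 81), "brow_right": (159, 80)},  # frame 2: stable
    {"brow_left": (95, 78),  "brow_right": (172, 79)},  # frame 3: brows drift apart
]

def brow_gap(frame):
    """Euclidean distance between the two brow landmarks."""
    (x1, y1), (x2, y2) = frame["brow_left"], frame["brow_right"]
    return ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5

gaps = [brow_gap(f) for f in frames]
# Real faces keep inter-feature distances nearly rigid between adjacent
# frames; flag any jump of more than 10% as a likely morphing artefact.
flags = [abs(b - a) / a > 0.10 for a, b in zip(gaps, gaps[1:])]
print(flags)  # [False, True]
```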
The subject’s face appears unnaturally smooth and flawless throughout the video, lacking the natural texture, pores, and blemishes that are typically present on real human skin. This overly airbrushed appearance is another hallmark of AI-generated content, where skin rendering often fails to replicate the subtle imperfections of a real face.
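The “airbrushed skin” cue can also be quantified. One common heuristic (shown here only as a sketch, not the method used in this report) is the variance of a Laplacian filter response: real skin texture produces high-frequency detail, while over-smoothed AI renders do not. The image patches below are synthetic stand-ins for face crops.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian response: a crude texture score.
    Real skin (pores, blemishes) yields comparatively high values;
    over-smoothed, 'airbrushed' AI renders yield values near zero."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# Synthetic stand-ins for face crops: a perfectly smooth gradient patch
# versus the same patch with fine, skin-like texture added.
rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
textured = smooth + 0.05 * rng.standard_normal((64, 64))

print(laplacian_variance(smooth) < laplacian_variance(textured))  # True
```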
Discrepancies in Lip Movement
- A close review of the lip movements indicates a mismatch with the spoken words, a common flaw in AI-generated videos.
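Lip-sync mismatch is typically measured by correlating a per-frame mouth-opening signal with the audio’s loudness envelope; low correlation indicates dubbed or generated lips (this is the core idea behind detectors such as SyncNet). The sketch below uses toy signals in place of real landmark and audio features, purely to illustrate the principle.

```python
import numpy as np

# Toy per-frame signals; a real pipeline would extract mouth aperture from
# lip landmarks and a loudness (RMS) envelope from the soundtrack.
t = np.arange(50)
audio_energy = np.sin(t / 3.0) ** 2              # speech loudness envelope
synced_mouth = audio_energy + 0.05 * np.cos(t)   # mouth tracks the audio
desynced_mouth = np.sin(t / 5.0 + 2.0) ** 2      # mouth moves out of step

def sync_score(mouth, audio):
    """Pearson correlation between mouth motion and audio energy."""
    return float(np.corrcoef(mouth, audio)[0, 1])

print(sync_score(synced_mouth, audio_energy) >
      sync_score(desynced_mouth, audio_energy))  # True
```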
AI Detection Tool
AI detection tools suggest that the video was manipulated using filters and an overlay of someone else’s image to conceal the actual identity of the person and mislead viewers. This technique, commonly used in deepfake propaganda, helps fabricate a false narrative while evading immediate detection.

Absence of Authentic Proof of Military Identity
- The individual in the video provides no evidence of being an actual soldier – no uniform, ID card, or any credible reference.
- Military resignations require official documentation and verification, which was entirely absent in this case.
- The video’s anonymous nature and lack of verifiable credentials raise serious doubts about its authenticity.
Coordinated Disinformation Campaign and Strategic Targeting
- The video was initially uploaded via a TikTok account named ‘Parachinar News’.

- The dual-language dissemination strategy (Urdu and Pashto) indicates a deliberate attempt to amplify anti-state sentiments among different linguistic groups.
- The promotion of this video by separatist social media activists suggests a coordinated effort to mislead the public and fuel mistrust against the military.
- Several accounts that shared the video have been previously involved in spreading anti-state narratives.
Conclusion
Based on the forensic evidence and digital footprint analysis, the viral video is not authentic. It is a product of AI-generated disinformation. This incident serves as a textbook example of next-generation propaganda tactics, where deepfake technology is used to manipulate narratives and incite tensions.
Key Takeaways
- The individual in the video is not a real person but a digitally fabricated, AI-generated persona.
- The mismatched lip movements, facial inconsistencies, and absence of tears indicate the use of deepfake tools such as Wav2Lip and DeepFaceLab.
- The lack of official identity proof and the implausibility of the soldier’s claims further discredit the video.
- The coordinated dissemination via specific accounts and the regional targeting strategies suggest a pre-planned disinformation campaign.