AI is a double-edged sword in today’s world. While it boasts vast positive applications, concerns about its negative impacts are equally common. Alongside worries about job displacement and skill loss, there are more overt dangers, particularly its use in criminal activities such as the creation and misuse of deepfakes.
Deepfakes, a form of synthetic media produced using machine learning and AI algorithms, involve manipulating real content or fabricating entirely false material in videos, photos, or audio recordings.
The concern extends beyond their ease of creation to their potential for significant harm, ranging from spreading misinformation to damaging reputations and causing chaos. This includes generating inappropriate content involving minors, creating pornographic videos featuring celebrities, maliciously sharing private content, spreading fake news, perpetrating hoaxes, enabling bullying, and committing financial fraud. This underscores the need to prioritise the ethical and legal dimensions of deepfake use, which demand attention and regulation.
The recent surge in AI development has streamlined the creation of deepfakes. Dedicated AIs now possess the capability to craft convincing synthetic media. While the potential for harm outweighs the positives, some benefits exist. These include enhancing accessibility, promoting inclusivity, enabling artistic expression, aiding in training, preserving history, and advancing scientific research. Even so, these limited beneficial uses are overshadowed by the many threats the technology poses.
How are deepfakes made using AI?
Deepfakes are created using sophisticated AI techniques, primarily Generative Adversarial Networks (GANs). Initially, a large dataset comprising videos of a specific person is collected for training. The AI learns intricate details such as facial features, expressions, and mannerisms from this dataset. Through repetitive learning processes, the AI refines its comprehension, understanding how the person’s face moves and speaks.
Using GANs, it then generates content that closely resembles the target individual. This continuous learning and refinement enable the AI to craft increasingly realistic deepfake material, blurring the line between genuine and manipulated content. The technology’s capabilities surpass traditional editing software, facilitating the creation of compelling yet potentially deceptive content and raising concerns about its misuse for nefarious purposes.
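To make the adversarial idea concrete, below is a minimal sketch of a GAN training loop in PyTorch. It works on toy one-dimensional data rather than real face images, and every network size, hyperparameter, and variable name is an illustrative assumption, not the architecture of any actual deepfake tool.

```python
# A minimal sketch of the adversarial training loop behind GANs,
# written in PyTorch on toy 1-D data rather than real face images.
# All sizes, names, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim = 16  # size of the random noise the generator starts from
data_dim = 64    # stand-in for a flattened image patch

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    # Toy "real" data; in a deepfake pipeline this would be frames
    # of the target person's face.
    real = torch.rand(32, data_dim) * 2 - 1
    fake = G(torch.randn(32, latent_dim))

    # 1) Train the discriminator to tell real samples from fakes.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_D.step()

    # 2) Train the generator to fool the updated discriminator.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_G.step()
```

The key design point is the tug-of-war: the discriminator is rewarded for catching fakes while the generator is rewarded for fooling it, and each round of this contest nudges the generated samples closer to the real data.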
Previously, photo editing software was utilised for simple face swaps and related deepfakes. However, the use of generative AI has made these manipulations much easier, potentially leading to criminal activities.
Audio manipulation has also become effortless with generative AI. A model can be trained to mimic someone’s voice by learning from recordings, generating new speech that replicates the person’s voice almost to perfection.
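As an illustration of how little is needed, the sketch below uses the open-source Coqui TTS library and its XTTS v2 voice-cloning model. The file names are hypothetical, and the exact model identifier and arguments may differ between library versions.

```python
# A minimal sketch of voice cloning with an off-the-shelf model,
# assuming the open-source Coqui TTS library and its XTTS v2 model.
from TTS.api import TTS

# Load a multilingual text-to-speech model that supports voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short reference recording of the target voice is enough for the
# model to imitate its timbre when speaking entirely new text.
tts.tts_to_file(
    text="This sentence was never actually spoken by the target.",
    speaker_wav="reference_voice.wav",  # hypothetical sample recording
    language="en",
    file_path="cloned_output.wav",
)
```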
Can AIs identify deepfake videos?
In a conversation about this, Robert Vanwey, senior lecturer of Ethical Hacking and Cybersecurity at Softwarica College and a former senior technical analyst and investigator with reputed American law enforcement institutions, replied, “I wish I had a good answer for that, but I don’t.”
According to Vanwey, identifying deepfake videos remains challenging for individuals due to the complexity and context-dependent nature of detection. Verifying the authenticity of a video often requires cross-referencing with other reliable sources, a cumbersome task for individuals amidst a flood of online content.
“Even dedicated computer programmes struggle, achieving about 80 per cent accuracy in controlled settings but significantly less in the unpredictable online space. And so if AI-driven programmes can’t figure it out, what chance do you or I have?”
He adds that the struggle would be the same for Nepal [and Nepali technical experts and law enforcement officers too].
Speaking more on this, he adds, “When it comes to individuals, the difficulty is compounded by underreporting of cases, especially if the content is sensitive or embarrassing, hindering any assessment of the scale of misuse.”
However, he says, ordinary individuals are relatively safe for now. “If we look at the trend, celebrities are targeted far more than private individuals. And in the case of Nepal, deepfakes have not created a problem just yet. Having said that, awareness and consideration of what the repercussions might be is the need of the hour.”
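For a sense of what the “dedicated computer programmes” Vanwey mentions look like under the hood, here is a minimal sketch of a frame-level deepfake classifier in PyTorch. The architecture, the decision threshold, and the input sizes are illustrative assumptions, not any specific production detector, and a model like this would need to be trained on labelled real and fake frames before its scores meant anything.

```python
# A minimal sketch of the kind of frame-level classifier that
# automated deepfake detectors are built around, in PyTorch.
# Architecture and threshold are illustrative assumptions only.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Scores a single video frame as real (0) or fake (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool each feature map to one value
        )
        self.head = nn.Linear(32, 1)

    def forward(self, frames):               # frames: (batch, 3, H, W)
        x = self.features(frames).flatten(1)
        return torch.sigmoid(self.head(x))   # probability of "fake"

model = FrameClassifier()
frames = torch.rand(8, 3, 224, 224)          # stand-in for video frames
scores = model(frames)
# A video is flagged if enough of its frames look manipulated.
is_fake = (scores > 0.5).float().mean() > 0.5
```

The catch, as Vanwey notes, is that such classifiers are only as good as their training data: a detector that scores 80 per cent in controlled settings can fall apart on compressed, re-uploaded clips in the wild.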
Can AGIs be used to identify deepfake videos?
The outcome of the struggle between those creating deepfakes and those trying to stop them remains uncertain, amplifying the challenge of detecting or preventing their proliferation. As technology evolves, the balance between the ease of creating deepfake content and the effectiveness of detection will determine the severity of this issue, emphasising the need for advances in detection mechanisms and heightened awareness among individuals.
But for now, even cutting-edge AI technology, language models, AGI (Artificial General Intelligence) and advanced computing, including quantum computers, remain far from practical implementation for deepfake detection. “AGI, unlike language models like [Chat]GPT, aims to replicate human-like thinking and decision-making processes, requiring a complex neural network similar to the human brain.”
However, achieving AGI involves numerous hurdles and remains a distant prospect. Even if achieved, transforming AGI into conscious decision-making systems or physical embodiments akin to sci-fi robots like in the Terminator movies requires substantial leaps in technology.
Quantum computing, while promising for AI advancement, also faces significant obstacles due to immense energy demands and cooling requirements. “Present quantum computers operate in controlled environments consuming substantial energy, hindering the feasibility of integrating such systems into mobile or embodied platforms.”
While advances in technology have historically been rapid, the current trajectory suggests that practical implementations remain distant, and their ethical implications will remain a key consideration in any actual usage.
But there are a few tips that, upon careful consideration, might help an individual.
While reliable automated detection is not yet available, humans can still discern whether a piece of media is real or AI-generated through their own judgement and some careful observation:
For Photos:
- Facial details: Facial features can show irregularities like mismatched skin tones or unusual angles.
- Image quality: Pixelation or inconsistencies in image quality can be observed.
- Hair and teeth: Hair or teeth might look unnaturally perfect or lacking in detail.
- Body positioning: Awkward body postures or misalignments can be visible.
For Videos:
- Playback examination: Details become clearer when playback is slowed down, so it is recommended to watch at a slower speed (a minimal playback sketch follows this list).
- Eye movements: Eye movements might appear unnatural, with weird alignment or a lack of blinking.
- Facial expressions: Facial reactions might seem odd or out of place, not syncing well with emotions.
- Body gestures: Body movements can appear awkward or out of sync with speech.
- Colour and lighting: Inconsistent shadows or light changes on the face can be noticeable. Also, where green screens are used, chroma-key edges can be visible around the borders of the face or body.
- Audio and lip sync errors: Voice tones or lip movements might seem unnatural or out of sync.
- Image quality: Blurriness or inconsistencies might be present in the video details; one has to look for them carefully.
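For the playback examination mentioned above, the sketch below shows one way to step through a clip at reduced speed using the OpenCV library. The file name and the slowdown factor are illustrative assumptions.

```python
# A minimal sketch of slowed-down playback for frame-by-frame
# inspection, using the OpenCV library. File name and playback
# speed are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("suspect_clip.mp4")  # hypothetical video file
fps = cap.get(cv2.CAP_PROP_FPS) or 30       # fall back if FPS is unknown
slowdown = 4                                # play at quarter speed

while True:
    ok, frame = cap.read()
    if not ok:
        break                               # end of video
    cv2.imshow("inspection", frame)
    # Waiting longer between frames slows playback, making artefacts
    # around the eyes, mouth, and hairline easier to spot.
    if cv2.waitKey(int(1000 / fps * slowdown)) & 0xFF == ord("q"):
        break                               # press 'q' to quit

cap.release()
cv2.destroyAllWindows()
```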
For Audio:
- Speech quality: Unnatural pauses or robotic-sounding voices can be detected.
- Background noise: Significant differences in background noise levels can be noticeable.
- Indicators of manipulation: Signs such as odd word pronunciations or tonal inconsistencies can be present.
Above all, one can always do lateral research to verify whether a video is real or fake. This includes checking news coverage of the issue or event from media organisations and IT firms across the world. Apart from that, one can visit the websites of organisations that have pooled their efforts and resources to combat the dissemination of misinformation, disinformation and fake news across social platforms.