While Delhi Police on Friday registered an FIR against unidentified people in connection with the deepfake video of Mandanna, Amitabh Bachchan and Union minister Rajeev Chandrasekhar, among others, have expressed concern.
The government stepped in with an advisory asking major social media companies to identify misinformation, deepfakes and other content that violates the rules, and to remove such content within 36 hours of it being reported to them.
The debate, set against the backdrop of the Israel-Hamas conflict, which has seen a surge in the use of deepfake videos to spread disinformation and manipulate public opinion, has also raised many questions.
Deepfake videos are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. While the technology has been around for several years, it has become increasingly sophisticated and accessible recently, raising concerns about its potential misuse.
WHAT CAN WE DO?
One way to combat the spread of deepfakes is to educate the public about the technology and how to identify fakes.
“Far more than technical expertise or abilities, there is a mindset we need to encourage. People need to be aware that the creation of fakes is rampant and becoming easier all the time,” Eoghan Sweeney, an open-source investigation (OSINT) specialist and trainer, told PTI.
“That is why, in a fraught atmosphere such as exists around a scenario like the current one, it’s crucial to be aware that a huge amount of the information and content that finds its way to your attention is inauthentic,” he added.
SOME TIPS: Several tools and techniques can be used to detect deepfakes, such as looking for inconsistencies in facial expressions, skin texture, and lighting. However, deepfakes are becoming increasingly sophisticated, making it more difficult to spot them.
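For readers who want to go beyond eyeballing a clip, some of these checks can be roughly automated. The snippet below is a minimal sketch in Python, assuming the OpenCV library (cv2) is installed; the function name flag_suspect_frames and the thresholds are illustrative assumptions, not a validated deepfake detector. It flags frames with abrupt lighting jumps or heavy blur, two of the signs listed below.

```python
# A minimal sketch, not a production detector: two crude heuristics that
# surface some of the artifacts described in the tips below -- abrupt
# lighting jumps between consecutive frames and heavy blur/pixelation.
# The thresholds are illustrative assumptions, not calibrated values.
import cv2

def flag_suspect_frames(video_path, brightness_jump=25.0, blur_threshold=60.0):
    """Return (frame index, reason) pairs for frames that look suspicious."""
    cap = cv2.VideoCapture(video_path)
    flagged, prev_brightness, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness = gray.mean()                            # average lighting level
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance suggests blur
        if prev_brightness is not None and abs(brightness - prev_brightness) > brightness_jump:
            flagged.append((idx, "abrupt lighting change"))
        if sharpness < blur_threshold:
            flagged.append((idx, "blurry / low detail"))
        prev_brightness = brightness
        idx += 1
    cap.release()
    return flagged

# Example usage: print the first few flagged frames of a local clip.
# for frame_idx, reason in flag_suspect_frames("clip.mp4")[:10]:
#     print(frame_idx, reason)
```

Heuristics like these only raise flags for a closer human look; they cannot confirm that a video is a deepfake.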
Look out for signs that photos or videos being shared on social media could be AI-generated fakes:
- AI-generated text can sometimes be grammatically incorrect or have odd phrasing. This is because AI systems are trained on large datasets of text, which do not always contain perfect grammar or natural language usage.
- AI-generated text can sometimes go off on tangents or introduce new information irrelevant to the main topic. This is because AI systems may not always understand the context of the text they generate.
- AI-generated photos and videos can sometimes have peculiar lighting, facial gestures, or backgrounds. This is because AI systems may not always be able to generate realistic images and videos accurately.
- AI-generated videos are often created by stitching together different clips, so there may be inconsistencies in the lighting, shadows, or background. The subject’s skin tone may change from one shot to the next, or the shadows may be in different directions.
- AI-generated videos can have difficulty accurately rendering human movements, so there may be weird or unnatural movements in the video. The face of the person or people in the video may contort, or their limbs may move strangely.
- AI-generated videos are often low quality, especially if created using a free or low-cost AI video generator. Look for pixelation, blurring, or other video artifacts.
- AI video generation technology is constantly improving, so it’s important to know the latest techniques. You can do this by reading articles and blogs about AI video generation or by following experts on social media.
Finally, before accepting or sharing such content, ask yourself:
- Why might it be that this is being shared right now?
- What is the response it is trying to provoke in me and others?
- How susceptible am I to taking it on board, given my existing sympathies, and how does it play on those?