Any of your social media pictures, easily available on platforms such as Instagram or Facebook, could be used by another person to create a morphed picture or video that tarnishes your image. Social media stars, politicians, and celebrities have been targeted on various platforms, which has damaged their social standing and caused them mental stress.
The use of Deep Fakes will impact individuals from marginalised groups more severely. Much of this vulnerability lies beyond the existing provisions in criminal and civil law that prohibit impersonation, hate speech, or obscenity; the inadequacy is also a matter of weak enforcement capacity. Doubt can easily be sown against acts of investigative journalism that reveal abuse of power and corruption, because the journalist's audio or video evidence can be falsely labelled a Deep Fake. Taken together, this will lead to harm through the manipulation of elections, widening social divisions, and declining trust in institutions.
There is also a foreseeable risk that, with sufficient funding and mobilisation, multiple adversarial actors could create an entire universe of synthetic content, constructing a series of Deep Fakes to conjure an altogether alternative reality.
In response to a parliamentary question dated July 21, 2023, the Minister of State for Electronics and Information Technology pointed to the Information Technology Rules, 2021, which provide for a take-down mechanism based on individual complaints or directions by the government. This primary response is limited and censorial. There is also the fact that the recently passed data protection law provides an exemption for publicly available data. Hence, an important protection, one that would prevent pictures on social media from being taken without a person's consent and used as training data for Deep Fakes, is not provided under law. Further, the Digital India Act, which is intended to regulate the digital sphere, provides little detail on how it will regulate AI technologies.
Regulating such technology is difficult. It calls for an exploration of methods that boost transparency, labelling, and provenance in synthetic media. There may also be a need for specific legislation that provides a private right of action, beyond defamation, for non-consensual sexual imagery.
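To make the idea of labelling and provenance a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not the C2PA standard or any deployed system; the file name, signing key, and manifest fields are assumptions chosen for the example. The point is simply that a creator of synthetic media can attach a signed manifest declaring the content as synthetic, and a platform can later verify both the declaration and that the file has not been altered since.

```python
# Illustrative sketch only: a toy provenance label for a media file,
# built with Python's standard library. Real provenance schemes embed
# cryptographically signed manifests in the file itself; the path,
# key, and field names below are hypothetical.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key-not-for-production"  # assumption: a shared secret for this demo


def content_hash(path: str) -> str:
    """SHA-256 digest of the media file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def make_label(path: str, generator: str) -> dict:
    """Build and sign a manifest declaring the content as synthetic."""
    manifest = {
        "content_sha256": content_hash(path),
        "generator": generator,          # the model or tool that produced the content
        "synthetic": True,               # explicit disclosure label
        "created_at": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_label(path: str, manifest: dict) -> bool:
    """Check that the label was signed with our key and the file is unmodified."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == content_hash(path))


if __name__ == "__main__":
    # Demo on a placeholder file standing in for a generated clip.
    with open("generated_clip.mp4", "wb") as f:
        f.write(b"placeholder bytes for a synthetic clip")
    label = make_label("generated_clip.mp4", generator="example-video-model")
    print(json.dumps(label, indent=2))
    print("label verifies:", verify_label("generated_clip.mp4", label))
```

Production systems would use asymmetric signatures tied to a verifiable identity rather than a shared secret, but even this toy version shows why provenance and labelling are regulatory levers: they let platforms and courts distinguish disclosed synthetic media from undisclosed manipulation.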
Finally, Deep Fakes and other similar uses of artificial intelligence and machine learning technologies also bring to the fore foundational challenges to a democratic society, such as surveillance, exclusion, and disinformation.