
    Novelty To Necessity: Unpacking growing privacy risks in India

    In an AI world, protecting fundamental rights will need collaboration between government, industry, and civil society



Artificial intelligence (AI) has rapidly moved from novelty to necessity in India. It now underpins everyday digital experiences—from personalised suggestions on Flipkart and Hotstar, to real-time traffic updates from Google Maps and fraud checks on UPI transactions. Behind this convenience, however, lies a deeper concern: the mounting threat to personal privacy. As AI-driven tools become integral to daily life, they depend heavily on users’ personal data. Understanding how this data might be misused is crucial.

Understanding AI-Driven Privacy Threats

AI systems thrive on large-scale data, much of it personal. This reliance brings with it a range of privacy challenges. Deepfakes have emerged as a potent threat: AI-generated fake videos or voice clips now mimic public figures or private individuals with alarming realism. These manipulations have been used for scams, reputation attacks, and non-consensual content, blurring the line between real and fake with dangerous ease. Bias in AI algorithms is another concern. These systems learn from past data, which often reflects real-world discrimination. As a result, AI may reproduce bias in critical areas like job recruitment or loan approvals under the guise of neutral, data-driven decisions, making it difficult to detect or dispute discriminatory outcomes. AI-powered surveillance tools, such as facial recognition, have transformed how individuals are tracked in public spaces.

Though useful for security, widespread deployment of such systems raises ethical issues. The ability to identify and monitor individuals in real time can erode civil liberties and create a culture of constant observation. Cybercrime enhanced by AI is on the rise: criminals use AI to create realistic phishing schemes, voice clones, and dynamic chatbots that mimic human interaction. These techniques make online fraud harder to detect and more convincing than ever before. Opaque AI decision-making compounds these problems. Many AI tools operate as ‘black boxes,’ making it difficult to understand how decisions are made, why certain outcomes occur, or how personal data is being used, leaving users with little recourse for errors or injustices.

Recent incidents in India show the real-world consequences of these technologies. In Bengaluru, scammers used deepfake videos of NR Narayana Murthy and Mukesh Ambani to promote fraudulent trading platforms, tricking victims into losing Rs 95 lakh. The realism of the fake endorsements made the deception highly effective. At Teleperformance India, call center employees had their voices altered in real time using AI from Sanas to sound more Western. While aimed at customer satisfaction, this raises concerns about biometric data privacy and employee consent, especially when sensitive voice data is stored or altered without clear safeguards. LinkedIn faced legal challenges after a 2025 lawsuit accused it of using premium users’ private messages to train its AI without proper consent. A silent privacy-setting change allegedly enrolled users automatically, raising questions about informed consent and ethical data use.

How India is Responding

India has begun responding through a mix of legal reforms and institutional development. The Digital Personal Data Protection Act, 2023 (DPDP Act) outlines principles for data processing, emphasising consent and user rights. However, it lacks provisions tailored to AI-specific risks like algorithmic discrimination or opaque decision-making. The Information Technology Act, 2000 governs cybersecurity and electronic data, and while relevant, it is not designed for the complexities of modern AI technologies. The Ministry of Electronics and Information Technology (MeitY) has issued advisories urging platforms to combat harmful AI content, reduce bias, and clearly label AI-generated material. These guidelines, however, are non-binding. The government also announced the IndiaAI Safety Institute in January 2025. This body will advise on AI safety, promote research, foster collaboration, and create risk-assessment tools to improve oversight and accountability.

    How Individuals Can Protect Their Privacy

    Until regulations become more robust, individuals must take initiative to protect their own data in the AI-driven world:

Limit Sharing: Avoid uploading sensitive documents like health or financial records to public AI tools.

Use Strong Security Practices: Enable two-factor authentication, update software regularly, and use password managers.

Control App Permissions: Periodically check what data apps can access and revoke anything unnecessary.

Verify Content: Treat sensational audio or video with skepticism—deepfakes are increasingly common.

Employ Privacy Tools: Use VPNs, privacy-focused browsers like Brave, and end-to-end encrypted apps like WhatsApp or Signal.

Review Privacy Policies: Skim terms for clues about how data is collected or used, and opt out when possible. Tools like privacyhq.xyz can help decode dense policies.

Stay Informed: Follow trusted sources for updates on tech laws, data rights, and AI ethics in India.

A Balancing Act: Innovation vs Privacy

AI offers enormous potential for innovation and efficiency.

But without adequate safeguards, it risks compromising privacy at a societal level. Cases like deepfake scams and AI surveillance in India expose real vulnerabilities that demand attention—not just from lawmakers but from tech companies and everyday users alike. While the DPDP Act is a promising start, it cannot fully address the distinctive challenges posed by AI. India’s path forward must include frameworks that ensure algorithmic transparency, reduce bias, and create clear accountability for AI-driven decisions. It should also invest in public awareness and privacy-first technologies. Striking a balance between fostering AI innovation and protecting fundamental rights will require collaboration between government, industry, and civil society. Only then can India build a digital future where technological progress doesn’t come at the cost of privacy.

(The author is a Product Manager on Salesforce’s Data Security Service team, building infrastructure products that protect customer data in the cloud. She earned a Master’s from Carnegie Mellon University, where her CyLab research centered on usable-privacy solutions for online services)

    SANJNAH ANANDA KUMAR