Bridging the AI Governance Gap

Artificial Intelligence is no longer an imagined future; it is actively shaping decisions in healthcare, finance, social welfare, and public services. In India, national missions such as IndiaAI and the passage of the Digital Personal Data Protection (DPDP) Act, 2023, are laying the foundations for responsible use of data and technology. However, the speed of AI innovation is outpacing the operational clarity of the protections meant to govern it. For instance, certain Indian cities have deployed AI-driven facial recognition systems that have delivered benefits for policing while raising concerns about mass surveillance, illustrating the need for transparent accountability mechanisms. This places great pressure on Data Protection Officers (DPOs), privacy professionals, and security practitioners, who must determine how AI systems can deliver transformative value while still respecting individuals' rights.

India's policy context shows promising intent. The DPDP Act writes principles such as consent, purpose limitation, and data minimization into law. IndiaAI aims to foster a safe and inclusive AI ecosystem. In January 2025, the Ministry of Electronics and Information Technology (MeitY) published a 'Report on AI Governance Guidelines Development' for public consultation, taking the first steps toward operational governance of AI. Under the 'Safe & Trusted AI' pillar of IndiaAI, MeitY has also established the AI Safety Institute, whose objectives are to set technical standards, develop systems for threat detection and monitoring, research potential harms, and create processes for oversight. Sectoral advisories are also in place to guide those deploying AI for vulnerable populations in government and technology.

A potential pathway could start with making 'privacy by design and default' a requirement for all AI systems. Developers ought to embed privacy risk assessments within each phase, from data collection and model training through inference and deployment. For high-risk use cases such as medical diagnostics, credit scoring, or public service delivery, India could mandate security and algorithmic audits, adversarial testing, and impact assessments. Under the DPDP Act's framework, entities that process personal data using AI could even be categorized as Significant Data Fiduciaries, which would entail more stringent compliance requirements.
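
To make the lifecycle requirement concrete, here is a minimal Python sketch of an internal gate that refuses to let a system advance to the next phase until every earlier phase has an approved privacy risk assessment on file. The phase names, record fields, and the `may_enter` check are illustrative assumptions, not anything prescribed by the DPDP Act.

```python
# Illustrative sketch: a privacy-by-design gate that blocks an AI system
# from advancing to the next lifecycle phase until the earlier phases have
# approved privacy risk assessments on record. All names are hypothetical.
from dataclasses import dataclass, field
from enum import Enum


class Phase(Enum):
    DATA_COLLECTION = "data_collection"
    MODEL_TRAINING = "model_training"
    INFERENCE = "inference"
    DEPLOYMENT = "deployment"


@dataclass
class RiskAssessment:
    phase: Phase
    assessor: str                 # e.g. the DPO or a delegated reviewer
    risks_identified: list[str]
    mitigations: list[str]
    approved: bool


@dataclass
class AISystemRecord:
    name: str
    high_risk: bool               # e.g. medical diagnostics, credit scoring
    assessments: dict[Phase, RiskAssessment] = field(default_factory=dict)

    def may_enter(self, phase: Phase) -> bool:
        """A phase is open only when every earlier phase has an
        approved privacy risk assessment on file."""
        order = list(Phase)
        for earlier in order[: order.index(phase)]:
            done = self.assessments.get(earlier)
            if done is None or not done.approved:
                return False
        return True


system = AISystemRecord(name="credit-scoring-v2", high_risk=True)
system.assessments[Phase.DATA_COLLECTION] = RiskAssessment(
    phase=Phase.DATA_COLLECTION,
    assessor="dpo@example.org",
    risks_identified=["excess attributes collected"],
    mitigations=["drop attributes not needed for scoring"],
    approved=True,
)
print(system.may_enter(Phase.MODEL_TRAINING))  # True: collection is assessed
print(system.may_enter(Phase.DEPLOYMENT))      # False: training and inference are not
```

In practice such a record would live in an organization's governance tooling and serve as one input to the audits described below.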

An AI Governance Coordination Cell could provide additional assurance of interoperability between data protection and sectoral oversight frameworks. It could handle registration of high-risk AI systems, maintain a registry of incidents, and work with the AI Safety Institute to establish technical standards. The Institute could, in turn, publish model security benchmarks, adversarial robustness testing protocols, and evaluation libraries to promote standardization and transparency. In addition, regulatory mechanisms must allow individuals to contest algorithmic decisions, rectify inferred data, and seek remediation, sustaining the DPDP Act's rights-based approach while addressing harms specific to the use of AI.
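
As one example of what an evaluation library published by the Institute might standardize, the following Python sketch measures how often a classifier's prediction survives small random input perturbations. Random noise is only a weak proxy for gradient-based adversarial testing, and the stand-in model, epsilon bound, and trial count are all assumptions for illustration.

```python
# Illustrative sketch of one item an evaluation library might standardize:
# a perturbation-stability check that flags inputs whose prediction flips
# under small random noise. The model and thresholds are stand-ins.
import numpy as np


def dummy_model(x: np.ndarray) -> int:
    """Stand-in binary classifier; replace with a real model's predict()."""
    return int(x.sum() > 0)


def stability_rate(model, inputs: np.ndarray, epsilon: float = 0.05,
                   trials: int = 20, seed: int = 0) -> float:
    """Fraction of inputs whose label is unchanged by every one of
    `trials` random perturbations bounded by `epsilon` (L-infinity)."""
    rng = np.random.default_rng(seed)
    stable = 0
    for x in inputs:
        base = model(x)
        flipped = any(
            model(x + rng.uniform(-epsilon, epsilon, size=x.shape)) != base
            for _ in range(trials)
        )
        stable += not flipped
    return stable / len(inputs)


inputs = np.random.default_rng(1).normal(size=(100, 8))
print(f"stability: {stability_rate(dummy_model, inputs):.2%}")
```

A standardized protocol would fix the perturbation model, sample sizes, and pass thresholds so that results are comparable across vendors and audits.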

For DPOs and privacy professionals, this shifting landscape represents both a challenge and an opportunity. Organizations will need to start assessing their AI systems for compliance readiness under the DPDP Act and any forthcoming AI regulations. DPOs should take the lead in assessing operational readiness through internal audits, train data scientists on privacy-by-design principles, and drive audits of third-party AI vendors to ensure any AI platform used meets security and audit standards. They will also be expected to contribute to public consultations on AI policy, help develop internal accountability frameworks for the use of AI, and fold AI-related risks into their incident response plans. Internationally, the European Union AI Act has established clear roles across the AI value chain, assigning function-specific responsibilities to each stakeholder community, including developers, importers, distributors, and deployers. India may find it useful to develop a similar taxonomy to ensure accountability is established across stakeholder communities.
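
The value-chain idea can be captured in a simple data structure. The sketch below pairs each role with simplified paraphrases of the kinds of obligations the EU AI Act assigns; the wording is illustrative, not legal text, and an Indian taxonomy would define its own roles and duties.

```python
# Illustrative sketch of a function-specific responsibility taxonomy of the
# kind the EU AI Act establishes across the AI value chain. The obligations
# are simplified paraphrases, not legal text.
from enum import Enum


class ValueChainRole(Enum):
    DEVELOPER = "developer"      # builds/trains the system ("provider" in EU terms)
    IMPORTER = "importer"        # places a foreign system on the market
    DISTRIBUTOR = "distributor"  # makes the system available downstream
    DEPLOYER = "deployer"        # uses the system in its own operations


OBLIGATIONS: dict[ValueChainRole, list[str]] = {
    ValueChainRole.DEVELOPER: [
        "conformity assessment before release",
        "technical documentation and logging",
    ],
    ValueChainRole.IMPORTER: [
        "verify the developer completed conformity assessment",
    ],
    ValueChainRole.DISTRIBUTOR: [
        "confirm required documentation accompanies the system",
    ],
    ValueChainRole.DEPLOYER: [
        "use per instructions, ensure human oversight, monitor and report incidents",
    ],
}

for role in ValueChainRole:
    print(role.value, "->", "; ".join(OBLIGATIONS[role]))
```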

India currently stands at a crossroads. AI holds incredible promise for transforming lives, the economy, and society: precision agriculture, accessible healthcare, smarter governance, and financial inclusion. On the enforcement side, the EU AI Act employs a risk-based classification that designates certain uses of AI as high risk and allows authorities to impose penalties of up to EUR 35 million or 7% of a company's global annual turnover, whichever is higher, for the most serious violations; India may consider a similar turnover-linked deterrent when framing consequences for non-compliance.
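
A short worked example shows how such a turnover-linked cap behaves in practice. The turnover figures below are invented; the EUR 35 million floor and 7% rate mirror the EU AI Act's ceiling for its most serious violations.

```python
# Worked example of a turnover-linked penalty cap: the higher of a fixed
# amount and a share of global annual turnover. Turnover figures are made up.
def penalty_cap(global_turnover_eur: float,
                fixed_cap_eur: float = 35_000_000,
                turnover_rate: float = 0.07) -> float:
    """Maximum fine: the higher of the fixed cap and the turnover share."""
    return max(fixed_cap_eur, turnover_rate * global_turnover_eur)


for turnover in (100e6, 1e9, 10e9):  # EUR 100M, 1B, and 10B annual turnover
    print(f"turnover EUR {turnover:>14,.0f} -> cap EUR {penalty_cap(turnover):,.0f}")
```

The "whichever is higher" structure means the fixed floor binds for smaller firms, while the percentage scales the deterrent for large multinationals.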

However, without accountability, the costs to privacy, fairness, and public trust are likely to persist, if not worsen. Moving from principle to enforceable practice requires regulatory architecture, technical mitigations, and collaboration among government, industry, and civil society. Recent incidents worldwide, such as the Google Gemini episode and other cases of AI producing biased or unsafe output, demonstrate the necessity of standardized safety audits and explainability frameworks. The DPO Club community, located at the intersection of technology, policy, and compliance, can play an important role in operationalizing the lofty principles of AI ethics and in developing standards and audits. If we do these things well, India can lead the way in harnessing the benefits of AI while building a Safe, Secure, and Trusted AI ecosystem that promotes innovation and protects individual rights.