Smita Jha:
Hello everyone, this is Smita Jha. I am a Partner with the Banking and Finance Practice at Khaitan & Co. Joining me today is Supratim Chakraborty, Partner with our Data Privacy Practice.
Supratim Chakraborty:
Hello Smita – very happy to be here.
Smita Jha:
It’s always a pleasure to sit down with you on matters like legal tech, fintech, and AI. To open this discussion, I would say that AI in finance is no longer an abstract research area: it’s present in underwriting, credit scoring, fraud detection, process automation, customer support, risk management, and even compliance functions. The pace of adoption is staggering. And yet, as we both know, when technology races ahead, regulation struggles to catch up.
Thankfully, RBI’s FREE-AI Framework, released on August 13, 2025, is an attempt to change that dynamic. This time, instead of waiting to play catch-up, it lays down core principles, called Sutras in the Framework, and concrete, actionable recommendations that blend innovation and accountability. Personally, I see it as a watershed moment for financial regulation. It also sends a clear signal about the fault lines AI adoption carries for the financial world and why early regulation is imperative.
Supratim Chakraborty:
I completely agree. You’re right that it’s not just a research area anymore; it’s our reality. I was reading an NY Post article some time back and was startled to learn that, globally, AI-driven crypto scams surged 456% between May 2024 and April 2025, exploiting AI-generated voices, forged credentials, and impersonations, with total crypto fraud reported at $10.7 billion in 2024 alone. So it’s definitely something we need to build an armour against. And I must say, it’s encouraging that we’re having this conversation from the finance and privacy perspectives together. Because neither can stand alone. Finance without privacy risks collapsing trust, and privacy without financial innovation risks suffocating growth.
The framework is clever in its architecture: it seeks to operationalise the seven guiding Sutras through two sub-frameworks, Innovation Enablement and Risk Mitigation. These two strands cover the six strategic pillars that are essential to any effective governance regime.
Smita Jha:
Yes — and those two strands really define how this story unfolds. Maybe we can explore them one by one, looking at the impact, then the loopholes or risks, and finally the future ahead. Shall we start with Innovation Enablement?
Supratim Chakraborty:
Perfect. Please go ahead — and I’ll add from the privacy side once you’ve set out the financial implications.
Smita Jha:
Sure thing! To start off, the impact of the innovation agenda is huge. The recommendation for setting up and enhancing the financial-sector data infrastructure could be transformative for credit access, especially for MSMEs and new-to-credit customers. Instead of opaque, fragmented silos, we would have interoperable, high-quality, trusted datasets, which would form the backbone of more reliable AI models. This levels the playing field for institutions, big and small.
Also, the AI Innovation Sandbox recommended in the Framework offers real-world value. It creates a safe zone for testing AI before deployment. Instead of letting risky models loose on the market, you can experiment in a controlled environment, much like Hong Kong’s GenAI Sandbox, which has already helped firms refine their models under supervision. For startups, which often abandon pilots due to high compute costs, this can be transformative; for regulators, it offers a window into risks before products hit the market.
But there are definitely some gaps. A shared data infrastructure, if poorly managed and utilised, could produce inaccurate assessments or misleading customer communications. Bias in datasets, or a lack of representativeness, is likely to cascade into biased credit models and amplify model errors. Besides, if participation isn’t inclusive, sandboxes risk being monopolised by large incumbents. And liability can’t become a shield for negligence.
Looking forward, I think the RBI’s recommendations here will be key: a well-structured Standing Committee, dedicated AI institutions, a regulatory mandate for the sharing and management of datasets, and AI governance frameworks. Together, they ensure coordination and prevent regulatory silos. But institutions themselves will need AI model risk management and dataset lifecycle governance policies and practices. Otherwise, innovation remains on paper.
I’ve spoken from the financial angle — what’s your take on the privacy side of this innovation agenda?
Supratim Chakraborty:
From a privacy standpoint, the data infrastructure is both promising and perilous. Built with privacy-enhancing technologies and strict governance, it could democratise access without diluting safeguards. But without those controls, it risks turning citizens into datasets, vulnerable to misuse.
The sandbox is another opportunity. For privacy professionals, it’s a way to pre-empt breaches. Imagine testing models not just for performance, but for compliance with the DPDP Act and for bias mitigation before consumers ever touch them.
Yet, the loopholes are real. If access rules are opaque, smaller innovators may be locked out. If privacy isn’t stress-tested in the sandbox, we’ll simply move problems downstream. And on liability, yes, regulators may forgive first-time lapses — but consumers won’t. For them, one breach can mean lifelong consequences.
Therefore, the future lies in literacy and accountability: not just for boardrooms and regulators, but for consumers too. The framework, notably, talks about capacity building and voluntary best-practice sharing. I’d personally go a step further and suggest embedding AI literacy into consumer rights frameworks so that innovation translates into empowerment, not just efficiency.
Smita Jha:
That’s a very interesting take, and yes, I agree: it’s clear innovation only works if governance is in its DNA. There is so much more we could discuss on the innovation side, but we do need to address the second sub-framework, which is Risk Mitigation.
Supratim Chakraborty:
I completely agree, innovation is only half the story. The tougher half is risk mitigation, and here the FREE-AI Framework is equally clear. Risk mitigation is where the framework gets teeth, and I personally believe it begins with governance. Under AI Governance, every regulated entity must adopt a board-approved AI policy. This effectively anchors responsibility at the very top. If an AI system denies a legitimate loan or approves a fraudulent transaction, the question won’t be “was the algorithm wrong?” but “did the board put in place the right safeguards?” Accountability, in other words, stays human, and in my opinion, this is a paradigm shift.
Next comes data lifecycle governance. Through this, the DPDP Act principles are carried into financial AI: from collection to deletion, every stage must be controlled. Vendor contracts must now include AI-specific obligations, such as confidentiality, explainability, and audit rights. No financial innovation can bypass scrutiny simply because it’s “new.” Accountability is embedded from the launch stage itself.
And then, consumer protection, which I like to call the heartbeat of trust. The framework enshrines rights: now you have a right to know when you’re interacting with AI, to access redressal, and to demand a human override. This ensures algorithms assist, but do not replace, human responsibility.
Finally, and very importantly, I would like to mention the recommendation on incident reporting and disclosures. Interestingly, the RBI has borrowed the idea from aviation, where even “near misses” are reported and studied. That’s a brilliant import. Say a model misclassifies thousands of transactions but no direct loss occurs; should that be ignored? The answer is no, because the absence of immediate harm does not mean the absence of risk. Annual report disclosures will also soon detail how AI is used, what safeguards exist, and what failures occurred. That’s transparency in action.
But this is not to say that the Framework is perfect; loopholes remain. Policies on paper won’t matter if boards see them as checkboxes. Vendor risk may still be underestimated, since many financial firms rely heavily on third-party models. And disclosures, unless they’re specific, may devolve into boilerplate statements, undermining the trust of the stakeholders involved.
In my opinion, the way forward is cultural internalisation. With governance anchored at the board, enforceable consumer rights, extended vendor liability, incident reporting to catch problems early, and transparent disclosures that keep institutions honest, the framework builds the scaffolding of trust around this technological change. Done right, risk mitigation doesn’t slow innovation; it makes it sustainable.
But enough from my end. What’s your take from the finance lens?
Smita Jha:
From where I sit, the impact of these measures is directly tied to stability, profitability, and investor confidence. A board-approved AI policy signals to markets that AI is being managed like credit or liquidity risk. That assurance can lower operational costs and improve resilience.
The data lifecycle governance you mentioned earlier isn’t just privacy compliance; it’s about sustainable business expansion and systemic risk management. Take the credit community of banks and NBFCs: poor or non-compliant data lifecycle governance can eventually erode credit portfolio quality, exacerbate NPA stress, and in turn trigger liquidity runs. In some instances, it even has the potential to cause reputational crises.
Which brings me to the point of consumer protection. I agree when you call it the heartbeat of trust, because this is inclusion in action. If disclosures are clear and transparent, human intervention can override AI decisions, and grievance redressal mechanisms work efficiently, we expand trust in digital finance. Otherwise, we exclude the very populations AI was meant to empower, and finance risks becoming an unchallengeable black box.
But yes, loopholes are inevitable. Boards may adopt policies but fail in implementation. Some firms may under-report incidents to avoid scrutiny. Cybersecurity remains a moving target, especially with deepfake-enabled frauds. The $25 million deepfake scandal in Hong Kong shows how sophisticated the threat has become.
So, the future is about effective implementation of policies and disclosure maturity. In fact, on disclosures, markets should punish opacity more than bad news, and that can only be built as a culture. If institutions report transparently, they’ll build credibility. The RBI’s proposed templates will help, but a culture of accountability, and most importantly the internalisation of such practices, will matter most.
I think we have spoken at length about the innovation enablement and the risk mitigation aspects of the Framework, Supratim. So, where does this leave us? To me, the FREE-AI framework is a delicate balance. Too much caution suffocates innovation; too much freedom destroys trust. The opportunity is to make innovation and risk mitigation not adversaries, but complements.
Supratim Chakraborty:
Yes, and as far as the privacy aspect is concerned, my final remarks would be that the future lies in alignment. If financial innovation can coexist with rights and dignity, India can set a global benchmark for emerging economies. But that requires urgency from boards, regulators, and innovators alike.
Smita Jha:
Because in finance, trust is the true currency.
Supratim Chakraborty:
And without privacy, trust cannot last.
Smita Jha:
Well, on that we are perfectly aligned!