A deep dive into the Money 20/20 Philippines Summit panel where industry leaders debated the promises, perils, and practical realities of building a more inclusive financial future.
Panel Discussion: AI in Philippine Fintech
It’s one thing to talk about AI in the abstract. It’s another to be on the ground, in a market as dynamic and diverse as the Philippines, and see where the silicon meets the road. At the recent Money 20/20 Manila event, a powerful panel of experts convened to discuss the practical integration of AI in the Philippine fintech sector. The discussion wasn’t just about shiny new models or aspirational roadmaps; it was a candid look at the messy, complicated, and often contradictory reality of using AI to drive financial inclusion while maintaining responsible governance.
The Panel
Biswanath Banik (Biswa), Chief Data Officer at Tonik Bank, brought the practitioner’s perspective on how AI is deployed inside a digital bank operating at scale. Imelda (Ida) Ceniza Tiongson, President & Trustee of OPAL Portfolio Investments SPV and FinTech Alliance.Ph, offered a crucial policy and governance lens on responsible innovation. Michelle Alarcon, President of the Analytics and AI Association of the Philippines, represented the broader AI ecosystem and industry adoption landscape. Together, they provided a 360-degree view on how businesses can harness AI ethically and effectively for competitive advantage.
The Reality Check: Where is AI Actually Working?
The first dose of reality came early. When asked where AI is delivering real impact today versus where it remains aspirational, the answer was clear. The most mature applications aren’t flashy customer-facing features; they’re in the back office, fighting fires.
Ida pointed out the global top three uses of AI in banking and financial institutions: fraud prevention, cybersecurity, and AML/KYC. The common denominator? They are all defensive.
“The three are the ones that are being implemented because the financial institutions or the banks, they don’t want to use money for attacks,” Ida explained. “But there’s more than that. So one part is profit creation… The other part of it is preventing an attack.”
This is the current state of play: AI as a shield. But the promise, the aspiration, lies in using AI as a bridge. And that bridge is financial inclusion.
The Bridge and the Barrier: AI for Financial Inclusion
This is where the conversation got to the heart of the matter. In a country like the Philippines, with credit bureau coverage under 30%, AI isn’t a luxury; it’s a necessity for growth. The real opportunity is in leveraging alternative data sources.
“How can you extract signals from telco, utilities, and devices? How can you mine or drive the credit risk signals and make a better decision?”
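To make the idea concrete, here is a minimal sketch of alternative-data credit scoring. The features (telco top-up regularity, on-time utility payments, device tenure) and the weights are hypothetical illustrations, not any real lender's model; a production system would learn weights from repayment outcomes.

```python
# Minimal sketch of alternative-data credit scoring. Features and weights
# are illustrative assumptions, not calibrated to any real portfolio.
from dataclasses import dataclass
import math

@dataclass
class AltDataProfile:
    topup_regularity: float      # 0..1: share of months with a telco top-up
    utility_ontime_rate: float   # 0..1: share of utility bills paid on time
    device_tenure_months: int    # how long the applicant has kept the device

def credit_score(p: AltDataProfile) -> float:
    """Map alternative-data signals to a 0..1 score via a logistic link."""
    # Hypothetical weights; a real model would estimate these from data.
    z = (
        -1.5
        + 2.0 * p.topup_regularity
        + 2.5 * p.utility_ontime_rate
        + 0.05 * min(p.device_tenure_months, 36)  # cap the tenure effect
    )
    return 1.0 / (1.0 + math.exp(-z))

applicant = AltDataProfile(topup_regularity=0.9, utility_ontime_rate=0.8,
                           device_tenure_months=24)
print(f"score: {credit_score(applicant):.2f}")
```

The point of the sketch is that signals people already generate, without a credit bureau file, can feed a scoring function at all; which signals are fair and predictive is exactly the open question the panel raised.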
This is the dream: using the digital footprints people already create to build a more inclusive credit system. But this is also where the first cracks appear. What happens when the data itself is biased, or when the model makes an assumption that doesn’t fit the local context?
Ida provided a powerful, real-world example that illustrated the gap between model design and lived experience:
“There are areas in the Philippines where you only have one cell phone for a family, a shared phone. But if person A, the mom, and then the dad uses it, and then the one of their children would use that phone, that is flagged as fraud, because it’s three different personalities, so stuff like that.”
Suddenly, a tool for inclusion becomes a barrier. A family sharing a single device to save money is flagged as fraudulent. This isn’t a hypothetical risk; it’s a daily reality that highlights a critical gap between model assumptions and lived experience in the Philippines.
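Ida's shared-phone example is easy to reproduce in code. The sketch below shows a naive device-sharing rule of the kind she describes; the threshold and field names are hypothetical, but the failure mode is exactly the one she flags: a family sharing one phone looks identical to account abuse.

```python
# Illustrative sketch of a naive device-sharing fraud rule. The threshold
# (max_identities=2) and the login records are hypothetical.
from collections import defaultdict

def flag_devices(logins, max_identities=2):
    """Flag any device used by more than max_identities distinct accounts."""
    users_per_device = defaultdict(set)
    for device_id, user_id in logins:
        users_per_device[device_id].add(user_id)
    return {d for d, users in users_per_device.items()
            if len(users) > max_identities}

# A family of three sharing one phone trips the rule immediately,
# while the single-user device passes.
logins = [("dev-1", "mom"), ("dev-1", "dad"),
          ("dev-1", "child"), ("dev-2", "solo")]
print(flag_devices(logins))  # legitimate sharing on dev-1 gets flagged
```

The rule is not wrong in general; it is wrong for this context. That gap between a globally sensible heuristic and a locally common behavior is the model-assumption problem in miniature.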
The implication is clear: without data from underserved communities, AI systems will continue to exclude them, no matter how sophisticated the algorithms.
The Fintech Disadvantage: When Bias Moves at the Speed of Light
This is where the panel covered a sobering concept: the fintech disadvantage. In a traditional bank, a loan officer can look a customer in the eye, understand their unique situation, and override a system’s recommendation. Fintechs don’t have that luxury.
The decision is instantaneous, and so is the risk.
When a biased model is deployed on a digital platform, that bias scales immediately. It’s not one bad decision; it’s potentially millions. The very thing that makes fintech so powerful—its scale and speed—also makes it incredibly dangerous if not governed properly.
Ida illustrated this with a hypothetical example about hiring algorithms:
“If your algorithm would say the ones that you want to employ would only come from four universities… then automatically the rest will be taken out, depending on the algorithm that you put in. And this is where… you really have to have critical thinking. So expect bias in the beginning. You’ve got to have critical thinking, you’ve got to have human touch in the beginning up until machine learning would begin.”
The key insight: bias is not just a technical problem to be solved with more data or better algorithms. It’s a governance problem that requires human judgment, critical thinking, and oversight.
Accountability and Governance: Who is Responsible When AI Gets It Wrong?
The panel then covered a critical question that the Philippine fintech industry has yet to fully address: Who is accountable when AI-driven decisions harm customers?
In the European Union, there’s a registry system where customers who feel they’ve been unfairly discriminated against by an AI system can report it. In the Philippines, there’s no such framework. Customers often don’t even know why they’ve been denied a loan.
This lack of transparency and accountability creates a hidden cost: false exclusion. People are denied financial services without knowing why, without the ability to appeal, and without any recourse. The industry needs a framework—whether through the Bangko Sentral ng Pilipinas (BSP) or the Department of Information and Communications Technology (DICT)—to track and address AI safety incidents.
The Futility of the Perfect Model: Why MLOps is the Real Competitive Advantage
Perhaps the most crucial insight from the panel was the pivot from model creation to model maintenance. Many organizations are obsessed with building the most predictive model, but they neglect what happens after deployment.
Michelle argued that this is a fatal flaw in how most fintech companies approach AI:
“People take a lot of time to build and deploy, and then they finally think, okay, I have a very good, shiny model… So now this will solve all my problems. And by the time you deploy it, your data have shifted, your kind of population have shifted, you know, all along.”
In the fast-moving world of fintech, a model can become obsolete in months, if not weeks. The competitive advantage, therefore, isn’t the model itself, but the operational infrastructure around it—DataOps and MLOps.
DataOps: Continuously monitoring how customer behavior and market conditions are changing.
MLOps: Tracking how a model’s performance degrades as the data it was trained on becomes stale.
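One common way to operationalize that monitoring is the Population Stability Index (PSI), which compares a model's training distribution to what it sees in production. The sketch below is a minimal, self-contained version; the ten bins and the 0.2 "retrain" threshold are widely used rules of thumb, not a mandated standard.

```python
# Minimal Population Stability Index (PSI) sketch for model monitoring.
# Bin count and the 0.2 alert threshold are common rules of thumb.
import math

def psi(expected, actual, bins=10):
    """Compare a training-time distribution to a live distribution."""
    lo, hi = min(expected), max(expected)
    def bucket_share(values):
        counts = [0] * bins
        for v in values:
            if hi > lo:
                i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            else:
                i = 0
            counts[max(i, 0)] += 1
        # Floor each share to avoid log(0) on empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]
    e, a = bucket_share(expected), bucket_share(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i / 100 for i in range(100)]                  # training-time scores
live = [min(i / 100 + 0.3, 1.0) for i in range(100)]   # shifted live scores
drift = psi(train, live)
print(f"PSI = {drift:.3f} -> {'retrain' if drift > 0.2 else 'ok'}")
```

When the live population shifts, as in the example, PSI spikes well past the threshold: exactly the "your data have shifted" failure Michelle warns about, caught by instrumentation rather than by a bad quarter of lending outcomes.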
“From a practitioner point of view, it’s really, really important that you… have a solid MLOps and model monitoring in place before you deploy it,” Biswa stressed. “And you have a lot of test and learn at every testing. So don’t deploy on the entire… pool. Maybe like deploy for one segment and see how it works, and learn and improve your model, and go from there.”
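Biswa's "deploy for one segment and see how it works" advice can be sketched as a staged rollout with a promotion gate. The segment names, the quality metric, and the 0.7 threshold below are illustrative assumptions, not Tonik's actual process.

```python
# Sketch of segment-by-segment rollout with a quality gate, per the
# "test and learn" idea above. Segments, metric, and threshold are
# hypothetical.
def staged_rollout(segments, evaluate, min_quality=0.7):
    """Deploy one segment at a time, halting if live quality degrades."""
    deployed = []
    for segment in segments:
        quality = evaluate(segment)  # e.g. early repayment performance
        if quality < min_quality:
            return deployed, f"halted at {segment} (quality={quality:.2f})"
        deployed.append(segment)
    return deployed, "fully rolled out"

# Hypothetical per-segment quality readings from a live pilot.
readings = {"salaried": 0.85, "gig-workers": 0.75, "thin-file": 0.55}
deployed, status = staged_rollout(list(readings), readings.get)
print(deployed, "-", status)  # halts before the thin-file segment
```

The design choice is the point: the model never reaches the whole customer pool until each segment has proven it out, which contains the "bias at the speed of light" problem to one cohort at a time.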
This is the shift from treating AI as a product to treating it as a process—a continuous cycle of learning, monitoring, and adapting. The companies that win aren’t those with the most sophisticated models; they’re the ones that recognize early that continuous monitoring and iteration are non-negotiable.
Responsible Lending in Practice: The Credit Reader Model
Biswa shared a concrete example of how Tonik is approaching responsible lending in the age of AI. Rather than immediately denying loans to customers who don’t meet traditional criteria, Tonik launched a product called Credit Reader.
“So we have launched a product called Credit Reader. So this is predominantly targeted for those customers to whom we could not give loan. But, I mean, we could give them loan with a very high interest rate, but that is not responsible lending.”
Instead of extracting maximum value from underserved customers through predatory lending, Tonik offers an alternative: self-regulation through credit building.
“So instead of giving them a loan today, what we are saying is that, okay, you do some other increment, you make some small amount of deposits for X number of days, and we see how you do, and then you come back and apply for us. So this is a self-regulation.”
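The mechanics Biswa describes—small deposits over a window, then an invitation to reapply—can be sketched in a few lines. The deposit target, window, and eligibility threshold below are hypothetical illustrations, not Tonik's actual Credit Reader criteria.

```python
# Hypothetical sketch of the "small deposits for X days, then reapply"
# flow described above. All thresholds are illustrative, not Tonik's
# actual Credit Reader rules.
from datetime import date, timedelta

def deposit_streak(deposits, min_amount=100, window_days=30):
    """Count days in the window with a deposit of at least min_amount."""
    cutoff = date.today() - timedelta(days=window_days)
    return len({d for d, amount in deposits
                if d >= cutoff and amount >= min_amount})

def eligible_to_reapply(deposits, required_days=20):
    """Invite the customer back once consistent saving is demonstrated."""
    return deposit_streak(deposits) >= required_days

# 25 qualifying daily deposits in the last month -> invited to reapply.
history = [(date.today() - timedelta(days=i), 150) for i in range(25)]
print(eligible_to_reapply(history))
```

Note what the rule optimizes for: demonstrated behavior over time, rather than a one-shot score—which is the substance of the "self-regulation" framing.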
This approach reflects a philosophy that Biswa articulated clearly:
“I think that this is exactly what digital lenders or fintechs should be doing, that we exist because we want to serve the customers in the right way. I mean, there is a way to lend in a very high interest rate, or you could wait a bit, let the person… prove the point that the person is financially viable, and give them the amount that you think that that person can take back.”
This is responsible innovation in action—using AI not to maximize extraction, but to maximize inclusion.
Governance Frameworks: Finding the Balance
The panel also discussed a critical question: Which AI use cases in fintech need strict guardrails, and which can be more permissive?
Ida introduced the concept of the Four Ts of risk management:
- Tolerate: Accept the risk because the impact is small
- Transfer: Pass the risk to another party
- Treat: Mitigate the risk through controls
- Terminate: Eliminate the use case entirely because it’s too harmful
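A governance team can encode the Four Ts directly as policy, so every AI use case gets an explicit risk response before deployment. The example use cases and their assignments below are illustrative, not a regulatory standard; note the deliberate default to the strictest practical treatment for anything unclassified.

```python
# Sketch mapping AI use cases to the Four Ts framework above.
# The use cases and their assignments are illustrative assumptions.
from enum import Enum

class RiskResponse(Enum):
    TOLERATE = "tolerate"    # impact is small: accept the risk
    TRANSFER = "transfer"    # pass it on, e.g. vendor SLA or insurance
    TREAT = "treat"          # mitigate with controls and human review
    TERMINATE = "terminate"  # harm too severe: drop the use case

# Hypothetical governance register for common fintech AI use cases.
GOVERNANCE = {
    "product_recommendation": RiskResponse.TOLERATE,  # wrong pick annoys
    "fraud_model_hosting": RiskResponse.TRANSFER,     # contractual transfer
    "credit_decisioning": RiskResponse.TREAT,         # guardrails + appeals
    "social_score_lending": RiskResponse.TERMINATE,   # harm outweighs benefit
}

def required_response(use_case: str) -> RiskResponse:
    # Unclassified use cases default to TREAT, the strictest
    # response short of termination.
    return GOVERNANCE.get(use_case, RiskResponse.TREAT)

print(required_response("credit_decisioning").value)
```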
The key is matching the governance approach to the potential harm. A recommendation engine that suggests the wrong product to a customer? That’s tolerable—the customer just gets annoyed. But a lending decision that denies someone access to credit? That needs strict guardrails.
As Michelle put it: “So I think it’s still not a yes to having light guardrails just because it’s for marketing use cases. I think that’s not necessarily true. It should still be in the context of, for the actual purpose of the transaction, I think we still need to respect why the user is there.”
This principle—respecting the user’s intent and protecting their interests—should be the foundation of all AI governance in fintech.
A Look to the Future: From Dystopia to Zero Poverty
So, what does success look like in five years? The panelists’ vision wasn’t just about better tech, but about better outcomes.
For Michelle, it was about having the right foundations in place from the start:
“If we had these [consent management, respecting privacy] ingrained in our businesses much earlier on… we should be in a much better place right now when we’re talking about technical challenges or finding workarounds, right?”
For Biswa, the ultimate goal is an open finance ecosystem that truly serves the unbanked.
“I think in five years, if I’m very optimistic about it, so maybe open finance or some form of open data, so not only led by BSP, it’s kind of my wish list, but also the other, you know, the government agencies, including the private players. So I think this would be definitely my number one wish list, and that would be leveraged to create some products and services for the customers, which would be available for them, you know, like without them asking for it. So embedded finance and all these others, and the financial inclusion. So I think that’s also the mission of Tonik, and I think it is very much achievable in five years.”
The Bottom Line: Technology Serves Humanity, Not the Reverse
The Money 20/20 Manila panel made one thing abundantly clear: AI is not a neutral tool. It is a powerful force that will shape our financial future, for better or for worse. The path we take depends not on the sophistication of our algorithms, but on the wisdom of our governance, the humility of our approach, and our unwavering focus on the human problems we are trying to solve.
The Philippine fintech industry stands at a crossroads. It can continue down the path of maximum extraction, using AI to identify and exploit the most vulnerable. Or it can choose the harder path: building systems that genuinely serve the unbanked, that respect privacy and consent, that are transparent and accountable, and that prioritize human flourishing over algorithmic optimization.
The panelists at Money 20/20 Manila are clearly building for the latter. The question now is: will the rest of the industry follow?
Paulo Joquiño is a writer and content producer for tech companies, and co-author of the book Navigating ASEANnovation. He is currently Editor of Insignia Business Review, the official publication of Insignia Ventures Partners, and senior content strategist for the venture capital firm, where he started right after graduation. As a university student, he took up multiple work opportunities in content and marketing for startups in Asia, including interning as an associate at G3 Partners, a Seoul-based marketing agency for tech startups; running tech community engagements at the coworking space and business community ASPACE Philippines; and interning at the workspace marketplace FlySpaces. He graduated with a BS in Management Engineering from Ateneo de Manila University in 2019.