The Magic of “Small”: AI Transformation for Business | Call 178

What does AI transformation mean for your organization? And what is the future of AI solutions for business with new, better models coming out every week?

Welcome to the Manila edition of our AI Transformation event. This all started as a playbook that we developed together with our partners at File.ai, WIZ.AI, NetSuite, and AWS. We launched that playbook last year, and as part of that launch, we organized an event in Singapore, providing businesses from different industries the opportunity to dive deeper into what AI transformation really means for them.

Thanks to our partners, we decided to hold this event again here in Manila. Manila is a particularly interesting market to discuss AI because there are a lot of challenges and misconceptions about implementation here. Our guests today are more than qualified to address these topics.

We have with us three experts in AI transformation. They’ve helped businesses across various industries, from F&B to banking, and across multiple markets—from Southeast Asia to Japan. WIZ.AI even has clients in South America, bringing a wealth of experience to the discussion. And of course, NetSuite is a global company with deep expertise in this space.

Timestamps

(00:00) Introduction

(03:49) Clare from File.ai on AI Transformation

(11:10) Aldo from NetSuite on Automation and Upskilling

(16:57) Alex from Wiz.ai on Conversational AI

(21:05) Cybersecurity and Data Privacy in Enterprise AI

(29:51) Q&A Session: What is really the competitive moat for enterprise AI solutions?

(45:14) Q&A Session: Is it possible and worth it to verticalize as an application layer solution?

(51:32) Q&A Session: How does Gen AI impact tools for VC portfolio management (Business Intelligence)?

(56:12) Q&A Session: How close are we to a 5-6 person multi-billion dollar company?

(01:01:40) Q&A Session: Who really controls the AI supply chain?

(01:05:15) Q&A Session: How do I think about AI Transformation for SMEs?

(01:11:24) Q&A Session: What are AI use cases for professional services?

(01:15:01) Q&A Session: How do I eliminate manual data handling?

Follow us on LinkedIn for more updates

Subscribe to our monthly newsletter for all the news and resources

Directed by Paulo Joquiño

Produced by Paulo Joquiño

The content of this podcast is for informational purposes only, should not be taken as legal, tax, or business advice or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any Insignia Ventures fund. Any and all opinions shared in this episode are solely personal thoughts and reflections of the guest and the host.

Transcript

Clare from File.ai on AI Transformation

Paulo: In the context of this conversation, let’s start with Clare. File.ai recently raised a Series A round along with a whole new look, a brand reboot, and a renewed focus on its expertise—data processing. You work with many different types of documents and formats, and you’ve mentioned that your product is a single solution that can serve multiple use cases across different markets. However, the way you sell and implement it varies significantly by market.

Could you speak to the challenges of this, particularly in the Philippines?

Clare: Good afternoon, and thanks for joining us. It’s a small group, so please don’t be shy—ask questions as we go along and relate this back to what you’re doing in your work.

I’m Clare, COO and co-founder of File.ai, which recently rebranded from Bluesheets. As Paulo mentioned, we’re doing a lot—even in the way you describe it. To simplify, let’s take a step back. When I think about how we’re solving these problems and deploying our solution, we have customers in 18 countries and teams in five of those. We’re working across multiple languages, currencies, and use cases. There’s a lot to consider.

When I think specifically about what we’re doing in the Philippines, it comes down to the core problems we’re solving, what AI is actually helping with, and why businesses should adopt AI in the first place. With our rebrand, that mission has become even more explicit—it’s in the name now. File.ai is AI for file-intensive processes.

Officially, we describe it as a horizontal file processing agent and workflow automation platform, but that’s a mouthful. Simply put, we do two things very well in the back office: we drive cost savings and operating efficiency while improving productivity and time-to-value for businesses.

One of our key capabilities is specialized unstructured data processing. AI is only as good as the data it’s trained on, and that’s the very first layer that anyone thinking about AI transformation needs to consider—your data infrastructure and access to quality data. A significant portion of this data sits in unstructured formats.

Once we surface that data, we help operationalize it. How is it being used? This is why we love events like this and partnerships with companies like NetSuite. Businesses aren’t sitting on a File.ai platform all day—they’re using their core operating systems, like NetSuite, to run their businesses. Our role is to automate that experience by surfacing the right data at the right steps in their workflow so they can use NetSuite faster and more efficiently, ultimately improving their platform experience.

That’s why our rebrand is so important. Our mission is to eliminate manual processing of files—whether it’s data entry, review, checking, or comparison. There’s a lot involved in file management, even in simple tasks like opening an email, downloading an attachment, checking its accuracy, and uploading it again. Now, imagine the complexity within an ERP system—there are so many processes within that.

For us, the rebrand puts the “file” aspect front and center because we solve so many different use cases. We are a file-agnostic system, meaning that regardless of file type, language, currency, or format, File.ai can read, understand, digitize, and structure that data. This makes it available for process automation downstream.

When thinking about AI transformation, I always take a step back. We spoke with some of the NetSuite team before this event, and someone asked, Why use File.ai? Why use automation at all? That’s a great question because regardless of the market or how far along a company is in its AI transformation, the evolution follows a common path.

We started with purely manual processing—reading invoices, keying in data, mapping to ledger codes, sending approvals—all done manually. We’ve all been there. Then came digitization—optical character recognition (OCR) software that simply scanned and digitized text. Robotic Process Automation (RPA) followed, automating some steps but still relying on rule-based processes.

Now, we’ve made a leap forward. Why does AI-driven automation matter? Because it enables so much more. The return on investment is significant, whether in OPEX savings, workforce augmentation, or faster time-to-value.

I won’t dive too deep into legacy systems, but when looking at a market like the Philippines and the question of why businesses should automate with a system like File.ai, the reality is that manual processing is still widespread here. It’s not scalable—if you want to increase transaction volume, you have to increase headcount and costs accordingly. There’s also an opportunity cost. If a process takes 10 days to complete, that delay could impact customer experience. More data, faster data, and faster time to outcome—that’s the value proposition, whether in the Philippines or any other market.

In the AI landscape, we often compete against agentic solutions that are built for specific use cases. Businesses may wonder, Why use File.ai when I could replace my manual process with an advanced AI solution tailored to my needs? The answer is simple: scalability. If a company has 20 core business processes, no one wants to go through the procurement process 20 times or manage 20 different vendors with different security and performance standards.

That’s why being a horizontal, file-agnostic solution is so important for us. It allows us to add value across multiple processes, making AI transformation worthwhile, particularly in a market like the Philippines, where companies are still in the early stages of cloud and AI adoption.

Paulo: So would it be correct to say that File.ai serves as a foundational tool to standardize how organizations approach AI transformation before incorporating other agentic AI solutions?

Clare: Yes, and that’s the beauty of File.ai as an entry point into AI transformation. We work with companies at every stage—whether they’re just starting to implement automation or they’ve already invested in highly specialized predictive models for niche business functions.

At the end of the day, those AI investments are only as good as the data they’re built on. File.ai plays a crucial role in structuring and surfacing that data, automating workflows, and bridging different steps in an organization’s digital transformation journey.

Paulo: That’s a key takeaway. As much as AI solutions are becoming ubiquitous, implementation is still highly case-specific. I encourage everyone here to ask questions later about your own use cases.

Aldo from NetSuite on Automation and Upskilling

Paulo: Before we dive into that, let me turn to Aldo. Clare just mentioned one of the primary reasons for automation—boosting productivity and reducing OPEX. NetSuite works with organizations of different sizes and headcounts. How do you advise clients on implementing automation at scale, especially within ERP systems? And how do you help their teams adapt and upskill during this transformation?

Aldo: Thanks, Paulo. Let me introduce myself again—I’m Aldo, a solutions consultant at NetSuite.

To your question, NetSuite is an accounting and inventory management system, and whenever we meet prospective clients, one of the first things they ask is, How will this help us reduce manpower or be more efficient?

With a system like NetSuite, we streamline operations and automate tasks, improving efficiency. Now, with AI transformation accelerating across industries, many employees feel apprehensive about being replaced by artificial intelligence, which is understandable. But when we talk to people with a growth mindset, we emphasize that AI is a tool to leverage—it’s about managing change effectively.

Company leaders play a crucial role in communicating this to employees. They need to reassure their teams that AI isn’t something to fear; instead, it should be used to make their jobs easier. AI is already capable of many things—whether it’s predicting the weather, automating repetitive tasks, or even facilitating cashier-less grocery checkouts. In some places, we’re seeing AI-driven stores where no employees are needed at checkout. And of course, in the U.S., self-driving cars are becoming more prevalent.

Given all this, many employees worry about their job security. This is why leadership must guide teams toward upskilling. Employees need to develop new skills to remain competitive in an AI-driven workplace. One crucial skill is knowing how to ask AI the right questions.

In the past, education focused on memorization—absorbing knowledge from books and lectures. Now, a key skill is learning how to effectively prompt AI. They call this prompt engineering—the ability to ask an AI tool such as ChatGPT the right questions to get the most valuable answers.

Another essential skill is critical thinking. AI is powerful, but it isn’t perfect. You can’t blindly accept its responses—you need to analyze and evaluate its outputs. AI can make mistakes, so users must apply logic, verify information, and continue probing for better insights.

A third area where humans still have an advantage over AI is emotional intelligence (EQ). AI might have an IQ of over 200, but EQ remains a uniquely human trait. Emotional intelligence allows us to navigate social interactions, understand emotions, and make decisions that require empathy—something AI struggles to replicate. These are key elements to focus on when managing change in organizations.

Paulo: That’s an important point. Do you have any examples of how you’ve seen this balance between IQ and EQ play out in real-world applications?

Aldo: A great example is in medicine. AI is now capable of diagnosing illnesses and even assisting with surgeries. But a human doctor can read a patient’s emotions, detect subtle physical cues, and provide comfort and reassurance—something AI cannot do. This human element remains essential in fields that require empathy and nuanced judgment.

Paulo: I completely agree. One key takeaway from Aldo’s point is that AI’s effectiveness is only as good as the user’s understanding of the subject matter and business context. AI isn’t a plug-and-play solution that guarantees success—it depends on the data you have and how you apply it.

Alex from Wiz.ai on Conversational AI

Paulo: With that in mind, I want to turn to Alex from Wiz.ai. Wiz.ai started out as a conversational voice AI solution, but now you’re doing much more, including vertical AI, LLMs, and AI agents.

Fun fact—back in 2019 or 2020, I was one of the people who tested Wiz.ai’s Tagalog voice AI. So, somewhere in Wiz.ai’s training data, you’ll find my voice! It’s been interesting to see how the landscape has evolved since then.

When we spoke in October, we were still talking in terms of AI agents. But now, just two months later, we have Satya Nadella talking about agentic AI, the idea that “SaaS is dead,” and so much more. Could you help differentiate these concepts for us? And how is Wiz.ai adapting to these rapid developments in AI?

Alex: First, let me introduce myself. I’m Alex from Wiz.ai. Wiz.ai is a conversational AI company, and our focus is on helping customers improve customer engagement.

Specifically, we help customers increase their revenue. For example, our conversational AI bots can be used for telemarketing sales and lead filtering, generating new revenue streams for businesses. On the other hand, we also help customers save costs by using AI agents to handle repetitive tasks more efficiently.

This is especially valuable for large-scale enterprise customers. Our AI talkbots, for instance, can handle one million phone calls—whether outbound or inbound—in just one hour. If you have a massive customer database that requires engagement, our AI bots can significantly enhance efficiency.

The third key area we focus on is our proprietary AI technologies, including Automatic Speech Recognition (ASR), Text-to-Speech (TTS), and Natural Language Understanding (NLU). We have developed AI solutions in 17 different languages across Southeast Asia, South America, the U.S., and even Africa.

I’m particularly excited to be here in the Philippines because it is one of our most important markets in Southeast Asia. We work with Tier 1 telcos, banks, and insurance companies here. What sets our conversational AI bots apart is their ability to communicate naturally in Tagalog, Taglish, and English. When we demonstrate our technology to clients, we find that 95% of people can’t distinguish between our AI talkbots and a human agent.

Going back to Paulo’s question—back in October, AI agents were still in the early stages of discussion. People were considering AI agents as a possible direction for large language model applications. Now, just a few months later, this has become a major focus in AI development.

Initially, AI agents were designed to handle specific tasks or use cases. But today, conversations are shifting toward agentic AI frameworks. This means integrating multiple AI agents with hybrid large language models, APIs, and different enterprise systems. By doing this, AI agents can become even smarter, solving complex problems and driving greater efficiency.

I’d be happy to dive deeper into this topic later in the discussion. Thank you.

Paulo: Yeah, I think that ties back to Clare’s point earlier. There are many different use cases for AI solutions, but for organizations to implement them effectively, they need to think holistically—from a framework perspective, as you mentioned.

I’d like to take this moment to encourage everyone to start thinking about questions you’d like to ask. But before we open the floor, I have one more question.

Cybersecurity and Data Privacy in Enterprise AI

Paulo: Cybersecurity and data privacy are top priorities for many enterprises and businesses—sometimes even more so than the cost of products or services. Could you speak to this in the context of your own solutions? What insights have you gained when onboarding companies and reassuring them that their data will remain secure as they implement these technologies?

Clare: Great question. This is such an important and timely discussion, relevant across all industries and markets. Concerns about cybersecurity and data privacy are among the biggest barriers to AI adoption.

When people think about AI, they often base their understanding on personal experiences—ChatGPT on their phones, AI-powered image generators, and other open-source tools. The mainstream conversation revolves around open-source and generative AI, but enterprise AI transformation is a completely different landscape.

Historically, enterprise tech adoption was top-down. Companies made strategic decisions, implemented new technologies, and handed them down to employees. But with the rise of consumer AI products like ChatGPT, AI has become accessible to everyone. This shift has created new expectations—users already have preconceived ideas of how AI should perform, what interactions should look like, and how security should be handled.

For us, security and data privacy are at the core of our solution. We operate primarily in the enterprise space, working with finance teams, banks, and insurance companies—some of the most highly regulated industries. As a result, we’ve built data governance and security compliance into our system from the very beginning.

File.ai runs on a fully encrypted server environment, ensuring that no data leaves the system. Many of our deployments are private cloud environments, meaning our solution operates entirely within an enterprise’s existing tech infrastructure. For SaaS deployments, we allow customers to choose where their data is hosted, ensuring full control over their information.

Because we started in a highly regulated space, we’ve always had to adhere to strict governance and compliance standards. This has actually given us an advantage—there were already established frameworks for data privacy that we could build upon. In contrast, B2C AI products are now scrambling to address security concerns after launching in an open-source environment.

For companies concerned about cybersecurity, it’s critical to understand how AI integrates into their existing tech stack. We work closely with enterprises to ensure their private data remains contained and never interacts with open-source AI models. We’re also SOC-certified, which further reinforces our commitment to security.

Ultimately, managing these concerns is as much about technical safeguards as it is about change management. It’s about guiding enterprises through the process, breaking down misconceptions, and ensuring they understand how AI can be implemented securely.

Paulo: Aldo, given that NetSuite has been handling enterprise security for a long time, how has the perception of cybersecurity evolved, especially with AI now in the picture?

Aldo: Whenever we present NetSuite to prospective clients, data security is always one of their biggest concerns. Companies want to know where their data is hosted and how it is protected.

For NetSuite, we store customer data in Oracle’s data centers. For new customers in the Philippines, we host their data in Tokyo. We assure them that Oracle’s data centers have all the necessary security certifications and encryption protocols in place.

It’s important for businesses to know that all data within NetSuite is contained, meaning there’s no risk of unauthorized access or data breaches. We also reinforce this commitment in our subscription service agreements, which explicitly state our security policies. These are the reassurances we provide to our clients.

Paulo: That makes sense. Now, shifting to agentic AI—Alex, could you talk about how Wiz.ai ensures security, and why agentic AI might offer a more secure approach compared to traditional AI agents?

Alex: Our customer base overlaps significantly with the financial services sector, similar to Clare’s. We primarily serve BFSI (banking, financial services, and insurance), e-commerce, healthcare, and telecommunications—industries that are highly regulated.

Beyond compliance, we’ve also developed AI solutions tailored to regulatory requirements. For example, we provide quality assurance (QA) and quality management (QM) products for certain industries, helping customers comply with government regulations.

In terms of data security, we offer both SaaS and on-premise deployments. Since we handle conversational data—often including highly sensitive financial information—many customers prefer to install our solutions on-premise to ensure data never leaves their environment.

Additionally, we strictly adhere to data residency requirements. Most of our enterprise clients require that their data be stored within their own country, and we ensure that no data is transferred externally.

There are also regulatory mandates regarding data retention. In many industries, recorded conversations and transaction logs must be securely stored for at least five years. We strictly follow each country’s compliance guidelines, whether they come from central banks, financial regulators, or telecommunications authorities. Our approach ensures that all data remains secure and fully compliant with local regulations.

Paulo: Clare, did you want to add something?

Clare: Yes, Alex makes an excellent point. While some companies worry that AI introduces security risks, the reality is that AI often strengthens compliance, risk management, and governance processes.

Many of the AI-driven solutions we implement are specifically designed to enhance security. AI can improve audit logs, increase transparency, strengthen data storage protocols, and enhance fraud detection. For example, we’ve built fraud validation layers into our processing capabilities.

Rather than posing a security risk, AI can actually help companies comply with ever-changing regulations and improve their overall data governance. In many cases, security and compliance are among the strongest business cases for AI adoption.

Paulo: That’s a great point. It’s interesting how generative AI is often associated with consumer-friendly applications—fun tools, creative apps, and drag-and-drop AI agents. But what’s reassuring about enterprise AI companies is that security has been a fundamental consideration from day one. And I assume that as AI continues to evolve, these security frameworks will influence consumer AI applications as well.

Q&A Session: What is really the competitive moat for enterprise AI solutions?

Paulo: On that note, I’d like to open the floor to questions. If anyone has any questions or would like a consultation—yes, Peter?

Audience: I guess to start off with a question from the investor side—since I see a few VCs here—what I’ve been noticing is that LLMs have been progressing rapidly, and their datasets are becoming larger and larger.

I read the AI playbook that you guys published, and my question is: do you see proprietary data as something that will slowly diminish in importance as bigger players like Claude and OpenAI improve their LLMs with more data? Or will proprietary datasets still hold significant value two to five years from now?

Clare: I’m happy to kick this one off. I think this is one of the most exciting aspects of AI transformation right now—there’s still so much innovation happening at the foundational model level.

When enterprises embark on their AI journey and evaluate vendors, very few want to make big bets right now. No one wants to commit solely to AWS, for example, and assume that it will perfectly meet their use case in the next two to five years. There’s still a lot of progress to be made, and as a result, we see hesitancy in the market. Companies are reluctant to sign multi-million-dollar, multi-year contracts that lock them into a specific provider.

That’s why File.ai has a fully configurable processing stack. Enterprises can integrate off-the-shelf solutions from major cloud and AI providers while also leveraging our proprietary LLMs and fine-tuned models for their specific use cases.

To answer your question about the value of proprietary data—I think it will continue to shift, but when you look at enterprise adoption, proprietary data will always hold value. Even as public training datasets improve, enterprises operate in siloed environments where fine-tuning on their own data will always be necessary.

For example, one bank’s internal data and processes will differ from another’s, even if the differences are nuanced. Better foundational models mean better initial outputs, but companies will still need to deploy AI internally and fine-tune models with their own proprietary data for optimal performance.

For us, accelerating that process—helping enterprises extract value from their proprietary data without putting the burden entirely on them—is a key advantage.

Alex: I’ll add to that with our perspective on foundation models and smaller, domain-specific models.

Most large language models are built on general knowledge from publicly available sources. They’ve already been trained on a vast amount of internet data. However, in the commercial world, a lot of valuable industry-specific knowledge is proprietary and not publicly available. That’s one of the key differentiators.

Another development we’re seeing is the rapid improvement in reasoning capabilities. Compared to last year, today’s LLMs are significantly better at reasoning through complex problems.

Previously, we designed AI agents for very specific customer use cases. But even with strong domain knowledge, we aren’t the ultimate experts in every industry—that expertise lies within our customers’ organizations.

So rather than focusing solely on large or small models, the key question is: How do we enable businesses to leverage their proprietary knowledge, regardless of the AI model they use?

We believe that over the next two to three years, AI solutions that help businesses integrate their proprietary knowledge into various models—whether large or small—will be the most valuable. There won’t be one dominant model that unifies everything. Instead, we’ll see specialized, domain-specific models emerge that perform exceptionally well in certain industries.

Clare: That’s a great point. There’s also a commercial consideration here.

Over time, as training data improves, people might wonder what happens to the value of proprietary data. We’ve discussed how this plays out in enterprise settings, but from a business perspective, smaller models will always have an advantage in cost, latency, and accuracy.

Even if a large language model can theoretically handle an enterprise process well, a specialized model will usually outperform it in real-world applications. Running a large model on general knowledge is expensive, slower, and more prone to hallucinations.

For enterprises that need to execute a high-volume task repeatedly and with precision, a smaller, fine-tuned model will always be more efficient. That’s why, as Alex pointed out, AI solutions should be designed to integrate with enterprise-specific datasets, whether through fine-tuning a proprietary model or optimizing the processing stack.

Audience: Just a follow-up question—thank you for those insights. Based on what you’ve shared, my question is: What makes AI startups in Southeast Asia competitive?

Many Western AI startups have more funding, and their technology is often more advanced. But you operate in a defensible sector—enterprise AI. You have strong distribution and work with large enterprises, which gives you an edge.

But beyond targeting enterprises, what other playbook could AI startups in Southeast Asia follow? Can smaller AI startups targeting SMEs also become big outcomes?

Alex: That’s a great question.

Since we specialize in conversational AI, our competitive advantage comes from our language capabilities. For example, in the Philippines, very few conversational AI companies can provide high-quality Tagalog and Taglish AI solutions.

Southeast Asia is a unique market. In the U.S. and Europe, IT budgets are significantly larger, but the competition is also much fiercer. Many AI startups launch there because they have access to major enterprise clients, and if you’re building an English-language AI product, you’re competing directly with massive players who already dominate those markets.

For us, we chose to focus on developing Southeast Asian languages and deepening our expertise in specific verticals like BFSI, e-commerce, and telecommunications. By building strong domain expertise alongside our AI technology, we create defensibility.

Eventually, we plan to expand into larger markets, but starting in Southeast Asia allows us to establish a competitive edge before scaling.

Audience: I imagine regulations play a role in this as well. Compared to the U.S., are compliance requirements in Southeast Asia more challenging to navigate?

Alex: Yes and no.

Southeast Asian regulators are becoming increasingly strict on data privacy. For example, Indonesia recently introduced a new data privacy law, adding further restrictions. Developed markets like the U.S. tend to have more established frameworks, which can sometimes make compliance easier to navigate.

In Singapore, for example, there’s a “Do Not Call” list. Every time we make an outbound call, we have to check if the number is on that list. Other countries have similar regulations, requiring businesses to obtain explicit customer consent for marketing outreach.
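The compliance check Alex describes—screening every outbound number against a do-not-call registry before dialing—reduces to a set-membership filter. A minimal sketch with made-up numbers (in production the registry would be a database or regulator-API lookup, and consent records would also be verified):

```python
def filter_callable(numbers: list[str], dnc_registry: set[str]) -> list[str]:
    """Return only the numbers not on the do-not-call registry,
    preserving the original campaign order."""
    return [n for n in numbers if n not in dnc_registry]

# Hypothetical campaign list and registry for illustration.
campaign = ["+65 8000 0001", "+65 8000 0002", "+65 8000 0003"]
dnc = {"+65 8000 0002"}
print(filter_callable(campaign, dnc))  # → ['+65 8000 0001', '+65 8000 0003']
```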

Eventually, we expect all Southeast Asian markets to implement similar policies. Compliance is complex, but it’s not necessarily a barrier to entry. The bigger challenges for startups entering large markets are competition and customer acquisition costs.

Clare: I think that ties into a larger point about what makes Southeast Asian AI startups competitive on a global scale.

Expanding into the U.S. market requires more than just having great technology. The way we sell, distribute, and position our solutions has to adapt to different markets. For example, our ideal customer profile (ICP) in Southeast Asia might look different from our ICP in the U.S.

One of the advantages of building in Southeast Asia is that we’re solving for complexity from the start. We deal with multi-language, multi-regional, and highly regulated environments early on. This level of adaptability becomes an asset when expanding to more mature markets.

For File.ai, before expanding to the U.S. a few months ago, we went through this process of refining our go-to-market strategy. By the time we entered, we already knew exactly how to position ourselves.

Flexibility is also key. Some enterprises are just beginning their cloud transformation, while others are far along. We can support everything from on-prem to private cloud to SaaS. That adaptability makes our solution more competitive.

Lastly, the first-mover advantage is real. Proprietary data is valuable, but being the first to establish strong relationships with enterprises is just as important. If we build deep integrations with financial institutions now, it becomes much harder for competitors to displace us later.

Paulo: That’s a great perspective. Enterprise AI isn’t just about technology—it’s also about sales strategy, market positioning, and execution.

Q&A Session: Is it possible and worth it to verticalize as an application layer solution? 

Audience: Do you consider yourself a startup operating at the application layer or the foundation layer? If you’re in the application layer, what parameters would you consider when deciding whether to move up or down the stack? Would you ever consider building your own models, or would you stick to the application layer given the high costs of moving deeper into the stack?

Clare: Great question. I can speak from our experience and journey. When I first heard your question, I thought you were asking whether one is better than the other or what would drive a company to move between them. My instinctive response is that it depends on customer demand and market needs.

We very much operate in the application layer, but I’d describe our approach as AI orchestration. We orchestrate between multiple AI products, including our own proprietary models. If you’re going to build at the foundation model level, you need the resources, a competitive edge, and a niche strong enough to compete with the major providers. That would be my first consideration.

We focus on operationalizing AI solutions at the application layer. However, to your point about why a company might move down the stack—there are opportunities to serve our existing customers more effectively and efficiently with proprietary solutions. That’s why we decided to develop our own models.

Clare: Having proprietary models also strengthens our defensibility. If you try to build a similar solution solely by integrating OpenAI, Anthropic, or Bedrock, your business becomes entirely dependent on those providers. If they change pricing, modify policies, or limit access, your business model could collapse overnight.

We wanted to avoid that risk, which is why we built in-house capabilities while still leveraging external innovations from our tech partners. This gives us the advantage of cost efficiency, accuracy, and better control over how our models run. It also ensures that we remain independent and adaptable, even as the AI landscape evolves.

Audience: Got it. So your training is primarily focused on application-level improvements, such as refining retrieval-augmented generation (RAG) pipelines?

Clare: Exactly. We take off-the-shelf solutions, apply fine-tuning, and also incorporate our proprietary models that are specifically designed for our use cases.

Alex: I’ll share some insights from our experience.

Back in 2023, when OpenAI released GPT-4, we found that it was excellent in English but not as strong in other languages. At the start of 2024, we decided to build a language-specific foundation model—specifically a Bahasa Indonesia model, which at the time was the second-best available in the market.

We pitched this model to one of the largest banks in Indonesia, and after extensive testing, they were highly impressed. They even considered purchasing the foundation model from us.

However, the evolution of large language models was happening too quickly. With the rapid release of GPT-4 Turbo and other improvements, enterprises began to feel that foundational models were evolving so fast that investing in custom-built models didn’t make sense.

Eventually, we realized that focusing on foundation models wasn’t the best path for us. Instead, we pivoted back to our core strength—building domain-specific applications with advanced AI capabilities.

We’ve worked on fine-tuning large language models using open-source models, but the key lesson we learned is that you cannot build your business model entirely on the most advanced AI available at any given time. Things change too quickly.

Alex: When working with customers, we emphasize the need to first identify their specific commercial purpose before selecting an AI model.

For example, do they need advanced mathematical reasoning capabilities? Do they need better contextual understanding for customer interactions? Do they need knowledge management? Each use case requires a different AI approach.

In addition, companies must consider data privacy and compliance. This is why we strongly believe in hybrid AI strategies.

In the next two to three years, I expect enterprises to use a mix of different AI models depending on department-specific needs.

For example, an HR team might use Anthropic Claude 3.7 for internal document processing, while the sales team might rely on ChatGPT for customer interactions.

The key takeaway is that businesses shouldn’t depend on a single AI provider. Instead, they should build adaptable AI strategies that allow them to switch between models based on their needs.

Right now, we’re testing and refining this approach with our customers to ensure they have the flexibility to integrate different AI capabilities as the landscape continues to evolve.
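The provider-agnostic strategy Alex describes can be pictured as a thin routing layer that maps each use case to whichever model currently serves it best. The sketch below is a hypothetical illustration under assumed names (the `ModelRoute` type, the model identifiers, and the stubbed handlers are all made up), not a description of WIZ.AI's actual architecture:

```python
# Hypothetical sketch of a provider-agnostic routing layer. Model names,
# routing rules, and handlers are illustrative assumptions, not a real product.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelRoute:
    provider: str                   # e.g. "anthropic", "openai", "in-house"
    model: str                      # provider-specific model identifier
    handler: Callable[[str], str]   # function that actually calls the provider

class ModelRouter:
    """Routes requests to different AI providers by use case,
    so no single provider becomes a hard dependency."""

    def __init__(self) -> None:
        self._routes: dict[str, ModelRoute] = {}

    def register(self, use_case: str, route: ModelRoute) -> None:
        self._routes[use_case] = route

    def complete(self, use_case: str, prompt: str) -> str:
        route = self._routes.get(use_case)
        if route is None:
            raise KeyError(f"No model registered for use case: {use_case}")
        return route.handler(prompt)

# Usage: swap providers per department without touching calling code.
# The lambdas stand in for real API calls.
router = ModelRouter()
router.register("hr_documents", ModelRoute("anthropic", "claude-3-7", lambda p: f"[claude] {p}"))
router.register("sales_chat", ModelRoute("openai", "gpt-4o", lambda p: f"[gpt] {p}"))
print(router.complete("hr_documents", "Summarize this policy."))
```

Replacing a provider then means re-registering one route, which is the kind of flexibility the panelists argue enterprises should retain.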

Q&A Session: How does Gen AI impact tools for VC portfolio management (Business Intelligence)?

Audience: This question is more for you, Clare, but I’d love to hear others weigh in as well—especially since I know Oracle has a BI tool.

Beyond just capturing data and reducing costs, how are you thinking about business intelligence? Ideally, I’d want something like File.ai to analyze my entire database and tell me who I need to hire, who I need to fire, which divisions are performing well, and which ones aren’t.

For example, in venture capital, we’re doing portfolio management. If we could connect this to Carta and see fund performance and portfolio performance holistically, that would be a game-changer. Are you going that far with it?

Clare: Sounds like you’ve been looking at our sales deck.

Audience: If not, I need it.

Clare: Actually, when we present our solution to potential customers, we emphasize that while data processing is a major component of what we do, the real value comes from how that data is used.

We’re seeing massive acceleration in predictive modeling capabilities, and File.ai operates across three core layers: processing and surfacing data, automation, and outputs. I could go deep into everything that happens within our data processing step, but for us, that’s just step one—it’s the foundation for the real fun stuff.

You touched on a few key use cases, particularly business intelligence. If anyone is interested in this, I’d love to have a deeper conversation because, in reality, there’s no limit to what we can do with this data.

File.ai functions as both a data processing layer and an agentic workflow system. This means that after we process and extract data, we can integrate it with your existing BI tools, Power BI dashboards, or predictive analytics models.

For example, we work with banks on income verification. Many have invested—excuse my language—a lot of money into fraud detection and validation models. But those models are only as good as the data being fed into them. If you don’t have high-quality, structured data going in, you won’t get meaningful outputs. We help optimize that process, ensuring that the models enterprises have already invested in can actually deliver ROI.

Ultimately, how businesses use and operationalize data is limitless. It depends on the specific business case.

If a company wants real-time predictive P&L analysis of its portfolio companies—showing who to double down on or what corrective actions to take—that’s an amazing application. But before getting to that shiny new output, it all starts with data infrastructure.

The first step is ensuring data is properly processed, structured, and connected across systems. Then comes defining schemas, establishing workflows, setting guardrails, and deciding how insights should be used. It’s a journey, but the foundation always begins with the data layer.

We’re also seeing a lot of exciting developments in post-processing layers. Data enrichment is a great example.

In finance, one of the most common use cases we solve is accounts payable automation. Instead of simply processing invoices, we go further—matching invoices against purchase orders, reconciling requisitions within NetSuite or other ERPs, performing three-way matching, flagging discrepancies, routing approvals, updating the general ledger, notifying inventory systems, and integrating with warehousing or planning tools.

It starts with simple data extraction, but once that data is structured, the workflows can be endlessly configured.
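The three-way matching step Clare mentions can be sketched as a simple rule check: compare the invoice against the purchase order and the goods receipt, and flag anything that disagrees for human review. This is a minimal illustration with made-up field names and tolerances, not File.ai's actual logic:

```python
# Minimal sketch of three-way matching (invoice vs. purchase order vs.
# goods receipt). Field names and tolerance are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Document:
    po_number: str
    quantity: int
    unit_price: float

def three_way_match(po: Document, receipt: Document, invoice: Document,
                    price_tolerance: float = 0.01) -> list[str]:
    """Return a list of discrepancies; an empty list means the invoice
    can be routed straight to approval."""
    issues = []
    if not (po.po_number == receipt.po_number == invoice.po_number):
        issues.append("PO number mismatch across documents")
    if invoice.quantity > receipt.quantity:
        issues.append("Invoiced quantity exceeds goods received")
    if abs(invoice.unit_price - po.unit_price) > price_tolerance:
        issues.append("Invoice unit price deviates from PO price")
    return issues

po = Document("PO-1001", 100, 9.50)
receipt = Document("PO-1001", 100, 9.50)
invoice = Document("PO-1001", 100, 9.75)
print(three_way_match(po, receipt, invoice))  # flags the price deviation
```

In practice the matched results would then feed the downstream steps described above: approval routing, general-ledger updates, and inventory notifications.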

Again, I’d love to dive deeper into this because the possibilities are vast. Whether it’s contract management, share agreements, or investor pitch decks, we’re processing a wide range of document types. It ultimately comes down to defining the problem statement and desired output, and then configuring the necessary workflows and integrations.

Q&A Session: How close are we to a 5-6 person multi-billion dollar company?

Audience: I’ve been thinking a lot about the power of small, elite teams. Some of the best startups I’ve seen are built by just five or six people.

When I see founders asking for 50 new developers, it worries me—because in this day and age, you don’t really need that many people.

How far are we from seeing a multi-billion-dollar company run by just five or six core team members?

I mean, Craigslist has only 50 employees, and it’s worth nearly a billion dollars. The rise of AI agents is going to accelerate this shift.

Alex: Our business model is a bit different.

On one hand, we rely on high-tech AI knowledge, but on the other hand, we also serve multiple countries. For us, professional services are essential—we need local customer experience teams to ensure that we understand the commercial and cultural nuances of each market.

Fifteen people wouldn’t be enough for us, even just for our engineering team.

Aldo: Yeah, at Oracle Philippines, we have around 3,000 employees.

Clare: To build on what we discussed earlier, a company’s team size depends on its distribution model, solution complexity, and regulatory environment.

To answer your question—if I were starting a SaaS company tomorrow, I’d absolutely keep the core team lean, under five people.

We operate with an augmented workforce model. I wouldn’t trust 100% AI-generated code, but I’d invest in highly skilled, high-EQ, critical-thinking engineers to oversee large-scale, AI-assisted development.

That being said, we currently have around 70 people across distribution and innovation. Why? Because in our space, there’s still significant R&D to be done.

Building our own AI models and continuously fine-tuning them is an investment. We’re moving aggressively because we see a first-mover advantage—we want to be the ones securing those 10-year, multi-million-dollar enterprise contracts. If we don’t capture that opportunity now, someone else will.

That’s where team size matters. If we chose to stay lean and only focus on our core solution, we’d miss out on the bigger play.

At the same time, our approach to distribution is highly strategic. We operate in 18 countries, but we don’t have employees in all of them. Relationship-building is critical.

For example, if I want to co-sell with NetSuite, I can’t just have an automated chatbot calling their team and saying, “Hey, let’s partner!” That’s not how trust is built. AI can optimize processes, but it can’t replace high-value relationships—at least, not yet.

Audience: I’d love to see a billion-dollar company run by just five people.

Clare: It’s absolutely possible, depending on the industry and business model. I saw a LinkedIn post last night about a company doing just that. The question is: What space are they playing in?

Audience: I know a gaming startup out of Georgia—one guy coded the entire game in his bedroom, raised $50K from a friend, and now the game generates $3 billion a year. He has a team of just 10 people.

Clare: The real question is defensibility.

Audience: He’s been cloned 20 times already. Now, they’re focusing on relationships and brand to maintain their lead.

Clare: Exactly. If you can get to scale quickly, that’s great—but the long-term challenge is sustaining it.

At the enterprise level, defensibility is built through deep integrations, regulatory compliance, and long-term contracts. That’s why AI is augmenting teams rather than replacing them entirely.

Q&A Session: Who really controls the AI supply chain?

Audience: Let me throw this one at you—how are you thinking about the entire supply chain of the AI industry?

Data centers are growing at just 8-12% annually, despite the AI hype. My concern is that these data centers are monopolized by a few large players. If they change policies, your business could be at risk.

Semiconductors face the same issue—only a handful of companies can manufacture them. So who really controls AI?

Clare: If someone has the definitive answer, I’d love to hear it.

AI infrastructure is highly interdependent. While there’s a risk of major providers pulling the plug, the reality is that it’s a symbiotic relationship—we’re all part of the same ecosystem.

Aldo: Look at DeepSeek—they couldn’t access chips, but they still built an OpenAI-level model at a fraction of the cost. The same could happen with data centers—if access becomes restricted, new solutions will emerge.

Q&A Session: How do I think about AI Transformation for SMEs?

Audience: I’m speaking from a user perspective. My question is about affordability, particularly for small and medium-sized enterprises (SMEs).

Large enterprises have the resources to adopt AI solutions, but for startups like ours, we want to ensure that we don’t overextend ourselves operationally. At the same time, we need to scale efficiently.

I’m asking from a finance perspective. As Clare mentioned earlier, AI has different use cases, but not all of them will apply to every company since each organization has its own structure and operations.

For example, in our case, the biggest bottleneck is in processing and reconciliation. We operate in two industries and manage multiple platforms. Right now, our process is still manual, but we are optimistic about scaling. We want to be prepared for growth, ensuring that when our operations increase, we can handle the expansion without being overwhelmed.

Given that we work across multiple platforms and reconcile data from different systems, we’re interested in AI solutions. But our concern is affordability—how viable is it for small and medium enterprises to adopt AI solutions without breaking the bank?

Clare: That’s a great question, and I’m happy to start. We do a lot in the finance space, which is why we love working with ERP partners like NetSuite.

At the end of the day, AI is a powerful technology, but its primary function is automation. Why automate? Because it reduces risk, improves efficiency, enhances scalability, and drives cost savings. It also unlocks additional data insights for future decision-making.

We actually started as an SME-focused platform. So to your point—how do we think about affordability and accessibility? The answer is modularization.

From a File.ai perspective, we solve automation challenges across finance functions—accounts payable (AP), accounts receivable (AR), statements, and reconciliation, for example. The key difference between SMEs and large enterprises is how we deploy our solution.

For automation to make sense, there must be a clear return on investment (ROI). If automating a process costs more than doing it manually, then something is wrong. In reality, automation should always deliver measurable value.

Let’s break it down. Right now, if your team is handling reconciliation manually, you face several risks and inefficiencies:

  1. Dependency on key personnel – If a critical team member takes leave or resigns, the process is disrupted. Hiring and training new employees takes time.
  2. Human error – Manual processing leads to mistakes, which can result in costly corrections and rework.
  3. Scalability limitations – If transaction volumes double, you need to double your headcount, increasing operational costs.
  4. Processing delays – Lag time in reconciliation affects cash flow and financial decision-making.

Now, compare that to automation. If AI handles 80% of reconciliation tasks, your team only needs to oversee and review exceptions. That means:

  • Faster processing – AI can perform reconciliations in real time, reducing delays.
  • Higher accuracy – Errors and discrepancies are flagged instantly.
  • Scalability – Your existing team can handle a growing workload without needing to double headcount.
  • Lower costs – The automation cost is fixed, while the savings grow as transaction volumes increase.

So, for SMEs, AI solutions should be viewed through an ROI lens. If you take a quick back-of-the-envelope calculation:

  • How much are you paying your finance team?
  • How much time do they spend on manual reconciliation?
  • What’s the cost of errors, delays, or additional hires?
  • If an AI solution costs X per month but saves Y in operational costs, does the math make sense?

If the savings outweigh the cost, then automation is a no-brainer.
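The back-of-the-envelope questions above can be turned into a quick calculation. All figures below are made-up placeholders; the point is only the shape of the math, not the numbers:

```python
# Back-of-the-envelope ROI check for automating reconciliation, following
# the questions above. All inputs are placeholder assumptions.
def automation_roi(monthly_team_cost: float,
                   share_of_time_on_reconciliation: float,
                   monthly_error_cost: float,
                   automation_cost_per_month: float,
                   automation_coverage: float = 0.8) -> float:
    """Monthly net savings: labor and error costs avoided, minus tool cost."""
    labor_saved = (monthly_team_cost
                   * share_of_time_on_reconciliation
                   * automation_coverage)
    errors_saved = monthly_error_cost * automation_coverage
    return labor_saved + errors_saved - automation_cost_per_month

# Example: a $6,000/month finance team spending 40% of its time on
# reconciliation, with ~$500/month lost to errors, vs. a $1,200/month tool.
net = automation_roi(6000, 0.40, 500, 1200)
print(f"Net monthly savings: ${net:,.2f}")
```

A positive result means the tool pays for itself each month; because the tool cost is roughly fixed while labor and error savings scale with volume, the margin widens as transactions grow.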

For SMEs, File.ai offers out-of-the-box solutions. AP, AR, and reconciliation modules are pre-built and integrate seamlessly with smaller accounting platforms. Even with NetSuite, we have pre-configured modular integrations that don’t require complex customization.

For larger enterprises, customization becomes necessary because they often have proprietary, highly customized systems. But for SMEs, the goal is to provide a plug-and-play solution that delivers immediate value.

So, if you’re an SME considering AI, start by identifying your biggest pain points and cost centers. If reconciliation is the bottleneck, calculate the potential savings from automation. AI should not be a cost burden—it should be an investment that pays for itself over time.

Q&A Session: What are AI use cases for professional services? 

Audience: I work in a boutique investment banking firm specializing in M&A advisory services.

Our firm is exploring AI tools to enhance our operations, but we have several concerns. First, there’s data privacy—given the sensitivity of our work. Second, professional services, especially in client-facing industries, can be hard to standardize with AI. Third, technological adoption in the Philippines—and even within our firm—is still in its early stages.

For example, we currently rely on Google Drive as our primary file management system. Given these factors, have you worked with professional services firms before? What are some AI use cases outside of generative AI for consulting or advisory firms?

Clare: Yes, professional services is one of our key industry segments for exactly this reason.

You’re right—since it’s a service-based industry, expertise from consultants plays a critical role. But ask yourself—how much of your time is actually spent on high-value consulting versus manual administrative tasks?

For example, in M&A, you’re dealing with massive amounts of documentation and system integration. There’s data merging, compliance reviews, and reconciliations. That’s our bread-and-butter use case.

In service-based industries, the real value isn’t in processing data manually—it’s in extracting insights and generating actionable outcomes.

Many firms don’t realize how much time they lose to file handling, formatting, and data aggregation. AI removes those bottlenecks. The goal isn’t to replace expertise—it’s to free up capacity so your team can focus on delivering strategic value.

When we introduce AI into professional services, we start with a practical, procurement-based approach:

  1. Where are the biggest inefficiencies?
  2. Which tasks provide the least value to your team but take up the most time?
  3. How can automation enhance decision-making rather than replace human judgment?

From there, we scale AI adoption based on proven ROI.

Alex: We take a similar approach.

For professional services firms, we start by providing AI-driven operational efficiencies. Once they see the benefits, we enable their internal teams to manage AI-driven processes themselves.

Initially, firms rely on us for implementation, but over time, we open up our platform so their employees can operate it directly.

For larger enterprises, we also partner with system integrators and consulting firms to tailor AI deployments to industry-specific needs.

Q&A Session: How do I eliminate manual data handling?

Audience: My name is Udi, and I work in spend analytics.

Right now, my workflow looks like this:

  1. Extract data from Oracle.
  2. Manually clean and process it.
  3. Load it into a database.
  4. Connect it to Power BI for visualization.

This is time-consuming and inefficient.

Last week, we implemented Power BI Insights, which uses generative AI to analyze data. But my question is:

Can we eliminate manual data handling altogether?

For example, AWS offers a solution where you can simply connect your system, type a prompt, and instantly get insights—like days payable outstanding, sales performance, or accounting discrepancies.

Is something similar available in Oracle?

Aldo: Yes, Oracle NetSuite has a product called NSAW (NetSuite Analytics Warehouse).

With NSAW, you can set up a data pipeline that automatically pulls data from NetSuite. You don’t need to manually extract or clean data—it’s done for you.

NSAW also includes auto-insights. Simply click a button, and it analyzes your data, surfacing patterns, anomalies, and correlations that might not be immediately obvious.

For example, it can:

  • Identify correlations between sales and geographic locations.
  • Show factors affecting revenue growth.
  • Provide heat maps to visualize sales performance by region.

It automates the process so you can get real-time business intelligence without manual intervention.

Paulo: On that note, we’ll wrap up this panel.

Thank you all for joining! Similar to our Singapore edition, we’ll be releasing a recording of this panel on YouTube and LinkedIn.

If you want to share this discussion with colleagues or other organizations who might find it useful—especially those interested in the use cases we covered—please scan the QR code to follow our updates. The recording should be available within the next week or so.

Finally, I’d like to give a special thanks to our panelists for sharing their insights and to NetSuite for hosting us today—with the venue, coffee, and food.
