What does it really mean to develop AI use cases for your business?
For the launch of our latest playbook on AI Transformation (download your copy here!), we gathered a panel of digital transformation experts from established software solution providers and up-and-coming startups at the frontier of enterprise AI adoption.
We’ve cut the best parts of this hour-long conversation into a podcast episode so you can digest the best practices shared by these leaders.
Timestamps / Best Practices on AI Transformation
(00:00) Structure workflows for them to be mostly autonomous;
(04:07) Implementing AI isn’t about simply adopting a solution and calling yourself AI-centric. It’s about balancing technology with people and ensuring they’re ready to use it strategically;
(08:48) Don’t start by listing competitors; organize internal hackathons;
(13:18) Overcoming the obstacles for AI agents’ learning and self-adjustment remains a major hurdle; explore a multi-model approach for training AI agents;
(21:27) Generative AI is not the ultimate tool [for automation]; some parts rely on traditional machine learning for risk modeling, some on API calls, and others on scoring mechanisms;
(23:17) Break your process into pieces and identify where you need specialist providers, where you might need to update your system, and so on;
(25:10) Integrate AI products into your system thoroughly, rather than treating them as standalone tools. You should use them as part of your business, integrating with your CRM, ERP, and other systems;
(27:16) A unified tool that connects different processes—CRM, ERP, sales—is crucial. Otherwise, you end up with structured but unconnected data. Connected data really matters because it fuels AI training. Consolidating and consuming this data is what allows AI models to learn and improve;
(30:04) Consolidation is the first step, and that’s where interconnected systems come in. ERP is about interconnecting things—procurement, supply chain, manufacturing, accounts payables—so you have a subset of interconnected processes;
(30:55) What we’ve seen work is when you start using products that aren’t just labeled as “AI” but are genuinely AI-powered. These tools won’t lock away your data—best practice now is to make data accessible;
(33:22) With technology like agents, you can fetch data from needed systems, so consolidation happens at the time of user interaction with the AI. There are ways to start the journey without first reconstructing everything;
(34:58) Banks actually represent extreme cases for AI utilization. They need to manage data privacy, compliance, and often have on-premise systems, so using open source isn’t always feasible;
About our guests
Alex Song is the current Chief Revenue Officer of WIZ.AI, overseeing the company’s revenue-generating strategies. Previously, he served as WIZ.AI’s Chief Operating Officer. Under his leadership, WIZ.AI has grown to over 300 business clients across 17 countries, with Fortune 500 companies and unicorn start-ups accounting for 60% of the company’s client profile. Alex is a senior business executive with over 11 years of leadership experience in both the technology and logistics sectors. Before joining WIZ.AI, he served as VP of Samsung SDS.
Christian Schneider is the Co-founder and CEO of FileAI. He started his career in investment banking and consulting in Europe before moving into venture building with Rocket Internet in Singapore, where he managed business intelligence for foodpanda. Thereafter, Christian co-founded Singapore-based foodtech startup DishDash.co, where he began working with Bluesheets co-founder Clare as the company expanded to Australia.
Franco Manuel is an Oracle NetSuite Master Principal Solution Consultant. He has spent more than a decade supporting business transformation through Oracle NetSuite, with a brief stint at Salesforce. He specializes in ERP/CRM pre-sales and consulting across multiple platforms, including Microsoft and IFS.
Pandurang Nayak is Head of Solution Architects, ASEAN, at AWS. Before AWS, he had a career as CTO/COO/CPO for several digital media companies in India, and he was previously a Technology Evangelist and Specialist at Microsoft. He has been a software engineer and developer ever since he taught himself web programming.
Transcript
Structure workflows for them to be mostly autonomous.
Paulo: I wanted to start with data automation, which I think is really the foundation for any AI use case. I wanted to direct that first question to Christian. Since your company helps transform different data inputs into workflows, could you share how data automation can set a company or organization up for success when exploring AI use cases? Why is that important? Maybe you could also share some examples you’ve seen.
Christian: I think there’s such a universe of opportunity within a company, but first and foremost, we always tell our customers that if you don’t have your data readily available, you’re never going to use AI. Data is the foundation for AI.
When we look at current workflows, we start by asking the customer, “Okay, what’s your current workflow?” Then, we provide them with an improved workflow, and that usually opens their eyes. Right now, multiple entities come together in a centralized function, and that team has to sort through everything, double-check multiple systems, create lists, and update things.
That’s where we come in, structuring workflows for them to be mostly autonomous and take over that part. We bring workflow data to life, addressing inefficiencies currently handled manually. Often, it’s simple things, like data being locked into PDFs and Excel files. Step one is digitizing that and storing it in the cloud, readily accessible.
Once that’s done, the next question becomes, “How can we help you use it, or even make it an automatic flow?” That’s where we step in first, and even at this stage, it reduces the need for manual handling. From there, we look at other areas to enhance the overall workflow with this data.
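To make that first digitization step concrete, here is a minimal sketch (not FileAI's actual pipeline; the file path, bucket name, and regex-based field extraction are placeholder assumptions): it pulls text out of a PDF with pypdf, keeps a couple of fields as structured JSON, and lands the record in cloud storage where downstream workflows can pick it up.

```python
# Minimal sketch: digitize a PDF and store a structured record in the cloud.
# Assumes `pip install pypdf boto3`; the file path, bucket name, and naive
# regex extraction are hypothetical placeholders, not FileAI's product.
import json
import re

import boto3
from pypdf import PdfReader


def extract_invoice_fields(pdf_path: str) -> dict:
    """Pull raw text from the PDF and grab a couple of fields with regexes."""
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    invoice_no = re.search(r"Invoice\s*(?:No\.?|#)\s*(\S+)", text)
    total = re.search(r"Total[:\s]+([\d,.]+)", text)
    return {
        "invoice_no": invoice_no.group(1) if invoice_no else None,
        "total": total.group(1) if total else None,
        "raw_text": text,
    }


def store_record(record: dict, bucket: str, key: str) -> None:
    """Write the structured record to S3 so downstream workflows can read it."""
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key, Body=json.dumps(record).encode("utf-8")
    )


if __name__ == "__main__":
    record = extract_invoice_fields("invoices/vendor_invoice_001.pdf")
    store_record(record, bucket="example-finance-data", key="invoices/vendor_invoice_001.json")
```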
Early wins for our customers usually happen around ERP systems. For example, issuing a purchase order and handling checks and mechanisms until it can be filed in NetSuite and marked complete. There are a lot of inefficiencies in this process, and we provide support to streamline it.
Paulo: Are there any interesting implementation stories you could share with us?
Christian: Our product has two main effects: time savings, which is often an immediate value add, and then cost savings. Time can be money, but what we always tell customers is that if you’re handling processes a certain way, you also want to make the other side happier. Whether it’s your vendor or supplier network in finance, or a customer on the other end, they want to hear from you sooner.
For some companies, we’ve achieved incredible results. We’ve reduced process time by up to 97%, allowing them to issue remittance advice earlier and make their vendors really happy. Of course, everyone wants to get paid on time, if not sooner, and we helped them nail that down perfectly.
That’s something all companies will want to look into, not just for compliance, but also to have reliable data backing up their operations.
Paulo: You mentioned freeing up workloads and shaping productivity through solutions like yours. This leads me to the next topic: what is the real impact on the people side of things once you start these processes and this transformation?
Implementing AI isn’t about simply adopting a solution and calling yourself AI-centric. It’s about balancing technology with people and ensuring they’re ready to use it strategically.
Paulo: I’d like to direct my question to Franco from NetSuite. How do you think about the people side—people transformation—when implementing AI solutions?
Franco: AI helps organizations by empowering them to work more efficiently. But when we discuss AI with organizations, we take a holistic approach, like Christian mentioned. AI won’t work well without structured data, so we look at everything together. When you implement AI alongside ERP, that’s where cohesion and complementary effects come in.
Structured data is the training field. You need organized data so that AI can learn and train on the necessary aspects. Now, how does this affect people? AI and ERP tools enhance workflow efficiency, allowing employees to work more effectively. But it’s not about replacing people—it’s about giving them tools to do more with less, boosting productivity without cutting jobs.
For example, instead of spending time on repetitive data entry tasks, which many employees with college or master’s degrees often find themselves doing, AI allows them to focus on more strategic, value-adding tasks. By embedding an AI engine within ERP, we give them that productivity boost, empowering them to take on more impactful responsibilities.
Paulo: It seems intuitive for teams to want to use these types of solutions, but do you see any barriers to adoption on the people side?
Franco: The primary barrier is, ironically, people themselves. Nowadays, everyone wants to implement AI, but we start by asking what they aim to achieve with AI. It’s crucial to identify specific use cases and tailor solutions to their strategy.
Part of this process involves providing the necessary tools and training, equipping teams to use AI effectively. Implementing AI isn’t about simply adopting a solution and calling yourself AI-centric. It’s about balancing technology with people and ensuring they’re ready to use it strategically.
Paulo: Do you have an example from a client or partner who implemented AI with clear objectives?
Franco: Yes, an embedded ERP solution can, for instance, analyze data and make procurement recommendations. It can suggest how much inventory to procure to support sales. While AI gives this recommendation, the human aspect is still critical.
Take the procurement example: if AI recommends purchasing 1,000 units based on data, a human might adjust that, factoring in the upcoming Christmas season and planned promotions. This blend of AI recommendations with human intuition, analytics, and strategic insight creates a more effective, adaptive decision-making process.
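To illustrate that blend in the simplest possible terms (the reorder heuristic and uplift figures below are invented for the example, not NetSuite logic), a data-driven recommendation plus a planner override might look like this:

```python
# Toy sketch of an AI recommendation plus a human adjustment (illustrative only;
# the reorder heuristic and uplift figures are hypothetical, not NetSuite logic).

def recommended_order_qty(weekly_sales: list[int], lead_time_weeks: int = 2,
                          safety_stock: int = 100) -> int:
    """Naive data-driven recommendation: cover expected demand over the lead time."""
    avg_weekly = sum(weekly_sales) / len(weekly_sales)
    return round(avg_weekly * lead_time_weeks + safety_stock)


def planner_adjustment(base_qty: int, seasonal_uplift: float = 0.0,
                       promo_uplift: float = 0.0) -> int:
    """Human-in-the-loop step: the planner layers on seasonality and promotions."""
    return round(base_qty * (1 + seasonal_uplift + promo_uplift))


if __name__ == "__main__":
    base = recommended_order_qty(weekly_sales=[400, 440, 460, 500])   # -> 1,000 units
    final = planner_adjustment(base, seasonal_uplift=0.30, promo_uplift=0.15)
    print(f"AI recommendation: {base} units; planner-adjusted: {final} units")
```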
Don’t start by listing competitors; organize internal hackathons.
Paulo: The main takeaway so far is the importance of having clarity on what use case you want for any AI solution. I think AWS, in particular, helps a lot of startups and businesses find that clarity. Pandu, can you talk about how you approach experimentation and finding that clarity? Could you share some examples with organizations you’ve worked with on this?
Pandu: We’ve seen this curve where everyone was experimenting and trying different things. Like you mentioned, in 2022 and early 2023, everyone wanted AI in everything. Companies were eager to just try any AI use case they could think of.
Over time, I’ve seen a few core areas maturing. While it’s still early, these are the areas where we’ve seen the most ROI. One is customer support—getting customer support teams more engaged with the right information at their fingertips to respond faster, reducing call times, and even automating common queries with AI assistants.
Another area is outbound marketing. Companies are using AI for dynamic campaign design and execution, whether it’s voice-based calling or other marketing strategies. Finally, productivity has been a major focus—both in terms of business productivity with data analysis and developer productivity for faster coding. These are the three main areas where we’re seeing more mature use cases: customer support, productivity, and outbound marketing.
Paulo: So if a startup comes to you and says, “My competitors are using AI, and I want to as well,” but they’re unsure where to start, what process or training do you map out for them to understand how they can properly integrate AI?
Pandu: We definitely don’t start by listing competitors. Instead, we look at what specific use cases could be valuable for them. One approach my team has used successfully is to organize internal hackathons within the client organization.
Many people in an organization already use tools like ChatGPT, so we create a one-day event where we bring together development teams and business teams, like finance, to team up. They identify use cases that matter to them, and we provide the tools they need to build quick applications, often without any coding.
Since they know their organization’s pain points, they can build small tools for those needs. It’s a competition, so at the end, we have a bunch of ideas and tools that management can review to decide which to further develop. This approach has worked well for discovering use cases and creating a pathway for moving to production.
Paulo: We actually discussed a similar use case process in our playbook, so you can check that out later.
Overcoming the obstacles for AI agents’ learning and self-adjustment remains a major hurdle; explore a multi-model approach for training AI agents.
Paulo: We’ve covered data automation, people transformation, and the importance of clarity in experimentation. Now, I’d like to jump further into the deep end, looking at generative AI and AI agents, which I know WIZ.AI is doing a lot of work in. Alex, can you explain what AI agents mean in a business context and share your perspective on their development so far?
Alex: This is a big question, right? Before diving into AI agents, I want to share where we are in AI transformation and what the future holds. AI transformation isn’t just a technology shift—it’s part of business innovation. Today, enterprises are using AI in three main ways: to improve efficiency, to innovate processes, and to enhance customer service.
At WIZ.AI, we’re focused on customer engagement and helping clients grow through AI. But in the near future, AI transformation will focus more on personalization, building unique processes for businesses, and developing tailored knowledge bases to better support people and customers.
As for AI agents, this concept takes traditional AI tools a step further by adding autonomy and adaptability. AI agents can carry out tasks and adjust their plans based on new information, with little human intervention. But the challenge is managing large models effectively for this kind of iteration and self-improvement. It’s a complex area, and overcoming the obstacles for AI agents’ learning and self-adjustment remains a major hurdle.
Another key feature is live, interactive engagement. Let me give an example: in traditional customer service, calling a hotline often means listening to a list of options. With AI agents, you can just say what you need, and the bot will respond immediately, escalating to a human agent only if necessary. This changes the customer experience dramatically, increasing satisfaction.
AI agents also excel in creating a domain-specific knowledge base. While large language models already cover broad, publicly available knowledge, businesses have unique, private knowledge that they might not want to share publicly. An AI agent can learn from this specific knowledge to better support internal guidelines or customer service requirements.
Paulo: Earlier, you mentioned the challenge of AI agents’ learning capabilities and the potential of a multi-model approach. Can you explain how this works?
Alex: In our technology department, we’re exploring a multi-model approach for training AI agents. For example, you might use one large language model to handle understanding, another to build a domain knowledge base with vector databases, and a third to manage output quality to reduce hallucinations.
This approach is particularly valuable when working with clients like government agencies, where a low error rate is essential. A single mistake can damage the agency’s reputation, so we use multiple models to ensure high-quality outputs and minimize errors.
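As a minimal sketch of that multi-model idea (all model calls below are stubs and the grounding check is invented; this is not WIZ.AI's implementation), one model interprets the request, a simple retrieval step stands in for the vector database, and a final check only releases answers grounded in the retrieved context:

```python
# Minimal sketch of a multi-model pipeline (all "model" calls are stubs; this
# illustrates the idea discussed above, not WIZ.AI's implementation).

DOMAIN_KB = [
    "Refunds for annual plans are prorated to the unused months.",
    "Enterprise support tickets are answered within 4 business hours.",
]


def understanding_model(user_message: str) -> str:
    """Stub for model #1: interpret the request (a real system would call an LLM)."""
    return "refund_policy" if "refund" in user_message.lower() else "general_support"


def retrieve_context(query: str, kb: list[str]) -> list[str]:
    """Stub for the vector-database step: naive keyword overlap instead of embeddings."""
    terms = set(query.lower().split())
    return [doc for doc in kb if terms & set(doc.lower().split())]


def answer_model(query: str, context: list[str]) -> str:
    """Stub for model #2: draft an answer from the retrieved domain knowledge."""
    return context[0] if context else "I'm not sure about that."


def quality_check(answer: str, context: list[str]) -> bool:
    """Stub for model #3 / guardrail: only release answers grounded in the context."""
    return any(answer in doc or doc in answer for doc in context)


def handle(user_message: str) -> str:
    intent = understanding_model(user_message)
    if intent == "general_support":
        return "Escalating to a human agent."
    context = retrieve_context(user_message, DOMAIN_KB)
    draft = answer_model(user_message, context)
    return draft if quality_check(draft, context) else "Escalating to a human agent."


if __name__ == "__main__":
    print(handle("How does the refund work for annual plans?"))
```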
Paulo: Thanks, Alex, for breaking down such a vast topic into clear points for our audience.
Question 1: Some of the keywords I picked up—hopefully I picked up the right ones—AI, startups, engagement, outbound marketing, efficiency, and so on. I speak from the point of view of a boutique private credit lender without any particular focus on tech infrastructure.
I have seen a lot in my 25 years. What I would like to ask the panel is: is it possible to build an end-to-end AI-based system for a small boutique lender, which manages everything from origination all the way to collection? We have made a small beginning in that direction, but we are bankers, not technologists.
For example, just to be specific, there are 28,000 SMEs in Singapore. We focus only on the SME sector, as that’s where banks don’t compete, and we have the best returns. But to pick out the “princes” from those 28,000 “frogs,” you can imagine how many frogs we have to kiss. So, is there any possibility of doing that?
Generative AI is not the ultimate tool here; some parts rely on traditional machine learning for risk modeling, some on API calls, and others on scoring mechanisms.
Pandu: I used to head the solution architecture team for India before I moved to ASEAN. Back in India, there are many startups involved in the same process of providing loans to SMBs. Many of them are now used by the larger banks because the banks have a process that takes a very long time.
These startups, however, make a decision on whether a loan should be given and the amount, often in a matter of seconds or minutes. They collect some basic data, and then query multiple sources in real time to see if the company has any court cases or credit bureau issues.
Then there’s another set of actions based on documentation, where they use AI models to make credit risk decisions. They also use generative AI to read through documents, summarize, and verify data points. Generative AI is not the ultimate tool here; some parts rely on traditional machine learning for risk modeling, some on API calls, and others on scoring mechanisms.
So, in answer to your question about whether you can build something like this, the answer is yes. But it’s not a single tool that will do everything; it’s a mixture of various elements.
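A stripped-down sketch of that mixture of elements might look like the following (every rule, threshold, and weight is hypothetical; a real lender's risk model would be trained on data and far more involved): knock-out checks standing in for registry and bureau API calls, a toy scoring mechanism standing in for a traditional ML risk model, and a placeholder for the generative-AI document review.

```python
# Illustrative-only sketch of a mixed loan-decision pipeline: rule checks,
# a hand-rolled risk score, and a placeholder for GenAI document review.
# All thresholds, weights, and data sources are hypothetical.
from dataclasses import dataclass


@dataclass
class Applicant:
    years_trading: int
    annual_revenue: float
    open_court_cases: int        # would come from a registry API in practice
    bureau_delinquencies: int    # would come from a credit bureau API


def hard_rules(app: Applicant) -> bool:
    """Knock-out rules, the kind usually answered by external API calls."""
    return app.open_court_cases == 0 and app.bureau_delinquencies <= 1


def risk_score(app: Applicant) -> float:
    """Toy scoring mechanism standing in for a trained ML risk model (0..1)."""
    score = 0.4
    score += min(app.years_trading, 10) * 0.03
    score += 0.2 if app.annual_revenue > 500_000 else 0.0
    score -= app.bureau_delinquencies * 0.15
    return max(0.0, min(1.0, score))


def document_summary(_docs: list[str]) -> str:
    """Placeholder for the GenAI step that reads and summarizes documents."""
    return "Financial statements consistent with declared revenue."


def decide(app: Applicant, docs: list[str]) -> dict:
    if not hard_rules(app):
        return {"approved": False, "reason": "failed knock-out checks"}
    score = risk_score(app)
    return {
        "approved": score >= 0.6,
        "risk_score": round(score, 2),
        "doc_review": document_summary(docs),
        "limit": round(app.annual_revenue * 0.1) if score >= 0.6 else 0,
    }


if __name__ == "__main__":
    print(decide(Applicant(5, 800_000, 0, 0), docs=["bank_statement.pdf"]))
```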
Break your process into pieces and identify where you need specialist providers, where you might need to update your system, and so on.
Christian: Maybe I can just add to that. We’re actually helping some of our clients in this process, and I would not recommend building the entire process from the top down.
Instead, break it into pieces and identify where you need specialist providers, where you might need to update your system, and so on. If you’re looking to collect a lot of data, a company like ours or others on this panel can help with that.
You’ll need a data lake for specific purposes, likely running on AWS, and a way to integrate and stitch everything together to make quick decisions. If you want to build in-house, that could be an option too. Generally, for basic needs that are considered commodities, it’s wise to use an outsourced vendor.
When it comes to internal processes, you may want someone in-house. But one last point on using generative AI: off-the-shelf solutions can be risky because you may end up with a costly solution without customization.
If you’re looking to start quickly, use a provider who specializes in this, with better cost infrastructure from the outset.
Integrate AI products into your system thoroughly, rather than treating them as standalone tools. You should use them as part of your business, integrating with your CRM, ERP, and other systems.
Alex: Let me put it this way—our company mainly deals with conversations at the top level, and we’re not experts in every aspect of this. Many banks we work with have their own credit models and connect them to various databases to get credit scores.
If you’re trying to build a generative AI system yourself, it’s a tough undertaking, especially with large language models evolving so quickly. For example, yesterday Anthropic released a new model, and it’s interesting because it can use applications on your computer directly.
My suggestion would be to integrate AI products into your system thoroughly, rather than treating them as standalone tools. You should use them as part of your business, integrating with your CRM, ERP, and other systems.
Another key point is the importance of structured data and an enhanced database for business decisions. API integration with reliable databases will help you improve your risk model for setting credit limits.
A unified tool that connects different processes—CRM, ERP, sales—is crucial. Otherwise, you end up with structured but unconnected data. Connected data really matters because it fuels AI training. Consolidating and consuming this data is what allows AI models to learn and improve.
Franco: I think most of the key points have been mentioned. Building from scratch is possible but challenging. For AI to work effectively, it’s essential to have structured data. As a business, you’ll have financials, accounts payable, and receivables, but if this data is in various papers or systems, it’s difficult for any AI to work effectively.
Even without AI, you’ll struggle if data isn’t stored in a structured way. One of you mentioned sources and integration, which ties back to having a structured data layer. A unified tool that connects different processes—CRM, ERP, sales—is crucial.
Otherwise, you end up with structured but unconnected data. Connected data really matters because it fuels AI training. Consolidating and consuming this data is what allows AI models to learn and improve.
Manish: So basically, have a roadmap and approach it modularly. Don’t try to “boil the ocean” all at once—that’s our approach.
Question 2: I just want to get your view, Franco, on the data management part. I’m coming from a bank, and one of my challenges has been that we use many different systems for marketing, for sales, and so on. We don’t really have a centralized platform. As we embark on Gen AI transformation, we’re not sure where to start.
For the startups here, AWS, and Oracle, what are some ways you’ve seen companies work with enterprises to kickstart Gen AI while mitigating the risk of lacking a centralized data platform? This would be helpful for some of us here.
Franco: I think what other organizations do is consolidate disparate systems into a data lake, for example.
Question 2: But that could take two to three years, right? I would never get started.
Consolidation is the first step, and that’s where interconnected systems come in. ERP is about interconnecting things—procurement, supply chain, manufacturing, accounts payables—so you have a subset of interconnected processes.
Franco: But that’s the problem, right? We also pitch it to organizations: don’t buy separate names, because some solutions are interconnected, which lessens the headache later on. If that’s the problem right now, then consolidation is the first step, and that’s where interconnected systems come in. ERP is about interconnecting things—procurement, supply chain, manufacturing, accounts payables—so you have a subset of interconnected processes.
Of course, different companies and enterprises have other solutions, and that’s where the data lake comes in.
What we’ve seen work is when you start using products that aren’t just labeled as “AI” but are genuinely AI-powered. These tools won’t lock away your data—best practice now is to make data accessible.
Christian: Yeah, I was going to address that too. First, I can tell you what not to do: don’t listen to a single cloud provider telling you they’ll handle everything. They’ll launch co-pilots, promise it’ll all come together one day, and you’ll end up with a doubled or tripled per-seat subscription.
What we’ve seen work is when you start using products that aren’t just labeled as “AI” but are genuinely AI-powered. These tools won’t lock away your data—best practice now is to make data accessible. For instance, when we integrate applications for your underwriting team—whether it’s cards, loans, etc.—we structure unstructured data into a usable format and can push it into multiple systems at once.
This data is now available to you, structured in tables, on an S3 bucket, or whichever cloud you use. You’re not changing your existing structure; you’re overlaying AI systems on top of what’s there, making that data available as you build on it.
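As a rough sketch of that overlay pattern (the endpoints, payload shape, and file name are hypothetical, not FileAI's API), one structured record can be pushed into several existing systems at once while a copy stays in storage you control:

```python
# Minimal sketch: push one structured record into several existing systems while
# keeping a copy in storage you control. Endpoints, payload shape, and file name
# are hypothetical placeholders, not FileAI's API.
import json

import requests

record = {
    "vendor": "Acme Pte Ltd",
    "invoice_no": "INV-1042",
    "amount": 12500.00,
    "currency": "SGD",
}

# Hypothetical downstream systems; in practice these would be your ERP/CRM APIs.
targets = {
    "erp": "https://erp.example.com/api/bills",
    "crm": "https://crm.example.com/api/activities",
}

for name, url in targets.items():
    try:
        resp = requests.post(url, json=record, timeout=10)
        print(f"{name}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"{name}: failed ({exc})")

# Append the same record to a local or bucket-backed table so the structured
# copy stays accessible to you rather than locked inside any one tool.
with open("invoices_extract.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```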
With technology like agents, you can fetch data from needed systems, so consolidation happens at the time of user interaction with the AI. There are ways to start the journey without first reconstructing everything.
Pandu: All these are valid approaches. You can look at various solutions like Databricks or Snowflake. Many companies try to pull different data sources into one layer you can query. There’s also the approach of moving everything to a structured system over time or using connectors to bring data together.
It depends on the use cases. We’ve seen banks consolidate customer support, for example. All they need is their existing support conversations or recordings: they transcribe them and train a model on that. Many pre-trained models allow you to do this in a silo without integrating all sources first.
You just pull the required data, run training, and set up APIs. With technology like agents, you can fetch data from needed systems, so consolidation happens at the time of user interaction with the AI. There are ways to start the journey without first reconstructing everything.
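A minimal sketch of that fetch-at-interaction-time pattern could look like this (the connectors and keyword routing are invented for illustration; a production agent would use an LLM with tool calling against real system APIs):

```python
# Illustrative sketch: an "agent" that fetches from source systems on demand, so
# consolidation happens at interaction time rather than as an up-front migration.
# The connector functions and keyword routing are hypothetical placeholders.

def fetch_from_crm(customer_id: str) -> dict:
    """Stand-in for a CRM connector (a real agent would call the CRM's API)."""
    return {"customer_id": customer_id, "segment": "SME", "owner": "J. Tan"}


def fetch_from_core_banking(customer_id: str) -> dict:
    """Stand-in for a core-banking connector."""
    return {"customer_id": customer_id, "balance": 48210.55, "currency": "SGD"}


TOOLS = {
    "account balance": fetch_from_core_banking,
    "relationship owner": fetch_from_crm,
}


def answer(question: str, customer_id: str) -> dict:
    """Keyword routing stands in for LLM tool selection in a real agent."""
    for keyword, tool in TOOLS.items():
        if keyword in question.lower():
            return tool(customer_id)
    return {"error": "no tool matched; escalate to a human"}


if __name__ == "__main__":
    print(answer("What is the account balance for this client?", "C-1023"))
```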
Banks actually represent extreme cases for AI utilization. They need to manage data privacy, compliance, and often have on-premise systems, so using open source isn’t always feasible.
Alex: Banks actually represent extreme cases for AI utilization. They need to manage data privacy, compliance, and often have on-premise systems, so using open source isn’t always feasible. Additionally, large language models are still evolving, and banks have varying preferences for these models.
From our work with banks like Singapore’s DBS and Indonesia’s BRI, we understand different banks have different preferences. Eventually, there might be a universal model capable of integrating with other systems. But for now, there’s no quick way to achieve this without significant budget and technology support.