This article was contributed by Ereen Toh, Insignia Ventures Academy Cohort 10 Venture Fellow and Senior Manager at the Institute of Innovation & Entrepreneurship, Singapore Management University. If you’re building next-generation companies in healthtech, you can reach out to her at ereen.toh@insigniaacademy.vc
Most startups do not get rejected in a partner meeting. They get filtered out long before that, in the first pass, when attention is thin, pattern recognition hardens into habit, and anything unfamiliar is easy to dismiss. In venture capital, the danger is not only backing the wrong company. It is systematically missing the right one. That is what makes the potential of AI in venture screening so consequential: it offers a glimpse of how the technology might not replace investor judgement, but materially improve what gets seen in the first place.
Every two years, our team at SMU’s Institute of Innovation & Entrepreneurship runs the Lee Kuan Yew Global Business Plan Competition (LKYGBPC), Asia’s premier university startup competition. In the 12th edition held in 2025, we received 1,500+ submissions from over 1,200 universities across 91 countries. That volume creates a real problem. How do you evaluate fairly at scale, without burning out your judges or inadvertently favouring the founders who already know the right people?
The honest answer is that you probably can’t, not without help.
Research on investor behaviour puts the average time spent reviewing a pitch deck under two and a half minutes, and that figure has fallen by nearly a quarter since 2021. Multiply that constraint across hundreds of submissions and the arithmetic of attention becomes brutal: even well-intentioned panels will develop shortcuts, gravitating towards familiar pitch structures, recognisable university brands or ideas that map neatly onto categories they already understand.
At the 12th LKYGBPC, we ran a parallel experiment to test whether AI could improve the evaluation process: the DueAI® Challenge, a track focused on AI-driven due diligence. The initiative was conceived and led by Dr. Sze Tiam Lin, SMU’s Senior Licensing Adviser, who also pioneered a small-scale pilot with Valuer.AI for the 11th edition held in 2023.
What we found has direct implications for anyone in the business of assessing startup potential: investors, accelerators, and competition organisers alike.
In the pilot run, the AI surfaced a German biopharma startup, MEDEA Biopharma from the Technical University of Munich, that all 200 human judges had missed. That startup went on to win its category and has since built out a dedicated research laboratory. It was a striking early signal: AI isn’t here to replace human judgement, but it can catch what human evaluators miss.
Designing a live experiment
For the 12th LKYGBPC, the DueAI® Challenge pitted fourteen global AI agents built by startups against each other, each applying its own proprietary rubric to evaluate the same pool of pitch decks. The agents were assessed on three dimensions: how closely their picks aligned with the expert panel (expert alignment), how clearly they explained their reasoning (explainability and transparency), and their ability to surface promising startups that human evaluators had overlooked (identifying outliers).
That third criterion turned out to be the most consequential. When multiple AI agents independently nominated the same startup that the expert panel had passed on, those startups were flagged as high-potential outliers and invited for another look.
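The convergence rule behind that outlier flag can be sketched in a few lines. The agent names, startup identifiers, and vote threshold below are purely illustrative assumptions for the sake of the sketch, not the actual DueAI® implementation:

```python
from collections import Counter

def flag_outliers(agent_picks, panel_shortlist, min_agents=3):
    """Flag startups nominated by at least `min_agents` independent
    AI agents but passed over by the human expert panel."""
    # Count one vote per agent per startup (set() guards against duplicates).
    votes = Counter(pick for picks in agent_picks.values() for pick in set(picks))
    return sorted(
        startup for startup, n in votes.items()
        if n >= min_agents and startup not in panel_shortlist
    )

# Hypothetical nominations from three independent agents.
agent_picks = {
    "agent_a": ["medtech-01", "fintech-07", "agritech-03"],
    "agent_b": ["medtech-01", "agritech-03"],
    "agent_c": ["medtech-01", "edtech-12", "agritech-03"],
}
panel_shortlist = {"fintech-07", "edtech-12"}

print(flag_outliers(agent_picks, panel_shortlist))
# → ['agritech-03', 'medtech-01']
```

The point of the sketch is the asymmetry it encodes: agreement among independent evaluators is only treated as a signal when it contradicts the human panel, which is exactly the population of candidates most at risk of a false negative.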
Of the 60 startups invited to the Grand Finals Week held in Singapore, eight advanced to the final stage, three of which originated from this outlier pool. That is a meaningful hit rate for ventures that had, by conventional process, already been screened out.
What this means for investors and evaluators
From a practitioner standpoint, a few things stand out.
First, AI is not a replacement for domain expertise; it is a screen for volume. The agents in DueAI® did not pick winners. What they did was reduce the likelihood of false negatives: promising startups that get dropped early simply because the panel ran out of time, attention, or appetite for unfamiliar ideas. For any process evaluating hundreds or thousands of applications, that is a meaningful contribution.
Second, explainability matters as much as accuracy. An AI tool that flags a startup as high-potential but cannot explain why is difficult to act on and even harder to defend to investment committees or competition juries. The agents that performed best in our evaluation did not just rank startups; they produced structured write-ups grounding their assessments in specific criteria. This is the difference between a tool that augments human judgement and one that simply adds noise.
Third, the outlier mechanism deserves more attention from the investment community. When multiple independent AI agents converge on the same non-consensus pick, that convergence itself is a signal. It is not unlike the logic behind ensemble models in quantitative investing; disagreement is informative, but so is unexpected agreement. This suggests a broader question: how often do deal flow processes actually allow for multiple independent views on the same opportunity?
Augmented intelligence, not automated decisions
Professor Lim Sun Sun, SMU’s Vice President for Partnerships and Engagement and Professor of Communication and Technology, has described this approach as “augmented intelligence”, using AI to expose human blind spots rather than to substitute for human judgement. That framing feels right to us, and not only as a rhetorical position. The DueAI® experiment worked precisely because we did not hand the shortlisting process over to the agents. We used their output as an additional filter, then returned the flagged startups to human review.
The distinction matters especially at an early stage, where the most important signals are often qualitative: founder conviction, team dynamics and the clarity with which someone understands their own customer. These are things AI can gesture towards but cannot yet reliably assess. What it can do is ensure that a startup with a non-standard pitch deck from a non-target university in a non-English-speaking country does not get lost in the pile before a human ever looks at it.
Where this goes next
SMU has filed for trademark protection on DueAI® and is working towards licensing the DueAI® Challenge for large-scale startup evaluation. For venture capital, the larger implication is not just faster screening, but better error management in a power-law business.
Most firms are highly attuned to Type I errors: backing the wrong company. But in venture, Type II errors can be even more costly: screening out the outlier that could have generated outsized returns. That is where AI-assisted screening may matter most.
Used well, AI can help funds widen their field of vision without surrendering judgement. Used poorly, it can simply create more noise and more false positives. Its value is not in replacing decision-making, but in surfacing non-obvious candidates, while leaving humans to decide which ones warrant conviction.
In a power-law asset class, the edge isn’t picking better. It’s losing fewer outliers before you ever get the chance.
Ereen Toh is an Insignia Ventures Academy Cohort 10 Venture Fellow and Senior Manager at the Institute of Innovation & Entrepreneurship, Singapore Management University. If you’re building next-generation companies in healthtech, you can reach out to her at ereen.toh@insigniaacademy.vc