Feb 17, 2026
The Intelligence Layer of Financial Infrastructure
Ryunsu Sung

Intelligence as a Service
It is indeed an exciting time to be alive.
As the unit cost of intelligence continues to plunge amid advances in reinforcement learning research and astronomical investments in parallel computing capacity, expectations are increasingly splitting into two dichotomous outcomes. Some predict a society of abundant intelligence supercharging select high performers’ productivity and thereby displacing 99% of the workforce, while others believe AGI (whatever that means) fails to materialize and the trillions of dollars invested in GPUs and data center infrastructure become worthless. This latter camp sees yet another list of Silicon Valley pipe dreams destined to fail, misleading the public with the ill-fated promise of “intelligence” when in reality LLMs are “just next-token prediction machines.”
Acknowledging the risk of being a cliché, I expect the outcome to fall somewhere in between. But I argue this is the best risk-adjusted position to hold, because it depends on only one near-certain outcome: token prices continuing to decline.
Frontier (SOTA) models can already match a competent human’s output on specific tasks, general reasoning among them. They typically consume far more tokens per response at launch, but engineers quickly find ways to distill and optimize the next model to deliver similar performance at a fraction of the cost. Even if the next frontier model fails to achieve meaningfully better performance, the very recipe for an AI bubble collapse with troubling financial implications, the trajectory of rising “intelligence per dollar” remains guaranteed, if only owing to the potential write-offs of GPU assets on cloud hyperscalers’ balance sheets.
In effect, many enterprises have taken notice of this trend and incorporated on-demand artificial intelligence into their products, albeit with varying degrees of success; this is the dawn of the “Intelligence as a Service” economy.
Why (Most) AI-powered Financial Products Suck
The emergence of artificial intelligence technologies, specifically LLMs that can reason over longer horizons with tool-calling capabilities, has substantially accelerated the agentic automation of traditional knowledge work, the kind typically defined as “white-collar jobs.” Startups, including foundation model developers like OpenAI and Anthropic, have consequently targeted the highly lucrative financial services industry, from accounting to financial research, using complementary technologies like RAG (Retrieval-Augmented Generation) to circumvent the inherent limitations of language models.
Even so, as you might have noticed, Robinhood, Yahoo Finance, and other investing-adjacent services that have incorporated AI-based price movement summaries into their products uniformly explain that Oracle stock is trading up, down, or even sideways due to “accelerated investments in AI infrastructure.” Understandably, these large-scale services may face budget constraints that bar them from the most capable reasoning models, or the summaries might have to fit a tight UI window defined by the respective product decision maker. But none of that justifies a summary that adds no real value, telling users their stock is plunging for the very same reason it rocketed up just three days ago.
The problem with those AI summaries isn’t the technology itself; it is that they are merely RAG responses over the latest Bloomberg or WSJ articles commenting on the state of the market. Those comments are often sophisticated-sounding regurgitations of one analyst’s guess at how other analysts and market participants are interpreting the market. This is not to say analyst commentary is useless. It does serve a function in reaching a consensus on what is really going on, at least retroactively, but most of the daily financial media you consume is likely noise labeled as information. The value lives not in mere intelligence, but in the framework that directs it.
The Cornerstone of Value
One of the most coveted skills for a financial analyst is the ability to construct sophisticated DCF (Discounted Cash Flow) valuation models. These models translate complex financial statements into a framework, embedding critical business and operating levers identified during the research process to gauge the present value of an asset by summing its expected future cash flows and discounting them by the time value of money.
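The core mechanics described above can be sketched in a few lines. This is a minimal illustration with hypothetical numbers (the cash flows, discount rate, and terminal growth rate are invented for the example), not a production valuation model:

```python
def dcf_value(cash_flows, discount_rate, terminal_growth):
    """Present value of projected free cash flows plus a Gordon-growth terminal value."""
    # Discount each year's projected cash flow back to today
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    # Terminal value: all cash flows beyond the projection window,
    # assumed to grow at a constant rate forever
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** len(cash_flows)

# Hypothetical inputs: five years of free cash flow, 10% discount rate, 2% terminal growth
value = dcf_value([100, 110, 121, 133, 146], discount_rate=0.10, terminal_growth=0.02)
```

Note how much of the final value sits in the terminal value term; this is exactly why the quality of the analyst’s assumptions, not the arithmetic, dominates the result.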
Once an aspiring student looking to break into investment banking myself (though likely externally motivated by its prestige), I encountered many teaching programs offered by ex-bulge-bracket investment bankers. They offered courses on interpreting financial statements and drafting a basic DCF model in an Excel spreadsheet, followed by interview prep on the dependencies between the three statements. Setting aside the fact that employment at these prestigious firms is heavily influenced by connections, a basic yet clear understanding of the DCF framework is an absolute necessity because it is rooted in fundamentals. This stands in contrast to other valuation methods, such as PBR (price-to-book ratio), that are inherently relative and, I would argue, somewhat backwards-looking.
Critics often dismiss the DCF as archaic or purely theoretical, citing the "Garbage In, Garbage Out" problem. But for the serious investor, the DCF is not a crystal ball for prediction; it is a simulation engine for risk. It is the only framework that forces a rigorous translation of qualitative narratives into quantitative values. The goal is not to predict that "Nvidia will be $200." It is to map the thousands of possible paths into a model that says: "If AI adoption slows by 10%, Nvidia’s fair value drops to $150."
Predicting the Unpredictable
No one can predict the future, especially as you look further into the unknowns. If anyone could, there would be no market for all these participants trying to profit from their guesswork, since everything would be “priced in.” At some point, even the most sophisticated investors have to bet that something will or will not happen within a set time horizon by bounding a set of conditions and their resulting financial outcomes, a practice referred to as sensitivity analysis.
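A sensitivity analysis of the kind described can be sketched as a grid of fair values over the two assumptions a DCF is most sensitive to: the discount rate and the terminal growth rate. The helper function and all input numbers below are hypothetical, chosen only to make the grid concrete:

```python
def dcf_value(cash_flows, rate, growth):
    """Simple DCF: discounted cash flows plus a Gordon-growth terminal value."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + growth) / (rate - growth)
    return pv + terminal / (1 + rate) ** len(cash_flows)

base = [100, 110, 121, 133, 146]        # hypothetical free cash flow projections
rates = [0.08, 0.10, 0.12]              # bounded range of discount rates
growths = [0.01, 0.02, 0.03]            # bounded range of terminal growth rates

# Each cell is the fair value under one (rate, growth) pair; the spread across
# cells is the band of "expected uncertainty" the analyst has chosen to accept.
grid = {(r, g): round(dcf_value(base, r, g)) for r in rates for g in growths}
```

The point of the grid is not any single number but the spread: a valuation that swings wildly between adjacent cells is telling you the bet rests on assumptions the market can easily move against.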
Despite accounting for these “expected uncertainties,” it is not uncommon for a so-called “once in a millennium” catastrophic event to occur, completely undermining the initial risk assessment. This is primarily a by-product of estimating with a normal distribution, which intrinsically assigns such events near-zero probability, but that is a topic for another day.
However limited we may be in practice, the beauty of the DCF method lies in its flexibility to absorb a highly trained analyst’s educated assumptions, which funnel into future financial statements, while providing a standardized methodology for calculating the present value of expected financial outcomes. Conversely, its very weakness hinges on the robustness of that analyst’s understanding of the asset’s operating model, whether the source data behind customer satisfaction scores or the logic for evaluating a company’s pricing power: hence their “financial intelligence.”
The Intelligence Layer of Financial Infrastructure
From clearing houses to risk modeling software for insurance underwriters, financial infrastructure in its essence exists to make transactions cheaper and obtain a clearer understanding of the risks involved, theoretically leading to a more efficient market over time.
Today’s financial landscape is fundamentally different from twenty years ago, with quantitative algorithms taking over securities trading volumes and, most recently, prediction markets financializing every single direction of our society, including the probability of Jesus returning before the year 2027 (thus somehow making society more efficient). As previously mentioned, information readily available through media and content platforms is increasingly noise that adds to the existing deluge, exacerbated by the falling cost of fake content generation and the near-zero marginal cost of distribution.
Now is the moment for the intelligence layer of financial infrastructure that finds the signal from the noise—and exponentially more of it.
Generative Logic Engine
As a daydreamer and investor from birth, I have pondered the multitude of outcomes from one decision to the next until there were too many paths to even remember where it all started. These creative thought experiments laid the foundation for my transformation from a certain archetype of “value investor” who only sees value in low-PBR stocks into someone who finds value in people and technology: assets that aren’t usually reflected in GAAP financial statements.
Negative sentiment toward the term “creativity” in a financial context seems to stem from the ingenious financial engineering applied to certain structured products in the past. But I reason it was a lack of creativity that allowed risk models to assign a “once in a millennium” default probability to a credit product whose “diversified” risks all held only if the housing market didn’t crash. True creativity doesn’t entail naive optimism, but a bias towards thinking the unthinkable.
It was this specific brand of creativity that guided me to spot the funding risks for Oracle back in September of last year, even as the market cheered a blowout quarter. An AI summary simply reported "Up significantly due to record revenue backlogs and investor enthusiasm on OCI cloud business." However, applying a fundamental logic topology revealed a different story: “Oracle has a massive, unpriced balance sheet and customer concentration risk.”
The Generative Logic Engine is the systematization of this process. To solve the "garbage in, garbage out" problem of financial modeling, we cannot rely on a prompt and pray the next frontier model will take care of the rest. We need a topology of logic—a structured node map where human insight and machine processing can freely intersect.
The Engine functions as a multi-layered derivation system. It breaks a valuation assumption down into a chain of dependencies. It begins with Layer 0, retrieving standard data like historical growth rates and macro correlations. It then pushes into deeper layers, using our proprietary logic database to analyze the CapEx guidance of a company’s largest customers or to parse sanitized sentiment data from network engineer forums to assess real-world product competitiveness.
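One way to picture such a layered derivation system is as a dependency graph of logic nodes, where each node sits at a layer and can only be derived after its inputs. The node names, layers, and structure below are entirely hypothetical, a sketch of the shape of the idea rather than the Engine itself:

```python
from dataclasses import dataclass, field

@dataclass
class LogicNode:
    name: str
    layer: int                       # 0 = standard retrieved data; deeper = derived logic
    depends_on: list = field(default_factory=list)

# Hypothetical node map feeding one assumption: next year's revenue growth
nodes = {
    "hist_growth":     LogicNode("hist_growth", 0),
    "macro_corr":      LogicNode("macro_corr", 0),
    "customer_capex":  LogicNode("customer_capex", 1, ["hist_growth"]),
    "forum_sentiment": LogicNode("forum_sentiment", 2, ["customer_capex"]),
    "revenue_growth":  LogicNode("revenue_growth", 3,
                                 ["hist_growth", "macro_corr",
                                  "customer_capex", "forum_sentiment"]),
}

def derivation_order(nodes):
    """Topologically sort the node map so every node is derived after its inputs."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in nodes[name].depends_on:
            visit(dep)
        order.append(name)
    for name in nodes:
        visit(name)
    return order
```

The topological ordering is what lets the system "check its work": an assumption can never be emitted before every piece of evidence it claims to rest on has itself been derived.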
By effectively checking its work against a topology of financial logic, the Engine decodes raw noise into a defensible, sophisticated assumption for financial models.
The Blueprint
The cost of raw intelligence is trending toward zero. But the importance of structured financial reasoning is only trending upwards. Ultimately, the distinction of being “AI-native” is noise; real value derives solely from the capacity to improve the human condition and drive societal progress.
If you are a fellow builder or investor who believes that the future of financial infrastructure lies in rigorous and sophisticated intelligence nodes that can be plugged into any asset valuation process, that is the layer we are building at AWARE LAB.