What every investor should know about the GenAI tech stack

Raphaëlle d'Ornano
6 min read · Jun 14, 2023


The recent wave of generative artificial intelligence (GenAI) has taken investors by storm, leading to sky-high market caps for industry leaders, with downstream effects across related markets. This is not the first hype cycle for AI; investors have watched AI bubbles bloom and burst for decades. But it is the first of consequence for generative AI. Recent technological developments make the opportunity more powerful and mature this time around, especially in growing industries such as healthcare, finance, and climate tech, and give the technology the same potential to transform the business landscape as the internet.

Because generative AI is still in its infancy, many investors place it squarely in the realm of venture capital. This is a mistake. Private equity firms are leaving money on the table by failing to recognize that they can enter the game through their portfolio companies. Generative AI is a powerful lever for value creation, and it can be used by almost any company, at any stage, in multiple forms. Because every company has data, every company has the opportunity to become a GenAI company. But GenAI entails specific risks and requires a depth of resources.

In this article we share the lessons we have learned from advising on dozens of venture deals in the AI and GenAI space, viewed through the three core components of the generative AI tech stack (Foundation, MLOps, and Application), so that the questions that matter are effectively addressed.

The Foundation Model

The first core generative AI business model we’ll discuss is the Foundation Model, as defined by IBM. These are businesses built on AI models trained on broad sets of unlabeled data, which can be adapted to a variety of tasks with minimal fine-tuning. The most well-known of these is ChatGPT, released in late 2022. Many blue-chip investors expect this category to be most businesses’ entry point to generative AI, hence the ‘foundation’ moniker. As such, the market potential for this model is enormous. Because foundation models can be used so broadly, they will likely serve as the base layer of businesses’ AI tech stack, overlaid with additional fine-tuning on narrower tasks specific to each business and vertical.
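To make the “base layer plus fine-tuning” idea concrete, here is a minimal sketch, assuming the open-source Hugging Face Transformers and Datasets libraries, of how a company might overlay a small domain-specific corpus on top of a general pretrained model. The model name, file path, and training settings are illustrative placeholders, not a recommendation.

# Minimal sketch: adapt a broad, pretrained foundation model to a narrow domain.
# Assumes the Hugging Face "transformers" and "datasets" libraries are installed
# and that "domain_corpus.txt" (a hypothetical, company-specific text file) exists.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from datasets import load_dataset

base_model_name = "distilgpt2"  # small, openly licensed base model (placeholder choice)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# Hypothetical domain corpus, e.g. anonymized contract clauses or support tickets.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    tokens = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)
    tokens["labels"] = tokens["input_ids"].copy()  # causal LM objective: predict the next token
    return tokens

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain_model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=train_data,
)
trainer.train()                       # overlays narrow, domain-specific behavior on the broad base
model.save_pretrained("domain_model")

The point of the sketch is the division of labor: the expensive, general-purpose training has already been paid for by the foundation model provider, and the adapting business supplies only the narrow data and a comparatively small amount of compute.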

The trade-off here is essentially one of breadth versus depth. While foundation models are well suited to a wide variety of elementary tasks, they lack the complexity and nuance required for higher-level activities. In a recent article, Rishi Bommasani and Percy Liang of Stanford University suggest a foundation model alone “is fundamentally unfinished, requiring (possibly substantial) subsequent building to be useful.” Training foundation models requires large datasets and expertise in data preparation, model training, and output quality assessment, making this the most expensive of the three business models. The initial foundation models have all come from the world’s largest and most well-resourced technology companies (Google, Meta, Microsoft, and others), making the barrier to entry exceptionally high. Smaller AI companies will struggle to compete against behemoth incumbents, whose scale gives them built-in advantages in data volume, cost, and computational capacity. While newer, investor-backed entrants like Anthropic and Cohere are competing for market share, how far these startups succeed will depend heavily on current efforts to develop smaller, more efficient models. And there are already some successes here.

While all generative AI business models face ethical and practical concerns around data privacy, ownership, copyright, user trust, cost, and safety, foundation models are the most exposed to these concerns because they use the broadest datasets. These risks must also be weighed against the steep upfront investment required. An accurate risk assessment therefore requires a hybrid approach that can identify and measure not only each risk in isolation, but also how the risks interact with and potentially compound one another.

From a financial standpoint, understanding a foundation model company’s revenue model is also critical, and it will depend in part on whether the model is “open” or “closed”. Closed models may face headwinds as powerful open-source alternatives emerge and call into question customers’ willingness to pay for closed foundation models.

The MLOps Model

The second core business model in generative AI is MLOps, a category that combines machine learning with development and operations. MLOps focuses primarily on the infrastructure of generative AI and the tooling that facilitates model development. Its primary benefits are efficiency, scalability, and risk reduction: it helps manage data quality and suitability for a given task, including testing and validation, version control, and code review. MLOps also oversees the deployment of other generative AI models, detecting issues and checking performance; in the world of AI, it is quality control.
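As an illustration of that quality-control role, here is a minimal, vendor-neutral sketch in plain Python of a monitoring check that compares a deployed model’s recent accuracy against a baseline and flags degradation. The metric, tolerance, and sample data are assumptions for illustration, not drawn from any specific MLOps product.

# Minimal sketch of a scheduled model-quality check (vendor-neutral, illustrative values).
from dataclasses import dataclass

@dataclass
class QualityReport:
    metric: str
    baseline: float
    current: float
    degraded: bool

def check_model_quality(baseline_accuracy: float,
                        recent_predictions: list,
                        recent_labels: list,
                        tolerance: float = 0.05) -> QualityReport:
    """Flag the model if accuracy on recent labeled traffic drops below baseline - tolerance."""
    correct = sum(p == y for p, y in zip(recent_predictions, recent_labels))
    current_accuracy = correct / max(len(recent_labels), 1)
    return QualityReport(
        metric="accuracy",
        baseline=baseline_accuracy,
        current=current_accuracy,
        degraded=current_accuracy < baseline_accuracy - tolerance,
    )

# Example: a baseline accuracy of 0.92, with recent traffic showing a drop worth investigating.
report = check_model_quality(0.92,
                             recent_predictions=[1, 0, 1, 1, 0, 0],
                             recent_labels=[1, 1, 1, 0, 0, 1])
if report.degraded:
    print(f"ALERT: {report.metric} fell from {report.baseline:.2f} to {report.current:.2f}")

Commercial MLOps platforms wrap this kind of check in dashboards, alerting, drift detection, and audit trails; the underlying logic, though, is as simple as comparing live performance against an agreed baseline.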

MLOps infrastructure vendors are likely the biggest winners in the market so far, capturing the majority of capital in the stack. Current players include Baseten, which develops machine learning models and developer tools for model deployment, scaling APIs, and building user interfaces. The company’s platform offers reusable components to quickly assemble workflows and build ML-powered applications, while giving users the flexibility to write their own code when necessary. Another example is Fiddler, which operates at a different point in the value chain, providing centralized systems to monitor, analyze, and explain model results. It also offers a common language, centralized controls, and actionable insights to operationalize ML/AI with trust, enabling businesses to analyze, manage, and deploy their machine learning models at scale.

Challenges for entrepreneurs building with this model boil down to 1) an overcrowded competitive landscape, 2) the difficulty of transitioning from a specific-use tool to a broader platform, and 3) scaling the customer base to win more customers at the enterprise level.

While they may have started from different positions within the industry as single-feature providers, as they scale, MLOps players will find their activities increasingly overlapping with those of competitors, leading to a flurry of platform-driven consolidations and market exits. The surviving gladiators, however, will not be able to enjoy their victories for long. A second scaling challenge will involve reworking their businesses to go upmarket, crossing swords with cloud hyperscalers and established enterprise companies (Datarobot, Dataiku, H2O, Databricks, etc.) that have been selling to Global 2000 businesses for years. Those who succeed will need a laser focus on cross-selling and on growth in both the quantity and caliber of their customer bases.

Accurately assessing MLOps companies’ growth performance therefore requires a multi-dimensional, multi-timescale analysis. Once the technology has been created, a multi-timescale assessment is needed to 1) understand the investment required to support growth (a company that aims to become a platform will need to introduce new products, which entails a significant capitalized R&D commitment, or pursue external growth through acquisitions, although this approach is less common in these cases), and 2) evaluate whether the growth is qualitative, notably through a deep understanding of customer profiles and economics.

The Application Model

The third core category of generative AI businesses is the Application Model, focused primarily on user-facing applications. Whether they are end-to-end applications built from scratch or rely on third-party APIs, these businesses are where mobile phone apps were 15 years ago: ripe for expansion. Specialization is the key to opportunity here. The two possible growth axes for this category are vertical (industry-specific) and horizontal (workflow- or function-specific). Vertical examples of generative AI applications include companies like Harvey in law or Wonder Dynamics for film and game studios, trained on highly tailored, limited datasets. Horizontal examples, such as Tabnine for coding, specialize in specific activities like coding, copywriting, or film editing, with the ultimate goal of dramatically increasing productivity.
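To illustrate the “built on a third-party API” end of this spectrum, here is a minimal sketch of a vertical application that wraps a general-purpose foundation model with domain-specific instructions, assuming the OpenAI Python client. The model name, prompt, and legal-review framing are hypothetical placeholders rather than a description of any company mentioned above.

# Minimal sketch of a thin vertical application built on a third-party foundation model API.
# Assumes the "openai" Python package (v1+) is installed and an API key is set in the environment.
from openai import OpenAI

client = OpenAI()

DOMAIN_INSTRUCTIONS = (
    "You are an assistant for commercial contract review. "
    "Answer only from the clause provided and flag anything unusual."
)

def review_clause(clause_text: str) -> str:
    """Send a single contract clause to the foundation model with a vertical-specific prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": DOMAIN_INSTRUCTIONS},
            {"role": "user", "content": f"Review this clause:\n\n{clause_text}"},
        ],
    )
    return response.choices[0].message.content

print(review_clause("The supplier may terminate this agreement at any time without notice."))

The thinness of this wrapper is exactly why differentiation and retention matter so much for application companies: the durable value lies in the proprietary data, workflow integration, and distribution built around the call, not in the call itself.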

Like mobile apps, the challenges for generative AI applications will come down to differentiation, customer retention, and profitability/margins. While these companies can demonstrate strong early growth, they may be easily supplanted by new market entrants. Applications built from the ground up that own their data may face fewer ownership and copyright difficulties, but they are more expensive to build and carry lower expected margins. Like the previous two models, they are also subject to broader industry concerns around trust and security. Ultimately, uncovering the hidden risks and opportunities within generative AI applications requires expertise and experience in evaluating whether a company’s growth trajectory is aligned with its targets.

Generative AI presents once-in-a-generation opportunities for investors, but enthusiasm alone will not be enough for success. Knowing how to look past the noise and zero in on the fundamental business problem the technology solves, the soundness of the company’s unit economics (at least once the company is past the seed/early stage), and the core technology and IP issues is the only way to identify which companies have the potential to become transformative forces in the AI era.

Written by Raphaëlle d'Ornano

Managing Partner + Founder of D’Ornano + Co., a pioneer in Advanced Growth Intelligence for analyzing disruptive business models in the age of Discontinuity.