4 min read · VentureBeat

The AI scaffolding layer is collapsing. LlamaIndex's CEO explains what survives.

Our take

In a recent VentureBeat podcast, Jerry Liu, co-founder and CEO of LlamaIndex, discusses the collapse of the AI scaffolding layer traditionally required for developing LLM applications. As indexing layers and retrieval pipelines become less relevant, Liu emphasizes that the focus must shift to context as the core differentiator. With advancements in models capable of reasoning over vast unstructured data, developers can now leverage simpler, more intuitive workflows. Liu advocates for modularity and flexibility in tech stacks, ensuring they adapt to rapid innovations in AI technology.

The scaffolding layer that developers once needed to ship LLM applications — indexing layers, query engines, retrieval pipelines, carefully orchestrated agent loops — is collapsing. And according to Jerry Liu, co-founder and CEO of LlamaIndex, that's not a problem. It's the point.

“As a result, there's less of a need for frameworks to actually help users compose these deterministic workflows in a light and shallow manner,” Liu explains in a new episode of VentureBeat's Beyond the Pilot podcast.

Context is becoming the moat

Liu’s LlamaIndex is one of the foremost retrieval-augmented generation (RAG) frameworks connecting private, custom, and domain-specific data to LLMs. But even he acknowledges that these types of frameworks are becoming less relevant. 

With every new release, models demonstrate incremental capabilities to reason over “massive amounts” of unstructured data, and they’re getting better at it than humans, he notes. They can be trusted to reason extensively, self-correct, and perform multi-step planning; Model Context Protocol (MCP) and Claude Agent Skills plug-ins let models discover and use tools without requiring a separate integration for each one.

Agent patterns have consolidated toward what Liu calls a "managed agent diagram" — a harness layer combined with tools, MCP connectors, and skills plug-ins, rather than custom-built orchestration for every workflow.
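The harness-plus-tools pattern Liu describes can be sketched in a few lines. This is a hedged illustration only, not LlamaIndex or MCP code: the `Harness` and `Tool` names are invented for this example, and a real harness would hand the tool list to a model rather than a `print` call.

```python
# Illustrative sketch (hypothetical names, not a real MCP or LlamaIndex API):
# a thin "harness" registers tools once, and the agent loop discovers and
# invokes them by name instead of using custom orchestration per workflow.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    description: str
    fn: Callable[[str], str]


class Harness:
    """Holds registered tools; a model sees list_tools() and calls by name."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list[str]:
        # The descriptions are what a model would read when planning.
        return [f"{t.name}: {t.description}" for t in self._tools.values()]

    def call(self, name: str, arg: str) -> str:
        return self._tools[name].fn(arg)


harness = Harness()
harness.register(Tool("search_docs", "keyword search over local docs",
                      lambda q: f"top hit for '{q}'"))
print(harness.list_tools())
print(harness.call("search_docs", "quarterly revenue"))
```

The point of the pattern is that adding a capability means registering one more `Tool`, not rewriting the loop.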

Further, coding agents excel at writing code, meaning devs don’t need to rely on extensive libraries. In fact, about 95% of LlamaIndex code is generated by AI. “Engineers are not actually writing real code,” Liu said. “They're all typing in natural language.” This means the line between programmers and non-programmers is collapsing, because “the new programming language is essentially English.”

Instead of writing integration code by hand or wrestling with API and document formats, devs can just point Claude Code at them. “This type of stuff was either extremely inefficient or just would break the agent three years ago,” said Liu. “It's just way easier for people to build even relatively advanced retrieval with extremely simple primitives.”
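To make the “simple primitives” claim concrete, here is a minimal sketch of retrieval with no index layer at all: documents are ranked by token overlap with the query. This is purely illustrative (the function and sample documents are invented for this example); an agent would pair something like this with an LLM for the actual reasoning.

```python
# Hedged sketch: retrieval as a one-function primitive.
# Rank documents by how many query tokens they share, return the top k.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]


docs = [
    "invoice totals for Q3 are in the finance folder",
    "employee onboarding checklist",
    "Q3 finance report with invoice breakdown",
]
print(retrieve("Q3 invoice finance", docs, k=2))
```

Swapping the overlap score for embedding similarity upgrades this to semantic retrieval without changing the shape of the primitive.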

So what’s the core differentiator when the stack collapses? 

Context, Liu says. Agents need to be able to decipher file formats to extract the right information. Providing higher accuracy and cheaper parsing becomes key, and LlamaIndex is well-positioned here, he contends, because of its developments with agentic document processing via optical character recognition (OCR). 

“We've really identified that there's a core set of data that has been locked up in all these file format containers,” he said. Ultimately, “whether you use OpenAI Codex or Claude Code doesn't really matter. The thing that they all need is context.”

Keeping stacks modular

There’s growing concern about builders like Anthropic locking in session data; in light of this, Liu emphasizes the importance of modularity and agnosticism. Builders shouldn’t bet on any one frontier model, or overbuild in a way that overcomplicates components of the stack. 

Retrieval has evolved into “agent-plus-sandbox,” as he describes it, and enterprises must keep their code bases free of tech debt and adaptable to changing patterns. They also have to acknowledge that some parts of the stack will eventually need to be thrown away as a matter of course.

“Because with every new model release, there's always a different model that is kind of the winner,” Liu said. “You want to make sure you actually have some flexibility to take advantage of it.”

Listen to the podcast to hear more about: 

  • LlamaIndex’s beginnings as a ‘toy project’ that initially achieved only about 40% accuracy;

  • How SaaS companies can tap into complicated workflows that must be standardized and repeatable for average knowledge workers;

  • Why vertical AI companies are taking off and why ‘build versus buy’ is still a very valid question in the agent age. 

You can also listen and subscribe to Beyond the Pilot on Spotify, Apple or wherever you get your podcasts.

