6 min read · from VentureBeat

Enterprises can now train custom AI models from production workflows — no ML team required

Our take

Empromptu AI has launched Alchemy Models, which lets enterprises train custom AI models directly from their existing workflows, no ML team required. By automatically capturing and refining training data from subject matter expert interactions, organizations can continuously improve their AI applications without the overhead of traditional fine-tuning, turning production outputs into training signal. As CEO Shanea Leven notes, that data moat is what keeps a company competitive.

Empromptu AI's launch of Alchemy Models is a notable development in how enterprises can harness existing workflows to refine AI capabilities without a dedicated machine learning team. As the article highlights, every interaction within an enterprise AI application, whether a user query or a correction from a subject matter expert, is potential training data, yet much of it goes uncaptured, and with it the opportunity for model improvement. That gap matters at a moment when companies are asking how to protect their operations from disruption, as Empromptu CEO Shanea Leven observes. The notion of a "data moat" as a competitive asset is gaining traction, and Alchemy gives enterprises a concrete path to build and maintain one.

By automatically capturing and validating the outputs its applications generate, Alchemy creates a continuous feedback loop that improves model performance over time. That contrasts sharply with traditional fine-tuning, which typically requires separate data collection and preparation efforts. Because the process is integrated, organizations can build on workflows they already run instead of standing up a parallel ML pipeline. For enterprises turning to AI to drive efficiencies, this both simplifies custom model deployment and widens access to advanced AI tooling, part of a broader trend of putting AI in the hands of users without technical expertise.

The implications of adopting workflow-driven model training extend beyond convenience, though. As organizations rely on AI for critical business functions, ownership of model weights and the quality of outputs move to center stage. With Alchemy, enterprises own the resulting weights outright, which gives them real control over models trained on their proprietary data. That ownership matters most in regulated industries such as healthcare and finance, where data sensitivity and compliance are paramount. Early adopters like Ascent Autism are already seeing measurable gains, including a sharp reduction in time spent on documentation tasks, showing that AI can raise productivity while keeping outputs aligned with an organization's voice and standards.

Looking ahead, the potential for Alchemy to reshape the landscape of AI integration in enterprises is significant. As businesses increasingly recognize the value of their operational data as a training resource, the challenge will be ensuring that this data is captured, validated, and utilized effectively. The concept of a data flywheel—where increased usage generates better training signals, leading to more accurate outcomes—can provide a competitive edge for those willing to invest in this approach. However, a critical question remains: How will enterprises navigate the potential lock-in associated with such platforms? While the advantages of using a managed environment like Empromptu's are clear, organizations must weigh these benefits against the risks of dependency on a single vendor.

As the landscape evolves, it will be worth watching how organizations balance the push for innovation against the need to keep their AI strategies flexible. Integrated platforms such as Alchemy represent a meaningful step forward, but they also force enterprises to rethink their long-term data strategies and partnerships. With the right approach, companies can use this technology not just to survive but to thrive in an increasingly data-driven world.

Enterprises can now train custom AI models from production workflows — no ML team required

Every query an enterprise AI application processes, every correction a subject matter expert makes to its output — that interaction is training data. Most organizations are not capturing it. The production workflows companies have already built generate a continuous signal that could improve AI models, and that signal is disappearing.

San Francisco-based Empromptu AI on Thursday launched Alchemy Models with a straightforward premise: the AI applications enterprises are already building are generating training data, and most of it is going to waste. The platform captures that signal automatically, routing validated outputs from subject matter experts back into a fine-tuning pipeline that improves the model over time. Enterprises own the resulting weights outright.

It sits in different territory from both RAG and traditional fine-tuning. RAG retrieves external context at inference time without modifying model weights. Traditional fine-tuning changes weights but requires separately assembled labeled datasets and a dedicated ML pipeline. Alchemy does the latter continuously, using the enterprise application itself as the data source.

Companies adopting foundation model APIs face three compounding constraints: inference costs that scale with usage, no ownership of the models their data is effectively training, and limited ability to customize behavior for domain-specific tasks. Empromptu CEO Shanea Leven says those constraints are widely felt but rarely addressed.

"Every customer, everybody that I talk to, is like, how am I not going to get disrupted? How am I going to protect my business? And they just don't see the path," Leven told VentureBeat in an exclusive interview.

How Alchemy builds a model from a running application

Most custom model training approaches require companies to separately collect, clean and label data before any fine-tuning can begin. Alchemy takes a different path: the enterprise application itself generates and cleans the training data.

The mechanism runs through Empromptu's Golden Data Pipelines infrastructure in two stages. Before an app is built, enterprise data is cleaned, extracted and enriched so the application starts with structured inputs. Once it is running, every output it generates goes back through the pipeline, where subject matter experts inside the organization review and correct it. That validated output becomes the training data for the next fine-tuning run.
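The capture-validate-retrain loop described above can be sketched in a few lines. This is a hypothetical illustration only — Empromptu has not published an API, so every class and method name here is an assumption about the shape of such a pipeline, not its actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingExample:
    prompt: str
    output: str
    validated: bool = False

@dataclass
class GoldenDataPipeline:
    """Hypothetical sketch of the loop: capture production outputs,
    have subject matter experts validate them, and release a batch
    of validated examples for the next fine-tuning run."""
    examples: list = field(default_factory=list)
    min_batch: int = 3  # retrain only once enough validated data accrues

    def capture(self, prompt: str, model_output: str) -> TrainingExample:
        # Stage 2: every output the running app generates is routed
        # back into the pipeline for review.
        ex = TrainingExample(prompt, model_output)
        self.examples.append(ex)
        return ex

    def validate(self, ex: TrainingExample, corrected: str) -> None:
        # A subject matter expert reviews and corrects the output;
        # the corrected version becomes the training label.
        ex.output = corrected
        ex.validated = True

    def next_finetune_batch(self) -> list:
        # Only SME-validated outputs feed the next fine-tuning run.
        batch = [e for e in self.examples if e.validated]
        return batch if len(batch) >= self.min_batch else []
```

The point of the sketch is the data flow, not the training itself: the application is both the producer of candidate examples and, via SME review, their cleaner.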

"The app, the AI application that customers are already creating, cleans the data," Leven said.

The resulting fine-tuned models are what Empromptu calls Expert Nano Models: small, task-specific models optimized for a particular workflow rather than general-purpose reasoning. Evals, guardrails and compliance controls run within the same pipeline, so governance travels with the training process. Customers own the model weights outright. Empromptu hosts and runs inference on its infrastructure, but the weights are portable and exportable for a fee. The platform is model agnostic, supporting Llama, Qwen and other base models.

The hard constraint is data volume. Early deployments run on the base model while the application accumulates enough production data to trigger a useful fine-tuning run. Leven acknowledged the timeline without sugarcoating it. "Training the model will just take time," she said.

Alchemy differs from managed fine-tuning on who does the work

OpenAI's fine-tuning API and AWS Bedrock custom models both offer enterprise fine-tuning. Both require organizations to bring separately prepared training datasets and manage the fine-tuning process outside their application stack. The burden of data curation and model evaluation sits with the customer's ML team.
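To make the contrast concrete, here is roughly what the customer-side burden looks like on a managed route such as OpenAI's fine-tuning API: the dataset must be assembled and formatted separately from the application that produced the data, then uploaded as a file. The helper below is illustrative only; the JSONL shape follows OpenAI's documented chat fine-tuning format, and the API calls are shown commented, for shape:

```python
import json

def to_jsonl(examples, path="train.jsonl"):
    """Write (prompt, ideal_answer) pairs in the chat-format JSONL
    that OpenAI's fine-tuning endpoint expects: one {"messages": [...]}
    object per line."""
    with open(path, "w") as f:
        for prompt, ideal in examples:
            record = {"messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": ideal},
            ]}
            f.write(json.dumps(record) + "\n")
    return path

# Upload and job creation (requires an API key; shown for shape only):
# client = openai.OpenAI()
# file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
# client.fine_tuning.jobs.create(training_file=file.id, model="gpt-4o-mini-2024-07-18")
```

Every step here — collecting the pairs, formatting them, uploading, monitoring the job — sits outside the application, which is exactly the work Alchemy claims to fold into the workflow itself.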

Alchemy's differentiation is process integration. The training data is generated by the enterprise application itself, so there is no separate data preparation step and no ML expertise required. The application workflow is the pipeline.

"Do I need to have Bedrock and go spin up another ML team to go figure out how to fine tune a model and figure out all of that infrastructure? No, anyone can do it now," Leven said.

The tradeoff is platform dependency. Alchemy only works within the Empromptu environment. Enterprises that want the same outcome on existing infrastructure would need to replicate the data capture, validation and fine-tuning pipeline themselves.

A behavioral health company cut session documentation time by up to 87% using Alchemy

Empromptu is targeting regulated and data-intensive verticals first: healthcare, financial services, legal technology, retail and revenue forecasting. These are sectors where general-purpose model outputs carry the highest mismatch risk and proprietary workflow data is most concentrated. 

Among the early users is behavioral health company Ascent Autism, which uses Alchemy to automate session documentation and parent communication. 

Facilitators use learner session recordings, transcripts, session notes and behavioral metrics to generate structured notes and personalized parent updates. That workflow previously required one to two hours of writing per session. With Alchemy training on the same data, it now takes 10 to 15 minutes.

"Relying solely on API-based models can become expensive quickly," Faraz Fadavi, co-founder and CTO of Ascent Autism, told VentureBeat. "Alchemy gave us a way to structure the workflow, train models on our own data, and reduce costs while improving output quality over time."

Fadavi said the company saw usable outputs quickly, with continued improvement as the system refined. Evaluation criteria went beyond accuracy to include traceability to session data and output consistency with the company's clinical voice. "We wanted a system that could learn our workflow and produce outputs aligned with how we actually operate — not just summarize text," he said. The practical test: how much facilitators need to edit, whether the output matches their voice and whether it meaningfully reduces time spent. Facilitators have shifted from rewriting generated notes to editing and quality-checking them.

What this means for enterprises

The data flywheel is real — but so is the platform lock-in:

Every workflow is a training opportunity. Enterprises that capture and validate outputs from their production AI applications will compound that advantage over time. More usage generates more training signals, which produces more accurate domain-specific models, which generate better outputs, which produce cleaner training data in the next cycle.
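The flywheel dynamic described above can be expressed as a toy model. The numbers below are invented for illustration — the single assumption being modeled is that a more accurate model yields a larger share of clean training signal per unit of usage, which in turn nudges accuracy up:

```python
def flywheel(cycles: int, usage_per_cycle: int, signal_rate: float = 0.5):
    """Toy model of the data flywheel: usage -> validated signals ->
    (hypothetically) higher accuracy -> cleaner signal next cycle.
    Returns (cumulative_signals, accuracy) after each cycle."""
    accuracy, signals = 0.60, 0  # arbitrary illustrative starting point
    history = []
    for _ in range(cycles):
        # A more accurate model needs fewer corrections, so a larger
        # share of captured interactions become clean training signal.
        new_signals = int(usage_per_cycle * signal_rate * accuracy / 0.60)
        signals += new_signals
        # Diminishing-returns improvement, capped below perfect accuracy.
        accuracy = min(0.99, accuracy + 0.01 * (new_signals / usage_per_cycle))
        history.append((signals, round(accuracy, 3)))
    return history
```

Whether the real-world loop compounds this cleanly is an open question, but the structure — each cycle's output quality feeding the next cycle's input quality — is the advantage the article describes.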

Leven positions Alchemy as a third architectural choice. Enterprises have spent the past two years choosing between RAG for domain knowledge access and fine-tuning for model specialization. Workflow-driven model training is a third option, combining the ongoing improvement of fine-tuning with the operational simplicity of building inside a managed platform.

"Having that data moat is the most valuable currency," Leven said.

