Tecton Expands Into GenAI Realm With New Data Platform Release

While the startup originally developed its data “feature store” software for providing data to machine learning systems, Tecton has enhanced its system to provide relevant, reliable data for generative AI large language models.

Tecton is expanding beyond its machine learning roots into the generative AI arena with a new release of its data “feature store” platform that the startup says delivers contextual data to the large language models that power GenAI systems.

Providing AI systems with contextual data is a significant hurdle to “productizing AI” – that is, delivering tangible business value at scale and unlocking the full potential of generative AI in enterprise applications, according to Tecton.

“The whole point is to allow the people building AI to actually build it faster, get to production faster with AI,” Tecton co-founder and CEO Mike Del Balso said in an interview with CRN.


Tecton, headquartered in San Francisco, was founded in 2018 by Del Balso and CTO Kevin Stumpf, who together helped develop Uber’s Michelangelo internal machine learning system.

The startup’s initial focus for its technology was providing data “features,” or input signals, for the predictive models used by machine learning systems. Those capabilities are critical for training machine learning models and putting real-time ML systems – such as recommendation engines or fraud detection systems – into production to automate business processes and decisions.
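The pattern described above can be sketched in a few lines. This is a hypothetical, in-memory illustration of the feature-store concept – precomputed input signals stored under an entity key and served to a model at prediction time – not Tecton’s actual API; all names and values are invented for illustration.

```python
# Minimal sketch of the feature-store pattern: an offline pipeline
# precomputes "features" (input signals); the store serves them to a
# model at prediction time. Hypothetical names, not Tecton's API.

# In a real system these values would be computed and refreshed by data pipelines.
feature_store = {
    "user_42": {
        "txn_count_24h": 7,           # transactions in the last 24 hours
        "avg_txn_amount_30d": 54.10,  # 30-day average transaction amount
        "account_age_days": 912,
    }
}

def get_features(entity_id: str, names: list[str]) -> list[float]:
    """Fetch a consistent feature vector for one entity at serving time."""
    row = feature_store[entity_id]
    return [row[name] for name in names]

# At inference time a fraud-detection model receives the same features
# it was trained on, keyed by the entity being scored.
vector = get_features("user_42", ["txn_count_24h", "avg_txn_amount_30d"])
print(vector)  # [7, 54.1]
```

The key idea is that training and serving read from the same definitions, so the signals a model sees in production match the ones it learned from.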

“It's all been about enabling MLOps [machine learning operations], enabling people to get AI into production faster, to make it more accurate, and to do it with less cost. And we've done a great job of accomplishing that,” Del Balso said.

Tecton has been selling its feature store system since 2020, enabling businesses and organizations to provide contextual data to machine learning systems as part of their big data management stack without building a whole separate infrastructure.

With the wave of development around generative AI and businesses trying to get LLMs into production, Tecton sees the same need for business contextual data management as there was with machine learning when Tecton got started. LLMs often lack up-to-date, domain-specific data with real-time contextual awareness, according to the company.

Del Balso says the answer is to focus on better data rather than bigger models and deploy AI applications that are customized to an organization’s unique business data.

“How do you make it possible for people to easily put AI into production? By focusing on the biggest bottleneck, which is the linkage or the connection between the business's data and the models,” Del Balso said. “The data is the company's competitive advantage and if they can't connect that data to the model, their model is not going to have any differentiated behavior and they're not going to have any competitive advantage.”

“It’s all about how you allow your AI to have access [to] and use this proprietary data that you've spent so many years investing in, collecting and organizing. We help them solve all these data problems, so they build better quality signals, so their models actually improve in quality, they become more accurate [and] they have better outcomes,” the CEO said.

With the new release of the Tecton platform, officially release 1.0, the company is touting its software as a unified platform that can support machine learning, predictive analytics and generative AI. The release includes a suite of new capabilities, including managed embeddings, scalable real-time data integration for LLMs, enterprise-grade dynamic prompt management and LLM-powered feature generation, according to the Tecton press release.

Data Ingestion, Transformation And Retrieval

Many of the software’s new capabilities revolve around improving data ingestion, transformation and retrieval for GenAI systems, Del Balso said. The software supports structured and unstructured data ingestion and automates how LLMs become part of the data transformation pipeline.
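An LLM acting as a step inside a transformation pipeline might look something like the sketch below: a model call turns unstructured text into a structured signal alongside conventional features. This is a generic illustration of the idea, assuming a stubbed `llm_classify` function in place of a real hosted model; none of these names are Tecton’s.

```python
# Hypothetical sketch of LLM-powered feature generation: an LLM stage in
# the transformation pipeline derives a structured feature from raw text.

def llm_classify(text: str) -> str:
    """Stand-in for a real LLM call that labels support-ticket sentiment."""
    # A production pipeline would invoke a hosted model here; this stub
    # keys off a single word purely so the example is runnable.
    return "negative" if "refund" in text.lower() else "neutral"

def transform(ticket: dict) -> dict:
    """One pipeline stage: enrich a raw record with an LLM-derived feature."""
    return {
        "ticket_id": ticket["id"],
        "sentiment": llm_classify(ticket["body"]),  # unstructured -> feature
        "body_length": len(ticket["body"]),         # conventional feature
    }

row = transform({"id": "T-1", "body": "I want a refund immediately."})
print(row["sentiment"])  # negative
```

Downstream models or prompts can then consume `sentiment` like any other feature, which is what folding LLMs into the transformation pipeline buys you.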

But Del Balso said the enhanced data retrieval capabilities – getting the right data to the LLMs – are perhaps the most significant advances in the new release. That’s because GenAI technology requires more “dynamic” data retrieval than machine learning systems and data context becomes especially important, according to the CEO.

The Tecton software’s capabilities are “complementary” to the Retrieval-Augmented Generation framework and fit into a RAG architecture, according to Del Balso. The Tecton software supports data governance and compliance by accessing only permitted data. And by providing more relevant data, the system helps reduce the issue of “hallucinations,” in which GenAI systems fabricate information.
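The two ideas in that paragraph – governed retrieval and grounding the model in retrieved context – can be sketched together. This is a minimal, hypothetical illustration of the general RAG pattern, not Tecton’s implementation; the documents, roles and function names are all invented, and a real system would rank candidates by embedding similarity rather than return everything permitted.

```python
# Minimal RAG sketch: retrieve only the data the caller is permitted to
# see, then ground the LLM prompt in that context to curb hallucinations.

DOCS = [
    {"text": "Premium plan includes 24/7 support.",
     "allowed_roles": {"sales", "support"}},
    {"text": "Q3 margin targets are confidential.",
     "allowed_roles": {"finance"}},
]

def retrieve(query: str, role: str) -> list[str]:
    """Governance-aware retrieval: drop documents the role may not read."""
    # A production system would also rank by embedding similarity to the query.
    return [d["text"] for d in DOCS if role in d["allowed_roles"]]

def build_prompt(question: str, role: str) -> str:
    """Ground the model in retrieved context instead of its own guesses."""
    context = "\n".join(retrieve(question, role))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What does the Premium plan include?", role="support")
print("confidential" in prompt)  # False: finance-only data never reaches the model
```

Because the permission filter runs before prompt assembly, restricted data never enters the model’s context window, which is where the governance guarantee in a RAG architecture has to be enforced.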

“The whole point of this is to do it in a safe and trustworthy way for an enterprise,” the CEO said, noting the importance of data context and reliability within enterprise production systems.

The new Tecton release, which is now in public preview, also provides a number of enhancements to its core capabilities, including performance, security and usability, Del Balso said.

Del Balso said Tecton is in the early stages of establishing channel relationships. Given the complexity of AI and the fact that many organizations are just getting started with it, the CEO has continued to work more directly with customers to develop their AI strategy and use cases.