The ChatGPT-Fueled AI Gold Rush: How Solution Providers Are Cashing In
Some forward-thinking solution providers have spent years building artificial intelligence practices, and today their bets are paying off as businesses rush to figure out how to take advantage of generative AI.
When Asif Hasan and his colleagues ditched well-paying jobs in 2013 to start a new company that builds artificial intelligence solutions for enterprises, they assumed fast growth would quickly follow.
They had seen the promise of AI in research that demonstrated the real-world feasibility of deep learning, a complex but powerful machine learning method, loosely modeled on how the brain processes information, that now serves as the foundation for many AI applications.
Hasan also knew that there was “space in the market for a new type of solution provider that brings these capabilities to the enterprise” from challenges he experienced trying to find outsourced data science talent when he was director of business analytics at Philips Healthcare.
The problem was the market wasn’t quite ready when Hasan and his three co-founders started Quantiphi. Business was slower than expected in the first few years, and the Marlborough, Mass.-based company mostly got by on AI proofs of concept while doing larger work around advanced analytics and data science.
“We were obviously, in hindsight, quite early because the first three years for us were very, very difficult,” said Hasan. But instead of second-guessing themselves, Hasan and his team stayed patient. They believed the time would come when Quantiphi’s AI services would surge in demand, and in due time they were right.
Ten years after its founding, Quantiphi boasts a workforce of nearly 4,000 people, and the “AI-first digital engineering company” has racked up 2,500 projects with 350 customers in nine industries, including a handful of large-scale AI engagements each worth around $10 million a year. This has helped fuel a compound annual growth rate of 85 percent for the past three years.
“Eventually the momentum kicked in, and then we were off to the races,” Hasan said.
While AI technologies are fueling new features in myriad software applications and cloud services for the channel to resell, manage and provide services around, solution providers like Quantiphi are seizing a profit-rich opportunity at the ground floor: the fast-growing need for the infrastructure and services underpinning AI applications and features.
These solution providers, ranging from newer companies like Quantiphi to storied ones like World Wide Technology, have spent the past several years building AI practices. Now they stand to benefit from what IDC estimates could be a $154 billion market for AI-centric systems this year. That market, which includes hardware, software and services, could grow an average of 27 percent annually over the next three years, according to the research firm.
“We do feel that this is going to accelerate, and it’s going to accelerate in a significant way,” Hasan said.
For Quantiphi, most of its growth came before ChatGPT, a chatbot powered by a large language model, entered the picture last fall. ChatGPT sent shockwaves through the tech industry with its ability to understand complex prompts and respond with an array of detailed answers, from blog posts on a variety of subjects to software code for web browsers and other kinds of applications, all offered with the caveat that it can impart inaccurate or biased information.
Nevertheless, enterprises are now rushing to figure out how to take advantage of generative AI, a broad category of AI models, ChatGPT among them, that use large data sets to generate new content in different forms, including text, images and video. The technology has already made its way into new features of major software staples like Microsoft 365 and a bevy of cybersecurity offerings.
“What ChatGPT has done is given a lot of people in a lot of different scenarios the first glimpse of what a generative AI system could look like. It’s impressed a lot of people. It’s left a lot of people unsettled. … But everyone is intrigued by it,” Hasan said.
Now Hasan is trying to keep up with the new interest sparked by generative AI. For the past few months, he’s been holding up to three executive briefings a day to answer a surge of customer questions and discuss new projects around the technology.
“We are seeing at the top of the funnel interest levels at a scale we have never ever seen before in the last 10 years,” Hasan said.
Generative AI: A Wild West Of Services And Products
Tim Brooks, managing director of data strategy and AI solutions at St. Louis, Mo.-based WWT, said enterprises used to spend much of their time thinking about what kind of infrastructure they needed to power AI applications.
But now that AI infrastructure has become ubiquitous, Brooks has noticed that customers of the solution provider juggernaut have turned their focus to much finer details of AI projects, such as data governance, model risk management and other issues that can play a role in a project’s success.
“I would say five years ago that rarely came up. Now that comes up in every conversation,” said Brooks. This is especially important now that many enterprises are trying to figure out their own generative AI strategies and what they need to build custom applications that leverage proprietary information, since sharing that data with consumer-facing applications like ChatGPT carries risks.
“If you don’t control that model, how would that information be leveraged to provide an answer to another party when they make a prompt into an outsourced model because that’s an API call? That’s a concern that has come up time and again with CIOs and CISOs that we’ve spoken to,” Brooks said. The catch with building a ChatGPT-like large language model from scratch, one that keeps proprietary data in-house, is cost: development can run up to $100 million, according to Brooks.
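The concern Brooks describes is architectural: a prompt to an outsourced model is just an API call, and whatever is pasted into it leaves the building. Here is a minimal, hypothetical sketch; the vendor URL and payload below are invented for illustration:

```python
import requests

# Hypothetical endpoint and payload, invented for illustration; not a real
# vendor API. The point: a prompt is an ordinary HTTP request.
API_URL = "https://api.example-llm-vendor.com/v1/completions"

payload = {
    "model": "hosted-llm",
    # Proprietary context pasted into a prompt travels in the request body
    # and is outside the company's control once the vendor receives it.
    "prompt": "Summarize our unreleased Q3 roadmap: <CONFIDENTIAL TEXT>",
}

response = requests.post(API_URL, json=payload, timeout=30)
print(response.json())
```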
Fortunately, a middle ground has already emerged for enterprises: Large vendors like Amazon Web Services, Google Cloud, Microsoft Azure and Nvidia are now offering pretrained models, among other kinds of building blocks, that solution providers can use to develop custom generative AI solutions for customers.
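As a sketch of what building on those blocks can look like, here is a hedged example assuming Google Cloud’s Vertex AI Python SDK as the platform; the project ID is a placeholder, and “text-bison” was Vertex AI’s PaLM-era pretrained text model:

```python
# Sketch assuming Google Cloud's Vertex AI SDK (pip install google-cloud-aiplatform).
# The project ID is a placeholder; "text-bison" was Vertex AI's PaLM-era
# pretrained text model.
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-gcp-project", location="us-central1")

# Start from the vendor's pretrained model rather than training from scratch.
model = TextGenerationModel.from_pretrained("text-bison")
response = model.predict(
    "Draft a product description for a laser-cut desk organizer.",
    temperature=0.2,
    max_output_tokens=256,
)
print(response.text)
```

Starting from a vendor-hosted pretrained model shifts the work from nine-figure model development to prompt design, data grounding and governance, which is where the services opportunity sits.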
To Brooks, it’s a major opportunity that will require a diverse range of skills to ensure custom applications are pulling from the right data sets and providing the right kind of responses.
“It is something where we’ve had to really use our experience in security, data governance and data science as well as leverage our relationships with OEMs,” he said.
But the generative AI opportunities in the channel don’t have to end with the development and management of applications. For New York-based global consulting giant Deloitte, there is also an opportunity to advise customers on best practices for ensuring their employees can take advantage of these disruptive tools.
“How do they relearn that new way of doing things and ensure that they are working with the technology? A lot of the benefit of generative AI is about augmenting human capability and advancing it. So that also requires humans to relearn the way they do things,” said Gopal Srinivasan, a longtime Deloitte executive who leads the firm’s generative AI efforts with Google Cloud.
Meanwhile, one solution provider that has already seen the promise of generative AI in action for enterprises is Los Angeles-based SADA Systems.
The situation: A 3-D manufacturing company was dealing with low utilization of its laser-cutting product among customers, so it wanted to use a text-to-image model to kickstart the creative process for users and give them a quick way to make designs.
Miles Ward, CTO at SADA, said the company provided guidance and, after the design generation tool went live, the laser-cutter vendor saw a 50-fold increase in usage the following week.
“This stuff can become easy enough and magical enough that you’re unlocking a very different behavior from customers where they’re doing it because it’s awesome, not because they have to or they think it’s the most efficient thing to do,” he said.
It’s emblematic of the large opportunity Ward sees in generative AI: allowing companies to unlock productivity and new experiences. But he also sees a challenge: Innovation is happening so fast in the generative AI space that it may take some time for customers to settle on a solution.
“I think it’s difficult for customers to say, ‘Yeah, totally. I definitely want to pay for you to have a team full of people doing exactly this one thing, which I can write the [statement of work contract] for now and commit to the outcomes for now,’ when the whole tool platform is in upheaval, and there may very likely be a more efficient approach available in the next weeks,” he said.
Building On The Shoulders Of Cloud Giants
For Hasan, what helped boost Quantiphi’s business in the mid-2010s after its slow start were two developments that benefited the broader AI market.
Around the time the open-source TensorFlow and PyTorch frameworks were released, making it easier for developers to build machine learning models, cloud service providers such as AWS, Google Cloud and Microsoft Azure expanded their lineups with compute instances powered by graphics processing units (GPUs), which can train models (a key step in developing AI applications) much faster than central processing units (CPUs).
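The difference those instances made is easy to see in outline. Below is a minimal PyTorch sketch, with a toy model and synthetic data standing in for real ones, in which the same training step runs on a GPU when one is present and falls back to the CPU otherwise:

```python
import torch
import torch.nn as nn

# Use a GPU instance when one is available; the same code falls back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(128, 1).to(device)            # toy stand-in for a real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(1024, 128, device=device)       # synthetic training batch
y = torch.randn(1024, 1, device=device)

for _ in range(100):                            # a short training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                             # gradients computed on `device`
    optimizer.step()

print(f"final loss on {device}: {loss.item():.4f}")
```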
Over time, these cloud service providers have added a variety of offerings that aid with the development and management of AI applications, such as AWS’ SageMaker Studio integrated development environment and Google Cloud’s Vertex AI machine learning platform, which Hasan said serve as crucial building blocks for Quantiphi’s proprietary solutions.
“What we’ve done is on top of some of the cloud platform solutions that exist, we have built our own layer of IP that enables customers to seamlessly on-board to a cloud technology,” he said.
Quantiphi offers these solutions under the banner of “platform-enabled technology services,” with revenue typically split between application development and the integration of the underlying infrastructure, including cloud instances, data lakes and a machine learning operations platform.
But before any development begins, Quantiphi starts by helping customers understand how AI can help them solve problems and what resources are needed.
“What we’re able to do is we’re able to go into organizations, help them envision what their value chain can look like if they look at it with an AI-first lens, and from there we can help them understand what are the interesting use cases,” Hasan said.
With one customer, a large health-care organization, Quantiphi got started by developing a proof of concept for an AI-assisted radiology application that detects a rare lung disease.
After impressing the customer with the pilot’s results, the relationship evolved into Quantiphi developing what Hasan called a “head-to-toe AI-assisted radiology platform.”
The platform allowed the organization to introduce a new digital diagnostics offering, and Quantiphi now makes somewhere in the range of $10 million annually from the customer.
“The pattern that we’ve seen is if you’re helping organizations grow their business and add new lines to their revenue, this [has] scaled well, or there’s a meaningful reduction in costs,” Hasan said.
‘We All Revolve Around Nvidia’
For solution providers excelling in the AI space, there’s one vendor that is often at the center of the infrastructure and services that make applications possible: Nvidia.
“Whatever Nvidia wants to do is essentially going to be the rules, no matter who you are in the ecosystem: OEMs, networking partners, storage partners, MLOps software partners,” said Andy Lin, CTO at Houston-based Mark III Systems. “We all revolve around Nvidia, and I think if you get that and you figure out where you fit, you can do very well.”
For years, Nvidia was mainly known for designing GPUs used to accelerate graphics in computer games and 3-D applications. But in the late 2000s, the Santa Clara, Calif.-based company began to develop GPUs with multiple processing cores and introduced the CUDA parallel programming platform, which allowed those chips to run high-performance computing (HPC) workloads faster than CPUs by breaking them down into smaller tasks and processing those tasks simultaneously.
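CUDA itself is programmed in C and C++, but the core idea, splitting one large job into thousands of small tasks that run simultaneously, can be sketched from Python with PyTorch, which dispatches CUDA kernels under the hood. The comparison below assumes a CUDA-capable machine:

```python
import time
import torch

n = 10_000_000
a, b = torch.randn(n), torch.randn(n)

# CPU: the elementwise multiply runs on a handful of cores.
t0 = time.perf_counter()
c_cpu = a * b
cpu_time = time.perf_counter() - t0

# GPU: the same multiply runs as a CUDA kernel that splits the array
# across thousands of threads executing simultaneously.
a_gpu, b_gpu = a.cuda(), b.cuda()
torch.cuda.synchronize()
t0 = time.perf_counter()
c_gpu = a_gpu * b_gpu
torch.cuda.synchronize()        # kernels launch asynchronously; wait to time them
gpu_time = time.perf_counter() - t0

print(f"CPU: {cpu_time:.4f}s   GPU: {gpu_time:.4f}s")
```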
In the 16 years since its launch, CUDA has dominated the landscape of software that benefits from accelerated computing, which has made Nvidia GPUs the top choice for such workloads.
Over the past several years, Nvidia has used that foundation to evolve from a component vendor to a “full-stack computing company” that provides the critical hardware and software components for accelerated workloads like AI.
This new framing is best represented by Nvidia’s DGX platform, which consists of servers, workstations and, starting this year, a cloud service that tightly integrates GPUs and other hardware components with a growing suite of Nvidia software to develop and manage AI applications.
For many of Nvidia’s top channel partners, DGX systems have become one of the main ways these solution providers fulfill the AI infrastructure needs of customers. Nvidia also steers partners to sell GPU systems it has certified from vendors like Hewlett Packard Enterprise and Dell Technologies.
To Brett Newman, an executive at Plymouth, Mass.-based Microway, selling GPU-equipped systems can be lucrative because they carry a much higher average selling price than standard servers. But what makes the DGX systems even more appealing for the HPC systems integrator is that they are pre-configured and the software is highly optimized.
This means that Microway doesn’t have to spend time sourcing components, testing them for compatibility and dealing with integration challenges. It also means less time is spent on the software side. As a result, DGX systems can have better margins than white-box GPU systems.
“One of the blessings of the DGX systems is that they come with a certain level of hardware and solution-style integration. Yes, we have to deploy the software stack on top of it. But the time required in doing the deployment of the software stack is less time than is required on a vanilla GPU-accelerated cluster,” said Newman, who is Microway’s vice president of marketing and customer engagement for HPC and AI.
Selling white-box GPU systems can come with its own margin benefits too if Microway can source components efficiently.
“Both are good and healthy for companies like us,” Newman said.
Either way, Microway’s investment in Nvidia’s DGX systems has paid off, accounting for around one-third of its annual revenue since 2020, four years after the systems integrator first started selling the systems.
“AI is a smaller base of our business, but it has this explosive growth of 50 percent or 100 percent annually and even stronger in those first days when DGX started to debut,” Newman said.
Microway has grown its AI business not just with Nvidia’s hardware but with its software too. The GPU designer’s growing suite now includes libraries, software development kits, toolkits, containers, and orchestration and management platforms.
This means there is a lot for customers to navigate. For Microway, this translates into training services revenue, though Newman said making money isn’t the goal.
“We don’t treat it necessarily as the area where we want to make a huge profit center. We treat it as how do we do the right thing for the customer and their deployments and ensure they get the best value out of what they’re buying?” Newman said.
Beyond DGX and other GPU systems, Microway also has an opportunity to make money by consulting on whatever else a customer may need to achieve its AI goals, which can bring other sources of compensation, such as reselling additional software.
“That’s been value that helps us differentiate ourselves,” he said.
While Nvidia has dominated the AI computing space with its GPUs for years, the chip designer is now facing challenges on multiple fronts, from large rivals like Intel and AMD to cloud service providers like AWS that are designing their own chips. Even newer generations of CPUs, including Intel’s fourth-generation Xeon Scalable chips, are starting to come with built-in AI capabilities.
“If you look at the last generation of CPUs, [Intel] added [Advanced Matrix Extensions] that make them useful for training. They’re not as great of a training device as an Nvidia GPU. However, they’re always there in the deployment that you’re buying, so all of a sudden you can get a percentage of an Nvidia GPU worth of training with very little extra effort,” Newman said.
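Newman’s point can be checked directly on a Linux system. The sketch below, a rough illustration rather than a benchmark, looks for the AMX flags the kernel exposes on fourth-generation Xeons and runs a bfloat16 matrix multiply, the kind of operation AMX accelerates:

```python
import torch

# On Linux, fourth-gen Xeons expose AMX support via these cpuinfo flags.
cpuinfo = open("/proc/cpuinfo").read()
print("AMX present:", any(f in cpuinfo for f in ("amx_bf16", "amx_tile", "amx_int8")))

# bfloat16 matrix multiplies are the kind of work AMX accelerates. PyTorch
# routes them through oneDNN, which can use AMX when the CPU supports it.
x = torch.randn(1024, 1024)
w = torch.randn(1024, 1024)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    y = x @ w
print(y.dtype)  # torch.bfloat16
```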
From App Maker To Systems Integrator To AWS Rival
In the realm of AI-focused systems integrators, none has had a journey quite like Lambda’s.
Founded in 2012, the San Francisco-based startup spent its first few years developing AI software with an initial focus on facial recognition.
But Lambda started down a different path when it released an AI-based image editor app called Dreamscope. The smartphone app got millions of downloads, but running all that GPU computing in the cloud was getting expensive.
“What we realized was we were paying AWS almost $60,000 a month in our cloud compute costs for it,” said Mitesh Agarwal, Lambda’s COO.
So Lambda’s team decided to build its own GPU cluster. Assembling the systems cost only around two months’ worth of its AWS bills, roughly $120,000, and saved the company significant money from then on.
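Taking the quoted figures at face value, the arithmetic behind the decision is short:

```python
# Back-of-envelope payback on the figures quoted above.
monthly_cloud_bill = 60_000                 # dollars per month paid to AWS
cluster_cost = 2 * monthly_cloud_bill       # "around two months of AWS bills"

payback_months = cluster_cost / monthly_cloud_bill
print(f"cluster: ${cluster_cost:,}, pays for itself in {payback_months:.0f} months")
# Every month after that is roughly $60,000 not spent on cloud compute.
```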
This led to a realization: There was a growing number of deep learning research teams just like Lambda that could benefit from having their own on-premises GPU systems, so the company decided to pivot and start a systems integration business.
But as Lambda started selling GPU systems, the company noticed a common issue among customers. It was difficult to maintain all the necessary software components.
“If you upgraded CUDA, your PyTorch would break. Then if you upgraded PyTorch, some other dependencies would break. It was just a nightmare,” Agarwal said.
This prompted Lambda to create Lambda Stack, a free repository of open-source AI software that installs all the latest packages and manages their dependencies with a single Linux command. The repository’s inclusion in every GPU system gave Lambda a reputation for making products that are easy to use.
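A rough sketch of the kind of version agreement such a stack maintains: the check below prints the CUDA runtime the installed PyTorch build was compiled against and exercises one small GPU operation, which is typically where a mismatch among driver, CUDA and framework surfaces:

```python
import torch

# The breakage Agarwal describes is usually a mismatch among three layers:
# the NVIDIA driver, the CUDA runtime PyTorch was built against, and the
# PyTorch build itself.
print("PyTorch version:    ", torch.__version__)
print("Built against CUDA: ", torch.version.cuda)
print("GPU usable:         ", torch.cuda.is_available())

if torch.cuda.is_available():
    # One tiny end-to-end op: a driver/CUDA/PyTorch mismatch typically
    # surfaces right here.
    print(torch.randn(2, 2, device="cuda").sum().item())
```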
“It just really helped make us stand out as a niche product,” Agarwal said.
Soon enough, Lambda was racking up big names as customers: Apple, Amazon, Microsoft and Sony, to name a few. Business was boosted by moves to sell clusters of GPU systems and to partner with ISVs for added value. As a result, Lambda’s system revenue grew 60-fold between 2017 and 2022.
While Lambda’s system business was profitable, the company had been working on a more ambitious project: a GPU cloud service. After initially building out a service using its own cash, the company went into expansion mode in 2021 and started raising tens of millions of dollars from venture capital firms to compete with AWS and other cloud providers on price.
Now that the generative AI craze has kicked into full gear, Agarwal said Lambda has been struggling to keep up with demand for cloud instances powered by Nvidia’s A100 and H100 GPUs due to a broader shortage of components in the industry.
“I think there is going to be massive growth, especially within the AI infrastructure offering layer. I think everyone today is underestimating the amount of compute needed,” he said.
Building The Teams To Seize The Services Dream
Mark III’s Lin said there was no grand vision behind the company’s decision to start an AI practice. Instead, it started with a young employee who had tinkered with a project over a weekend.
“What happened is a 23-year-old developer had walked in one day and built a computer vision model in 2015 using TensorFlow and was like, ‘Is this pretty cool?’ We’re like, ‘Yeah, it’s pretty cool,’” said Lin.
From there, Mark III knew it had to start an AI practice, and that individual act of creation went on to become a core tenet— build something every day—which Lin said has resulted in high revenue growth, largely driven by health-care and life sciences customers.
This builder mentality means that the company’s AI team—which now includes systems engineers, developers, DevOps professionals and data scientists—is intimately familiar with all the software and hardware underpinnings to make AI applications work.
“The reason why we’re successful essentially is that since we built every day for the last seven, eight years, we really understand how these stacks are constructed,” he said.
For Mark III and other solution providers, the hiring of specialists who know their way around AI software and hardware has been key to opening new services opportunities.
The company’s biggest profit centers are rollout services, which involve setting up systems and on-boarding users, and what it calls “co-pilot” services, which give customers direct access to Mark III’s team in case they need assistance with the software.
“There are thousands of combinations in ways you can build this right, and it can break in lots and lots and lots of different ways,” Lin said.
What has also made Mark III stand out are the hackathons and education sessions held by the company to help customers understand what they can achieve with AI systems.
“Hackathons are great because we can assemble self-forming teams from all across that [customer’s] community, whether it be a large enterprise, a large university, a large academic medical center, and work specifically together on different challenges,” Lin said.
For Insight Enterprises, acquisitions have been one way the Chandler, Ariz.-based solution provider powerhouse has been able to build teams with AI and data expertise, according to Carmen Taglienti, a principal cloud and AI architect at the company. Acquisitions that have strengthened Insight’s talent in this area include PCM, BlueMetal and Cardinal Solutions.
This has helped Insight build a fast-growing AI business, which includes selling and integrating systems like Nvidia’s DGX platform with software as well as building custom solutions.
While Nvidia has been a key partner for Insight, the company has also relied on partnerships with specialist ISVs to win large AI customer deals in areas like retail.
“It really helps to simplify this problem of how am I going to effectively leverage AI techniques and the models that allow us to do something practical with it,” Taglienti said.
But the tradeoff in using ISV solutions to accelerate deployments is that margins for reselling software are usually on the lower end. On the other hand, Insight can make a much greater profit on developing custom solutions.
This is why Insight has made custom AI solutions a high priority. But to get the work done, the company has not only built out a team of data scientists, it has also developed a team of business domain experts who can work with customers to understand what outcomes they’re looking for.
“We really need to understand how to measure the effectiveness, and that’s where the true impact comes,” Taglienti said.
As generative AI fuels a new wave of demand for services, Hasan, the Quantiphi co-founder, expects this category of disruptive technologies to have a major influence on how people work before long, even if the initial goals are modest.
“I think the belief is that it will help organizations move forward,” he said. “It will revolutionize the knowledge work category, especially starting with places where knowledge work is being done within the guardrails of a very tight set of specifications.”
CRN’s Mark Haranas contributed to this story.