What Intel Wants Partners To Know About 10nm, Optane, And Its Xeon Roadmap
U.S. Channel Chief Kimrey Talks Optane, Xeon, 10nm
Intel wants channel partners to get behind the company's new Optane memory technology, embrace artificial intelligence and have confidence in its ability to deliver top-performing products in the face of its 10-nanometer chip delays.
That was the message from Jason Kimrey, Intel's U.S. channel chief, in an interview with CRN at The Channel Company's XChange 2018 event in San Antonio, Texas. In his XChange keynote that day, Kimrey said that CPUs alone can no longer handle the increasing amounts of data being produced, which is creating an "almost unlimited demand for compute" that requires a greater breadth of products.
[Related: AMD CTO: 'We Went All In' On 7nm CPUs]
"At the end of the day, people are trying to solve a problem for their customers or for themselves, and they want the best product or the best platform to do it, with a predictable price-performance and availability, and I think in the long run, we've proven that we can do that," he told CRN.
In the interview, Kimrey also said he expects hardware-level security features to be a selling point for Intel's upcoming Xeon Cascade Lake server CPU. The processor is one of three Xeon products coming out over the next two years, a roadmap the channel chief also addressed.
What follows is an interview transcript that was lightly edited for clarity.
How are you creating demand for Optane memory?
At the end of the day, people have to see the business value of anything. Optane's no different, and it's a new memory architecture, so a lot of our focus today is working with the ISVs whose software applications benefit the most from Optane. That's why you see us making big investments with SAP, and they've been very vocal about the adoption of Optane and now [Optane DC Persistent Memory] as a key differentiator for SAP Hana. As the product becomes available in the market and the software releases come to market, we want to continue to work with those deployment partners who can bring those applications to bear and help them do that as fast as possible. It's a very similar situation with VMware. If you look at vSAN, it really benefits from this new memory architecture and reduces the memory bottleneck that challenges a lot of these applications. We're working very closely with leading VMware deployment partners, and we want to continue to do that to help them understand the benefits and really start to bring that to market.
The good news is we're starting to see some early successes. I got a note last week that one of the world's largest banks is going to move toward Optane memory for some of their [data center] applications. On the client side, we've seen some really large commitments to purchase Optane. It's still relatively early, but the momentum is there, and what I find is the business need is there and the value that it brings is undeniable. The more applications that come on board and showcase that, the better.
What are the verticals and use cases that you expect to drive Optane adoption?
Certainly, anywhere you have high throughput, high data requirements. Certainly, financial services and healthcare (some of their artificial intelligence workloads). Those are the two where I've seen some very large, early — not just interest but commitments to buy. In cloud, [we've announced deployments with] Google and Tencent.
Are you deploying any particular resources for training partners? Things that can help them grasp the difference Optane makes?
We are, and there are some great tools — some are on the CRN site, and some we're just getting ready to launch through our [Intel Technology Providers] program — not only to provide the technical information but more of why it matters and how you can actually benefit. Some of it is out there. Some great content and tools are coming.
Intel has three new Xeon CPUs coming out within the next two years: Cascade Lake later this year, Cooper Lake next year and then Ice Lake in 2020. How should partners prioritize which Xeon CPU they invest in?
Every customer is different, and it ultimately depends on what problem they're trying to solve and what application they're running. Every advancement will provide performance gains over prior generations, but … mileage varies depending on the workload that they're running. The key message there is that we're not slowing down at all, and I think we're continuing to evolve our roadmap to adapt to the way our customers are running their business and the new workloads they're running — AI, deep learning — all of these things are continuing to push the boundaries of compute, and we're not going to slow down.
Do you think that the new hardware-level security in Cascade Lake will be a selling point for partners?
I would think so. What has been proven is that the best way to remain secure is to remain consistent with patches and consistently stay current. New threats and new issues are discovered all the time, and we have placed the highest priority on securing at the silicon level as much as we can. I think it's important to constantly stay aware of what we're doing in security, as well as what the broader ecosystem is doing, and just continue to keep your infrastructure current.
Do you plan to have any specific messaging that talks about security benefits of Cascade Lake?
I don't know. We have historically talked about the security benefits of our platform. I just don't know one way or the other.
Do you think Cascade Lake's new hardware security capabilities will create a new refresh opportunity?
I don't know. I think, again, the most important thing is for customers to stay current with both their software and their hardware platforms. We're not going to slow down in terms of the way we come to market with those, but … what I can say is the data center business is strong. The [second] quarter — we just put up 26 percent — was a record for us, so the data center business continues to be strong.
How has partner reception been to Meltdown, Spectre and the other hardware vulnerabilities that have been disclosed by Intel and industry partners this year?
What I've tried to do since taking the role is be as transparent as we can. One of the things that we did a week after the Spectre-Meltdown issue [emerged] was an open call to all our [Intel Technology Provider] members — anyone that wanted to join — to say, "here's exactly what we know," and we've continued to be transparent with the information that we have. These are industry-wide issues, and I think what the announcement last week [about the Foreshadow exploit showed is that] they follow a predictable path: they come out, and then there are variations of the security vulnerabilities, and last week's was an example of that. We try to work very closely with the research community and universities to tackle this as an industry and then to make that information available to our partners to use as they see fit. My goal has been to be transparent. I think the ones that have taken advantage of that value it and have told us how much they value it. My goal is to continue to be transparent as we get information and to use channels like our ITP program to provide people with the information they need to help their customers.
Are you seeing any traction with channel partners around reprogrammable FPGAs?
Where I'm seeing it, it's definitely growing. There are certain segments where I'm already seeing it, where the collaboration and the interest is pretty high. In the aerospace and government market, there's a lot of interest in the role that a Xeon and FPGA combination can play, and I think partners that serve those markets are working closely to help address that, and that's both with the OEMs and ODMs that serve that market and the solution providers that help serve it. And we'll actually be talking about what we're doing at the Intel Federal Summit in October in D.C. for partners that serve that community. That will be one of the big topics that we'll be talking about there.
Are you deploying any specific resources for FPGA to partners in terms of either incentives or training?
Nothing specific. I would just say it's very account-specific and driven by where there's a need and an opportunity. When we look at what's happening in AI and deep learning, the opportunities are significant. I expect it to grow, and my own collaboration with my counterparts from the [Programmable Solutions Group] just goes up every single day.
With Intel's multiple delays of its 10-nanometer processors, how should partners be thinking about that issue? Are you confident the products coming out in the next year will continue to provide the performance gains that they need?
I certainly take the long view. I believe Intel over the years has delivered very consistent performance and price-performance benefits and will continue to do so. We already talked about the roadmap we have laid [out] over the next couple of years, and I just think we'll continue to innovate and meet the requirements of the customers. Competition is good. The good news is the demand for compute: it's not unlimited, but there's just so much demand for compute right now that our goal is to continue to be able to fulfill that with the right products at the right time. I think we've got that.
At the end of the day, people are trying to solve a problem for their customers or for themselves, and they want the best product or the best platform to do it, with a predictable price-performance and availability, and I think in the long run, we've proven that we can do that.
What I've taken away so far in my coverage of Intel is that Intel is not just a CPU company anymore. It seems like it wants to become more of a platform company. You have the CPU, you have the Optane, you have the FPGAs [and many more products] — is that the argument Intel is making now?
Yeah. We're a data company. And we started that transition back in 2012. I think the reality is, if you look at the massive amounts of data — a gig and a half of data generated by a person a day — all of these crazy statistics, we can't process the amount of data through CPU advances alone. For us and, I think, for the industry to be able to handle the continued data onslaught that we have, we have to make changes, we have to continue to advance the CPU, we have to keep making investments in memory, we have to keep making investments in FPGA, we have to make investments in 5G, because they all come together in order to be able to handle this data influx. So that's our message. We're going to continue to innovate in all of those areas and then bring them together where it makes sense and where we can. If all of the data predictions are true, and there's no reason to think they're not, we need a higher bandwidth network than what's available today to be able to handle that, and that's why 5G is so important.
In July, Intel announced plans to acquire eASIC, a maker of specialized chips that are cheaper, more powerful and more efficient than FPGAs when it comes to more repeatable tasks. Have you started talking about what partner plans are for eASIC, or is it too early?
No, it's too early. We haven't engaged them on a platform level on that one. In AI, we did make the announcement on Vertex. I think the key message there is we continue to make investments in AI and build out that portfolio. [With AI accounting for $1 billion in Xeon sales last year,] our message around AI is Xeon is a great platform for AI. People are doing it today, and right now people just need to start getting on board with AI and testing applications and getting used to it, and Xeon's the best platform for that.
In May, Intel announced the AI Builders Program. Is that something that you point to partners at all?
It's definitely more of a program for people building solutions and building systems for AI. We are, at the same time, looking at programs and ways to support our solution providers and resellers as they come on board. Some really exciting things are in the works right now. We'll be talking more about that in the months to come, but with AI Builders — and many of our ITP partners today are system builders — we've got a suite of tools and programs available to help them move faster on running AI on Xeon.
Would this be something akin to IoT Market Ready Solutions but for AI?
It could be. I think that's probably one element. Again, I think there are a lot of customers that just need help getting there. Even with what we've seen with Market Ready Solutions, some of it is just that there's a level of learning that has to take place to be able to start moving in that direction. Some of it is just more education and talking about which key business cases are best solved and then what the solutions are to help them get there.
What are your priorities for the channel for the rest of the year?
It's to continue to provide programs and resources to help the channel grow. I've said it before: our partner strategy and our [overall] strategy are very symbiotic. As data continues to grow and compute requirements continue to rise, our partners can benefit from that. What we want to do is help them harness that and help customers solve their most pressing challenges with our technology.