With An 80-Core Chip On The Way, Software Needs To Start Changing
Intel is about a month away from unveiling the specs for a research prototype of an 80-core chip it has developed. That's right: not an 8-core chip, but an 80-core chip. The microprocessor manufacturer has jumped way ahead of the expected progression from dual-core to quad-core to 8-core, and so on, to delve into different ways to make something as complicated as an 80-core chip actually work.
Researchers have built the prototype to study how best to make that many cores communicate with each other. They're also studying new designs for cores and new architectural techniques, according to Manny Vara, a technology strategist with Intel's R&D labs. The chip is just for research purposes and lacks some necessary functionality at this point.
To get that many cores on a single chip, while keeping the chip at nearly the same size, Intel's researchers made the cores themselves less complex.
"If you look at it, by the time you put dozens of cores on a chip, they won't be the same kind that you can put three or four on a chip today," says Vara. "The new ones will be much simpler. You break the core's tasks into pieces and each task can be assigned to a core. Even if the cores are simpler and slower, you have a lot more of them, so you have more performance."
The big question right now, according to analysts and even some inside Intel, is how -- and how soon -- the software industry will step up and produce applications that can take advantage of all those cores. With relatively few applications built to take advantage of even dual-core processors, and even fewer for quad-core, there's going to be a huge learning curve for developers to take on multi-threading in such a big way.
"It's a really interesting time," says Jerry Bautista, director of technology management in Intel's microprocessor research lab. "Here we have a case where the hardware can run ahead of what the software and programming tools are capable of at the moment. This hasn't happened before."
Bautista says it comes down to being able to run tasks in parallel. With so many cores working together on a single chip, different sets of cores can each take on different tasks. Some cores could handle artificial intelligence, for example, while other cores handle searches and still others work on complex computations.
The best software to run on that kind of system would automatically break its work up into tasks for the separate cores or core groups to work on, boosting the application's speed and performance.
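As a rough illustration of that kind of task decomposition (a minimal sketch, not Intel's actual programming model; the workload names run_ai, run_search, and run_simulation are hypothetical stand-ins), independent subtasks can be expressed as futures that the runtime schedules across whatever cores are available:

```cpp
// Sketch only: independent subtasks launched as futures, which the
// runtime may run at the same time on different cores.
#include <future>
#include <iostream>

int run_ai()         { return 1; }  // stand-in for an AI workload
int run_search()     { return 2; }  // stand-in for a search workload
int run_simulation() { return 3; }  // stand-in for a heavy computation

int main() {
    // Launch each task asynchronously; on a many-core chip these can
    // execute simultaneously on separate cores.
    auto ai     = std::async(std::launch::async, run_ai);
    auto search = std::async(std::launch::async, run_search);
    auto sim    = std::async(std::launch::async, run_simulation);

    // Collect the results once every task has finished.
    std::cout << ai.get() + search.get() + sim.get() << '\n';
}
```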
"Say you're doing a home improvement project all by yourself," says Bautista. "You approach it one way, but if you have a group of people working with you, you'd do things differently right from the start. ...Basically, we need to approach the problems right from the start as parallel tasks. The programmer has to have that in mind right from the beginning."
Bautista is echoed by several analysts when he says that there hasn't been much of a push to exploit parallelism in the software community, largely because it's just not a simple thing to do.
"That has to change," he says.
And it needs to change really soon, says Dan Olds, a principal analyst with the Gabriel Consulting Group, Inc. "Every software maker out there -- and this goes not just for business software but for home software, games, everybody -- has got to learn how to program parallel code. They've got to be able to do that to remain competitive. We've got multiple-core chips at the consumer level. They'd better get their butts in gear and make sure they can at least program for quad."
Olds says that once programmers learn how to build multi-threading and parallelism into their code, building software for dual-cores or 80-cores will be a fairly simple transition.
"It's a skill set they need to have. It's not easy to get there, but once you have an understanding of how to do it, it's not impossible," adds Olds. "It might be coming as a bit of a shock that it's happening this quickly When they're coding for multiple cores, keep in mind that we're not stopping at four. We may not see a 100, but it's not a bad idea to remove any roadblocks in the code that would keep them from going that high."
Rob Enderle, president and principal analyst of the Enderle Group, says making that switch to a multi-threaded and parallel mindset is going to ultimately give users a much higher level of flexibility in terms of what their systems can do.
"You can dynamically shift loads between the cores as required so the processor can scale up and scale down to alter behavior," says Enderle, who calls the jump to 80 cores revolutionary. "It also gives you ultimate power efficiency. It will throttle down really well and it will be highly flexible. The cores need to be able to do different kinds of tasks. You can shift the entire thing to do something if you need to so you get a high-degree of performance that you might not otherwise get."