What's Changed in HPEC? Part Two

29 July 2015
HPEC Applications

In my previous post about what’s happened in the three years since we opened our HPEC Center of Excellence in Billerica, MA, I looked at where we are in terms of applications and architectures. This time, I’ll be looking briefly at the implications of Intel’s Broadwell announcement (“briefly,” because I discussed it in more detail in a previous post) and at what’s going on in the world of keeping all this stuff cool.

So: Intel’s new Broadwell processors, and what they might mean for embedded computing in the military/defense sphere. For many applications that want COTS technology, the Xeon-Ds are worthy of further examination. The new Xeon-Ds bring features from the server class of Intel devices, such as more than four cores, larger memory capacity and quality of service control on the caches. They’re also available in a BGA package, more like what we’re used to seeing from processors designed for mobile applications. The integration of PCI Express, non-transparent bridging and 10GbE into the System on Chip is an indication of things to come.
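Cache quality of service is one of the more interesting of those features. As a minimal sketch, assuming a Linux system whose kernel exposes Intel’s cache allocation through the standard /proc/cpuinfo flags and resctrl interface (nothing here is specific to any particular board or to Xeon-D itself), you might check for it like this:

```python
# Minimal sketch: check for L3 cache allocation (cache QoS) support on Linux.
# Assumes the standard /proc/cpuinfo flags and the kernel's resctrl interface.
import os

def cpu_flags():
    """Return the CPU feature flags advertised in /proc/cpuinfo."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("L3 cache allocation (cat_l3):", "cat_l3" in flags)

# If the resctrl filesystem is mounted, cache partitions are configured here.
resctrl = "/sys/fs/resctrl"
if os.path.isdir(resctrl):
    with open(os.path.join(resctrl, "schemata")) as f:
        print("default schemata:", f.read().strip())
else:
    print("resctrl not mounted (try: mount -t resctrl resctrl /sys/fs/resctrl)")
```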

Mobile in decline?

You might think that this would lead to a decline in the use of the mobile class of CPU that we’ve become used to. Don’t forget, though: those mobile processors bring with them integrated graphics, lower clock rates and lower die temperatures, which for many applications are essential attributes. The lack of RDMA capability on the integrated Ethernet can also be a drawback for applications with a lot of data transfers. My guess is that mobile and Xeon-D processors will happily co-exist, with each finding a home in appropriate applications.

That brings us on to the fascinating topic of heat. As everyone knows, more powerful processors, such as the ones that are pretty much a prerequisite in HPEC, bring with them the downside of generating more heat. And, of course, heat is the enemy of the stability and reliability that are an absolute requirement of HPEC applications.

What that means is that thermal technologies will continue to assume ever greater significance. The fact is that the power dissipated by a device tends to stay the same, but the surface area goes down with each die shrink. So, thermal density goes up. Moving that heat out to the environment effectively is key to maximizing clock rates while keeping die temperatures down to increase reliability.
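To put rough numbers on that, here’s a minimal sketch of the arithmetic. The wattage and die areas are assumptions for illustration, not figures for any particular device:

```python
# Illustrative arithmetic only: the wattage and die areas below are assumed values.
def power_density(watts, die_area_mm2):
    """Heat flux in W/mm^2 for a given dissipation and die area."""
    return watts / die_area_mm2

power_w = 45.0        # assume the device dissipates the same 45 W before and after
old_die_mm2 = 160.0   # assumed die area on the older process
new_die_mm2 = 100.0   # assumed die area after the shrink

old = power_density(power_w, old_die_mm2)
new = power_density(power_w, new_die_mm2)
print(f"before shrink: {old:.2f} W/mm^2")
print(f"after shrink:  {new:.2f} W/mm^2 ({new / old:.0%} of the original density)")
```

Same watts, smaller area: the heat flux the cooling system has to carry away goes up with every shrink.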

Extensive research

Here at GE’s Intelligent Platforms business, we’re lucky to be able to take advantage of the extensive research being done by our colleagues at GE’s renowned Global Research Center. Military and aerospace applications aren’t the only ones that will need advanced cooling as we go forward: the same is true for many of GE’s other businesses too.

Rather than re-hash that work here, I’d encourage you to look at our video on the future of thermal management. You might also want to revisit my earlier blog posts on the subject, "The Heat Is On" and "Keep Cool and Carry On." There’s also a really interesting article by Brian Hoden.

Suffice it to say that many of the cooling technologies described in those links have, until now, been under development. However, look out for some exciting announcements on cooling embedded computing systems from GE before the end of the year. That’s as much as I can say right now, but the impact of those announcements in terms of what’s possible with HPEC systems will be significant.

So what else is new in the world of HPEC? One of the challenges we’re seeing is that, as interconnect fabric speeds head up towards 100G, signal integrity analysis is assuming even greater importance. It’s no simple task to route 10GHz signals through several feet of copper and several connectors and get them to their destination in the same shape as they left their departure point.
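For a feel for why, here’s a back-of-the-envelope channel loss budget. Every figure in it (loss per inch, loss per connector, the budget itself) is an assumption for illustration, not a characterized value for any real backplane or connector:

```python
# Back-of-the-envelope insertion-loss budget for a high-speed serial channel.
# All figures below are assumed, illustrative values, not measured data.
LOSS_DB_PER_INCH = 0.8   # assumed trace loss at the frequency of interest
LOSS_DB_PER_CONN = 1.5   # assumed loss per connector transition
BUDGET_DB = 25.0         # assumed total loss the receiver can equalize away

def channel_loss(trace_inches, connectors):
    """Total end-to-end insertion loss in dB for a simple lossy channel."""
    return trace_inches * LOSS_DB_PER_INCH + connectors * LOSS_DB_PER_CONN

# Example: two feet of copper and four connector transitions
loss = channel_loss(trace_inches=24, connectors=4)
verdict = "within" if loss <= BUDGET_DB else "over"
print(f"estimated channel loss: {loss:.1f} dB ({verdict} the assumed {BUDGET_DB:.0f} dB budget)")
```

Even with generous assumptions, a couple of feet of trace plus a few connectors eats most or all of the budget, which is why every via, stub and connector now has to be modelled and measured.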

Longer term

What does the longer-term future hold for HPEC? Like everyone else in the technology industry, we at GE very much rely on silicon developers and their ability to continue to prove the truth of Moore’s Law. There’s an interesting article that says Intel may be struggling, at least for now. I’m sure it’s just a blip. On the other hand, IBM just announced a process that quadruples gate density at the 7nm node using silicon germanium.
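As a sanity check on that headline figure, the ideal-scaling arithmetic looks like this (taking 14nm as the comparison point, which is my assumption, and treating node names as literal linear dimensions, which modern nodes only loosely are):

```python
# First approximation: density scales with the inverse square of the linear dimension.
# Node names are closer to marketing labels than physical gate lengths, so treat as a rough check.
old_node_nm = 14.0
new_node_nm = 7.0
density_gain = (old_node_nm / new_node_nm) ** 2
print(f"ideal density gain from {old_node_nm:.0f} nm to {new_node_nm:.0f} nm: {density_gain:.0f}x")
```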

We’re also getting to the point where copper just won’t hack it any more in terms of the speed and reliability we need on the backplane. Here, you can expect to see a migration to optical, although innovations like using 10GBASE-T instead of BX are helping us out in the interim. By spreading the data across four pairs and using multi-level symbols, 10GBASE-T significantly reduces the analog signalling bandwidth required on each pair, but at the cost of more power in the PHY.
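As a rough comparison (nominal, simplified figures; not a full PHY analysis), here’s a sketch of why spreading the data over four pairs with multi-level symbols demands so much less analog bandwidth per pair than a single serial NRZ lane:

```python
# Rough comparison of per-lane signalling demands; nominal, simplified figures.
# Real PHYs add coding overhead, echo cancellation and equalization not modelled here.

# Single serial NRZ lane (10 Gb/s class SerDes, ~10.3125 Gbaud with 64b/66b coding):
serial_baud = 10.3125e9
serial_nyquist_ghz = serial_baud / 2 / 1e9

# 10GBASE-T style: 10 Gb/s spread across 4 pairs with multi-level symbols
pairs = 4
bits_per_symbol = 3.125                         # approximate payload bits per symbol after coding
symbols_per_s = 10e9 / pairs / bits_per_symbol  # symbol rate per pair (~800 Msymbols/s)
t_nyquist_mhz = symbols_per_s / 2 / 1e6

print(f"serial NRZ lane Nyquist: ~{serial_nyquist_ghz:.2f} GHz")
print(f"10GBASE-T pair Nyquist:  ~{t_nyquist_mhz:.0f} MHz per pair")
```

Roughly an order of magnitude less bandwidth per pair is what makes the copper manageable; the trade-off is the extra digital signal processing, and therefore power, needed to recover the multi-level signal at the far end.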

And there you have it. HPEC continues to be one of the most exciting fields to be in when it comes to embedded computing—and, here at GE, we’ll be continuing to watch what goes on in the data centers of the likes of Amazon and Google, and look to bring those developments to the rugged space. The GE Rugged space.

 

Peter Thompson

Peter Thompson is Vice President, Product Management at Abaco Systems. He first started working on High Performance Embedded Computing systems when a 1 MFLOP machine was enough to give him a hernia while carrying it from the parking lot to a customer’s lab. He is now very happy to have 27,000 times more compute power in his phone, which weighs considerably less.