The Top 5 Reasons Everyone is Talking About HPEC
1. It’s new.
But not really. The term HPEC (High Performance Embedded Computing) was in use at least 15 years ago—maybe longer. Its implementation has changed a lot, and the problems to be solved may have gotten more complex, but the need to use parallel processing of some kind to solve a problem within its time constraints is not new.
2. It’s different.
Again, not really. The latest HPEC systems are designed using open system architecture (OSA) tenets, and lean heavily on what is going on in the world of supercomputing.
A whopping 82% of the current TOP500 Supercomputers use Ethernet or InfiniBand—exactly the same interconnects GE is using to build HPEC systems. Over 90% of them use Intel or AMD processors. A sizable number use GPGPU accelerators. The systems GE is building for customers see a convergence of what were once distinct product lines—single board computers (SBCs) and multiprocessors or digital signal processors. They now use the same processors, interconnects and architectures.
3. It’s hard.
Not so much—at least if you compare today’s systems with those of just a few years ago. The first machines that I worked on that could be classified as HPEC were attached array processors built from bit-slice devices. Try programming those puppies. Fast forward a few years to when people were building radar processors with hundreds to thousands of SHARC processors. Sure, we were in the world of C and assembly code now, but I’m sure I’m not the only one who can blame at least some of their grey hairs on the late nights spent figuring out how to fit a STAP algorithm into 80K words of instruction memory. Remember code overlays, anyone? Roll on i860 and PPC and things got way easier. Now, however, we have the luxury of familiar operating systems, MPI and VSIPL libraries. Want to use code you developed on your cluster of PCs? Go ahead. The latest high-level languages? Have at it.
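The leap the paragraph describes—from squeezing STAP into 80K words of instruction memory to reusing cluster code—comes down to the portable scatter/compute/gather pattern that libraries like MPI standardize. As a rough sketch of that pattern only, here is a toy example; it uses a Python thread pool as a stand-in for MPI ranks, and every function and value in it is hypothetical, not from any real HPEC system.

```python
# Toy scatter/compute/gather sketch of the data-parallel pattern an
# MPI program would express; a thread pool stands in for processing
# nodes purely for illustration (all names here are hypothetical).
from concurrent.futures import ThreadPoolExecutor

def magnitude_squared(chunk):
    # Per-node kernel: e.g. the power of complex radar samples.
    return [re * re + im * im for re, im in chunk]

def process(samples, workers=4):
    # "Scatter": split the sample stream into one chunk per worker.
    chunks = [samples[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # "Compute" on each chunk in parallel, then "gather" results.
        parts = list(pool.map(magnitude_squared, chunks))
    return [value for part in parts for value in part]
```

The point of the sketch is that the decomposition logic is expressed once, at a high level, and the same code runs unchanged on a laptop or a rack of nodes—exactly the portability the C-and-overlays era lacked.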
4. It reduces your SWaP.
Yes it can. Being able to pack more compute power into a system the size of a kid’s shoebox than was available just five years ago in a system 20 times the size, 10 times the weight and around 10 times the power consumption really matters. It enables new applications and enhances the performance of older ones.
5. It is difficult to define.
That depends on who you ask. To me, HPEC is not so much about what products are used to build a system as it is about the complexity of the problem being solved. A mission computer containing one or two SBCs and some interface cards probably isn’t HPEC. A radar processing system that has many processors and requires some methodology to map the application across these nodes most likely is HPEC. How do you decompose the problem? How do you move the data around the system? How do you maximize your achieved GFLOPS per watt?
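The last of those questions—achieved GFLOPS per watt—is just sustained floating-point throughput divided by power draw, but it is worth writing down because "achieved" (measured on the real application) rather than peak (datasheet) is what distinguishes an honest HPEC comparison. A minimal sketch, with purely illustrative numbers:

```python
# Back-of-the-envelope efficiency metric for an HPEC node.
# All figures below are hypothetical, not from any real system.
def gflops_per_watt(flops_per_second, watts):
    # Sustained floating-point rate (FLOP/s) -> GFLOPS, per watt drawn.
    return (flops_per_second / 1e9) / watts

def achieved_fraction(achieved_flops, peak_flops):
    # How much of the datasheet peak the application actually sustains.
    return achieved_flops / peak_flops

# A hypothetical node sustaining 400 GFLOPS of a 1 TFLOPS peak at 50 W:
efficiency = gflops_per_watt(400e9, 50.0)      # GFLOPS per watt
utilization = achieved_fraction(400e9, 1e12)   # fraction of peak
```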
Debate is healthy, and discussion about what HPEC is will continue, but at the end of the day it’s all about understanding our customers’ problems and pain points, and crafting solutions that offer the best blend of performance, SWaP, total cost of ownership and open standards adherence.