There have been many world-changing technology innovations that were relatively invisible and slow to be adopted. They include:
- Transistors: Invented in 1947. A few decades later, they paved the way for calculators, computers, and mobile phones.
- Internet: Evolved from ARPANET, which came online in 1969. In the 70s, we had no social networks (e.g. Twitter or Facebook) or communication tools like Skype; many years later, we are swimming in them.
- Lithium-ion batteries: Invented in 1973 and much lighter than traditional lead-acid batteries. Years later, we finally have long-lasting, lightweight mobile phones and laptops, and long-range electric cars (go Tesla!!!).
- Non-volatile memory: Flash memory, invented in 1980, changed the way we store information by removing the need for constant power or backup batteries. It finally became mainstream in the last 10 years. Can you imagine an iPhone with a big spinning hard drive?
Graphics Processing Units (GPUs) are another of those amazing but quiet inventions that took time to fulfill their potential, but are now transforming what's possible. Early GPUs were proprietary ASICs built to do specialized processing, primarily for video games. Then along came ATI (later acquired by AMD) and NVIDIA to generalize video game processing. Gaming consoles (such as the PlayStation, Xbox and Nintendo 64) and 3D games have become ubiquitous over the last 20 years.
Quantifying needed processing power
So why not just use CPUs for video games? Unfortunately, CPUs can't handle the math and memory-access demands of 3D video games. Looking at the math:
- Modern CPU speed: 3 billion operations/second
- Screen size: 1920 x 1080 ≈ 2 million pixels
- Frames per second: 60
- Operations available per pixel per frame: 3B / (2M × 60) = 25
25 operations per pixel is hardly anything when you consider that calculating 3D positions, lighting, and reflections generally takes hundreds or thousands of operations per pixel.
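The back-of-the-envelope budget above can be reproduced in a few lines. This is a sketch in Python; the 3 GHz figure and the one-operation-per-cycle assumption are the same simplifications used in the calculation above.

```python
# Rough per-pixel operation budget for a CPU rendering 1080p at 60 fps.
# Assumes an idealized 3 GHz CPU retiring one operation per cycle.
cpu_ops_per_second = 3e9      # ~3 billion operations/second
pixels = 1920 * 1080          # ~2 million pixels per frame
fps = 60                      # target frame rate

ops_per_pixel = cpu_ops_per_second / (pixels * fps)
print(f"{ops_per_pixel:.1f} operations available per pixel per frame")
```

Using the exact pixel count (2,073,600 rather than the rounded 2 million) gives roughly 24 operations per pixel, which only strengthens the point.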
GPUs proved a major point: massive amounts of processing can be done in parallel on 2D and 3D data with very low CPU overhead. Unfortunately, early GPUs were designed only for video games. The science and engineering community drooled over this processing power, but had almost no way to use it.
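The shape of that parallelism is worth a quick illustration. The NumPy sketch below runs on the CPU, but it expresses a per-pixel operation once and applies it across an entire frame at a time, which is exactly the data-parallel pattern a GPU executes with thousands of simultaneous threads.

```python
import numpy as np

# A synthetic 1080p RGB frame with pixel values in [0, 1].
frame = np.random.default_rng(0).random((1080, 1920, 3))

# A simple per-pixel lighting tweak: scale brightness and clamp to [0, 1].
# One expression, applied uniformly to all ~2 million pixels.
lit = np.clip(frame * 1.5, 0.0, 1.0)
```

No loop over pixels appears anywhere; the same operation is broadcast across the whole frame, with no per-pixel CPU bookkeeping.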
Opening GPUs to the rest of the world
In 2007, GPUs were finally opened to general-purpose processing via programmable shaders and CUDA, later followed by OpenCL. The next step was to educate a wider community on how to use them, since they require a different programming paradigm. Over the last decade, GPUs have evolved rapidly: their GigaFLOPS per watt is generally much higher than that of CPUs, as with Abaco's innovative mCOM10-K1 module. Advances in GPU software have made GPUs mainstream in many fields. Signal and image processing have taken significant advantage of them. GPUs can now run highly complex neural networks, facilitating deep learning and enabling self-driving vehicles, contextual natural speech recognition, and many other remarkable capabilities that are beginning to surpass what humans can do. The military has traditionally used CPUs or custom solutions (ASICs, FPGAs), so adopting the GPU could prove to be a significant benefit.
To get a handle on how the world is just starting to recognize the value of GPUs, look at the stock market trend over the last year for NVIDIA or AMD. 2016 will probably go down in history as the year deep learning and AI took off. That would not have been possible without GPUs.
Abaco has long been a leader in the development of rugged GPU and GPGPU solutions, thanks in no small part to our unique relationship with NVIDIA, which sees us as the company's preferred provider of GPGPU products for deployment in harsh environments.
In Part 2 of this blog post, I’ll be looking at the latest development from Abaco that will make it even easier for our customers to harness the power and potential of GPU technology.