Going Far Beyond Graphics: See You at the GPU Event of the Year
It’s less than a month now until NVIDIA’s GPU Technology Conference (GTC) opens its doors, and the excitement is mounting. For us at GE, GTC is a “must-attend” event because everyone who’s anyone in the world of GPU technology will be there.
We work every day on building the Industrial Internet and Connected Battlefield one network-connected machine at a time. Advances in compute technologies are paving the way for smarter, more power-efficient devices with greater capacity for analytics and autonomy across an enormous range of applications.
From our perspective, GPU is a fundamental enabling technology that is transforming what’s possible—well beyond just graphics.
Many of the challenges faced by current commercial and military systems stem from their distributed nature, the ever-increasing fidelity of their sensor packages, and the limited communications bandwidth between each device's downlink and the network. Hosting more of the processing onboard the platform, performing what amounts to pre-downlink "smart compression" and metadata extraction, places greatly increased demands on embedded computing systems. To meet these demands, we work closely with NVIDIA to deploy their advanced, massively parallel graphics processing units (GPUs) into the field, especially into the most challenging environments, ones that face extremes of temperature, shock and vibration, where we apply the experience and expertise we're starting to call GE Rugged.
Nexus of performance and programmability
Adept at high-bandwidth, low-latency streaming and at algorithms heavy in floating-point math, NVIDIA GPUs represent the nexus of performance and programmability. Packing hundreds or thousands of cores per chip while consuming as little as 10 watts of power, they deliver unrivaled compute capacity for feature-rich, vibrant applications.
Meanwhile, NVIDIA’s general-purpose parallel software ecosystem, CUDA, makes GPU programming convenient. With GPU acceleration, many applications not only run in real time but also require a fraction of the development time and effort: the raw horsepower and flexibility of CUDA mean less time spent on hand optimization. Applications that ingest, analyze, fuse and compress vast amounts of incoming sensor data are ideal candidates for GPU acceleration.
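To give a concrete taste of that programming model, here is a minimal CUDA sketch, not drawn from any GE application (the names `vecAdd` and `n` are illustrative), in which each of a million additions is handled by its own GPU thread:

```cuda
// Minimal CUDA sketch: one GPU thread per array element.
// Uses unified memory (cudaMallocManaged) so the CPU and GPU
// share the same pointers; supported on Tegra K1 with CUDA 6+.
#include <cuda_runtime.h>
#include <stdio.h>

// Each thread computes exactly one output element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;              // ~1M elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();            // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The triple-angle-bracket launch syntax is the heart of the convenience: the developer writes the per-element work once, and the runtime fans it out across however many cores the chip provides.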
NVIDIA’s Tegra K1 system-on-chip lets developers deploy these CUDA applications on platforms with constrained size, weight and power (SWaP), bringing advanced functionality, perception and higher-order autonomy to ever-smaller devices that were previously locked out of this class of computing. TK1’s 320+ GFLOPS of computational performance exceeds that of many users’ desktops, yet at 10 watts or less it consumes only a fraction of the power.
The unparalleled convergence of performance, power consumption and programmability has led device manufacturers to adopt TK1 as their go-to architecture for solving their most sensitive challenges. Many next-generation devices are leveraging Tegra to deploy smarter features and improved power efficiency. With the barrier to entry set extraordinarily low, anyone can create and deploy advanced embedded systems using Tegra K1—on the battlefield and beyond.
With those kinds of transformational capabilities, you can see why we’re excited by GPU technology—and why we’re looking forward to GTC next month. We’ll be there, showing off what we can do (including a new product we’ll be launching next week)—and I’ll be giving a talk. We hope to see you there.
Tegra K1's 192-core GPU processes HD video in real time and accurately tracks the trajectory of quickly moving objects with minimal latency, enabling in-the-loop feedback to hardware controllers.
Tegra K1 is able to process incoming lidar and radar signals and provide real-time moving target indicator (MTI) and obstacle detection. Sensor acquisition, processing, metadata extraction, and compression are frequently hosted on TK1.
Movement detection, optical flow, and many other vision algorithms are prime candidates for GPU acceleration—and Tegra K1 enables space- and power-constrained applications to take advantage.
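To illustrate why such vision algorithms map so naturally onto the GPU, here is a hedged sketch of the simplest movement-detection primitive, per-pixel frame differencing, where every pixel of a frame is examined by its own thread (the kernel name and threshold are illustrative assumptions, not any specific product's implementation):

```cuda
// Hedged sketch: per-pixel frame differencing as a CUDA kernel.
// Each thread compares one pixel of the current frame against the
// previous frame and writes a binary motion mask.
__global__ void frameDiff(const unsigned char *prev,
                          const unsigned char *curr,
                          unsigned char *mask,
                          int numPixels,
                          unsigned char threshold) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPixels) {
        int d = (int)curr[i] - (int)prev[i];
        if (d < 0) d = -d;  // absolute intensity change
        // Mark pixels that changed more than the threshold.
        mask[i] = (d > threshold) ? 255 : 0;
    }
}
```

A 1080p grayscale frame holds roughly two million pixels, so a single launch of a kernel like this covers an entire frame in one data-parallel sweep—exactly the workload shape at which a 192-core GPU excels.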