Today finds me at the first day of the Intel HPC Developer Conference in Canary Wharf, London: the fourth year that Intel has held this conference in the UK. This year's focus includes tracks on massively parallel processing and a new track dedicated to AI (Artificial Intelligence) and deep learning.
Intel originally unveiled its AI strategy with some bold claims. Initially targeting data centers, its AI platform, code-named Nervana (Intel acquired Nervana Systems, a deep learning specialist with a proprietary ASIC accelerator code-named Lake Crest), aims to deliver up to a 100x reduction in the time it takes to train a deep learning model, and will be rolled out over the next three years. Intel also said it is committed to an open AI ecosystem, with developer tools built for ease of use and cross-compatibility.
Deep learning requires a computationally intensive training phase, and the Xeon Phi is well suited to this kind of parallel processing, though it has a limited core count. To accelerate the training of the neural network, the Xeon Phi can be paired with the Lake Crest accelerator. Initially this will be a PCIe expansion card, but in the future it will form part of a single on-chip Xeon solution.
Parallelization is key
Training on large data sets can take many days to complete even on optimized processors, and parallelization is key to tuning the networks to achieve the required performance in a timely manner (hours instead of days). Data is run through the network, which is initially configured with random weights, and the resulting output error is fed back through the network in the training cycle until the desired result is achieved.
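The training cycle just described can be sketched in a few lines of code: random initial weights, a forward pass over the data, and the output error fed back through the network to adjust the weights. This is a minimal illustrative example on a toy problem (a tiny XOR network, with an assumed architecture and learning rate), not any vendor's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: learn XOR (inputs X, targets y)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Network initially configured with random weights (2-4-1 layers)
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0          # learning rate (illustrative choice)
mse = []          # track the output error over the training cycle
for epoch in range(5000):
    # Forward pass: run the data through the network
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Output error, fed back through the network (backpropagation)
    err = out - y
    mse.append(float(np.mean(err ** 2)))
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)

    # Adjust the weights and repeat until the error falls
    W2 -= lr * h.T @ grad_out
    W1 -= lr * X.T @ grad_h
```

Real workloads repeat this same cycle over gigabytes of annotated data and millions of weights, which is why parallel hardware matters so much.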
Results can vary, and the best-performing networks are trained on many tens of thousands of sample images or datasets, measured in gigabytes. These new products are destined for large data centers and suit many application domains, including finance, simulation, security and computer vision, to name a few.
Trained networks are then exposed to new data to apply the trained knowledge in real time. This deployment phase runs the network forward to recognize features based on the annotated data provided during the training phase.
The inference phase (also sometimes referred to as classification or scoring) is typically not performed in data centers, and its power and performance requirements are significantly different. Low-power, performant CPUs capable of running parallel algorithms do well at real-time inference.
Intel’s purchase of Altera comes into play here: the FPGA is ideally suited to running inference in low-power environments (think autonomous vehicles, UAVs, robotic applications and so on). New products will introduce low-power FPGA inference accelerators (31-40 watts, based on Arria 10), and will ultimately result in hybrid Intel Xeon/FPGA devices in which the FPGA can perform as an AI chip or accommodate customer FPGA code for bespoke, customer-defined applications.
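One reason low-power devices such as FPGAs handle inference well is that the deployment phase only runs the network forward, so weights can be held at reduced precision to cut memory and power. A minimal sketch of the idea, quantizing a trained layer's weights to 8-bit integers (illustrative only; the layer sizes and single-scale scheme are assumptions, not a specific Intel or FPGA toolflow):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 10)).astype(np.float32)   # trained layer weights
x = rng.normal(size=(1, 64)).astype(np.float32)    # new input to classify

# Quantize the weights to int8 using a single scale factor
scale = float(np.abs(W).max()) / 127.0
W_q = np.round(W / scale).astype(np.int8)

# Forward pass with quantized weights, rescaled back to float
y_full = x @ W                                     # full-precision reference
y_quant = (x @ W_q.astype(np.float32)) * scale

# The quantized result stays close to the full-precision one
max_err = float(np.abs(y_full - y_quant).max())
```

In hardware, the integer multiply-accumulates are far cheaper than floating point, which is what makes this attractive in the 31-40 watt envelope mentioned above.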
Intel is also planning a custom version of the Xeon Phi processor optimized for deep learning. The new CPU, code-named Knights Mill, will have programmable floating point precision and introduce new instructions to optimize AI tensor processing.
Some might say that Intel is late to the game and playing catch-up with NVIDIA, which announced a dedicated PCIe deep learning co-processor (the Tesla P100 Pascal PCIe accelerator and the DGX-1 server) earlier this year at GTC Europe. For inference, NVIDIA also has the Tegra product line, featuring SoCs with an integrated ARM CPU and GPU. Abaco Systems is one supplier of the embedded Tegra K1 mCOM10-K1 SoM, and we will soon announce a rugged, packaged Tegra TX1 computer graphics/vision/deep learning AI processor, as demonstrated in September at GTC Europe.
Abaco is also a provider of rugged Intel CPUs, including the Intel Broadwell Xeon D as featured on the SBC347D 3U VPX single board computer: it could be a great future platform for AI as Intel moves forward with its plans for dedicated AI processors and tool chains.
Intel has a number of new products targeting AI and deep learning planned for 2017, so it's going to be an interesting year ahead.